Immediately after the public demonstration of a solar module powering a radio transmitter in 1954, this achievement was heralded as the beginning of a new era promising almost limitless harnessing of the sun’s energy.In reality, however, this prospect has largely been hampered by cost constraints.Consequently, much effort has been invested primarily in reducing the cost of solar power to make it economically competitive with fossil fuels and nuclear energy.As the solar spectrum is a wideband radiation spectrum, it has an inherently limited efficiency when fully absorbed and directly converted by a single photovoltaic junction.For this reason, conversion in multiple PV junctions or selective absorption is the only viable way to achieve high conversion efficiencies.Particularly multi-junction solar cells have proved to be expensive, which has limited their deployment to date.Technologies with a reduced efficiency are favored as long as they are available at lower cost, with the result that the development of more complex technologies with higher efficiencies has been delayed.This approach, however, may be challenged when taking into account the externalities examined in the present contribution.Whenever solar absorbers are installed, radiative forcing occurs, which ultimately increases global temperatures.Life-cycle analyses disregard these effects, which should be evaluated analogously to the embodied energy in order to determine a technology’s total global warming potential and system energy payback time.By including radiative forcing in the form of the Earth’s surface reflection coefficient albedo, this conjunction is made possible.Here, the albedo of a surface is defined as the ratio of the radiation reflected by that surface to the total incident irradiation upon it, whereby α varies from 0 to 1.Three different effects must be considered when changing the absorption properties of the Earth’s surface and decreasing its albedo.First, less sunlight is reflected into space, which has a direct impact on global climate.Second, regional atmospheric conditions are affected, which may generate heat islands and concomitant space-conditioning demand for buildings.Third, exposed surfaces heat up considerably under insolation and warm the supporting structures and buildings, which leads to an additional localized space-conditioning demand on the scale of an individual building.The latter effect is notably present for building-integrated PV.To evaluate the GWP and EPBT for solar-energy systems, these effects have to be added to the embodied energy determined from LCAs.A previous study incorporated radiative forcing effects induced by the deployment of PV systems by setting up a radiation balance for the incoming and outgoing radiation of the Earth’s atmosphere.In another study the effects of atmospheric heat islands induced by the deployment of PV systems were investigated.While not all externalities have been considered in these studies, the substitution of fossil fuel by PV systems was still shown to be largely beneficial in both cases.In particular, previous studies have omitted the local heating of buildings in their analyses and did not address the variety of PV technologies with different performance and life-cycle impact characteristics.In this study, the environmental externalities according to Fig. 
1 are quantified, for the first time also including the change in energy consumption due to space-conditioning loads induced by the deployment of PV systems. Further, different PV technologies with efficiencies ranging from low to high are evaluated using this methodology. The results provide insight into the overall environmental impact of PV technologies. The methodology will help guide policy selection and implementation recommendations for PV deployment with respect to system efficiency and installation location in order to harvest solar power with minimal environmental impact. In the following, an analysis is presented where roofs with built-in opaque solar absorbers are compared to white roofs. An albedo of 0.9 represents an upper limit for a whitewashed exterior finish, while the former value varies slightly for different PV technologies but is rather consistent. Solar technologies that do not absorb the entire solar spectrum, such as dye-sensitized solar cells or cells covered by a diffusing surface, may have different albedo values, but as this does not change the ratio of the electrical energy conversion efficiency to absorption substantially, the main findings of this study will prevail. The albedo of an aged white roof may be as low as 0.55, which still represents a substantially more reflective surface than a solar absorber, in line with the present discussion. Further, the albedo values were considered constant and independent of the angle of incidence. Secondary effects due to reflected radiation, such as radiation entrapment in the urban canopy, were not considered, as these require assumptions about the specific installation geometry and surface properties of the urban environment, which would lead to a loss of generality of the analysis. The equivalent CO2 emissions of the GWP relate to the total electricity production of the PV system during its lifetime. The EPBT determines the energy break-even point in years considering all environmental externalities. In the corresponding equation, not all of the albedo change contributes to radiative forcing, since only a fraction of the incoming radiation is directly converted to heat upon the change from white roof to solar absorber. The PV system lifetime energy production is defined by the product of the nominal electrical system efficiency, performance ratio, mean annual irradiation, and lifetime. The performance ratio describes the relationship between the actual and theoretical energy outputs of a PV plant. The annual mean temperature of a city with one million or more inhabitants can be 1–3 K warmer than the surrounding area. This urban heat island effect generally arises due to the different thermal properties of building materials and roads compared to vegetated areas, as well as the lack of evapotranspiration in urban areas. UHIs
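The lifetime production and payback quantities defined above can be condensed into a short numerical sketch. This is a minimal illustration only: efficiency, performance ratio, irradiation, lifetime, albedo values and embodied energy below are assumed placeholder numbers, and the heat term is a crude stand-in for the study's full radiative-forcing accounting.

```python
# Minimal sketch of the quantities defined above; all inputs are illustrative
# assumptions, not data from the study.

eta = 0.18         # nominal electrical system efficiency
pr = 0.80          # performance ratio (actual / theoretical output)
g_annual = 1700.0  # mean annual irradiation on the module plane [kWh/m^2/yr]
lifetime = 25.0    # system lifetime [yr]

# Lifetime electricity production per m^2: product of nominal efficiency,
# performance ratio, mean annual irradiation and lifetime.
e_life = eta * pr * g_annual * lifetime                          # [kWh/m^2]

# Albedo change when a white roof is replaced by a solar absorber; the extra
# absorbed radiation that is not exported as electricity ends up as heat and
# therefore contributes to radiative forcing.
alpha_roof, alpha_pv = 0.9, 0.1                                  # assumed albedo values
extra_absorbed = (alpha_roof - alpha_pv) * g_annual * lifetime   # [kWh/m^2]
heat_released = extra_absorbed - e_life                          # [kWh/m^2]

# Simple energy payback time from embodied energy alone (the study's EPBT
# additionally folds in the radiative-forcing externalities).
e_embodied = 2500.0                                              # assumed embodied energy [kWh/m^2]
epbt_simple = e_embodied / (eta * pr * g_annual)                 # [yr]

print(f"lifetime yield {e_life:.0f} kWh/m2, heat term {heat_released:.0f} kWh/m2, "
      f"simple EPBT {epbt_simple:.1f} yr")
```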
Whenever PV systems are installed, the absorption properties of the surface are changed and less sunlight from the Earth's surface is reflected into space. Three different effects need to be considered when changing the absorption properties of the Earth's surface: (1) global albedo impact, (2) regional atmospheric heat islands, and (3) locally heated surfaces.
temperature effects are not explicitly taken into account in the representation in Fig. 4.Generally, PV technologies with more negative temperature coefficients such mono- and polycrystalline silicon are penalized in warmer climates compared to CdTe and multi-junction cells used in HCPV.However, the difference in average module efficiency for silicon-based PV between the warmest and coldest climate in the present study was found to be less than 1% absolute, which implies that differences in ambient temperature between these selected urban climates have a negligible effect compared to the three externalities investigated in this study.Future studies must incorporate a methodology to take local electricity grid information into account.By providing case studies for different locations, improved PV deployment policies can be developed.Further, different building portfolios and soiling of building envelopes may be investigated.The deterioration of solar reflectance over time for cool roofs is well documented and depends on local weathering conditions, while the efficiency degradation of PV modules in the field has also been reported for selected technologies.While the present study has assumed time-invariant surface properties and primarily address the fundamental local and global effects associated with albedo change, future studies should take into consideration the aging of building envelopes both with and without PV module coverage.And lastly it is necessary to develop intertemporal models within the introduced framework, for example to include aspects of energy storage.The UHI effect is typically most pronounced at nighttime and in the winter months, and therefore does not coincide with the temporal availability of the solar resource.In particular, the consideration of diurnal and seasonal variations in the heat island effect as well as building loads would refine the present assessment, as space conditioning loads scale non-linearly with the excursions in building and ambient temperature.Therefore, temporal variations in the UHI may exacerbate space conditioning loads for warm climates in summer, but further reduce the space conditioning demand for temperate climates in winter.Harvesting solar energy comes with the price of radiative forcing.Only when including this externality in environmental and economic considerations can we expect that solar technology development will correct itself toward a trajectory of higher system efficiencies and installations outside urban areas in regions with high irradiance through appropriate policy implementations.The present contribution describes a framework to account for direct radiative forcing due to the albedo change of deploying absorbing PV surfaces as well as the indirect radiative forcing originating from manufacturing and deploying the PV panels as well as from the changes in the space conditioning loads of buildings due to the urban heat island and roof heating effects.When only a subset of these externalities is accounted for, certain PV technologies appear favorable compared to others, but only the consideration of all components of radiative forcing provides a complete assessment of the overall GWP of PV.It was found that the GWP impact due to albedo change tends to considerably outweigh the GWP impact due to materials, manufacturing and deployment.As a result, high-efficiency PV technologies such as concentrated PV tend to have a significantly lower environmental burden compared to low-efficiency PV.As demonstrated in the present study, 
the local climate must be taken into account to accurately weigh the benefits and drawbacks of local warming and the resulting change in building space-conditioning load, which may further aggravate or alleviate the overall GWP impact of locally deployed PV. Installing low-efficiency PV in warm climates is particularly unfavorable in terms of GWP impact. Solar technologies are an excellent means of counteracting fossil-fuel-induced GHG emissions and global warming. In order to maximize these beneficial effects, the appropriate technology and placement choices have to be made. The authors confirm that there are no financial, personal or other relationships that would result in an actual or potential conflict of interest with respect to the submitted manuscript. This statement takes into account the time period spanning 36 months prior to beginning the work that has resulted in the submitted manuscript.
Recent years have witnessed a remarkable reduction in solar-panel costs, such that low-efficiency, low-cost photovoltaics (PV) currently prevail over more complex, high-efficiency technologies. By including this radiative forcing in the form of the Earth's surface reflection coefficient albedo (α), we take these externalities into consideration in the overall equivalent global warming potential (GWP) of a PV system. The total GWP of four different PV technologies was examined for three different urban climates: temperate, moderate, and warm. To minimize the system energy payback time (EPBT) it is most sensible to install high-efficiency solar-energy systems outside cities and urban developments in locations with high annual irradiance. Only when taking radiative forcing into environmental and economic considerations is it expected that solar-technology development will correct its trajectory away from low-cost systems and toward high-efficiency installations with lower overall GWP.
Pneumonia is the leading infectious cause of death among children aged 1–59 months worldwide.1,According to WHO criteria, pneumonia is a syndrome characterised by non-specific respiratory signs and symptoms, many of which occur in children without primary respiratory disease.2,Along with improved vaccine access, treatment according to WHO pneumonia guidelines by healthcare providers in low-income and middle-income countries over the past 20 years has led to reduced child pneumonia mortality globally.3,Now most pneumonia deaths among children aged 1–59 months in LMICs are concentrated in those with HIV infection or exposure, severe malnutrition, or hypoxaemia.4–6,In the southern African country of Malawi, hospital mortality for children with pneumonia is 3%,7 while mortality of those with pneumonia and severe malnutrition exceeds 33%.8,Nearly 50% of child pneumonia deaths in Malawi are attributed to HIV.6,Children with hypoxaemia and pneumonia in Malawi are five times more likely to die than non-hypoxaemic pneumonia cases,9 and hypoxaemia accounts for 11% of paediatric pneumonia admissions.7,Alongside further improvement in WHO pneumonia guideline implementation, new treatments shown to be effective among children with high-risk conditions such as HIV, malnutrition, or hypoxaemia will be necessary to meaningfully reduce pneumonia mortality globally.3,Evidence before this study,We searched Medline using the following search strategy: AND ““OR “Respiration Disorders”)) AND OR Ghana).The search yielded 19 studies and was completed on Jan 14, 2019.One randomised controlled trial was published before initiating our study and evaluated the effect of nasal bubble continuous positive airway pressure on children 1–59 months old with acute respiratory distress in four rural hospitals in Ghana.It found that 1 h of bCPAP administered by nurses, compared with no bCPAP, was safe and reduced mean respiratory rate.Two relevant RCTs were published while our study was being done.The first examined hospital treatment failure prevalences between three groups of children aged 1–59 months old with hypoxaemic pneumonia randomly assigned to nasal bCPAP, high-flow nasal cannula oxygen, or low-flow nasal cannula oxygen in an intensive care unit of a research hospital in urban Bangladesh.Nurses administered care under direct physician supervision.After enrolling 225 children, the trial was prematurely stopped owing to a lower relative risk of mortality among bCPAP recipients, compared with low-flow oxygen recipients, on interim analysis.The authors reported no severe adverse events attributable to bCPAP.The trial excluded children with chronic illnesses and those meeting predefined treatment failure criteria.A crossover design allowed rescue bCPAP or high-flow oxygen for children failing low-flow oxygen, and mechanical ventilation was available.The second trial was a cluster randomised crossover trial in rural Ghana that evaluated 2-week mortality after emergency ward care with and without modified nasal bCPAP among children aged 1–59 months with acute respiratory distress.Care was initiated and administered by nurses under daily physician supervision.Primary analysis found no difference in the RR of mortality between care that included bCPAP, compared with care without bCPAP.A secondary analysis found a two times lower adjusted odds ratio of mortality among children younger than 12 months old receiving bCPAP, compared with those not receiving bCPAP.Children with more severe illness were excluded; 7% had an oxygen 
saturation of less than 92%.The authors reported no severe adverse events attributable to bCPAP.Added value of this study,Most children with severe pneumonia in low-income and middle-income countries are cared for at non-tertiary district hospitals where intensive care is unavailable, physicians are scarce, and there is little laboratory and radiographic diagnostic support.Most pneumonia deaths in LMICs occur among children with HIV-infection or HIV-exposure, severe acute malnutrition, or hypoxaemia.We believe that this is the first trial to date to investigate the effect of nasal bCPAP on hospital survival among high-risk African children with severe clinical pneumonia at a non-tertiary district hospital where physicians are unavailable and there is no intensive care.This RCT was designed to balance the controls of a clinical trial with the representativeness of real-world care, and found that the administration of bCPAP by nurses and clinicians without daily physician oversight elevated the risk of mortality, as compared with low-flow oxygen.Few enrolled children had a nasogastric tube inserted despite nearly all being eligible for nasogastric tube placement.The influence of infrequent nasogastric tube use on these findings is unclear.Compared with oxygen, almost all subgroups had excess mortality risk from bCPAP.Implications of all the available evidence,Our findings show that nasal bCPAP might pose an increased risk of mortality, compared with low-flow oxygen, for children with severe pneumonia and high-risk conditions who receive treatment in a non-tertiary LMIC district hospital.These results differ from the aforementioned trials in Bangladesh and Ghana; our trial applied wider inclusion criteria in a district hospital setting without physician oversight.These findings raise important questions about where and how bCPAP should be administered, who should administer bCPAP, and which children should receive bCPAP in LMICs.Taken together, nasal bCPAP should be administered with caution in all care settings and additional studies are needed to establish which populations are most likely to benefit from bCPAP when intensive care resources are unavailable.Non-invasive ventilation with continuous positive airway pressure aims to provide constant positive airway pressure.This constant positive pressure improves lung volumes, ventilation–perfusion mismatch, and the work of breathing, thus improving gas exchange.10,By improving gas exchange, CPAP can reverse or avert respiratory failure.10,In many high-income settings, CPAP is a standard care option for children with respiratory failure, including children without hypoxaemia, although there is variability across settings and evidence is scarce.10,CPAP can also be used to decrease metabolic demand and increase oxygen delivery among children in shock with circulatory failure.11,A low cost version–bubble CPAP–is used in some
Background: Pneumonia is the leading cause of death among children globally. Most pneumonia deaths in low-income and middle-income countries (LMICs) occur among children with HIV infection or exposure, severe malnutrition, or hypoxaemia despite antibiotics and oxygen. Non-invasive bubble continuous positive airway pressure (bCPAP) is considered a safe ventilation modality that might improve child pneumonia survival. We enrolled children aged 1–59 months with WHO-defined severe pneumonia and either HIV infection or exposure, severe malnutrition, or an oxygen saturation of less than 90%.
pressures between 5 cm and 10 cm H2O is safe.22,23,We used either unvented nasal masks or nasal prongs as the nasal interface, with a preference towards nasal masks given our previous experience of frequent nasal septal trauma and nose bleeds attributed to nasal prong use.14,In the event of a power outage, the nasal mask or prongs were immediately removed until the electrical generator was started.Following initiation, respiratory support for both groups was weaned in 24 h increments based on step-wise procedures published previously.20,We established 24 h weaning procedures based on our initial observations that frequent bCPAP pressure titration was not feasible in this setting.Specifically, bCPAP and low-flow oxygen weaning was only permitted for children who after 24 h of hospitalisation had a decrease in respiratory danger signs.Hypoxaemia, grunting or apnoea were considered life threatening and weaning was not permitted if any of these were present.All children were re-evaluated by clinical staff 60 min after any change in bCPAP or low-flow oxygen delivery.Children were escalated back to the previous level of respiratory support if new danger signs or hypoxaemia developed after weaning.If not already at maximum low-flow or bCPAP support, children with new danger signs or hypoxaemia were escalated at all other times during the trial.For both groups, low-flow oxygen was the final step in weaning and was only stopped when all respiratory danger signs were resolved.The nasal interface was examined for patency and fit and nasal suctioning by means of a catheter and normal saline was done on deteriorating patients or patients with any bCPAP or oxygen change.Chest radiography was done on children with persistent or worsening illness if transportation to the radiography department could be safely completed, as is typical in LMIC district hospitals.Between patient uses, respiratory equipment was decontaminated and reused per Malawi guidelines, consistent with programmatic conditions."The trial's predefined primary outcome was survival to hospital discharge.Predefined secondary outcomes included antibiotic and hospital treatment failure, 30-day post-discharge vital status, and hospital adverse events.Among those alive on day 14 of hospitalisation, hospital treatment failure is defined as the presence of any one of the following: temperature ≥38 °C, any respiratory danger sign, or continued oxygen or bCPAP requirement.Events were graded per DAIDS AE Grading Table version 1.0 by the investigators and were verified by the data and safety monitoring board.Outcomes were stratified by prespecified patient characteristics including SpO2, age, sex, anaemia, wheeze, mental status, diarrhoea, and malaria.With a sample size of 900 participants, the trial had 80% power to detect a 6·0 percentage point lower mortality prevalence with bCPAP, assuming 14·7% mortality with standard care at a two-sided α level of 0·05.Sample size calculations and assumptions were described in detail previously.20,Per-protocol interim DMSB analyses were done at 30% and 60% enrolment."O'Brien-Fleming stopping thresholds of 0·0006 were prespecified for the first interim analyses and 0·0151 for the second interim analyses.The DSMB recommended futility analysis at the second interim point.In this context futility was calculated as a 0·0026% probability that bCPAP might achieve a significantly lower mortality risk, compared with oxygen, if enrolment continued to the target sample size."We assessed unadjusted treatment effects on 
primary and secondary outcomes in the main and subgroup analyses by means of Stata's epitab package and log binomial regression to estimate relative risks.We tested subgroup by treatment interaction by means of Poisson regression because some models did not converge.Final analyses were intention-to-treat."We summarised continuous data with means and SDs and compared differences by Student's t tests or non-parametric rank sum tests based on data distribution.Categorical data was reported by means of the number and proportion within each category and comparisons by Pearson χ2 tests.We also did ad-hoc exploratory analyses that included time to hospital death by study group, fidelity implementation outcomes per group allocation, baseline characteristics and fidelity implementation outcomes by hospital outcome, and association between study year and hospital outcome by treatment group.We used non-parametric log-rank tests for the survival analysis, and Cox regression analyses to estimate the treatment hazard ratio and its 95% CI.This study is registered with ClinicalTrials.gov, number NCT02484183.The sponsors had no role in the design, implementation, analyses, interpretation, write up, or decision to publish.The corresponding author had access to all data and takes responsibility for the manuscript.We screened children for eligibility between June 23, 2015, and March 21, 2018; children with severe hypoxaemia who were without HIV or severe malnutrition were included after May, 2016.644 children had a co-existing high-risk condition and were randomly assigned with 323 allocated to low-flow oxygen and 321 to bCPAP.17 bCPAP and 13 oxygen patients lacked hospital outcomes and were considered lost to follow-up.Specifically, four low-flow oxygen children and two bCPAP children were referred to a tertiary hospital and nine low-flow oxygen children and 15 bCPAP children withdrew from the trial.88 children died while hospitalised; 35 deaths occurred among the oxygen group and 53 among the bCPAP group.Children randomly assigned to bCPAP, compared with oxygen, had a higher relative risk of hospital death.In analysis of prespecified secondary outcomes, suspected adverse events related to bCPAP or oxygen occurred in 11 of 321 children receiving bCPAP and 1 of 323 children receiving oxygen.Four bCPAP and one oxygen group death were classified as probable aspiration episodes, one bCPAP death as probable pneumothorax, and six non-death bCPAP events included skin breakdown around the nares.The RR of 2-week hospital treatment failure in the bCPAP group, compared with the oxygen group, was 1·32; p=0·081; appendix p 13).Among children discharged from the hospital,
This trial is registered with ClinicalTrials.gov, number NCT02484183, and is closed. Findings: We screened 1712 children for eligibility between June 23, 2015, and March 21, 2018. Four bCPAP and one oxygen group deaths were classified as probable aspiration episodes, one bCPAP death as probable pneumothorax, and six non-death bCPAP events included skin breakdown around the nares.
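The design assumptions quoted earlier (900 children, 80% power to detect a 6.0 percentage-point absolute mortality reduction from 14.7%, two-sided α of 0.05) can be checked with the standard two-proportion normal-approximation formula. The sketch below is an independent illustration of that arithmetic, not the trial's original calculation:

```python
# Two-proportion sample-size sketch reproducing the stated design assumptions
# (14.7% vs 8.7% mortality, two-sided alpha = 0.05, 80% power).
from math import sqrt
from scipy.stats import norm

p1, p2 = 0.147, 0.087           # control mortality vs targeted bCPAP mortality
alpha, power = 0.05, 0.80
z_a = norm.ppf(1 - alpha / 2)   # ~1.96
z_b = norm.ppf(power)           # ~0.84

p_bar = (p1 + p2) / 2
n_per_arm = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
              + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
             / (p1 - p2) ** 2)

print(round(n_per_arm))         # ~450 per arm, i.e. ~900 children in total
```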
the prespecified RR 30-day post discharge mortality in the bCPAP group, compared with the oxygen group was 1·42.We also examined hospital mortality prevalences among children with and without hypoxaemia and malaria.Among all 415 hypoxaemic children, the bCPAP group had a higher RR of death compared with the oxygen group.The results from the 229 non-hypoxaemic children were not significant and did not suggest a benefit from bCPAP, compared with oxygen.The RR of hospital death among 144 hypoxaemic bCPAP recipients with no other high-risk condition was 2·67, compared with the 142 hypoxaemic oxygen recipients with no other high-risk condition.Analyses of other hypoxaemic subgroups did not reach significance.Although prespecified analysis by malaria status was inconclusive it did not suggest children with malaria benefited from bCPAP, compared with oxygen.The RR of death from bCPAP, compared with oxygen, among all 197 children that tested rapid malaria positive was 1·26.Among 447 children with a negative rapid malaria test, the RR of death from bCPAP versus oxygen was 1·64.To understand whether a positive rapid antigen test represented a true case of malaria, over the final 12 months of the trial, malaria smears were collected on all patients testing rapid malaria positive and analysed at an internationally certified laboratory.21 of 89 children in the oxygen group who tested rapid malaria positive had a malaria smear, and only three of 21 were smear positive.Among 108 bCPAP children with positive rapid malaria tests, 24 of 108 had a malaria smear collected and only four of 24 were smear positive.One oxygen patient who was smear positive and no bCPAP patients who were smear positive died in the hospital.Although almost all other prespecified high-risk subgroup analyses showed excess mortality for bCPAP, compared with oxygen, none were statistically significant.We also compared the RR for children who did and did not have a condition of interest to examine whether any subgroup had a differential treatment effect from bCPAP, and we found no significant difference for any subgroup.All other secondary endpoints are in the appendix pp 12–15.An exploratory survival analysis did not suggest delayed hospital mortality among either group.Children randomly assigned to bCPAP had a 1·55 HR for hospital death compared with patients in the oxygen group.Most excess bCPAP hospital mortality of 18) occurred on days two and three post-random assignment.We report additional implementation outcomes that measured fidelity of the treatment to care protocols in table 3 and also exploratory analyses in the appendix pp 16–18.Of note, although 619 of 644 children enrolled onto the trial met nasogastric tube insertion criteria, records indicated that only 101 of 619 participants received one.A 7% lower proportion of bCPAP children of 308), compared with oxygen children of 311), were documented to have had a tube inserted.To assess whether respiratory equipment might have deteriorated over the course of the study we evaluated whether there was any interaction between study year and the odds of hospital mortality by trial group, and we observed no significant interaction.
bCPAP outcomes for high-risk African children with severe pneumonia are unknown. Children were randomly assigned 1:1 to low-flow nasal cannula oxygen or nasal bCPAP. Non-physicians administered care; the primary outcome was hospital survival. Primary analyses were by intention-to-treat and interim and adverse events analyses per protocol. The data safety and monitoring board stopped the trial for futility after 644 of the intended 900 participants were enrolled. 323 children were randomly assigned to oxygen and 321 to bCPAP. 35 (11%) of 323 children who received oxygen died in hospital, as did 53 (17%) of 321 who received bCPAP (relative risk 1.52; 95% CI 1.02–2.27; p=0.036). 13 oxygen and 17 bCPAP patients lacked hospital outcomes and were considered lost to follow-up. Suspected adverse events related to treatment occurred in 11 (3%) of 321 children receiving bCPAP and 1 (<1%) of 323 children receiving oxygen. Interpretation: bCPAP treatment in a paediatric ward without daily physician supervision did not reduce hospital mortality among high-risk Malawian children with severe pneumonia, compared with oxygen. The use of bCPAP within certain patient populations and non-intensive care settings might carry risk that was not previously recognised.
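The headline effect estimate can be recomputed directly from the reported counts (53 of 321 bCPAP deaths versus 35 of 323 oxygen deaths). The short sketch below uses the standard log-normal approximation for the confidence interval and reproduces the published figures:

```python
# Relative risk and 95% CI from the reported hospital deaths
# (53/321 bCPAP vs 35/323 low-flow oxygen).
from math import log, exp, sqrt

d_bcpap, n_bcpap = 53, 321
d_oxy,   n_oxy   = 35, 323

rr = (d_bcpap / n_bcpap) / (d_oxy / n_oxy)
se_log_rr = sqrt(1/d_bcpap - 1/n_bcpap + 1/d_oxy - 1/n_oxy)
lo = exp(log(rr) - 1.96 * se_log_rr)
hi = exp(log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # ~1.52 (1.02-2.27)
```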
is no evidence of PS particles outside the central region. This is because the contact line was free-moving, so no PS particles were pinned. All PS particles were concentrated in a spot with a diameter of about 0.5 mm, which is two times smaller than that observed in Fig. 8b. Compared with the original ring-like stain, the area over which material has been deposited has been reduced by about 40 times. These findings should be very useful for controlling evaporative droplets carrying biological molecules or analytes such as DNA, proteins and cells, for which minimizing the patterned area is strongly desired. It is noted that the PS particles are well adhered to the surface after the evaporation is complete. It has been demonstrated that superhydrophobic surfaces on a common stainless steel can be fabricated by using a cost-effective and compact nanosecond laser system. Although the processed surfaces exhibit hydrophilic properties directly after fabrication, their wettability changes over time under ambient conditions and they become non-wetting after 13 days. Optimized fabrication parameters are identified by investigating the effect of laser power and scan line separation on the wettability of the textured surfaces. Interestingly, the superhydrophobic surfaces can be employed for achieving spot homogeneity. Droplets with suspended PS particles on the as-received surface form clear, non-uniform ring-like structures after evaporation is complete. Conversely, on the incomplete wetting surfaces uniform deposition of the PS is observed. For the same droplet volume, the deposited area of the PS particles on the laser-textured surfaces is one order of magnitude smaller than that on the unprocessed surface. This finding is useful for biological applications, in which minimizing the deposition area of DNA, proteins and cell suspensions is highly important, as well as for advanced fabrication techniques for uniform coatings and high-resolution ink-jet prints. Data availability: All relevant data present in this publication can be accessed at http://dx.doi.org/10.17861/a24ec3ae-acbe-4c23-9793-fa24648a6ce6
This work reports the laser surface modification of 304S15 stainless steel to develop superhydrophobic properties and the subsequent application for homogeneous spot deposition. Superhydrophobic surfaces, with a steady contact angle of ∼154° and contact angle hysteresis of ∼4°, are fabricated by direct laser texturing. In comparison with the common pico-/femto-second lasers employed for this patterning, the nanosecond fiber laser used in this work is more cost-effective, compact and allows higher processing rates. The effect of laser power and scan line separation on the surface wettability of the textured surfaces is investigated and optimized fabrication parameters are given. Fluid flow and transport in droplets of polystyrene (PS) nanoparticle suspensions on the processed surfaces and on unprocessed wetting substrates are investigated. After evaporation is complete, the coffee-stain effect is observed on the untextured substrates but not on the superhydrophobic surfaces. Uniform deposition of PS particles on the laser-textured surfaces is achieved and the deposited material is confined to a smaller area.
The data presented in the paper, shows the performance of non-conventional renewable energy projects in Colombia, which includes photovoltaic, Eolic, biomass and small hydroelectric renewable sources .The registration of NCRE projects started in 2016, thus a period between 2016 and 2018 is assessed.Based on this data three scenarios forecasting the performance of NCRE based power generation between 2019 and 2023 are developed.Primary data was obtained from the national NCRE projects registry database, which is available from Ref. .The database includes the transit of registered projects through the stages of approval.Table 1 shows the NCRE projects registered between 2016 and 2018, including their approval stage.It also includes the sum of the power generation capacity considered in the registered projects for each approval stage.Moreover, Table 2 shows the PV projects registered or forecasted between 2016 and 2023 in 6 ranges of PGC.Table 3 shows the forecast of PGC integrated into the electric system between 2019 and 2023 for PV projects in three scenarios.Table 4 shows the wind projects registered or forecasted between 2016 and 2023 in four ranges of PGC.Table 5 shows the forecasted PGC integrated into the electric system between 2019 and 2023 for Eolic projects in three scenarios.Table 6 shows the biomass projects registered or forecasted between 2016 and 2023 in five power ranges.Table 7 shows the forecast of PGC integrated into the electric system between 2019 and 2023 for biomass projects in three scenarios.Table 8 shows the SHC projects registered or forecasted between 2016 and 2018 in three power ranges.Table 9 shows the forecasted PGC integrated into the electric system between 2019 and 2023 for SHP projects in three scenarios.Finally, Table 10 summarizes the forecast of PGC integrated into the electric system between 2019 and 2023 for the three scenarios considered.The PGC of the NCRE projects between 2016 and 2019 shown in Table 1 is used to interpolate the performance of the generation capacity registered in NCRE projects in Colombia.The interpolation is used to forecast the PGC in NCRE projects to be registered between 2020 and 2023.Projects in the NCRE database goes through the three approval stages defined by the government .Preliminary feasibility assessment: a preliminary study to develop the environmental impact assessment, and the technical and economic feasibility of the project.Complete feasibility assessment: assessment of the technical, economic, environmental and social feasibility of the project.Pre-implementation: completion of the final design of the project, and definition of the implementation schedule.The project changes to the status “ready for implementation”.In total, it takes about four years between the registration of a project to the database and the clearance of the government for its implementation .In addition, it takes around one year to implement the project after its approval .Thus, it takes around 5 years from registering a project to its implementation.This average time is used to forecast the initial exploitation date of the projects registered between 2016 and 2019.The PGC yearly accumulated for each NCRE source is forecasted by adding the generation capacity of the projects after five years, considering the different scenarios of project success.Overall, between 70 and 75% of the renewable-based power generation projects registered at UPME are approved for implementation .Thus, three scenarios considering a high, medium and low 
implementation of these projects were defined:
Scenario 1: 100% of the power generation capacities of registered NCRE projects are implemented.
Scenario 2: 50% of the power generation capacities of registered NCRE projects are implemented.
Scenario 3: 25% of the power generation capacities of registered NCRE projects are implemented.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
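The forecasting procedure described above (registered capacity is assumed to enter operation roughly five years after registration, scaled by the scenario-specific implementation share) can be sketched as follows. The registered-capacity figures used here are placeholders, not the values from Tables 1–10:

```python
# Sketch of the capacity forecast: registered capacity is assumed to enter
# operation ~5 years after registration, scaled by the scenario share.
# Registered-capacity values below are placeholders, not the published data.

registered_mw = {2016: 100.0, 2017: 250.0, 2018: 400.0}   # hypothetical PGC per registration year
scenarios = {"Scenario 1": 1.00, "Scenario 2": 0.50, "Scenario 3": 0.25}
lag_years = 5                                              # registration-to-operation lag

forecast = {}
for name, share in scenarios.items():
    yearly = {year + lag_years: mw * share for year, mw in registered_mw.items()}
    # cumulative capacity integrated into the electric system
    total, cumulative = 0.0, {}
    for year in sorted(yearly):
        total += yearly[year]
        cumulative[year] = total
    forecast[name] = cumulative

for name, series in forecast.items():
    print(name, series)
```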
The data included in this study were calculated from the national project registry provided by the Colombian government. The data forecast the evolution of the power generation capacity registered in non-conventional renewable energy source projects under three scenarios for the implementation of the registered capacity. Results can be used to benchmark non-conventional renewable energy sources in Colombia, interpret the effectiveness of renewable policies, and monitor the evolution of non-conventional renewable-based power generation. The data presented in the article relate to the research study: A look to the electricity generation from non-conventional renewable energy sources in Colombia [1].
cell size is not the best indicator of the developmental stage and as a result not all the cells that were analyzed were necessarily at the same stage.Images were obtained far from the edges of the substrate in order to minimize the impact of the cells that detached during the cutting of the substrate for analysis.The threshold method used with ImageJ had to be similar for all the substrates.However, even within the same substrate slight variations were made in order to maximize the number of focal adhesions captured by the software.The image analysis primarily focused on focal adhesions near the cell׳s boundary since these were significantly more abundant compared to focal adhesion points elsewhere in the cell.In most cases, the vinculin cloud prevented us from “seeing” the focal adhesions underneath this cloud.Secondly, the “cloud” made it impossible to adequately filter focal adhesions.Analyzing a “clouded” image, required us to lower the “intensity” of the filter resulting in possibly a slight size reduction of the focal adhesions.Hence, a decision was made to remove the vinculin “cloud” and optimize the focal adhesions.The vinculin surrounding the nucleus was manually removed from the images taken in order to focus on the focal adhesions present near the cell’s boundary; for this purpose a filter for a minimum particle size of 0.1 μm2 was used.The 0.1 μm2 particle size threshold was selected based on trial and error in maintaining a high degree of similarity in the images processed by ImageJ in reflecting the number and size of the focal adhesion points that could be visually appreciated from the confocal image before and after the removal of the signal from the cloud.This 0.1 μm2 was then used for all other measurements.Removal of the inactive vinculin around the nucleus allowed the software to adequately quantify the focal adhesions to not allow the noise originating from the vinculin to be misinterpreted as focal adhesions.Subsequently the image without the inactive vinculin around the nucleus removed was analyzed at the threshold used for the quantification of focal adhesions in order to quantify the amount of vinculin present in the sample.Images acquired after the analysis were verified.During this step any other background noise that did not belong to the cell under analysis was removed.Twenty cells were evaluated from each particular substrate.Images taken at a magnification of 60× were evaluated using ImageJ in order to quantify the number of focal adhesions, the area of each focal adhesion and total amount of vinculin present on each sample.These values were then divided by their respective cell areas in order to normalize the data and partially remove cell size as a variable.In addition 10 images of the nuclei on each substrate were taken at a lower magnification of 20× and similarly processed through ImageJ® in an attempt to quantify the average number of cells.The data from each test is presented as the mean value±standard error.A factorial analysis of variance was performed on the data in order to determine the degree to which the loading condition and the substrate stiffness significantly interacted.Subsequently a least significant difference Fisher test was performed on the data to establish if a significant difference existed between the loading condition and the substrate stiffness.InfoStat® was used for the statistical analysis of the data obtained from the images.Mean values for number and area of the focal adhesion points of MC3T3-E1 cells on the PDMS substrates, 
amount of vinculin based on analysis of images from immunofluorescent labeling for each PDMS substrate of varying stiffness and loading condition were obtained based on two experimental runs.Substrate stiffness, expressed as the modulus of elasticity, was determined for the PDMS substrates used in the culture chambers through standard mechanical testing.The values presented are mean values obtained from five different measurements using linear regression on their respective engineering stress-strain curves.Mean and standard errors, for each substrate׳s modulus of elasticity, “E”, are as follows: 2.04±0.06 MPa for PDMS 5, 1.70±0.05 MPa for PDMS 10 and 1.22±0.05 MPa for PDMS 15.A feasible alternative for the identification of focal adhesions is through immunofluorescence labeling of vinculin, since it serves as the stabilizing protein in focal adhesions.The quantity of vinculin relates directly to a cell׳s physiology and behavior on a particular substrate .The experimental data presented in this section are mean values obtained from measurements normalized with respect to the corresponding cell area.Again, data was processed using a factorial analysis and Fisher׳s LSD test for p<0.05 in order to determine if the differences observed between the cyclic and static experiments were significant.Both active and inactive vinculin were labeled using this immunofluorescence technique.Inactive vinculin refers to the vinculin that is not located at the focal adhesion sites .Focal Adhesion Area/Vinculin Area was based on the ratio of focal adhesion area to total vinculin area.Table 1 provides relevant data.Immunofluorescence images of active vinculin expressed as focal adhesion points and inactive vinculin as a “cloud” is shown in Fig. 1.
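The thresholding and size-filtering steps described above (an ImageJ analysis with a 0.1 μm² minimum particle size, with counts and areas normalised by cell area) can be approximated in Python. This is a hedged sketch using scikit-image rather than the authors' ImageJ workflow; the pixel size and the automatic Otsu threshold are assumptions standing in for the manually tuned thresholds used in the study.

```python
# Approximate the focal-adhesion quantification (threshold, keep particles
# >= 0.1 um^2, normalise by cell area) using scikit-image.
import numpy as np
from skimage import filters, measure

PIXEL_AREA_UM2 = 0.01          # assumed: 0.1 um x 0.1 um pixels
MIN_FA_AREA_UM2 = 0.1          # minimum focal-adhesion size used in the study

def quantify_focal_adhesions(vinculin_img: np.ndarray, cell_area_um2: float):
    """Return focal-adhesion count and total area per unit cell area."""
    thresh = filters.threshold_otsu(vinculin_img)   # stand-in for the manual ImageJ threshold
    mask = vinculin_img > thresh
    labels = measure.label(mask)
    areas = [r.area * PIXEL_AREA_UM2 for r in measure.regionprops(labels)]
    fa_areas = [a for a in areas if a >= MIN_FA_AREA_UM2]
    return len(fa_areas) / cell_area_um2, sum(fa_areas) / cell_area_um2

# Example on a synthetic image (random noise), for illustration only.
rng = np.random.default_rng(0)
img = rng.random((256, 256))
print(quantify_focal_adhesions(img, cell_area_um2=1500.0))
```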
Loading frequency is known to influence the expression of the focal adhesions of adherent cells. A small cyclical tensile force was transmitted to mouse pre-osteoblast MC3T3-E1 cells through PDMS substrates of varying stiffness. Changes in cell behavior with respect to proliferation and characteristics of focal adhesions were quantified through immunofluorescence labeling of vinculin. The amount of inactive vinculin was higher on substrates subjected to cyclic stimulation than on the static substrates, whereas the number and area of focal adhesion points underwent a reduction. Inactive vinculin appears as a cloud in the cytoplasm in the vicinity of the nucleus.
squared error, and R2 criteria are shown in Tables 2 and 3.RMSE has a quadratic error rule, where the errors are squared before being averaged.As a result, a relatively high weight is given to large errors .This could be useful when large errors are undesirable in a statistical model.From Table 2 it can be deduced that for the Gaussian model the MAE and RMSE is slightly lower as compared to ANFIS.But in order to discriminate between the models for their predictive performance, the error metrics should be capable to differentiate amongst the model results.In this context, the MAE might be affected by large average error values by ignoring some large errors.The RMSE is generally better in reflecting the model performance differences as it gives higher weight to the unfavorable conditions.The difference between the RMSE of the Gaussian model and ANFIS is not immense and hence both the models have comparable performance metrics.Another measure of goodness-of-fit of the model is the R2 criteria.Higher values are indicative that the predictive model fits the data in a better way.By definition, R2 is the proportional measure of variance of one variable that can is predicted from the other variable.Thus ideally the values of R2 to approach one is always desirable.However, a high R2 tells you that the curve came very close to the points but in reality it does not always indicate the model quality .From Table 3, both Gaussian and ANFIS models have similar R2 values which are indicators that in both the modeling techniques, the prediction capability is similar.Using the R2 criteria in conjunction with the MAE and RMSE, it can be fairly deduced that the Gaussian and ANFIS models can be accurately used for the prediction of residual limb temperature.This study addresses the challenges of non-invasively measuring the in-socket residual limb temperature by comparing two different modeling techniques, namely ANFIS and Gaussian processes.The temperature profile of the residual limb skin is dependent on the ambient temperature and the activity level of the subject.The data obtained at ambient temperatures of 10 °C and 25 °C were used to develop an ANFIS model.The results from the ANFIS model were encouraging.These were compared with the previously developed Gaussian model.The performance metrics of both the models indicate that they are very similar in their predictive ability with an accuracy of ±0.5 °C.However this approach has certain limitations as well.The residual limb temperature profile will differ for every amputee as there are variations in physiological responses and variations in properties of the skin parameters.Because of the varying residual limb temperature profile in individuals, these machine learning algorithms have to be personalized by training them with individual datasets for each of the amputee subjects.This study which was conducted on one amputee subject a number of times, verified the success of proposed approach with an accuracy of ±0.5 °C.Thus, this work will be used to figure out the envelope in estimating the statistical power i.e. 
how many people are needed to make the model clinically significant, and will be useful in extending it to a larger population in order to define generic behavior. Also, because the temperature profile of the residual limb depends on the ambient temperature, drawing up a single generalized model for all ambient temperatures is constrained. This could potentially be resolved by using interpolation or extrapolation techniques on the model at a given temperature to predict the residual limb temperature profile at another ambient temperature, provided that the activity state of the subject is known.
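The three goodness-of-fit measures discussed above (MAE, RMSE and R²) can be computed as in the short sketch below; the temperature arrays are placeholder values, not the measured residual-limb data:

```python
# MAE, RMSE and R^2 as used to compare the Gaussian and ANFIS models.
# The arrays below are placeholder values, not measured limb temperatures.
import numpy as np

y_true = np.array([32.1, 32.4, 32.8, 33.0, 33.3])   # hypothetical measured temperatures [deg C]
y_pred = np.array([32.0, 32.5, 32.6, 33.2, 33.4])   # hypothetical model predictions [deg C]

mae  = np.mean(np.abs(y_true - y_pred))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))      # squares errors, so large errors weigh more
ss_res = np.sum((y_true - y_pred) ** 2)
ss_tot = np.sum((y_true - y_true.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"MAE={mae:.3f}  RMSE={rmse:.3f}  R2={r2:.3f}")
```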
Monitoring of the interface temperature at skin level in lower-limb prostheses is notoriously complicated. This is because the flexible interface liners impede the consistent positioning of the temperature sensors that is required during donning and doffing. Predicting the in-socket residual limb temperature by monitoring the temperature between socket and liner, rather than between skin and liner, could be an important step in alleviating complaints about increased temperature and perspiration in prosthetic sockets. In this work, we propose to implement an adaptive neuro fuzzy inference strategy (ANFIS) to predict the in-socket residual limb temperature. ANFIS belongs to the family of fused neuro-fuzzy systems, in which the fuzzy system is incorporated in a framework that is adaptive in nature. The proposed method is compared to our earlier work using Gaussian processes for machine learning. By comparing the predicted and actual data, results indicate that both modeling techniques have comparable performance metrics and can be efficiently used for non-invasive temperature monitoring.
Despite the arrival of the invasive crustacean species, the three native species, the noble crayfish Astacus astacus, the narrow-clawed crayfish Astacus leptodactylus and the stone crayfish Austropotamobius torrentium, are still present in Hungary. However, their current distribution is unclear, since the last comprehensive overview of crayfish occurrences in Hungary, published in 2013, was mainly based on literature data containing observations until 2006 for native species and until 2012 for non-native species. In addition, the overview of Györe et al. presented information on crayfish distribution in large grids of 50 km and consequently lacked details on, for instance, crayfish occurrence per water type. Our present investigation contains data from very detailed field monitoring studies spanning more than two decades, from 1995 up to 2016, and includes exact geographical positions of the sampled sites together with information on environmental conditions. The aim of the present study is two-fold. Firstly, it aims to investigate the present-day distribution and habitat preference of the native crayfish species A. astacus, A. leptodactylus and A. torrentium and the distribution of the invasive crayfish and crab species O. limosus, P. leniusculus, P. fallax f. virginalis, P. clarkii and E. sinensis. Secondly, it aims to investigate changes in the distribution patterns of all species over the last 20 years to identify potential risks for native species in Hungary. For our research we used existing data from different short- and long-term projects between 1995 and 2015. In total 1268 water systems comprising 4043 sampling sites have been sampled. Each sampling location was categorized as being of the 'lowland' or 'upland–hilly' and 'calcic' or 'siliceous' type. Furthermore, the size of the waterbody was categorized as 'extra small', 'small', 'medium', 'large' and 'extra large', following the Water Framework Directive water body typology. Furthermore, all standing water bodies were assigned to the category 'lakes'. In Hungary, extra large lakes do not occur, so these kinds of waters were not included in the database. In addition, the surface water velocity was estimated by measuring the distance travelled by a plastic cap in 10 s. Potential trends in the crayfish catches were determined for the period between 1995 and 2015. Quantum GIS Wien 2.8.1 software was used to map the distribution of the crayfish species. To obtain insight into changes in crayfish distributions over time, the observations were divided into four time periods: from 1995 to 2000, from 2000 to 2005, from 2005 to 2010 and after 2010. Comparing the distributions between these periods showed either a decreasing, stable or increasing range per species. We counted the number of crayfish observations in the different types of waterbody categories. Additionally, the average abundance was calculated per species per water body type. Finally, crayfish occurrence in relation to surface water velocity was also calculated. Astacus astacus was found in 68 sites from 40 watercourses and is currently mostly found in the south-western and northern parts of the country. Note that especially between 2000 and 2015 additional habitats of A. astacus were discovered, mainly in the northern, southern and western parts of the country. A.
leptodactylus occurred in 102 sites from 31 watercourses where the species mostly occurred in the main stream of the Danube and Tisza rivers their backwaters and in smaller sized rivers connected to these main streams.Note that before the year 2000, the Tisza River was the main area where this crayfish occurred; Between 2000 and 2010, the species still occurred here and maintained populations in the upstream section of the Tisza River and in the Danube and in its tributaries as well.From 2010 onwards, however, the occurrence of this species decreased here and shifted from the main branches of the Tisza to its tributaries, channels and smaller streams.The distribution of A. torrentium is limited to small streams in the north and west of the country.It was only encountered at 3 sampling locations and consequently this species can be considered very rare in Hungary.O. limosus was found in 90 sites from 39 watercourses.This species seems nowadays the most widespread invasive crayfish species in Hungary.Before 2005 only 3 observations from the Danube river system were known, but the distribution area of this species increased rapidly thereafter since O. limosus colonized backwaters, tributaries and channels connected to Danube and Tisza-lake reservoir.After 2010, the species colonized other watercourses with a connection to Danube and Tisza as well.The other invasive crayfish, P. leniusculus, occurred in 20 sites from 6 watercourses only.Between 1995 and 2005, the species was found only in one sampling site and in the period 2005–2010, the species increased its range but is nevertheless still limited to the western part of the country.In our monitoring we did not find any specimen of the E. sinensis, P. fallax f. virginalis and P. clarkii.According to the data of the last two decades, both coexistence with and the outcompeting of native crayfish species by invaders occur in Hungary.Coexistence of native species occurred only in one sampling site; no evidence was found of one species outcompeting the other.The presence of invasive and native species on the same habitat was more common.O. limosus co-occurred with A. leptodactylus on 5 sampling sites and only on one site with A. astacus, while P. leniusculus was found in one sampling site together with A astacus.On four other sites it was observed that invasive species were indeed outcompeting the native ones, since on 3 sites O. limosus replaced A. leptodactylus and on one site P. leniusculus replaced A. astacus.Astacus astacus was mainly found in upland–hilly water bodies and occurred in small, medium and
Three native crayfish species, i.e. Astacus astacus, Astacus leptodactylus and Austropotamobius torrentium, occur in Hungary. Consequently, the first objective of the present study was to investigate the present-day distribution and habitat preference of the native and invasive crayfish and crab species in Hungary according to water types and surface water velocity values. The second aim was to investigate the changes in the distribution patterns of all species over the last 20 years and to identify potential risks for native species in Hungary. The rarest crayfish is A. torrentium, which was found only at three locations, making this the most endangered crayfish in Hungary. Although A. leptodactylus used to occur especially in the main branches of the Danube and Tisza Rivers, it shifted towards their tributaries after O. limosus appeared there.
exists only on one sampling site each, it indicates a problem, because these two species have nearly the same habitat requirements. In addition, P. leniusculus grows faster, has a higher reproduction rate and displays more aggressive behaviour in competitive encounters. In all cases where both species have been observed together at the same time in the same river, only the signal crayfish survived and the noble crayfish disappeared. North American crayfishes such as O. limosus and P. leniusculus are carriers of the crayfish plague pathogen, which is lethal for indigenous crayfish. For some non-native species, all individuals of a population are infected. When the distributions of non-native and indigenous crayfish species overlap, the indigenous species become infected by this pathogen. E. sinensis has expanded its distribution throughout northern Europe since its introduction into Europe in 1912. The species was collected once in the main arm of the Danube in 2003 near Budapest and 200 km downstream in the same river in 2004. Presumably it was introduced via ballast water or through an intentional introduction, as in other European countries, but the species has not been observed in our monitoring nor in other studies, despite the fact that it is already known from the Austrian and Serbian stretches of the Danube. In 2013 and 2014, P. fallax f. virginalis was found near Lake Balaton and in waters of the West Balaton region. Another Procambarus species, i.e. P. clarkii, was found in Lake Városliget in 2015. However, none of these Procambarus species were observed in our study and we could therefore not confirm their presence. Nevertheless, an increase of P. fallax f. virginalis populations is not unlikely, due to its parthenogenetic reproduction strategy and its overwintering ability under Central European climatic conditions. The same goes for P. clarkii, which has also been reported to have the potential for asexual reproduction. The habitat of A. torrentium does not overlap with that of the invasive species, but this crayfish is the rarest species in Hungary and is consequently most at risk; if its populations come into contact with invasive species, and thereby with the crayfish plague, they will likely go extinct. This is corroborated by the fact that in south-western Romania, populations of the stone crayfish recently became endangered due to further invasions of O. limosus from both Hungary and Serbia. According to our analysis, A. leptodactylus is at risk of being replaced by O. limosus in Hungary, because of an overlap in their occurrence, their habitat requirements and their preferred surface water velocity. The risk of replacement for A. astacus is high as well, because it is threatened by P. leniusculus in calcic, upland–hilly waters of larger sizes and in waters where surface water velocity values are higher. However, smaller sized waters are not yet occupied by P. leniusculus and here A. astacus is less at risk for now. Maybe these small waters can in future serve as refuges for A.
astacus and provide a first basis for the sustainable management and conservation of noble crayfish populations. To ensure the survival of Hungarian native crayfish, it is of great importance to keep track of the ongoing changes in the distribution areas of the native as well as the non-native crayfish species. The increase of non-indigenous crayfish species (NICS) is correlated in particular with urbanization level and gross domestic product (GDP) per capita: greater urbanization results in a higher number of NICS in EU countries, and the number of NICS increases with increasing GDP per capita. Interestingly, for the latter a tipping point seems to be present, because the number of NICS appears to decrease in countries with a GDP higher than $40,000 per capita. In Hungary, too, the urbanization level is well correlated with the number of NICS. The number of NICS is likewise well correlated with capital accumulation and is still increasing. As a consequence, protection measures will be required to safeguard the survival of the native crayfish populations. These measures could, for instance, comprise the construction of crayfish barriers to frustrate ongoing invasions. However, such barriers can restrict the movement of aquatic wildlife. A solution might be to construct fish-passable barriers or to fully fence off certain waters for crayfish protection at the expense of fish. All of these measures should be part of an active crayfish management and protection plan, supported for instance by crayfish breeding centres for establishing new or strengthening weak populations, as is in place in Estonia.
Lately, however, non-indigenous crustaceans have also invaded the country, and their most recent distribution and impact on the occurrences of the native species are not clear. Despite their overlapping habitat preferences and the fact that O. limosus is a carrier of the lethal crayfish plague, both species still co-occur at a few locations in Hungary. Pacifastacus leniusculus is invading the country from the west and, although it is not yet present in large numbers, it has replaced local A. astacus populations and may further impact their distribution as it increases its range in the future. Although occurrences of the Chinese mitten crab (Eriocheir sinensis), the marbled crayfish (Procambarus fallax f. virginalis) and the red swamp crayfish (Procambarus clarkii) have been reported in the literature, these species were not encountered during the survey, indicating that their occurrence in Hungary is not yet widespread. The increasing distribution of the invaders forms a constant threat to native crayfish populations in Hungary. To ensure the survival of the native species it is important to keep track of the ongoing changes in crayfish distributions. Nevertheless, additional protection measures will be required to safeguard the survival of the native crayfish populations in Hungary.
85% at 5 minutes. We previously demonstrated the feasibility and have now tested the effectiveness of the PBCC approach using the Concord.11, The next step is to determine whether this approach results in beneficial effects on important clinical outcomes in very preterm infants. In 2019, a large multicentre randomised clinical trial was started in the Netherlands. In conclusion, stabilisation of very preterm infants using the PBCC approach results in considerably longer cord clamping times and is at least as effective as stabilisation according to the current routine DCC approach. Larger randomised clinical trials are needed to show potential beneficial clinical effects for preterm infants. AtP is a recipient of an NWO innovational research incentives scheme. RK received a grant from the Sophia Children's Hospital Foundation. This project is sponsored by the Gisela Thier Fund. RK, EB, and AtP wrote the study protocol and all authors participated in reviewing the protocol. RK, EB, TvdA, PD, GP, EL, IR, SH and AtP participated in building and designing the study. RK, EB, TvdA, PD, NvG, EH and AtP coordinated the study, trained the clinicians and collected and analysed the data. RK and EB wrote the first draft and submitted the article. All authors participated in reviewing and editing the manuscript. All authors have read and approved the final manuscript. None declared. The Concord tables used in this study were manufactured by the Department of Medical Engineering of the Leiden University Medical Centre. The company Concord Neonatal was not involved in this study and the authors do not have a financial relationship with Concord Neonatal.
Aim: To test whether stabilising very preterm infants while performing physiological-based cord clamping (PBCC) is at least as effective as the standard approach of time-based delayed cord clamping (DCC).Methods: A randomised controlled non-inferiority study was performed in two centres from May until November 2018, including preterm infants born below 32 weeks of gestational age.Infants were allocated to PBCC or standard DCC.Infants receiving PBCC were stabilised on a purpose-built resuscitation table with an intact umbilical cord.The cord was clamped when the infant had regular spontaneous breathing, heart rate ≥100 bpm and SpO2 >90% while using FiO2 <0.40.In infants receiving DCC, the cord was clamped at 30–60 seconds after birth before they were transferred to the standard resuscitation table for further treatment and stabilisation.Primary outcome was time to reach respiratory stability.Results: Thirty-seven infants (mean gestational age 29 + 0 weeks) were included.Mean cord clamping time was 5:49 ± 2:37 min in the PBCC (n = 20) and 1:02 ± 0:30 min in the DCC group (n = 17).Conclusion: Stabilisation of very preterm infants with physiological-based cord clamping is at least as effective as with standard DCC.Clinical Trial Registration: Netherlands Trial Register (NTR7194/NL7004).
and the impact calculations using the process described by Ivanov et al. The study evaluates the environmental impact of REO production at Songwe Hill from a life cycle perspective. The analysis indicates that overall Songwe Hill performs favourably compared to other REO production LCA studies. The precipitation stage makes the greatest contribution to a number of environmental impact categories, with impacts mostly attributed to sodium hydroxide consumption. Carrying out an LCA during the pre-feasibility stage of a project would allow the results generated to influence decision making in project development. The LCI data required can be generated through process simulation. On-site acid regeneration had lower life cycle environmental impacts in seven of the eight impact categories measured. When comparing the energy scenarios, the option of using the Kam'mwamba coal-fired power plant that is under development in Malawi performed poorly in all impact categories. If Songwe Hill uses this energy source, the project would have high environmental impact scores. The second scenario, purely hydroelectric, performed well in all categories. Unfortunately, this scenario is unrealistic, as the power is not consistently available during peak hours. Of the final four scenarios investigated, the sixth scenario, which combined off-peak hydroelectric power with photovoltaics and energy storage, had the lowest environmental impact in the acidification, global warming, and human toxicity categories. As the project moves through to the feasibility stages, there is increased certainty in both the geology and the project processes and infrastructure, and the LCA can be updated and used in subsequent decision making. Given the granularity of the approach used, it is possible to assess the contributions of individual processes. The study used a combination of mineral processing simulation software to generate the LCI data. The advantage of this approach is that, as a project moves through development stages and refines the process flowsheet, the simulation can quickly and efficiently integrate these changes into the LCA model. Updated mineral processing simulations can feed into the LCA model, allowing project changes to be examined in terms of environmental performance whilst maintaining a consistent system boundary and more reliable LCI data. In order to develop this into a process that is easy for mining companies to adopt and that allows comparison across projects, a harmonisation of methodologies for particular commodities is needed. In other sectors, such as product manufacturing, this is achieved through the development of product category rules (PCR), which define the calculation rules for the underlying LCA of a product or process and provide the information and format for presentation in an environmental product declaration. There is currently a lack of PCR for specific commodities such as REE, which would provide clarity for future LCA. This study presented an approach for including mineral process simulation-linked LCA during the early stages of project development, such as following a pre-feasibility study. The results indicate that the lowest global warming impact would be obtained using hydroelectric power as an energy source with the inclusion of on-site acid regeneration. However, the use of hydropower for off-peak energy combined with photovoltaic power and energy storage is the best solution in terms of global warming impact while providing reliable power on-site. The process simulation-LCA method has the advantage of being easily
updated during the life of a project as new data become available. For example, as new drill data uncover more information about the mineralogy and grade of the deposit, these data can be fed into the process simulation software, which in turn generates LCI data such as the energy and material requirements for the process flowsheet, thereby updating the LCIA results. This approach also ensures the generation of robust and reliable environmental impact results owing to the closed mass and energy balance of the LCI data and the consistent system boundary definition. This method could be developed into a standard approach for LCI generation for PCR in the future and can ultimately better inform decision making during the development of a project and help reduce its life cycle environmental impacts.
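To illustrate the calculation structure described here, the sketch below shows how process-simulation LCI flows can be combined with characterization factors to score alternative energy-supply scenarios. It is a minimal illustration only: the flow names, quantities and factors are hypothetical placeholders, not data from the Songwe Hill study or from Ivanov et al.

```python
# Illustrative sketch (not the study's actual model): combining process-simulation
# LCI flows with characterization factors to score energy-supply scenarios.
# All flow names, quantities and factors below are hypothetical placeholders.

# Life cycle inventory per kg of rare earth oxide (REO) produced,
# e.g. exported from a mineral-processing simulation.
lci_per_kg_reo = {
    "electricity_kwh": 12.0,   # process energy demand
    "naoh_kg": 0.8,            # sodium hydroxide used in precipitation
    "hcl_kg": 1.5,             # hydrochloric acid (virgin or regenerated on site)
}

# Characterization factors: impact per unit of each inventory flow
# (kg CO2-eq for global warming "GWP", kg SO2-eq for acidification "AP").
factors = {
    "grid_coal": {"electricity_kwh": {"GWP": 1.05, "AP": 0.0080}},
    "hydro_pv":  {"electricity_kwh": {"GWP": 0.05, "AP": 0.0002}},
    "common":    {"naoh_kg": {"GWP": 1.30, "AP": 0.0050},
                  "hcl_kg":  {"GWP": 0.90, "AP": 0.0040}},
}

def impact(scenario: str, category: str) -> float:
    """Sum the impact contributions of all LCI flows for one energy scenario."""
    total = 0.0
    for flow, amount in lci_per_kg_reo.items():
        cf = factors[scenario].get(flow) or factors["common"].get(flow, {})
        total += amount * cf.get(category, 0.0)
    return total

for scenario in ("grid_coal", "hydro_pv"):
    print(scenario, "GWP:", round(impact(scenario, "GWP"), 2), "kg CO2-eq/kg REO")
```

In practice the inventory would be exported per functional unit from the flowsheet simulation and the characterization factors taken from an established LCIA method, which is what keeps the system boundary consistent as the flowsheet evolves.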
This study demonstrates that a process simulation-based life cycle assessment (LCA) carried out at the early stages of a REE project, such as at the pre-feasibility stage, can inform subsequent decision making during the development of the project and help reduce its environmental impacts. It is important that the environmental consequences of different production options are examined in a life cycle context so that the environmental footprint of these raw materials is kept as low as possible. Here, we present a cradle-to-gate and process simulation-based life cycle assessment (LCA) for a potential new supply of REE at Songwe Hill in Malawi. We examine different project options, including energy selection and a comparison of on-site acid regeneration versus virgin acid consumption, which were being considered for the project. A scenario that combines on-site acid regeneration with off-peak hydroelectric and photovoltaic energy gives the lowest global warming potential and performs well in other impact categories. This approach can equally well be applied to all other types of ore deposits and should be considered as a routine addition to all pre-feasibility studies.
Over the past decades, exposure to health risk behaviors has become one of the most widely investigated subjects in studies with young populations.1,2,The interest in investigations focusing on this subject can be explained, at least in part, by the fact that such behaviors can be established and incorporated into the lifestyle at an early age,3,4 and due to their connection with biological risk factors5 and the presence of established metabolic or cardiovascular disease.6,The prevalence of co-occurrence of health risk behaviors in adolescents has been described in several studies.7–17,However, it was observed that the studies developed in Brazil, except for the survey performed by Farias Júnior et al.,15 relied on very specific samples: laboratory school students17 and day-shift students from public schools in a city from southern Brazil.16,Therefore, the results of these studies cannot be extrapolated to other regions of the country due to socioeconomic and cultural contrasts, which are known to differentiate the exposure to health risk behaviors, as reported by Nahas et al.18,Epidemiological surveys on the co-occurrence of health risk behaviors in adolescents and their associated factors can help to identify risk groups and to monitor the health levels of this population, which can support the development of public policies to promote health, using earlier intervention strategies to prevent these habits.Thus, the aim of this study was to analyze the prevalence and factors associated with co-occurrence of health risk behaviors in adolescents.This is a secondary analysis of data from a cross-sectional epidemiological survey, school-based and statewide, called “Lifestyle and health risk behaviors in adolescents: from prevalence study to intervention”.The research protocol was approved by the Institutional Review Board of Hospital Agamenon Magalhães, in compliance with the standards established in Resolutions 196 and 251 by the National Health Council.The target population, estimated at 352,829 individuals, according to data from the Education and Culture Secretariat of the State of Pernambuco, consisted of high-school students enrolled in public schools, aged 14–19 years.The following parameters were used to calculate sample size: 95% confidence interval; sampling error of 3% points; prevalence estimated at 50%, and the effect of sample design, established at four times the minimum sample size.Based on these parameters, the calculated sample size was 4217 students.Considering the sampling process, we tried to ensure that the selected students represented the target population regarding the geographic regions, school size and shift.The regional distribution was analyzed based on the number of students enrolled in each of the 17 GEREs.Schools were classified according to the number of students enrolled in high school, according to the following criteria: small – less than 200 students; medium – 200–499 students, and large – 500 students or more.Students enrolled in the morning and afternoon periods were grouped into a single category.All students in the selected classes were invited to participate.We used cluster sampling in two stages, using the school and class as the primary and secondary sampling units, respectively.In the first stage, we performed the random selection of the schools, aiming to include at least one school of each size by GERE.In the second stage, 203 classes were randomly selected among those existing in the schools selected in the first stage.Data collection was 
performed using an adapted version of the Global School-Based Student Health Survey questionnaire. This tool had both its face and content validity evaluated by experts, and had the validity and reproducibility of its indicators tested in a pilot study. Consistency indicators of the test–retest measures ranged from moderate to high19–21 for most items. The test–retest reproducibility coefficients of the measures used in this study were: 0.86 for physical activity; 0.66 for the consumption of fruits; 0.77 for the consumption of vegetables; 0.76 for alcohol consumption; 0.62 for tobacco use, and 0.74 for sedentary behavior. Data collection was carried out from April to October 2006. The questionnaires were applied in the classroom. The students were assisted by two previously trained applicators, who clarified questions and helped with filling out the data. All students were informed that their participation was voluntary and that the questionnaires did not contain any personal identification. Students were also informed that they could leave the study at any stage of data collection. A passive informed consent form was used to obtain the permission of parents for students younger than 18 years to participate in the study. Participants aged 18 or older signed the consent form themselves, indicating their agreement to participate in the study. The dependent variable was obtained from the sum of five risk behaviors: low level of physical activity; sedentary behavior; occasional consumption of fruits and vegetables; alcohol consumption, and smoking. These factors were chosen because they are modifiable lifestyle factors that appear to be more strongly associated with non-communicable chronic diseases, and they represent the highest global burden of disease and mortality worldwide.22, Sedentary behavior was included because it is treated as a behavior distinct from low levels of physical activity, it has a high prevalence in the population, and it has an important impact on adolescent health.23, Information regarding the description of these variables can be found in previous studies.19–21, The obtained responses resulted in an outcome with zero to five identified risk behaviors. Subsequently, for analysis purposes, the occurrence of risk behaviors was divided into four categories. The independent variables were: gender; age; school shift; school size; maternal education; occupational status; ethnicity; geographic region and place of residence. The data tabulation procedure was carried out in a database created with the EpiData Entry software. To perform the analyses, Stata software was used. In the bivariate analysis, the chi-square test was used for heterogeneity
Data were obtained using a questionnaire.The co-occurrence of health risk behaviors was established based on the sum of five behavioral risk factors (low physical activity, sedentary behavior, low consumption of fruits/vegetables, alcohol consumption and tobacco use).The independent variables were gender, age group, time of day attending school, school size, maternal education, occupational status, skin color, geographic region and place of residence.
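The sample-size calculation summarised earlier (95% confidence, sampling error of 3 percentage points, assumed prevalence of 50%, and a design effect of 4) follows the standard formula for estimating a proportion; a minimal sketch is given below. The exact figure reported in the study (4217 students) may additionally reflect rounding conventions or a finite population correction that are not detailed in the text.

```python
# Sketch of the sample-size calculation described above, using the standard
# formula for estimating a proportion; the study's reported figure may differ
# slightly depending on rounding and finite-population conventions.
import math

z = 1.96        # 95% confidence interval
p = 0.50        # assumed prevalence
e = 0.03        # sampling error of 3 percentage points
deff = 4        # design effect for the two-stage cluster sample
N = 352_829     # target population of high-school students

n_simple = (z ** 2) * p * (1 - p) / e ** 2        # simple random sample
n_fpc = n_simple / (1 + (n_simple - 1) / N)       # finite population correction
n_cluster = math.ceil(n_fpc * deff)               # inflate for the cluster design

print(f"simple random sample: {n_simple:.0f}")
print(f"with cluster design effect: {n_cluster}")
```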
self-esteem, autonomy and personal responsibility, characteristics that may favor the adoption of healthier behaviors.Adolescents who live in the semi-arid region of Pernambuco showed a 39% increase in the chance of simultaneous exposure to a higher number of health risk behaviors compared to their peers living in the metropolitan area.Comparative studies with analysis of simultaneous exposure to lifestyle habits are scarce, making the comparisons impossible.However, Matsudo et al.28 carried out a study in the state of São Paulo, observing that the individuals who lived on the coast were more active than those living in the countryside.This may be related to the low supply of leisure and physical facilities for physical activities in the countryside.Moreover, it may be related to the availability, accessibility and quality of food preservation in this region, where there is an acknowledged shortage of water resources, indispensable for both the production and the processing of fresh food.On the other hand, adolescents who live in rural areas had a 22% decrease in the chance of simultaneous exposure to a higher number of health risk behaviors when compared to those living in urban areas.This can be explained by the specific characteristics of the types of activities carried out in rural areas, which require greater energy expenditure to be performed,29 in addition to greater access to foods such as cereals and derivatives and tubers, which are essentially products of family agriculture, as well as the lower access to ready-made meals and industrialized mixes.30,The lack of similar studies makes it difficult to compare the findings of the present study.What was found in the literature was limited to studies that evaluated the association of these factors with isolated exposure to one or another risky behavior.Similar studies available13–17 used very different methodological procedures, particularly regarding the type, quantity and definition of characterizing risk variables.The generalization of the results of this study must be made with caution, as only adolescents attending public schools were included.One can assume that the results are different in samples of adolescents attending private schools and among those who are not enrolled in the formal educational system.On the other hand, the decision to not include private schools in the sampling planning was due to the fact that more than 80% of adolescents from Pernambuco were enrolled in public schools.It is noteworthy that the prevalence shown in this article discloses a scenario observed some time ago and, therefore, the interpretation of these parameters should be made carefully, as social and demographic changes that have occurred in the Brazilian northeast region during this period may have affected these indicators.On the other hand, it is not plausible to assume that the associations that were identified and reported in this study would be different due to possible changes in the prevalence of some factor.Despite the good reproducibility levels of the tool, one cannot rule out the possibility of information bias, as adolescents tend to overestimate or, at other times, underestimate the exposure to risk behaviors.However, the findings of this survey add important evidence to the available body of knowledge on the prevalence and factors associated with co-occurrence of health risk behaviors in adolescents.Additionally, the study was performed with a relatively large sample, representative of high-school students from public schools 
in the state of Pernambuco.It is believed that the evidence shown in this study may help identify more vulnerable subgroups, thus contributing to decision-making and appropriate intervention strategy planning.Moreover, it can lead to the development of other investigations.Considering these findings, it can be concluded that there is a large portion of adolescents exposed to simultaneous health risk behaviors.It was also verified that older adolescents, with mothers of intermediate educational levels and living in the semi-arid region had higher chance of simultaneous exposure to a higher number of health risk behaviors, thus configuring higher-risk subgroups, whereas adolescents who worked and those living in rural areas were less likely to have simultaneous exposure to a higher number of health risk behaviors.Study supported with financial assistance from the Conselho Nacional de Desenvolvimento Científico e Tecnológico, Coordenação de Aperfeiçoamento de Pessoal de Nível Superior and Fundação de Amparo à Ciência e Tecnologia do Estado de Pernambuco by granting of scholarships.The authors declare no conflicts of interest.
Objective To analyze the prevalence and factors associated with the co-occurrence of health risk behaviors in adolescents. Methods A cross-sectional study was performed with a sample of high school students from state public schools in Pernambuco, Brazil (n=4207, 14-19 years old). Results Approximately 10% of adolescents were not exposed to health risk behaviors, while 58.5% reported being exposed to at least two health risk behaviors simultaneously. There was a higher likelihood of co-occurrence of health risk behaviors among adolescents in the older age group, those with intermediate maternal education (9-11 years of schooling), and those who reported living in the driest (semi-arid) region of the state of Pernambuco. Adolescents who reported having a job and those living in rural areas had a lower likelihood of co-occurrence of risk behaviors. Conclusions The findings suggest a high prevalence of co-occurrence of health risk behaviors in this group of adolescents, with a higher chance in five subgroups (older age, intermediate maternal education, those who reported not working, those living in urban areas and those living in the driest region of the state).
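A minimal sketch of how the co-occurrence outcome described above could be constructed is shown below: five binary risk-behaviour indicators are summed and the resulting count is grouped into analysis categories. The column names and the grouping cut-points are illustrative assumptions, not the study's actual coding.

```python
# Illustrative construction of the co-occurrence outcome: the sum of five binary
# risk-behaviour indicators, grouped into categories for analysis.
# Column names and the grouping cut-points are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "low_physical_activity": [1, 0, 1],
    "sedentary_behaviour":   [1, 1, 0],
    "low_fruit_vegetables":  [0, 1, 1],
    "alcohol_consumption":   [0, 0, 1],
    "tobacco_use":           [0, 0, 1],
})

behaviours = list(df.columns)
df["n_risk_behaviours"] = df[behaviours].sum(axis=1)   # ranges from 0 to 5

# Group the count into four analysis categories (e.g. none, one, two, three or more).
df["co_occurrence"] = pd.cut(
    df["n_risk_behaviours"], bins=[-1, 0, 1, 2, 5],
    labels=["none", "one", "two", "three or more"],
)
print(df[["n_risk_behaviours", "co_occurrence"]])
```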
Meta-analyses of prospective cohort studies have shown that psychosocial stress might increase the risk of cardiovascular disease and diabetes.1–4,The underlying pathophysiological mechanisms include disturbed sympathetic-parasympathetic balance and dysregulation of the hypothalamic–pituitary–adrenal axis, which can accelerate the development of metabolic syndrome and lead to left-ventricular dysfunction, dysrhythmia, and proinflammatory and procoagulant responses.5,6,Stress has also been linked to worsening health-related lifestyle factors, such as physical inactivity and increased alcohol consumption, and, in people with existing illness, suboptimal treatment adherence.6,Although prevention guidelines for cardiovascular disease do not prioritise the management of stress in the general population,7–9 some guidelines recommend stress management for individuals with established cardiovascular disease or major cardiovascular risk factors, such as diabetes.7,The rationale for these recommendations is that people with cardiometabolic disease have many more adverse health events than do the general population—therefore, assuming that the relative risk associated with stress is the same for all people exposed, a greater number of adverse events will be prevented by targeting those already at high risk.However, the evidence base for this recommendation is weak, relying on studies of disease incidence1–4,10,11 with very few large-scale studies of mortality11–15 and stress in patients with cardiometabolic disease.13–17,Importantly, it is unknown whether the excess risk associated with stress at work and private life can be mitigated by controlling conventional risk factors and improving lifestyle.Evidence before this study,Work stressors, such as job strain and effort–reward imbalance at work, are common sources of stress in adulthood.Work stressors have been examined as risk factors for cardiometabolic disease, such as coronary heart disease, stroke, and diabetes, but few studies are available on their role as prognostic factors for these diseases.We searched PubMed and Embase databases from inception up to Feb 1, 2018 using the search terms: “work stress”, “job stress”, “job strain”, “effort–reward imbalance”, and “mortality”, without language restrictions.We identified no large-scale studies comparing the association between work stressors and mortality in people with and without cardiometabolic disease.Added value of this study,We pooled individual-participant data from seven European cohort studies, including a total of 102 633 men and women.Job strain was associated with substantial relative and absolute increases in mortality risk in men with cardiometabolic disease.The mortality difference between groups with and without job strain was clinically significant and independent of socioeconomic status and several conventional and lifestyle risk factors, including hypertension and dyslipidaemia and their pharmacological treatments, obesity, smoking, physical inactivity, and high alcohol consumption.In absolute terms, the difference in age-standardised mortality was greater for current smoking versus not smoking than for with versus without job strain, but, job strain was associated with a greater mortality difference than were high cholesterol, obesity, high alcohol consumption, and physical inactivity.In women and participants without cardiometabolic disease, the work stress–mortality associations were small or absent, both in relative and absolute terms.Implications of all the available 
evidence,The finding that job strain increases mortality risk, even in subgroups of men with cardiometabolic disease but a favourable cardiometabolic risk profile, suggests that standard care targeting conventional risk factors is unlikely to mitigate the mortality risk associated with job strain.Subsequent research should employ intervention designs to establish whether systematic screening and management of work stressors such as job strain would contribute to improved health outcomes in men with coronary heart disease, stroke, or diabetes.The Individual-Participant-Data Meta-analysis in Working Populations consortium is the largest multicohort research collaboration on work stress and clinically verified cardiovascular disease and diabetes.1,3,10,In this study, we sought to clarify the status of stress as a risk factor in cardiometabolic disease by investigating the associations of two common work stressors, job strain and effort–reward imbalance, with mortality in individuals with pre-existing diabetes or coronary heart disease or a history of stroke.For comparison, we examined the stress–mortality association in individuals without these diseases.To investigate whether management of conventional and lifestyle risk factors is likely to eliminate any excess risk associated with work stress, we also assessed the stress–mortality relation among patients with cardiometabolic disease who otherwise had low risk factor levels.If stress was associated with excess mortality, even in subgroups of low-risk patients, then better stress management might be an improvement on standard care.105 284 people were recruited into the seven studies between 1985 and 2002.Of the eligible population, 102 663 participants had data on prevalent cardiometabolic disease, at least one of the work stressors, and mortality, and were therefore included in this study.Characteristics were similar between the eligible and included populations in terms of the proportion of men, mean age, and proportion of participants of low-socioeconomic status.Mean follow-up for mortality was 13·9 years.During 1 423 753 person-years at risk, we identified 3841 deaths, of which 397 were among the 3441 individuals with cardiometabolic disease at baseline.Of the 1975 men with cardiometabolic disease at baseline, 396 had a history of coronary heart disease, 214 had stroke, 1425 had diabetes, 54 had two of these disorders, and three had all three.Of the 1466 women with cardiometabolic disease at baseline, 73 had a history of coronary heart disease, 153 had stroke, 1266 had diabetes, 18 had two of these disorders, and four had all three.In men without cardiometabolic disease at baseline, effort–reward imbalance was associated with an increased risk of mortality, and this association remained after multivariable adjustment and correction for multiple testing.There was no significant heterogeneity in the study-specific estimates or interaction with age.In absolute terms, the difference in mortality between those with and without effort–reward imbalance was smaller than those related to conventional lifestyle factors,
Background: Although some cardiovascular disease prevention guidelines suggest a need to manage work stress in patients with established cardiometabolic disease, the evidence base for this recommendation is weak.We sought to clarify the status of stress as a risk factor in cardiometabolic disease by investigating the associations between work stress and mortality in men and women with and without pre-existing cardiometabolic disease.We extracted individual-level data on prevalent cardiometabolic diseases (coronary heart disease, stroke, or diabetes [without differentiation by diabetes type]) at baseline.Results: We identified 102 633 individuals with 1 423 753 person-years at risk (mean follow-up 13.9 years [SD 3.9]), of whom 3441 had prevalent cardiometabolic disease at baseline and 3841 died during follow-up.Standard care targeting conventional risk factors is therefore unlikely to mitigate the mortality risk associated with job strain in this population.
such as smoking, physical inactivity, and obesity.In women without cardiometabolic disease at baseline, effort–reward imbalance was not associated with mortality.Job strain alone, or in combination with effort–reward imbalance, was not associated with mortality in men or women without cardiometabolic disease.In men with cardiometabolic disease at baseline, job strain was associated with an increased risk of mortality that remained after multivariable adjustment and correction for multiple testing.The age-adjusted mortality rate per 10 000 person-years was 149·8 in men with job strain and 97·7 in those without.The excess mortality risk in men with cardiometabolic disease who reported job strain was apparent across the entire follow-up period, rather than becoming apparent only in the early or late phases of the follow-up, and was robust to adjustment for lifestyle risk factors.There was no significant heterogeneity in study-specific estimates or difference in the association between age groups.After further adjustment for blood pressure and total cholesterol in the subgroup of participants with these data available, the HR for job strain compared with no job strain was 1·84.In analyses of cause-specific mortality, job strain had a minimally adjusted HR of 1·71 for risk of mortality from cardiovascular disease, but no robust associations were observed with cancer mortality or non-cardiovascular, non-cancer mortality.Effort–reward imbalance seemed to be associated with lower risk of death in men with previous cardiometabolic disease, but this association was lost after correction for multiple testing.In women with cardiometabolic disease at baseline, the age-adjusted death rates per 10 000 were 64·0 for job strain and 53·2 for no job strain and mortality was not significantly associated with job strain or effort–reward imbalance.Job strain in combination with effort–reward imbalance was not associated with mortality in men or women with cardiometabolic disease.To examine the relative importance of job strain as a risk factor for mortality in men with cardiometabolic disease, we compared death rates associated with job strain with those associated with established risk factors.The mortality difference between men with and without job strain was almost the same as that for current smokers versus never or former smokers, and higher than those for the presence of hypertension, high total cholesterol, obesity, physical inactivity, and high alcohol consumption.Furthermore, job strain was associated with a two to six times higher risk of mortality in subgroups of men with cardiometabolic disease but favourable risk factor profiles, including participants who were not obese, physically inactive, smokers, or heavy drinkers and normotensive participants, those with no dyslipidaemia, and those who adhered to antihypertensive, lipid-lowering, or anticoagulation treatments, according to prescription records.Additional adjustments did not alter these findings.Evidence from our pooling of individual-participant data from seven European cohort studies suggests that job strain is a risk factor for mortality in men with cardiometabolic disease, as defined by the presence of coronary heart disease, stroke, or diabetes.The mortality difference between groups with and without job strain was clinically significant and independent of socioeconomic status, the conventional and lifestyle risk factors measured, and pharmacotherapy.This finding is unlikely to be attributable to type I error due to multiple testing 
because it was robust to correction for multiple comparisons.In absolute terms, the difference in age-standardised mortality among men with cardiometabolic diseases was greater between current and non-smokers than between men with and without job strain.However, high cholesterol, obesity, high alcohol consumption, and physical inactivity were associated with smaller mortality differences than job strain.Our findings agree with those of a study of patients with acute myocardial infarction from the USA, in which individuals who reported life stress had higher mortality than those free of life stress14 and the few previous, small-scale prognostic studies on patients with cardiovascular disease,23–26 which in combination suggest a 1·6 times increased risk of recurrent events associated with job strain.17,The observed associations are also biologically plausible.The stress hormone cortisol stimulates glucose production in the liver and antagonises the action of insulin in peripheral tissues—both processes have the potential to contribute to worse prognoses in people with diabetes.5,6,Stress can also have adverse effects on cardiometabolic systems by inducing transient endothelial dysfunction, myocardial ischaemia, and cardiac arrhythmia and thus increasing the risk of fatal and non-fatal cardiac events.6,To our knowledge, this is the first large-scale study to examine the work stress–mortality association stratified by cardiometabolic risk profile.Our data showed that job strain substantially increased mortality risk even in subgroups of men with prevalent cardiometabolic disease but a favourable cardiometabolic risk profile, suggesting that standard care targeting conventional and lifestyle risk factors does not necessarily mitigate the excess mortality risk associated with job strain.The European prevention guidelines7 and the American Heart Association policy statements8 highlight psychosocial stress as a potential barrier to healthy lifestyles and optimal medication adherence, and recommend management of stress in individuals with high cardiovascular risk or established cardiovascular disease.Our findings are consistent with these recommendations, but also suggest that harmful effects of stress in men were not attributable to the lifestyle risk factors measured or poor adherence to pharmacotherapy; excess mortality risk was observed even among patients successfully treated for cardiometabolic disease who were normotensive, non-obese, physically active, had normal blood cholesterol, and were not smokers or heavy drinkers.There are various ways of expanding standard care to address work stress in patients, including systematic screening for stress and, if needed, interventions such as consultation, rehabilitation, job redesign, reductions in working hours, and retirement on health grounds.6,7, "In a Cochrane review of 35 randomised controlled trials including a total of 10 703 patients with coronary
This mortality difference for job strain was almost as great as that for current smoking versus former smoking (78.1 per 10 000 person-years) and greater than those due to hypertension, high total cholesterol concentration, obesity, physical inactivity, and high alcohol consumption relative to the corresponding lower risk groups (mortality difference 5.9–44.0 per 10 000 person-years).Excess mortality associated with job strain was also noted in men with cardiometabolic disease who had achieved treatment targets, including groups with a healthy lifestyle (HR 2.01, 95% CI 1.18–3.43) and those with normal blood pressure and no dyslipidaemia (6.17, 1.74–21.9).In all women and in men without cardiometabolic disease, relative risk estimates for the work stress–mortality association were not significant, apart from effort–reward imbalance in men without cardiometabolic disease (mortality difference 6.6 per 10 000 person-years; multivariable-adjusted HR 1.22, 1.06–1.41).
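The rates and differences quoted above are age-standardised mortality rates expressed per 10 000 person-years, with the mortality difference taken as the absolute difference between exposure groups; the short sketch below reproduces the arithmetic using the reported rates together with hypothetical raw counts.

```python
# How the mortality figures quoted above are expressed: deaths per 10 000
# person-years, with the absolute difference taken between exposure groups.
# The deaths/person-years inputs below are illustrative, not the study's raw data.
def rate_per_10k(deaths: int, person_years: float) -> float:
    return deaths / person_years * 10_000

rate_strain = 149.8      # reported age-standardised rate, men with job strain
rate_no_strain = 97.7    # reported rate, men without job strain
print("mortality difference:", round(rate_strain - rate_no_strain, 1))  # 52.1

# Example of deriving such a rate from raw counts (hypothetical numbers):
print(round(rate_per_10k(deaths=120, person_years=8_010.7), 1))          # 149.8
```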
heart disease who had at least 6 months' follow-up, psychological interventions that alleviated stress and other psychological symptoms were successful in reducing cardiac mortality for people with coronary heart disease.27",However, it is unclear whether those interventions would benefit men with job strain and cardiometabolic disease.For other groups stress-related differences in mortality were small or absent, both in relative and absolute terms.In working-aged women with cardiometabolic disease, for example, job strain was not associated with a significant increase in mortality risk and the absolute mortality difference between those with and without job strain was only 10·8 per 10 000 person-years.Similarly, effort–reward imbalance was not associated with increased mortality in men or women with cardiometabolic disease, suggesting that job strain and effort–reward imbalance are of different prognostic value."Job strain encompasses only external sources of stress, whereas effort–reward imbalance also involves the individual's own behaviours.People with more severe cardiometabolic disease tend to shorten their working hours as a consequence of their condition, thus potentially reducing any effort–reward imbalance through reduced effort.10,28,This change could mitigate the link between effort–reward imbalance and mortality.By contrast, external characteristics of work that relate to job strain remain unchanged after the onset of disease.Finally, as expected, in healthy people work stress did not substantially increase mortality risk, although in men free of cardiometabolic disease, we observed a moderate association between effort–reward imbalance and risk of death.Our study benefits from a large sample size, predefined exposure assessment, coverage of several European countries, and a mortality outcome assessed via record linkage with very little loss to follow-up.The limitations of our study include the use of a single measurement of work stressors and risk factors, which does not include any measure of chronicity or change over time.There is also the possibility that prevalent cardiometabolic disease was underestimated in those studies with no measures of undiagnosed diabetes and cardiovascular disease.These drawbacks could contribute to an underestimation or overestimation of associations with mortality.We adjusted the associations for several conventional and lifestyle risk factors, but data for blood pressure and blood cholesterol concentration were not available in all the studies.This limitation could lead to overestimation of the status of job strain as an independent predictor of mortality, although there was no evidence to support this possibility in supplementary analyses of the three cohort studies with relevant data.We did not have detailed data on the duration or severity of the cardiometabolic diseases.Several factors that are more common in individuals with stress that can precipitate a fatal cerebrovascular or cardiovascular event, or otherwise increase risk of premature death, were not covered by our baseline measurement.These include, for example, stress-induced ischaemia, cardiac arrhythmia, low-grade systemic inflammation, increased blood viscosity, platelet activation and increases in the levels of coagulation and fibrinolytic factors, short and long sleep durations and sleep disorders, and reduced self-care.6,29,30,Further research is needed to establish the role of such factors in the excess mortality risk seen in men with job strain and cardiometabolic disease 
and to examine mechanisms underlying the observed sex differences in the effects of job strain.In conclusion, the results of this large pan-European study suggest that in men with cardiometabolic disease, the contribution of job strain to risk of death is clinically significant and independent of conventional risk factors and their treatment, as well as the lifestyle factors measured.Subsequent research should employ intervention designs to establish whether systematic screening and management of work stressors, such as job strain, would contribute to improved health outcomes in men with prevalent coronary heart disease, stroke, or diabetes.Established in 2008, the objective of the IPD-Work Consortium1,10 is to provide a large-scale harmonised database for the longitudinal estimation of associations between predefined psychosocial working conditions and chronic disease outcomes.The participating studies comply with the Declaration of Helsinki and were approved by local ethics review boards.Informed consent was obtained from all participants.Of the 12 original studies in the IPD-Work Consortium,1 seven independent cohort studies, initiated between 1985 and 2002 in Finland, France, Sweden and the UK, had data relevant to the present research.From each cohort study, eligible participants were those who were employed at the time of the baseline assessment, had data for age, sex, job strain, effort–reward imbalance at work, and prevalent cardiovascular disease and diabetes, and were being followed up for mortality.Data were anonymised and available at the individual level.Details of the studies included in the present multicohort analysis are summarised in the appendix."Baseline characteristics recorded were age, sex, and harmonised measures of smoking, alcohol consumption, leisure-time physical activity, BMI, and socioeconomic status.1",In three studies, assessments of systolic and diastolic blood pressure and total cholesterol concentration were also available.1,In four studies, it was possible to assess prescriptions in participants with diabetes, coronary heart disease, or stroke by linkage to prescription registers during the baseline year.Prescriptions for antidiabetes, antihypertensive, lipid-lowering, and anticoagulation medications were considered to indicate adherence.Analyses were based on two indicators of work stress: job demand–control and effort–reward imbalance at work.Reports from the IPD-Work consortium are based on predefined, harmonised, and validated definitions of work stress.The psychometric properties of these data were published before the extraction of outcome data.18,19,Job strain, referring to a combination of high demands and low control at work, was measured with sets of questions from the validated Job Content Questionnaire and Demand-Control Questionnaire, which were included in the baseline self-report questionnaire of all of seven studies.18,Using both questionnaires, we defined high job demands as having a job-demand score that was greater than
Methods: In this multicohort study, we used data from seven cohort studies in the IPD-Work consortium, initiated between 1985 and 2002 in Finland, France, Sweden, and the UK, to examine the association between work stress and mortality.Work stress was denoted as job strain or effort–reward imbalance at work.Work stressors, socioeconomic status, and conventional and lifestyle risk factors (systolic and diastolic blood pressure, total cholesterol, smoking status, BMI, physical activity, and alcohol consumption) were also assessed at baseline.In men with cardiometabolic disease, age-standardised mortality rates were substantially higher in people with job strain (149.8 per 10 000 person-years) than in those without (97.7 per 10 000 person-years; mortality difference 52.1 per 10 000 person-years; multivariable-adjusted hazard ratio [HR] 1.68, 95% CI 1.19–2.35).Interpretation: In men with cardiometabolic disease, the contribution of job strain to risk of death was clinically significant and independent of conventional risk factors and their treatment, and measured lifestyle factors.Funding: NordForsk, UK Medical Research Council, and Academy of Finland.
the study-specific median score; similarly, we defined low job control as having a job control score that was lower than the study-specific median score.The Pearson correlations between the harmonised scales used in this study and complete versions of the Job Content Questionnaire and Demand Control Questionnaire all had r greater than 0·9, apart from one study in which r was 0·8.18,In the present analyses, the exposure was defined as job strain versus no job strain according to the job strain model.1,The Effort–Reward Imbalance at Work questionnaire at baseline included items on work demands and efforts and monetary and non-monetary rewards at work.Different questionnaire versions were harmonised and validated across the constituent studies before the mortality analyses.19,Pearson correlation coefficients between the harmonised scales used in this study and complete versions of the Effort–Reward Imbalance questionnaire were high: r was greater than 0·9 for the effort scales and greater than 0·8 for the reward scales.19,For each participant, mean response scores were calculated separately for the effort and reward items.We constructed a ratio of the two scores to quantify the degree of mismatch between effort and rewards.The effort–reward ratio was dichotomised at a cutoff point of 1 with a ratio greater than 1 indicating effort–reward imbalance and a ratio of 1 or lower indicating no effort–reward imbalance at work.19,To examine the combined effects of job strain and effort–reward imbalance, we constructed a three-level exposure variable, where 0 represented no job strain or effort–reward imbalance, 1 represented either job strain or effort–reward imbalance, and 2 represented both job strain and effort–reward imbalance.2,More details of the work stress measurements are provided in the appendix.Baseline cardiometabolic diseases included common causes of death: coronary heart disease, stroke, and diabetes.Coronary heart disease was ascertained from national hospital admission records and discharge registries and denoted with version 10 of the International Classification of Diseases,1 or clinical examination with the MONICA definition.20,Agreement between the national hospital admission records and clinical examinations for coronary heart disease has been shown to be high.21,We identified history of stroke using self-reported doctor-diagnosed events, event tracing, and linkage to national hospital admission records.10,Prevalent diabetes was defined with information from any of the following data sources: hospital admission records with ICD-10 diagnoses, antidiabetes drug reimbursements,3 or 2 h oral glucose tolerance test complemented by self-report of diabetes diagnosis and medication.22,Mortality data, including date and cause of death, were obtained from national death registries.In each study, participants were linked to mortality records using their unique identification numbers.Because these records do not include date of death for people who emigrate and die abroad, such participants were flagged as emigrants and censored at the date of emigration.Means and SDs were calculated according to cardiometabolic disease status at baseline.Each participant was followed up from the date of the assessment of their work stressors and prevalent cardiometabolic disease to the earliest event out of death, loss to follow-up, or end of follow-up.We computed time to death to obtain age-adjusted incidence rates per 10 000 person-years in men and women in the pooled dataset.We used Cox proportional 
hazards regression to study the associations of work stressors with mortality in men and women with and without cardiometabolic disease.Bonferroni correction was used to compensate for multiple testing.Minimally adjusted models included age and study as covariates.Multivariable models were also adjusted for socioeconomic status, BMI category, smoking status, alcohol consumption, physical activity and, in a subgroup analysis of cohort studies with relevant data, blood pressure and total cholesterol concentration.Interactions with age were tested by grouping participants by age.Heterogeneity in study-specific estimates was examined by repeating the main analyses with a two-step procedure, with separate analyses in each cohort study and then pooling of the study-specific hazard ratios by use of random-effects meta-analysis.Robustness of the association between work stressors and mortality in subgroups was tested in analyses stratified by number of lifestyle risk factors.To examine whether any effect of job strain is present in individuals with cardiometabolic disease but an otherwise low-risk profile, we assessed the association between work stressors and mortality in subgroups of participants who had met treatment targets—ie, they had adhered to pharmacotherapy, had normal blood pressure, and normal fasting total cholesterol concentration.To minimise residual confounding, systolic blood pressure and total cholesterol concentration, treated as continuous variables, were added to the model as covariates.In further analyses, we examined the association of exposure to neither, either, or both of the work stressors with mortality and the associations of obesity, current smoking, high alcohol consumption, and physical inactivity with mortality.All analyses were done with SAS statistical software version 9.4.Statistical significance was inferred at a two-sided p value less than 0·05.The funders of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.MKi and JP had full access to all the data in the study and MKi and JD had final responsibility for the decision to submit for publication.
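As a rough illustration of the exposure definitions and survival model described above, the sketch below uses pandas and the lifelines package; the consortium's analyses were performed in SAS 9.4, so this is not the study code, and all column names and the input file are hypothetical.

```python
# Minimal sketch of the exposure definitions and Cox model described above, using
# pandas and lifelines rather than the SAS 9.4 workflow actually used by the
# consortium. All column names and data are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical harmonised individual-level data

# Job strain: high demands AND low control, split at the study-specific medians.
med = df.groupby("study")[["job_demands", "job_control"]].transform("median")
df["job_strain"] = ((df["job_demands"] > med["job_demands"]) &
                    (df["job_control"] < med["job_control"])).astype(int)

# Effort-reward imbalance: ratio of mean effort to mean reward scores above 1.
df["eri"] = (df["effort_score"] / df["reward_score"] > 1).astype(int)

# Minimally adjusted Cox model in men with cardiometabolic disease at baseline
# (study and further covariates would be added in the full multivariable model).
sub = df[(df["sex"] == "male") & (df["cardiometabolic_disease"] == 1)]
cph = CoxPHFitter()
cph.fit(sub[["followup_years", "died", "job_strain", "age"]],
        duration_col="followup_years", event_col="died")
cph.print_summary()   # hazard ratios with 95% confidence intervals
```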
Mortality data, including date and cause of death, were obtained from national death registries.We used Cox proportional hazards regression to study the associations of work stressors with mortality in men and women with and without cardiometabolic disease.
Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints.Here we propose Deep Random Splines, a flexible class of random functions obtained by transforming Gaussian noise through a deep neural network whose output are the parameters of a spline.Unlike Gaussian processes, Deep Random Splines allow us to readily enforce shape constraints while inheriting the richness and tractability of deep generative models.We also present an observational model for point process data which uses Deep Random Splines to model the intensity function of each point process and apply it to neuroscience data to obtain a low-dimensional representation of spiking activity.Inference is performed via a variational autoencoder that uses a novel recurrent encoder architecture that can handle multiple point processes as input.
We combine splines with neural networks to obtain a novel distribution over functions and use it to model intensity functions of point processes.
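As a conceptual illustration of the idea summarised above (not the authors' implementation), the sketch below pushes Gaussian noise through a small random network whose outputs parameterise a non-negative piecewise-linear spline, and then samples a point process from that intensity by thinning. The network sizes, the linear-spline choice and all numerical values are arbitrary assumptions.

```python
# Conceptual sketch of the Deep Random Splines idea: Gaussian noise is transformed
# by a neural network whose outputs parameterise a spline, with a shape constraint
# (non-negativity, required of an intensity function) enforced on the parameters.
import numpy as np

rng = np.random.default_rng(0)
T, n_knots, latent_dim, hidden = 10.0, 8, 2, 32
knots = np.linspace(0.0, T, n_knots)

# Random MLP weights: latent z -> knot heights (softplus keeps the spline >= 0).
W1 = rng.normal(scale=0.5, size=(latent_dim, hidden))
W2 = rng.normal(scale=0.5, size=(hidden, n_knots))

def sample_intensity():
    """Draw one random intensity function lambda(t) on [0, T]."""
    z = rng.normal(size=latent_dim)                      # Gaussian noise
    heights = np.logaddexp(0.0, np.tanh(z @ W1) @ W2)    # softplus -> non-negative
    return lambda t: np.interp(t, knots, heights)        # piecewise-linear spline

def sample_point_process(lam, lam_max):
    """Sample event times from an inhomogeneous Poisson process by thinning."""
    n = rng.poisson(lam_max * T)
    candidates = np.sort(rng.uniform(0.0, T, size=n))
    keep = rng.uniform(size=n) < lam(candidates) / lam_max
    return candidates[keep]

lam = sample_intensity()
events = sample_point_process(lam, lam_max=lam(knots).max())
print(f"{len(events)} events, e.g. {np.round(events[:5], 2)}")
```

In the actual model the network would be trained as the decoder of a variational autoencoder, with smoother and further-constrained splines; this sketch only shows the noise-to-spline-to-point-process generative path.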
than C. parvum as the most prevalent infecting species in the study region. Evidence for genetic recombination has been identified for the first time in C. ryanae, which was previously reported to have a clonal population structure. Finally, the high rate of Cryptosporidium infection among calves, together with our findings of oocyst-contaminated water and zoonotic transmission, suggests a serious public health risk for the dairy farm workers and villagers living in close proximity. The following supplementary data are related to this article: a table showing the scoring system for clinical signs and diarrhea; a list of the gene-specific PCR primers used in the study; a table summarizing the clinical samples analyzed in the study; an analysis with Epi-Info version 3.5.4 software showing significant associations of Cryptosporidium infection with age, sex and clinical symptoms of infected animals; an analysis of genetic diversity among Cryptosporidium isolates based on the 18S rRNA gene, comprising (A) a phylogenetic tree, for which previously published 18S rRNA sequences of Cryptosporidium species were aligned with representative sequences of our study isolates using the ClustalW multiple alignment program of MEGA version 7 software, the tree was constructed from this alignment using a maximum likelihood algorithm, and bootstrap values were analyzed to estimate confidence intervals, and (B) restriction fragment length polymorphism profiles of Cryptosporidium isolates on an agarose gel; and an intragenic linkage disequilibrium and recombination analysis at the 18S rRNA locus of our study isolates using DnaSP version 5.10.01 software. Supplementary data to this article can be found online at https://doi.org/10.1016/j.fawpar.2019.e00064.
Cryptosporidium sp. is an enteric parasite with zoonotic potential that can infect a wide range of vertebrates, including humans. Determining the source of infection and the mode of transmission in a newly endemic region is crucial for the control of cryptosporidiosis. In the present study, we assessed the importance of dairy cattle as a potential source of Cryptosporidium infection for humans in a newly recognized endemic region. Cryptosporidium isolates from dairy calves, humans (farm workers) and nearby water bodies were genetically characterized based on the 18S rRNA and hsp70 genes. A high incidence of Cryptosporidium infection was identified in our study region. This finding is of public health concern. Cryptosporidium ryanae rather than Cryptosporidium parvum was identified as the most prevalent infecting species in the study region. Infections were associated with clinical symptoms in the infected animals. An incomplete linkage disequilibrium (LD) value with potential recombination events at the 18S rRNA locus was identified for the first time in C. ryanae, which was previously reported to be a clonal population. Phylogenetic analysis revealed the presence of identical genotypes of a Cryptosporidium sp. from dairy calves, farm workers and nearby water bodies, indicating an association between water contamination and zoonotic transmission of cryptosporidiosis in our study region.
has to be done with caution, a marine source rock with early oil maturities of ∼0.6%VR. In concert with the regional geology and results from drillings at the Lomonosov Ridge, Lower to Middle Eocene, possibly Azolla-rich, shales would be a plausible candidate. Our data on sorbed gases from the Hinlopen Margin, the southern Nansen Basin and the E Yermak Plateau suggest oil window maturities for potential source rocks, with slightly higher maturities at the Yermak Plateau. These low maturities argue against abundant gas generation in the studied areas, but the presence of petroleum systems with oil generation is at least feasible. Data off the western coast of Svalbard suggest a less homogeneous situation compared to our working area. There, isotope compositions of the sorbed gases in sediments indicate source rock maturities from 0.6 to 3% VR. However, their data and ours are consistent with regionally differing petroleum systems and variable source rocks in the entire region. Sorbed gases and, for selected samples, extractable hydrocarbons of sediments from the E Yermak Plateau as well as from the Hinlopen Margin slope and the Nansen Basin were studied for their compositions and gas isotopic signatures. Concentrations of sorbed gases were partially high, with the highest abundances in samples from the southern Nansen Basin. Indications of active seepage were not found, as pore water methane concentrations were very low. The isotopic signatures of the sorbed gases can be taken as an indication of a low maturity of the potentially corresponding source rocks. However, differences were observed between the E Yermak Plateau, the Hinlopen Margin and particularly the southern Nansen Basin. For the latter two, maturities of about 0.6% VR are suggested, while those for the first were slightly higher. Higher C1/C2+ ratios and the isotopic composition of methane may be taken as a hint of a mixture of mostly Type II with minor contributions from Type III source rocks at the E Yermak Plateau, whereas for sorbed gases in sediments from the Hinlopen Margin and the Nansen Basin a marine Type II source rock is suggested. For two selected samples, the extractable organic matter was also studied. Samples from the Yermak Plateau and the southern Nansen Basin contained high molecular weight hydrocarbons as well as long-chain n-alkanes, indicative of an immature terrigenous plant source. The results of the sorbed gas and extractable organic matter analyses and of the 1D modelling can be explained by two plausible alternatives: (1) the gas stems from hydrocarbons that migrated from deeper source rocks, while the major portion of the extractable organic matter is ice-transported material from, e.g., Svalbard; or (2) both the hydrocarbon gas and the extractable organic matter are of allochthonous origin. We favor the first explanation, because 1D modelling has shown that petroleum generation is possible in the Nansen Basin, and at the E Yermak Plateau similar or even higher sediment thicknesses suggest that petroleum generation is also possible there. Our observation that no correlation exists between the abundances of TOC and sorbed gases, together with Rock Eval data suggesting that the vast majority of OM in the youngest sediments is less mature than the known source rocks and coals of Svalbard, further supports this hypothesis. Consequently, paleofluid upflow from marine Type II source rocks is considered the most plausible scenario to explain the sorbed gases on the E Yermak Plateau and the Hinlopen Margin, and particularly in the adjacent southern Nansen Basin.
The Barents Sea is considered an important target for oil and gas exploration, but the petroleum potential of its shelf and slope regions is unknown. Here we present results of a research cruise to the Northern Hinlopen Margin at the transition to the Southern Nansen Basin and the Eastern Yermak Plateau. Multichannel reflection seismic data acquisition, heat flow measurements, and geochemical analyses of near-surface sediments obtained by gravity coring were conducted to study the northern Barents Sea shelf and the early opening of the Nansen Basin and to decipher their petroleum potential. Seismic data indicate high thicknesses of up to ~2000 m of Cenozoic sediments. The sediment samples were analysed for bulk geochemistry and sorbed hydrocarbon gases and, for two sites, for extractable hydrocarbons. Data from extractable (n-alkanes > n-C25) and bulk (HI and OI from Rock Eval) organic matter demonstrate predominantly terrigenous organic material, most likely derived from ice-transported allochthonous sediments. However, thermogenic gases sorbed to the sediment matrix (clay minerals, organic matter and/or carbonates) were found in concentrations of up to 600 ppb (on a sediment wet weight basis). For the samples from the Northern Hinlopen Margin, and particularly from the adjacent Nansen Basin, a paleo fluid flow of thermogenic gas is indicated and is accompanied by higher n-alkanes with a modal, petroleum-like distribution. δ13C values of methane, ethane and propane and the gas compositions point to a mainly marine source rock origin of all studied gases, with early oil window maturities of the associated rocks (0.6-0.9%VR). From these data, an admixture of Type III-derived thermogenic gases is indicated for some of the Yermak Plateau sediments, for which the lowest abundances of sorbed gases (50-100 ppb) were also observed. The gas geochemical characteristics of the samples with low gas abundances can partially be explained by an input of gases through ice-transport of allochthonous hydrocarbons bound to mature organic matter. For a site on the Northern Hinlopen Margin NE of Svalbard, right at the southern termination of the Nansen Basin, a different situation is indicated. In this area the highest concentrations of sorbed gases were found, most likely derived from sediments with an early oil window maturity and a marine kerogen Type II-typical isotopic distribution. At this location a pseudo-well was constructed from 2D seismic data for reconstruction of the thermal and maturity evolution. The simulation results indicate that an Early to Middle Eocene source rock would have been in the early oil window since the Early Miocene. A possible source rock here and in the circum-Arctic region could have been formed by Azolla and other flourishing primary producers.
Transcranial direct current stimulation is a non-invasive brain stimulation method, which can be used to modulate spontaneous cortical activity in the human brain in a polarity-dependent way.Increasingly, the method is used as a therapeutic tool.Recent functional neuroimaging studies have investigated how changes in connectivity within resting-state networks are related to stimulation.For example, anodal tDCS, thought to increase cortical excitability, has been shown to alter connectivity within large-scale functional networks when delivered either before or during resting-state functional magnetic resonance imaging.However, the mechanism by which an externally applied field may interact and modulate neuronal activity during a given cognitive task-state and how it relates to changes in behaviour has yet to be determined.The present study is unique in this regard.Here, we used Dynamic Causal Modelling to explore changes in effective connectivity during a concurrent tDCS-fMRI study of overt picture naming.Resulting model parameters from the DCM were used to provide a measure of both the strength and direction of neuronal interactions between pre-specified left frontal regions known to be important for speech production.Using this approach our data provide novel insights into the underlying neuronal dynamics of anodal tDCS that operate on the naming network.In some cognitive and neurobiological models, cognitive functions are specified in distributed, inter-connected, overlapping and highly parallel processing networks.This theoretical framework has been used to characterize a variety of different complex cognitive skills, including picture naming.From this perspective, connections within a distributed naming network can be altered and differentiated via exposure, or experience-based learning.Similarly, behavioural and neural facilitation, or priming, of naming performance can also be seen as mediated via refinement and adjustment of connections between collaborating brain regions.The neural correlate associated with learning and facilitation is neural priming, which is characterized by a decrease in focal brain activity reflecting processes of neuronal adaptation.Neuronal adaptation is mediated by changes in effective connectivity between and within regions of the neural network.Consistent with predictions of neuronal adaptation, we previously demonstrated that anodal tDCS applied to the left inferior frontal cortex during overt picture naming concurrent with fMRI had a regionally specific neural priming effect on the BOLD signal in left inferior frontal sulcus and left ventral premotor cortex.Priming of the neural response significantly correlated with the behavioural priming of naming response times.This response profile suggests that anodal tDCS promotes neural efficiency during naming.How these neural adaptation effects are mediated by changes in effective connectivity remains unclear.Considering these data in the context of a learning framework, one may predict that facilitatory tDCS effects would be mediated by changes in inter-regional connectivity affected by anodal tDCS, or in intra-regional activity via self-connections.To explicitly test these predictions in the present study we used DCM to determine: changes in the strength and direction of neuronal coupling within and between left IFS and VPM during anodal tDCS compared to sham; and, how, at an individual level, the variability in effective connectivity values between these same two frontal regions related to variations in 
observed facilitation of picture naming behaviour during anodal tDCS. Ten right-handed, healthy native speakers of English participated in a functional neuroimaging study of overt picture naming concurrent with anodal tDCS. All participants had normal hearing and no previous history of metallic implants, neurological or psychiatric disease. All participants were left-hemisphere dominant for speech production as determined by a previous fMRI naming study. The simple main effect of anodal tDCS on the naming network in the same subjects has been reported previously. We targeted left frontal activity using 2 mA anodal tDCS or sham stimulation delivered for 20 min during an fMRI study of overt spoken picture naming. To avoid problems of tDCS and sham group comparability with regard to common confounding variables, we used a within-subject cross-over design where each of our 10 subjects served as his/her own control. In our previous study we investigated both order and cross-modal repetition effects, as each picture was presented twice across two fMRI blocks on each scanning day: once with the target's spoken name as a cue and once with an acoustic control cue. For the DCM analysis, we were only interested in the simple main effect of anodal tDCS vs. sham during naming compared to rest. We therefore included data only from the first scanning block on each scanning day and collapsed across auditory cue types. This ensured that we avoided any potential confounds of – and interactions with – the expected behavioural and neural priming effects of practice or cross-modal repetition which would be associated with repeated exposure of the items to be named on each scanning day. See Fig. 1 for a visualization of the study and task design. Full details of the stimuli used and the experimental procedures have been reported previously. On their first scanning day, during their first naming block, half the participants received sham stimulation. On their second scanning day, the order of intervention was reversed, i.e., they received an A-tDCS naming block first. The remaining five participants had the opposite order of intervention across scanning days. Using this sequencing, the order of intervention was fully counterbalanced across participants and scanning days. A minimum of 5 and a maximum of 7 days separated the two scanning days. This approach permitted measurement of both the behavioural and neural consequences of anodal tDCS during real anodal tDCS and during sham stimulation. Fig. 1A displays the run procedure. The order of stimuli was pseudo-randomized. On their first scanning day, during their first naming block, participants saw each of the 107 high-frequency monosyllabic pictures to name, paired with either a word or a noise cue. Participants then saw the
Noninvasive neurostimulation methods such as transcranial direct current stimulation (tDCS) can elicit long-lasting, polarity-dependent changes in neocortical excitability.In a previous concurrent tDCS-fMRI study of overt picture naming, we reported significant behavioural and regionally specific neural facilitation effects in left inferior frontal cortex (IFC) with anodal tDCS applied to left frontal cortex (Holland et al., 2011).
about the long-term effects. One hypothesis is that DC stimulation modulates ongoing neural activity, which then translates into lasting effects via physiological plasticity. Converging with this interpretation is evidence from an in vitro animal hippocampal model of gamma oscillations and DC stimulation showing that during 10 min of stimulation the power and frequency of gamma oscillations, as well as multiunit activity, were modulated in a polarity-specific manner. Importantly, the effects on power and multiunit activity persisted for more than 10 min after stimulation terminated. In humans, the data are less clear. Magnetic Resonance Spectroscopy investigations have shown that anodal tDCS can modulate GABA-ergic activity, while more recently it has been suggested that anodal tDCS enhances neural synchrony, including in the gamma frequency range. In principle, future experiments using this DCM approach in larger samples of subjects could provide a more mechanistic understanding of this anodal tDCS phenomenon, and of how it might be related to the enhanced cortical communication that is thought to be facilitated by gamma frequency oscillations and thereby be positively correlated with effective connectivity within or between areas. Our results provide novel insight into the causal neuronal dynamics within a task-state network in response to anodal tDCS at the group and individual level. They also support the importance of the left inferior frontal cortex in naming and point to Broca's area as a candidate site for anodal tDCS in rehabilitation protocols aiming to improve naming difficulties in brain-damaged patients. The presentation of a concurrent cognitive task during anodal tDCS, as reported here, may be critical to maximally facilitate task-relevant cortical connectivity change. Our study is unique in this regard, as the changes in effective connectivity induced by anodal tDCS were evaluated online during an overt speech production task. The use of DCM to determine the strength and directionality of neuronal interactions is an important addition to existing studies of tDCS and connectivity. We hope that the model described in this paper may allow more mechanistic questions about how anodal tDCS modulates behaviour and cortical processing, and about the role of effective connectivity therein, to be addressed. DCM using Bayesian Model Averaging and a fixed-effects level of inference identified the fully inter-connected model with driving inputs to VPM as the winning model for both anodal tDCS and sham. See the DCM model in Fig. 3A. As often seen in empirical studies, this simulated architecture comprised positive forward connections and negative backward connections. Furthermore, the following connections showed significantly different distributions for anodal tDCS vs. sham: the IFS backward connection to VPM and the VPM self-connections. As illustrated by the green and orange effective connection arrows in Fig.
3A. There was no correlation between the Euclidean distance between the centres of the two spherical VOIs calculated for each subject and these results. The remaining connections showed no significant distribution differences. In addition, there was a tuning of, and an increase in, the magnitude of the feedback connection strength from IFS to VPM during anodal tDCS compared to sham. That this connection was more negative during anodal tDCS indicates that there was a stronger backward connection between these two regions during anodal tDCS compared to sham tDCS. This in turn suggests greater inhibition. In other words, anodal tDCS increased the inhibitory effective connectivity of the IFS-to-VPM connection. This is illustrated by the green and orange effective connection values and distributions in Fig. 3A, top plot. In contrast, the self-connection value within the VPM was greater during sham compared to anodal stimulation (Fig. 3A, right plot). Here, the decay time constant computed from the VPM self-connection value was 2.67 s for sham and 7.70 s for anodal tDCS. This indicated that activity was more enduring within the region during anodal tDCS vs. sham. There was no correlation between the VPM self-connection changes and the changes in connection strength from IFS to VPM. The remaining connection values were not significantly modulated by tDCS: the IFS self-connections and the VPM feedforward connection to IFS.
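For readers unfamiliar with how a decay time can be read off a DCM self-connection, the relation below is a hedged reconstruction: in the common DCM-for-fMRI convention the self-connection is a log-scaling parameter a applied to a default rate of −0.5 Hz, so the corresponding decay time constant is τ = 2·exp(−a) seconds. Whether exactly this parameterisation underlies the 2.67 s and 7.70 s values reported above is an assumption here, not a statement from the study.

```latex
% Assumed convention: self-connection = -0.5 * exp(a) Hz, with a the log-scaling parameter
\mathrm{self\text{-}connection} = -\tfrac{1}{2}\,e^{a}\ \mathrm{Hz},
\qquad
\tau \;=\; \frac{1}{\tfrac{1}{2}\,e^{a}} \;=\; 2\,e^{-a}\ \mathrm{s}
```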
Although distributed connectivity effects of anodal tDCS have been modelled at rest, the mechanism by which ‘on-line’ tDCS may modulate neuronal connectivity during a task-state remains unclear.Here, we used Dynamic Causal Modelling (DCM) to determine: (i) how neural connectivity within the frontal speech network is modulated during anodal tDCS; and, (ii) how individual variability in behavioural response to anodal tDCS relates to changes in effective connectivity strength.Results showed that compared to sham, anodal tDCS elicited stronger feedback from inferior frontal sulcus (IFS) to ventral premotor (VPM) accompanied by weaker self-connections within VPM, consistent with processes of neuronal adaptation.During anodal tDCS individual variability in the feedforward connection strength from IFS to VPM positively correlated with the degree of facilitation in naming behaviour.These results provide an essential step towards understanding the mechanism of ‘online’ tDCS paired with a cognitive task.They also identify left IFS as a ‘top-down’ hub and driver for speech change.
as a function of degree of precisely synchronous common input in the complete absence of gap junction coupling.This showed that for nonzero degrees of common input, correlated activity displays peaks at 0 ms, indicating perfect synchrony in spike timing due to synaptic inputs alone.This result indicates that peaks at ±1 ms in spike time cross-correlations are caused by electrical coupling, not common synaptic inputs, and are thus a useful signature of electrical coupling in cells exhibiting correlated activity.Finally, we determined the degree and temporal characteristics of correlated activity in coupled Golgi cells in response to sensory stimulation.Since mossy fiber terminals conveying sensory stimuli terminate profusely in the granule cell layer and may simultaneously activate many coupled Golgi cells, precisely correlated activity between Golgi cells may be enhanced by a sensory stimulus.Alternatively, sensory stimuli might desynchronize or lessen the degree of correlated activity.Furthermore, the temporal precision of correlated activity may broaden as a result of heterogeneous sensory-evoked responses in individual cells.We first characterized the synaptic inputs driven by sensory stimulation in single Golgi cells using whole-cell voltage clamp recordings.Sensory stimulation induced a burst of high-frequency excitatory inputs in Golgi cells, with a mean latency to the peak of the EPSC burst of 28 ± 7 ms.In current-clamp recordings, this sensory-evoked synaptic input triggered single action potentials in most trials, with a mean latency of 35 ± 6 ms.The latencies of the peak of the EPSC burst and the resulting spikes in the same cells were highly correlated, with the number of individual EPSCs leading to a spike ranging from 4 to 14.Thus, under these experimental conditions, Golgi cells need to integrate a minimum of 4–14 individual EPSCs before reaching action potential threshold.In simultaneous recordings from pairs of Golgi cells, sensory stimulation triggered spiking in both cells, resulting in correlated activity that was more pronounced than spontaneous correlated activity in the same pairs.Accordingly, the percentage of correlated spikes of all spikes was higher during sensory stimulation compared to that during spontaneous correlated activity.The bidirectional symmetry of correlated spiking between coupled Golgi cells did not change during sensory evoked stimuli.Given that dominant inputs to only one cell would cause asymmetry due to the fact that electrical coupling is bidirectional, this suggests that both cells were driven approximately equally by the sensory stimulus.The sensory-evoked cross-correlograms exhibited peaks at −1, 0, and +1 ms, as in spontaneous spiking, suggesting that while the peak at 0 ms is larger during sensory stimuli, electrical coupling still strongly determines spike timing during sensory stimuli.However, we also observed a broader foot in the cross-correlogram of the sensory-evoked spikes, representing enhanced correlations at longer time lags.To determine whether the variability of evoked spiking in individual cells underlies this increase in slower time correlations, we shuffled spike times across sensory trials which revealed broad peaks around zero, similar to the autocorrelations of spike times across trials for each cell in a pair.Across pairs, these shuffled cross-correlograms revealed broad correlations lacking sharp peaks around 0 ms, suggesting that the main effect of sensory stimuli is to enhance temporal correlations at longer time 
lags.Together, these data suggest that sensory stimuli enhance the degree of correlated spiking between pairs of Golgi cells and enhance temporal correlations at longer time lags.Nevertheless, the temporal precision of the correlated activity that is ensured by electrical coupling still dominates the spike timing of coupled Golgi cells.We report the first demonstration of correlated activity with millisecond precision in an identified electrically coupled interneuron network in vivo.Electrical coupling is essential for the temporal precision of this correlated activity, and acts in a dual, cooperative manner: by equalizing slow subthreshold membrane potential depolarizations and by transmitting fast depolarizing spikelet currents.Sensory stimuli evoke bursts of synaptic inputs that evoke spikes with variable timing across trials in individual cells, but they enhance the degree of precisely correlated activity.Since many interneurons in the mammalian brain are electrically coupled, our results provide a mechanistic understanding of how electrical coupling can orchestrate millisecond-scale correlated activity under conditions of spontaneous and sensory evoked synaptic activity.Correlated activity with millisecond precision has not previously been reported in the Golgi cell population in vivo because of the difficulty of recording unambiguously from neighboring Golgi cells in the intact brain.Previous findings of weaker, less precisely correlated activity among Golgi cells, presumably mediated by common parallel fiber input, were made with electrode spacings 300–2,100 μm apart, well beyond the ∼200 μm span of the Golgi cell dendritic tree that defines the spatial dimension of the precisely correlated activity that we have observed.Our dual whole-cell recordings between coupled Golgi cells demonstrate that gap junctional coupling enables precisely correlated activity via the cooperative effect of two mechanisms.First, electrical coupling helps to equalize subthreshold membrane potentials, effectively allowing the neurons to share synaptic input by transmitting slow membrane potential fluctuations across the junction.Second, when both cells are driven close to threshold, then a spike triggered in one cell is able to trigger a spike in a coupled cell by transmission of a spikelet across the gap junction.These two mechanisms work together to ensure precisely correlated activity: the efficacy of the spike-to-spike transmission via the spikelet is dependent on the membrane potentials of both cells depolarizing together so that both cells are near threshold at the same time.Our simulations using a generalized mathematical model of dendritic electrical coupling reveal further insights into the biophysical mechanisms of correlated activity.While the spikelets are responsible for the millisecond precision
First, electrical coupling ensures slow subthreshold membrane potential correlations by equalizing membrane potential fluctuations, such that coupled neurons tend to approach action potential threshold together.Electrical coupling therefore controls the temporal precision and degree of both spontaneous and sensory-evoked correlated activity between interneurons, by the cooperative effects of shared synaptic depolarization and spikelet transmission.
to determine the exact spatial anatomical divergence and convergence of Golgi cell axons to granule cell dendrites and the location of the granule cells that are contacted by coupled ensembles of Golgi cells.The ability to generate correlated inhibitory activity suggests that precisely timed feedforward inhibition and “time-windowing” of activity play an important role in cerebellar computations.The exact relevance of precisely correlated activity between neighboring Golgi cells for cerebellar sensorimotor behavior remains to be determined, but some clues may be found in the phenotype of Cx36-KO mice, which have been shown to display impaired timing of locomotion, conditioned eye-blink responses, and motor learning.However, as Cx36 is also expressed in other cell types in the cerebellar circuit, including molecular layer interneurons, as well as in the inferior olive, which provides input to the cerebellum, a general Cx36-KO is not a good model to identify the behavioral relevance of Golgi cell-correlated activity.Nevertheless, specific ablation of Golgi cells in a transgenic mouse line expressing human interleukin-2 receptor α subunit using an immunotoxin-mediated cell targeting technique resulted in severe acute ataxia and a chronic inability to perform compound movements, highlighting the importance of this cell type in regulating motor control.Our data provide a mechanistic understanding of how electrical coupling causes correlated spikes in electrically coupled interneurons with millisecond precision in the context of the spontaneous and sensory-evoked synaptic drive in vivo.Importantly, the mechanism we describe should be relevant to most electrically coupled neurons in the brain, as it is generic and independent of spike shape.This suggests that electrically coupled interneurons will spike together with millisecond precision irrespective of the exact temporal structure of common synaptic input.Our results provide further support to the idea that electrical coupling is crucial for temporal computations performed in mammalian neural circuits.Please see Supplemental Experimental Procedures for the full experimental procedures.Briefly, all animal procedures were performed under license from the UK Home Office in accordance with the Animal Act 1986.Male and female transgenic wild-type and connexin36 KO mice expressing EGFP under the GlyT2 promotor were used to identify and target Golgi cells.In vivo targeted patch-clamp recordings were performed using a custom two-photon microscope to visualize EGFP positive Golgi cells in Crus II under ketamine/xylazine anesthesia.Sensory stimulation was performed used an airpuff delivered to the perioral region and/or whisker pad.Simulations were performed using NEURON 7.1.Two reduced compartmental neuron models of Golgi cells, each consisting of a soma and a dendrite, were coupled via a gap junction at a proximal dendritic location.The gap junction was operated in different modes in which it was switched on or off selectively during subthreshold signaling, the DJP, and/or the HJP.Ongoing synaptic inputs in vivo were simulated by Poisson spike trains triggering synaptic conductances that were distributed uniformly over the soma and dendrites of both neurons.I.v.W. performed all experiments and analyses with the exception of the single whole-cell recordings of sensory-evoked responses and their analyses, which were performed by S.S.H. and S.K.The simulations were performed by A.R.The study was designed by I.v.W. and M.H., who also wrote the paper.
Many GABAergic interneurons are electrically coupled and in vitro can display correlated activity with millisecond precision.However, the mechanisms underlying correlated activity between interneurons in vivo are unknown.Using dual patch-clamp recordings in vivo, we reveal that in the presence of spontaneous background synaptic activity, electrically coupled cerebellar Golgi cells exhibit robust millisecond precision-correlated activity which is enhanced by sensory stimulation.This precisely correlated activity results from the cooperative action of two mechanisms.show using double patch-clamp recordings that cerebellar Golgi cells display millisecond precise correlated activity in vivo, which is enhanced during sensory processing.
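The spike-time cross-correlogram and trial-shuffle comparison described in the Golgi-cell excerpts above can be sketched in a few lines. The snippet below is an illustrative reconstruction only (1 ms bins, ±10 ms lags, hypothetical spike-time arrays, and a shift of one train as a crude stand-in for shuffling across trials); it is not the authors' analysis code.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag=10.0, bin_ms=1.0):
    """Histogram of all pairwise lags (b - a), in ms, within +/- max_lag."""
    lags = spikes_b[None, :] - spikes_a[:, None]      # all pairwise differences
    lags = lags[np.abs(lags) <= max_lag]
    edges = np.arange(-max_lag, max_lag + bin_ms, bin_ms)
    counts, _ = np.histogram(lags, bins=edges)
    return counts, edges

rng = np.random.default_rng(1)
# Hypothetical spike times (ms) for two coupled cells over one recording.
cell_a = np.sort(rng.uniform(0, 1000, 80))
cell_b = np.sort(cell_a + rng.normal(0, 1.0, cell_a.size))   # ~1 ms jittered copies

raw, edges = cross_correlogram(cell_a, cell_b)

# Control: shift one train to destroy fine timing while keeping slow
# rate co-modulation (stand-in for shuffling spike times across trials).
shuffled, _ = cross_correlogram(cell_a, np.sort(cell_b + rng.uniform(20, 50)))
print(raw.argmax() - len(raw) // 2, "ms: approximate lag of the raw peak")
```

A sharp peak near 0 ms that disappears in the shifted control mirrors the signature of electrical coupling discussed above, whereas broad correlations that survive the control reflect slower, stimulus-driven co-modulation.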
nervous system by chemogenetic methods causes changes in the number of proopiomelanocortin and agouti-related protein neurons in the ARC of the hypothalamus. Our results support the view that the DMV is a node of the parasympathetic nervous system carrying both afferent and efferent signals. The signal from the DMV reaches the ARC of the hypothalamus and changes the number of activated AGRP- and POMC-expressing neurons. Double co-staining of c-Fos with either POMC or AGRP shows that parasympathetic nervous system activation affects both POMC and AGRP expression in the ARC of the hypothalamus. Interestingly, while chemogenetic activation of DMV neurons increases peripheral blood glucose, serum insulin levels do not increase in response to the elevated blood glucose. These results suggest that modulation of the DMV may cause an error in central glucose sensing or in another common pathway. Previous studies showed that vagus nerve stimulation affects energy expenditure and demonstrated that the cephalic phase of digestion induces gastric acid secretion and motility. These cholinergic preganglionic fibers also travel to the pancreas within the bulbar outflow tract and the hepatic and gastric branches of the vagus. In addition to peripheral organs, the arcuate nucleus is reciprocally connected with the dorsal vagal complex and integrates endocrine and behavioral aspects of food intake and satiety. The activation of preganglionic parasympathetic neurons in the DMV generates action potentials, and the AP travels through the vagus nerve. Parasympathetic neurons innervate peripheral organs. When the AP arrives at the target organ, acetylcholine is released into the synapse and binds to receptors expressed on the target organ. For example, when the VN is stimulated, the terminals of the preganglionic nerves in the intra-pancreatic ganglia release acetylcholine into the synapse, which in turn causes the release of acetylcholine from postganglionic terminals within the islet. Our results support previous studies showing that the parasympathetic nervous system regulates blood glucose levels and pancreatic insulin secretion independently. We provided evidence for this functional connectivity among the parasympathetic nervous system, hypothalamic neurons, and peripheral tissues. Chemogenetic modulation of the parasympathetic nervous system changed feeding, glucose handling, and energy expenditure in ChAT-Cre mice. However, we were not able to define which neural pathways interplay to control the parasympathetic nervous system. Therefore, a more detailed working mechanism of these neurons and further research on their functional neural connections are still required. The present study suggests that acute chemogenetic activation or inhibition of the parasympathetic nervous system regulates feeding behavior and energy expenditure. Conceived and designed the experiments: CNK, HJC. Performed the experiments: CNK, CYK, DHC, SSH. Discussion and wrote the paper: CNK, WJS, SJW, HJC.
The parasympathetic nervous system (PNS) innervates several peripheral organs such as the liver and pancreas and regulates energy metabolism. However, the direct role of the PNS in food intake has been poorly understood. In the present study, we investigated the role of the parasympathetic nervous system in the regulation of feeding by chemogenetic methods. Adeno-associated virus carrying DREADD (designer receptors exclusively activated by designer drugs) was infused into the target brain region by stereotaxic surgery. The stimulatory hM3Dq or inhibitory hM4Di DREADD was over-expressed in a selective population of dorsal motor nucleus of the vagus (DMV) neurons in a Cre-recombinase-dependent manner. Activation of parasympathetic neurons by intraperitoneal injection of the M3-muscarinic receptor ligand clozapine-N-oxide (CNO) (1 mg/kg) suppressed food intake and resulted in body weight loss in ChAT-Cre mice. Parasympathetic neuron activation resulted in improved glucose tolerance, while inhibition of the neurons resulted in impaired glucose tolerance. Stimulation of the parasympathetic nervous system by injection of CNO (1 mg/kg) increased oxygen consumption and energy expenditure. Within the hypothalamus, the numbers of AGRP/POMC neurons in the arcuate nucleus (ARC) were changed. These results suggest that direct activation of the parasympathetic nervous system decreases food intake and body weight with improved glucose tolerance.
time versus virus number, may need to be increased by optimizing the reaction, including adjustments to enzyme and buffer concentrations or to incubation temperature.Additional optimization would need to be performed in order to improve the lower limit of detection of the reaction.Although these improvements may be made, a lysed whole blood approach will be inherently limited in its capabilities by the reaction volume and dilution factor in lysis buffer.For this reason, a variation on the approach involving digital LAMP may be considered.Digital LAMP and PCR approaches have been widely described in Refs. .The primary advantage of a digital approach is that it relies only on an endpoint measurement—whether a reaction of a small volume amplifies or not—from which concentration can be approximated by measuring hundreds, thousands, or millions of droplets.The approach we describe here could be scaled up for digital LAMP by constructing large arrays of droplets with an automated distribution method.The upper and lower limits of viral load detection in a digital approach are defined by the number of individual reaction droplets and the total volume of the sample tested.Since the distribution of viruses in reaction droplets is governed by Poisson statistics, we have briefly reviewed these principles in the Supplementary Information for this paper in order to demonstrate the theoretical utility in clinical HIV management of a scaled-up platform that tests a finger-prick droplet of whole blood.For example, a digital LAMP approach requiring just 9 μL of whole blood would be capable of indicating viral loads lower than 500 mL−1 of whole blood, with much greater accuracy in the range of 104−106 mL−1.While this approach cannot compete with the technical specifications of state-of-the-art systems, it would be of practical value, such as in showing declines in viremia following a drug regimen change or in identifying cases of viral rebound in settings where the gold standard of care is inaccessible .The capacity of this platform as a digital LAMP test increases with larger sample sizes and increased number of reaction droplets.In performing these experiments and the preparation of this manuscript, the challenge of finding a sample-to-answer point-of-care HIV viral load quantification solution was viewed as two parallel objectives: ① sample processing, which traditionally involves isolating or enriching the analyte from its complex matrix, and ② analyte detection.We broadly considered various approaches to sample processing that might be integrated with a microchip LAMP approach, many of which were guided by the notion that the presence of cellular material is not compatible with nucleic acid amplification methods.Many of these approaches resulted in the dilution of the analyte by a factor of ten or more, while our approach results in a dilution by only a factor of five, prior to the addition of LAMP reagents with one simple and easily implemented processing step that neither purifies nor enriches.This key merit of our approach can significantly reduce the complexity of and cost for a point-of-care device.The measurements presented here demonstrate that an RT-LAMP quantification approach is indeed compatible with minimally processed whole blood.To our knowledge, RT-LAMP in lysed whole blood has only been employed by one group, which performed a non-quantitative measurement in a reaction tube on a portable heating device .We demonstrate quantitative detection with the ability to resolve 10-fold changes in 
concentration above 6.7E+4 μL−1 and 100-fold changes in concentrations above 670 μL−1.We observed 60 nL droplets with as few as three viruses per reaction amplify, which corresponds to a whole blood virus concentration of 670 μL−1.We also discussed that the true power of this approach may be in a quantitative digital LAMP format rather than a kinetic measurement.Our implementation of the lysed whole blood approach in a microchip format with mobile phone imaging represents a significant stride toward a practical solution to viral load measurements in resource-limited settings.
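The Poisson reasoning invoked above for a digital LAMP read-out can be made concrete: if a fraction p of droplets fails to amplify, the mean number of viruses per droplet is λ = −ln(p), and the sample concentration follows from the droplet volume and the dilution factor. The snippet below is a minimal sketch with hypothetical numbers (10,000 droplets of 60 nL, a 5-fold dilution of whole blood), not the parameters of the device described here.

```python
import math

def concentration_from_droplets(n_total, n_negative, droplet_nl, dilution):
    """Estimate viruses per microlitre of whole blood from a digital read-out."""
    p_negative = n_negative / n_total
    lam = -math.log(p_negative)                   # mean viruses per droplet (Poisson)
    per_ul_reaction = lam / (droplet_nl * 1e-3)   # droplet volume converted to uL
    return per_ul_reaction * dilution             # undo dilution into the reaction mix

# Hypothetical example: 10,000 droplets of 60 nL, 7,800 remain negative,
# whole blood diluted 5-fold before amplification.
print(concentration_from_droplets(10_000, 7_800, 60, 5))   # ~21 viruses per uL blood
```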
Viral load measurements are an essential tool for the long-term clinical care of human immunodeficiency virus (HIV)-positive individuals.The gold standards in viral load instrumentation, however, are still too limited by their size, cost, and sophisticated operation for these measurements to be ubiquitous in remote settings with poor healthcare infrastructure, including parts of the world that are disproportionately affected by HIV infection.The challenge of developing a point-of-care platform capable of making viral load more accessible has been frequently approached but no solution has yet emerged that meets the practical requirements of low cost, portability, and ease-of-use.Our integrated assay shows amplification from as few as three viruses in a ~ 60 nL RT-LAMP droplet, corresponding to a whole blood concentration of 670 viruses per μL of whole blood.The technology contains greater power in a digital RT-LAMP approach that could be scaled up for the determination of viral load from a finger prick of blood in the clinical care of HIV-positive individuals.We demonstrate that all aspects of this viral load approach, from a drop of blood to imaging the RT-LAMP reaction, are compatible with lab-on-a-chip components and mobile instrumentation.
decreases to 6.80 with an annealing time of 120 min. The variation of texture intensity is in accordance with the misorientation change, which indicates that the transformation of texture is related to the misorientation. During the deformation and heat treatment processes, when the orientation of some grains is concentrated in the vicinity of one or several orientation positions, the macroscopic expression is preferred orientation. The preferred orientation of a polycrystalline structure is called texture. During the recovery process, the deformed grains are softened, and the texture intensity decreases as recovery proceeds further. However, during recrystallization, the nucleation and growth of recrystallized grains are affected by the deformation and heat treatment conditions, resulting in a preferred orientation of the recrystallized grains, the so-called recrystallization texture. Thus, the texture intensity increases in specimens AT120 and AT240 with the occurrence of recrystallization. In order to accurately analyze the texture types, the ODF sections of φ2 = 0°, 45° and 65° are shown in Fig. 7. The strong {1 1 0} 〈1 1 2〉 Brass, {1 1 1} 〈1 1 2〉 Copper, {1 2 3} 〈6 3 4〉 S and {1 2 4} 〈1 1 2〉 R components and some weak {4 1 4} 〈2 3 4〉 S/Brass and {1 1 0} 〈0 0 1〉 Goss can also be observed. With the increase of annealing time from 30 min to 240 min, the variation of texture intensity in the ODF sections is in accordance with the change of texture intensity shown in Fig. 6. According to a previous study, during the rolling process the grain boundaries slip with the rotation of the grains due to the complex stress state, and the rolling texture is easily formed. As Al alloy is an FCC material, the rolled textures of the Copper, S and Brass components are easily found in the cold rolled sheets. Moreover, the Goss component is also easily formed due to the occurrence of dynamic recrystallization during hot rolling. On the other hand, dislocation slip is the main mechanism of plastic deformation of Al alloys due to their high stacking fault energy. Spies et al. reported that the 〈1 1 1〉 and 〈1 1 2〉 orientations are both stable in FCC materials, and that the grains continuously flow toward these two orientations during rolling. However, for pure 1070 Al alloy, the cold rolled sheet with severe reduction usually has two major recrystallization textures, R and 〈1 0 0〉 Cube. For Al alloy with high stacking fault energy, the defect density and the driving force for recrystallization are markedly decreased by the recovery, and thus recrystallization occurs at a slow rate, resulting in a high fraction of the R component. Fig.
8 shows the variation of the fractions of the main texture components with increasing annealing time. As is seen, with the increase of annealing time from 30 min to 60 min, the fractions of Brass, R and S/Brass increase, while the fractions of Goss, Copper and S decrease. It is considered that the slip and climb of a large number of dislocations contribute to the formation of more subgrains, and the growth of these subgrains is affected by the grain orientation during recovery. As recovery proceeds, the deformed grains are softened and the subgrains rotate and transform to new orientations. The recrystallization becomes more complete at an annealing time of 120 min, and thus the fractions of Brass and S/Brass decrease and the fractions of Goss, Copper and S increase. During the recrystallization process, when the deformed matrix has a certain orientation relationship with the nucleation core, the nucleation rate reaches a maximum. This orientation relationship corresponds to one grain rotating 40° about the 〈1 1 1〉 axis to arrive at the orientation of another grain. In pure Al alloy, the S and R components have this 40° 〈1 1 1〉 orientation relationship, and thus the S component increases slightly and the R component decreases slightly. Brass is a rolling texture component, and it decreases markedly due to the formation of a large number of recrystallized grains. Moreover, during the growth of the recrystallized grains, the fraction of Brass decreases continuously. In addition, when the annealing time reaches 240 min, the fractions of R, S/Brass and Goss also decrease and the fractions of Copper and S increase markedly. It is considered that some grains with Brass, R, S/Brass and Goss orientations transform to new structures with Copper and S components due to migration of the grain boundaries during the growth process. Fig. 9 presents the results of electrical conductivity and hardness. As seen from Fig. 9, the homogenized specimen HT exhibits the highest electrical conductivity of 61.4 %IACS, and the cold rolled specimen AR has the lowest value of 60.7 %IACS. Overall, the 1070 Al alloy exhibits excellent electrical conductivity. Moreover, the electrical conductivity of the annealed specimens increases gradually with increasing annealing time. According to classical electron theory, the electrical resistance of a material is the result of collisions between electrons and the lattice, and the conductivity is proportional to the density of free electrons and to the average free path of the electrons. Structural defects can reduce the average free path of electrons. Defects such as impurity atoms, vacancies, internal dislocations, and external surfaces cause lattice distortion and scattering of electron waves, and thus affect the electrical conductivity. An increase in lattice distortion and grain defects, especially the vacancy concentration, results in a non-uniform electric field within the lattice and intensifies the scattering of electron waves, increasing the material resistivity. Thus, compared with the homogenized specimen HT, the conductivity of the cold rolled specimen AR is decreased due to severe cold plastic deformation. Then, both the recovery and
The strong {1 1 0} 〈1 1 2〉 Brass, {1 1 1} 〈1 1 2〉 Copper, {1 2 3} 〈6 3 4〉 S and {1 2 4} 〈1 1 2〉 R components and some weak {4 1 4} 〈2 3 4〉 S/Brass and {1 1 0} 〈0 0 1〉 Goss components were found in the annealed alloys.
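The free-electron argument in the alloy excerpt above (conductivity proportional to the free-electron density and mean free path, and degraded by defect scattering) corresponds to the standard Drude-type relation. The expression below is a generic textbook form added for clarity, not a formula taken from the study.

```latex
\sigma \;=\; \frac{n e^{2} \tau}{m} \;=\; \frac{n e^{2} \lambda}{m\, v},
\qquad \rho = \frac{1}{\sigma}
```

Here n is the free-electron density, τ the mean time between scattering events, λ = vτ the mean free path and v the mean electron velocity; lattice distortions and vacancies shorten λ and therefore raise the resistivity ρ, consistent with the lower conductivity of the cold rolled specimen.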
proportion of discordant studies increases.The highest proportion of discordant studies is for the NEG class for all three classification schemes, the 1A class for GHS/CLP, and the STRONG class for ECETOC.If only the discordant chemicals are considered, the pattern of distribution is almost the same as when the solvent is considered.This study highlights the importance of considering the variability of the LLNA when this assay is used as the standard against which to compare the performance of alternative methods.Considering the POS/NEG classification scheme, LLNA studies resulting in negative classifications tend to be less reliable than studies with positive classifications.When potency classification is considered study results leading to classifications in the strongest potency classes seem to be more reliable than those in the weakest classes.As already observed by Hoffmann, studies with discordant classifications mainly classify the chemical in an adjacent class, but it does not appear possible to draw general conclusions on whether such studies lead to a more severe or less severe classification.Considering the current discussion at regulatory level on the acceptance of results from in vitro methods and their use within defined approaches, it would be important, for a given regulatory application, to discuss and agree on the level of accuracy required for acceptance of defined approaches based on the use of non-animal data.The definition of such a level should be informed by the uncertainty associated with the reference data.In the case of non-animal methods for skin sensitisation, acceptable levels of accuracy for different classification schemes can be derived from the results presented in Table 3.For example, a level of accuracy for identifying NEG, 1B and 1A chemicals of 70%, 70% and 80%, respectively, would be comparable to the performance of the LLNA based on our most conservative analysis conducted by considering the solvent.
The knowledge of the biological mechanisms leading to the induction of skin sensitisation has favoured in recent years the development of alternative non-animal methods.During the formal validation process, results from the Local Lymph Node Assay (LLNA) are generally used as reference data to assess the predictive capacity of the non-animal tests.This study reports an analysis of the variability of the LLNA for a set of chemicals for which multiple studies are available and considers three hazard classification schemes: POS/NEG, GHS/CLP and ECETOC.As the type of vehicle used in a LLNA study is known to influence to some extent the results, two analyses were performed: considering the solvent used to test the chemicals and without considering the solvent.The results show that the number of discordant classifications increases when a chemical is tested in more than one solvent.Moreover, it can be concluded that study results leading to classification in the strongest classes (1A and EXT) seem to be more reliable than those in the weakest classes.This study highlights the importance of considering the variability of the reference data when evaluating non-animal tests.
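The per-class reliability figures discussed above come down to counting, for each classification class, the share of repeat LLNA studies that disagree with the majority call for the same chemical. The snippet below illustrates that computation on a small, entirely hypothetical set of replicate classifications; it does not reproduce the study's dataset or classification scheme thresholds.

```python
from collections import Counter

# Hypothetical replicate GHS/CLP classifications per chemical.
replicates = {
    "chem_A": ["1A", "1A", "1B"],
    "chem_B": ["NEG", "1B", "NEG", "NEG"],
    "chem_C": ["1B", "1B"],
}

per_class = Counter()      # studies whose chemical has this majority class
discordant = Counter()     # of those, how many disagree with the majority

for calls in replicates.values():
    majority, _ = Counter(calls).most_common(1)[0]
    per_class[majority] += len(calls)
    discordant[majority] += sum(call != majority for call in calls)

for cls in per_class:
    frac = discordant[cls] / per_class[cls]
    print(f"{cls}: {frac:.0%} of studies discordant with the majority call")
```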
with self-reports of caregiving stroking behaviors.These correlations have to be interpreted with caution, since we merely collected self-reports from parental questionnaires.Yet, they suggest that infants’ cardiac reaction to strokes of CT-optimal velocity varies and that it is stronger in infants whose caregivers have low social anxiety towards touch, and engage frequently in caregiving stroking behavior directed towards the infant.Furthermore, the correlation between infants’ heart rate deceleration in response to CT-optimal velocity strokes and caregivers SQT scores remained significant after controlling for caregivers S-PICTS scores.This additional result suggests tentatively that part of the relationships between the caregivers’ attitude towards interpersonal touch and the infants’ cardiac reaction to CT-optimal velocity strokes might be independent from the infants’ experience with parental stroking behaviors.Touch has been argued to play a key role in building a representation of the bodily self, which in turn is crucial to distinguish oneself from others, engage in social interaction, and predict and interpret the behaviors of others.How the modulation of infants’ responses to the source of interpersonal touch that we observed in our study builds upon a representation of the interacting bodily and social selves is an important question for future research.More generally, more work on touch is needed to understand the early ontogeny of social cognition.Currently, the overwhelming majority of studies on early social cognition focus on the role of visual inputs.Yet, touch serves social and communicative functions from the first year of life and it is a privileged route for early social interactions between caregivers and infants.Moreover, interpersonal touch is central to the social life of humans and non-human primates and is processed by specific channels that are likely to contribute to social cognition.Finally, as our data suggest, human infants do not treat interpersonal touch as a purely mechanical event, and they react to its social source.The authors declare no conflict of interest.
The evaluation of interpersonal touch is heavily influenced by its source.For example, a gentle stroke from a loved one is generally more pleasant than the same tactile stimulation from a complete stranger.Our study tested the early ontogenetic roots of humans’ sensitivity to the source of interpersonal touch.We measured the heart rate of three groups of nine-month-olds while their legs were stroked with a brush.Depending on the Identity condition (stranger vs. parent), the person who acted as if she was stroking the infant's leg was either an unfamiliar experimenter or the participant's caregiver.In fact, the stimulation was always delivered by a second experimenter blind to the Identity condition.Infants’ heart rate decreased more in reaction to strokes when their caregiver rather than a stranger acted as the source of the touch.This effect was found only for tactile stimulations whose velocity (3 cm/s) is known to elicit maximal mean firing rates in a class of afferents named C-tactile fibers (CTs).Thus, the infants’ reaction to touch is modulated not just by its mechanical properties but also by its social source.
from the micelles or liposomes and binding to serum or plasma proteins.Foslip® showed a steady increase in fluorescence when incubated for 8 h in serum or plasma regardless of mTHPC concentrations, indicating slow-release.This is consistent with previous studies, where a slow-release of m-THPC from liposomes in 5% human serum was observed.The release of mTHPC from Foslip® is medium and medium concentration dependent.The fluorescence at low mTHPC concentrations levelled off at 50 and 90% serum or plasma solutions, indicating release of mTHPC from liposome was complete.The higher intensities of Foslip® compared to Foscan® at corresponding mTHPC concentrations is due to the presence of liposomes which aid in the solubilisation of mTHPC.All mTHPC loaded micelles, irrespective of the loading%, exhibited a rapid increase in fluorescence intensity in the first 30 min and then remained stable.This dequenching process indicates burst release.Like Foslip®, the amount of release of mTHPC from micelles is dependent on medium and medium concentration.At lower mTHPC loading, 2.5% for instance, the release is almost complete in 50% FBS or plasmas because fluorescence was similar to that of Foscan® and did not increase significantly more at 90% serum or plasma.In the pharmacokinetic study, after i.v. administration of the mTHPC loaded micelles, the t1/2 value is 1.5 h, regardless of mTHPC to polymer ratio.This is very much consistent with data reported by Cramers et al. after administration of Foscan®, indicating rapid release of mTHPC from the micelles in circulation.This rapid release in vivo is in line with the in vitro release in serum and plasmas, suggesting the importance of pre-testing the in vitro release behavior in plasma for predicting the fate of a delivery system in vivo.This fast release behavior can be caused by high binding affinity of mTHPC with serum and plasmaproteins leading to mTHPC redistribution from intact micelles to lipoproteins.Also protein binding of polymer unimers can play a role, which in combination with dilution upon injection causes an adverse shift of the micel-unimer equilibrium and the destruction of micelles.Combined with premature release and concomitant pharmacokinetic profile of mTHPC loaded micelles, the observed fluorescence of the atherosclerotic lesions in mice by accumulated mTHPC is probably due to released mTHPC that subsequently binds to lipoproteins, particularly in low-density lipoprotein.LDL as an endogenous carrier can be rapidly transported across an intact endothelium and then ingested by macrophages in the form of oxidized LDL particles.Indeed, it has been reported that the association of photosensitizers with lipoproteins promotes selective accumulation into tumour tissues and atherosclerotic plaques, thus enhancing their therapeutic potential.The photosensitizer mTHPC was loaded in Ben-PCL-mPEG micelles with high efficiency and capacity and without any aggregation of the photosensitizer."mTHPC-loaded micelles' photo-cytotoxicity is induced by the degradation of the PCL block of Ben-PCL-mPEG micelles by lipases.In accordance to their higher lipase activity, RAW264.7 macrophages degrade the micelles faster and thus activate the photosensitizer earlier than C166 endothelial cells, thus creating a window for selective killing of the RAW264.7 macrophages."Despite the selective accumulation of mTHPC in atherosclerotic plaques of mice aorta's, likely due to the proposed binding to lipoproteins, translation from in vitro to in vivo of the beneficial 
selective macrophage photocytotoxicity was not possible due to the premature release from the micelles.Therefore, our current aim is to improve the stability of mTHPC-loaded PEG-PCL based micelles to fully take advantage of the macrophage selectivity and passive targeting to atherosclerotic plaques.If this aim is achieved, the Ben-PCL-mPEG micelles might be a very interesting delivery system with dual targeting effect, i.e. passive targeting to atherosclerotic lesions and faster degradation by macrophages and thus higher selectivity to PDT compared to healthy endothelial tissue.
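The circulation half-life reported in the preceding excerpt can be translated into an elimination rate under the usual assumption of first-order kinetics; the relation below is a generic pharmacokinetic identity added for clarity, not a model fitted in the study.

```latex
C(t) = C_{0}\, e^{-k t},
\qquad
k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{1.5\ \mathrm{h}} \approx 0.46\ \mathrm{h^{-1}}
```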
Selective elimination of macrophages by photodynamic therapy (PDT) is a new and promising therapeutic modality for the reduction of atherosclerotic plaques.m-Tetra(hydroxyphenyl)chlorin (mTHPC, or Temoporfin) may be suitable as photosensitizer for this application, as it is currently used in the clinic for cancer PDT.In the present study, mTHPC was encapsulated in polymeric micelles based on benzyl-poly(ε-caprolactone)-b-methoxy poly(ethylene glycol) (Ben-PCL-mPEG) using a film hydration method, with loading capacity of 17%.Because of higher lipase activity in RAW264.7 macrophages than in C166 endothelial cells, the former cells degraded the polymers faster, resulting in faster photosensitizer release and higher in vitro photocytotoxicity of mTHPC-loaded micelles in those macrophages.However, we observed release of mTHPC from the micelles in 30 min in blood plasma in vitro which explains the observed similar in vivo pharmacokinetics of the mTHPC micellar formulation and free mTHPC.Therefore, we could not translate the beneficial macrophage selectivity from in vitro to in vivo.Nevertheless, we observed accumulation of mTHPC in atherosclerotic lesions of mice aorta's which is probably the result of binding to lipoproteins upon release from the micelles.Therefore, future experiments will be dedicated to increase the stability and thus allow accumulation of intact mTHPC-loaded Ben-PCL-mPEG micelles to macrophages of atherosclerotic lesions.
they show a mixture of positive and negative responses, which is generally weaker than in the open ocean. In the permanently stratified seas we consider here, the impact of climate change on stratification is highly modulated by changes in circulation and overturning/mixing in its effect on nutrient resupply and primary production. These more dynamic processes are seen to be the leading-order effects in these cases. Hence, the global view of climate change causing a decrease in netPP must be reconsidered at a regional scale, particularly in the light of all of these regional models showing regions of increasing netPP. The interplay of the seasonal cycle and sea ice cover forms another important driver for the Arctic and sea ice covered regions, the Barents Sea and the northern Baltic Sea. The view we have provided here only considers a subset of the possible range of processes mediating climate impacts in regional seas. This is due to limitations of model resolution excluding some processes, only some regional seas being considered, and the depth of the analysis presented not being able to fully elucidate the role of some processes. We can see where the processes treated here fit in the broader picture by speculating on a wider range of possible effects and identifying those we have considered here and those excluded. This is summarised in Fig. 11, along with a hypothesised sign of change in netPP. This particularly identifies that the climatic impacts on mesoscale and near-coastal processes have been neglected by this study. Other neglected processes include changes to optical properties, effects of sea level rise on tides and changes to the nutrient concentrations in upwelling water. The resolution considered in these models is marginal for the consideration of many processes. This is particularly the case in near-coastal, frontal and shelf-slope regions of the ocean margins; in fact, where we would expect the primary production to be highest. Several physical processes show a ∼h^0.5 relationship between horizontal scale and water depth, i.e. a ∼10-fold decrease in scale from 4000 m to 40 m. Hence, a 1/10° regional model in shallow water is in some sense comparable to a 1° global model. The model domains used here are well established and so have not fully taken advantage of the continued growth in computer power over recent years in terms of their grid resolution, and there is now an opportunity to address these issues, accepting that increases in resolution must be balanced against the need for multiple process experiments, longer simulations and ensembles. As far as the ecosystem models are concerned, the parameterisation of temperature effects is an obvious area for potential development, e.g.
in ERSEM all groups share the same parameter value. Hence, all aspects of the system are changed by an equal amount and tend to cancel in terms of the annual production, but not the biomass. A more sophisticated approach, for example one that differentiates between autotrophic and heterotrophic processes, might be expected to give a larger direct temperature response. In terms of forcing, changes in the large-scale ocean circulation and details of the gyre structure potentially have important implications for the characteristics of the water transported on-shelf, which are not well treated by the coarse-resolution ocean model used for forcing here. Moreover, improved atmospheric resolution, either in the global model or through downscaling and fully coupled systems, offers the opportunity to represent the changes in drivers with more fidelity, particularly for wind-driven effects such as those in the Black Sea, the Baltic Sea and the North Atlantic storm track. The enclosed basins' connections with the wider ocean are at very narrow straits, which require high-resolution coupled basin models and high-resolution wind forcing to accurately simulate the processes driving these exchanges, and so capture events such as the 'Major Baltic Inflows' and how these might change. While the view of several competing processes acting with both positive and negative sign really needs to be considered on a case-by-case basis, some general principles can be identified. When there are multiple effects of different sign these will tend to mitigate the climate change impact, suggesting some regional seas will generally be less vulnerable to climate change effects than the open ocean. This can act both locally and spatially, i.e. advective and diffusive transport will tend to reduce effects across gradients of negative and positive impact. This will not be the case in enclosed regional seas, where a single dominant effect can have a large impact that is not mitigated by exchange with neighbouring regions of a different sign. Hence, we might expect the enclosed regional seas to be more highly impacted. This is indeed the case in these experiments for the Black Sea and Baltic Sea, in contrast to the response of the North Sea and Celtic Seas. In both the Black and Baltic Seas this impact is driven by changes in wind effects, leading to changes in circulation patterns in the former and upwelling rates in the latter, i.e. both highly dependent on the detailed conditions in the basin. Another consequence of multiple, competing processes is that uncertainties are enhanced. Simplistically, uncertainties for uncorrelated processes add in quadrature even if the effects are of different sign and cancel, i.e. the fractional uncertainty can substantially increase. This is compounded by the fact that many of the processes considered here relate to less well modelled, regionally specific aspects of the OAGCM forcing, such as details of the wind fields. Hence, the climate change signal in these is substantially less certain than, for example, changes in air temperature. Assessing and addressing this uncertainty remains an ongoing challenge.
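As a purely illustrative worked example of the quadrature point (the numbers below are invented and are not taken from the simulations), consider two uncorrelated drivers that change netPP by +10 and −8 units, each known to within ±3 units:

```latex
% Illustrative values only (not from the simulations):
% two uncorrelated drivers of netPP change, +10 +/- 3 and -8 +/- 3 (arbitrary units).
\[
\Delta_{\mathrm{net}} = \Delta_1 + \Delta_2 = +2, \qquad
\sigma_{\mathrm{net}} = \sqrt{\sigma_1^{2} + \sigma_2^{2}} = \sqrt{3^{2} + 3^{2}} \approx 4.2
\]
% Individually the fractional uncertainties are 30% and about 38%, but for the
% near-cancelling net change it is 4.2/2, roughly 210%: cancellation of opposing
% effects greatly inflates the relative uncertainty of the combined signal.
```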
Regional seas are potentially highly vulnerable to climate change, yet are the most directly societally important regions of the marine environment. The combination of widely varying conditions of mixing, forcing, geography (coastline and bathymetry) and exposure to the open ocean makes these seas subject to a wide range of physical processes that mediate how large-scale climate change impacts on these seas' ecosystems. In this paper we explore the response of five regional sea areas to potential future climate change, acting via atmospheric, oceanic and terrestrial vectors. These comprise the Barents Sea, Black Sea, Baltic Sea, North Sea and Celtic Seas, and are contrasted with a region of the Northeast Atlantic. Our aim is to elucidate the controlling dynamical processes and how these vary between and within these seas. We focus on primary production and consider the potential climatic impacts on: long-term changes in elemental budgets, seasonal and mesoscale processes that control phytoplankton's exposure to light and nutrients, and briefly the direct temperature response. We draw examples from the MEECE FP7 project and five regional model systems, each using a common global Earth System Model as forcing. We consider a common analysis approach and additional sensitivity experiments. Comparing projections for the end of the 21st century with mean present-day conditions, these simulations generally show an increase in seasonal and permanent stratification (where present). However, the first-order (low- and mid-latitude) effect seen in open-ocean projections, of increased permanent stratification leading to reduced nutrient levels and so to reduced primary production, is largely absent, except in the NE Atlantic. Even in the two highly stratified, deep-water seas we consider (Black and Baltic Seas), the increase in stratification is not seen as a first-order control on primary production. Instead, results show a highly heterogeneous picture of positive and negative change arising from complex combinations of multiple physical drivers, including changes in mixing, circulation and temperature, which act both locally and non-locally through advection.
Depression is the leading cause of disability worldwide.1, "The number of people living with depression increased by around 18% between 2005 and 2015, and depression affects 322 million people, or about 4% of the world's population.1",Pharmacotherapy and psychotherapy are the two mainstays of depression treatment.In particular, second-generation antidepressants, including selective serotonin reuptake inhibitors, are the first-line options in the pharmacological management of major depression.2,However, there is still uncertainty about the dose dependency and optimal target dose of second-generation agents.Current practice guidelines provide conflicting recommendations: the National Institute of Health and Care Excellence guideline in UK states that no dose dependency has been established within the therapeutic range of SSRIs,3 whereas the American Psychiatric Association guideline recommends titration up to the maximum tolerated dose: “Initial doses should be incrementally raised as tolerated until a therapeutic dose is reached…doses of antidepressant medications should be maximized, side effects permitting.”4,Systematic and comprehensive reviews of the literature examining dose dependency of antidepressants should clarify the issue and inform the guideline recommendations.Unfortunately, the available reviews are few and their conclusions disagree.5–7,Moreover, they addressed mainly dose-efficacy relationships and gave little attention to the balance between efficacy, tolerability, and overall acceptability of treatment.Evidence before this study,Second-generation antidepressants, including selective serotonin reuptake inhibitors, are the mainstay in the pharmacological management of major depression; however, current practice guidelines provide conflicting recommendations as to their optimum target dose.The National Institute of Health and Care Excellence guideline in the UK states that no dose dependency has been established within the therapeutic range of SSRIs, whereas the American Psychiatric Association guideline recommends titration up to the maximum tolerated dose.We searched for reviews that analysed dose-response relationships for second-generation antidepressants in PubMed using the search terms “depressive disorder”, “antidepressive agents, second-generation”, and “dose-response relationship, drug”, and in the references of the identified studies, up to March 21, 2019.We identified three systematic reviews: one concluded 21–40 mg fluoxetine equivalents provided maximum efficacy, another found 40–50 mg fluoxetine-equivalent dose category offered the greatest efficacy, and the third confirmed a linearly increasing dose-efficacy relationship from placebo, to low doses, to high doses of SSRIs.Only two studies examined dose dependency for adverse effects, and only one examined dose dependency for acceptability of treatment.Added value of this study,The current study is based on the largest and most comprehensive dataset of double-blind, randomised controlled trials, published and unpublished, that examined fixed doses of SSRIs, venlafaxine, or mirtazapine in the acute treatment of adults with major depression.Efficacy was dose dependent up to 20–40 mg fluoxetine equivalents for SSRIs, up to 75–150 mg for venlafaxine, and up to approximately 30 mg for mirtazapine.Above these limits, no further increase in efficacy for SSRIs or mirtazapine occurred, but there was a slight increase in efficacy for venlafaxine.There was clear dose dependency in dropouts due to adverse effects for all 
drugs.Consequently, the overall acceptability of treatments was optimal towards the lower end of the licensed range for SSRIs, venlafaxine, and mirtazapine.In this state-of-the art dose-response meta-analysis, dose was treated as a continuous variable, allowing greater resolution of change points and avoiding misleading categorisation of doses.Implications of all the available evidence,For the majority of patients receiving SSRIs, venlafaxine, or mirtazapine, the lower range of their licensed dose will probably achieve the optimal balance between efficacy, tolerability, and acceptability.This information should inform treatment guidelines and clinical decision making in routine clinical practice.We therefore did a dose-response meta-analysis of fixed-dose studies of commonly prescribed antidepressants for the treatment of adults with major depression,2 examining not only their efficacy, but also their tolerability and acceptability, to provide summative evidence to inform future guideline recommendations.We included double-blind, randomised controlled trials comparing antidepressants among themselves or with placebo as oral monotherapy for the acute-phase treatment of adults of both sexes, with a primary diagnosis of major depressive disorder according to standard operationalised diagnostic criteria.Trials of antidepressants for patients with depression and a serious concomitant physical illness were excluded.9,This study focused on the most frequently prescribed new-generation antidepressants in the UK according to Open Prescribing,10 namely five SSRIs, venlafaxine, and mirtazapine.The dataset was based on our 2016 network meta-analysis,9 which was based on searches of the Cochrane Central Register of Controlled Trials, CINAHL, Embase, LILACS, MEDLINE, MEDLINE In-Process, PsycINFO, AMED, the UK National Research Register, and PSYNDEX.We scrutinised reference lists of all relevant papers.We searched files of the national drug licensing agencies in six countries, the European Medicines Agency, and several trial registries for published, unpublished, and ongoing RCTs.We contacted all pharmaceutical companies marketing second-generation antidepressants and asked for supplemental unpublished information about their pre-marketing and post-marketing trials.We contacted the National Institute for Health and Care Excellence, the Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, and other relevant organisations and individuals for additional information not already identified.We used broad search terms for depression, and generic and commercial names of all antidepressants under review.We imposed no language restriction and the search was updated until Jan 8, 2016.The complete dataset from the above search is available at Mendeley.There was no indication of small study effects, including publication bias, in this dataset.2,The reporting of the study followed the PRISMA guidelines.8,The protocol is available on the institutional websites of Kyoto University and University of Oxford.To examine dose-dependency relationships, we included all trials that compared two or more fixed-dose treatment groups including placebo within a trial.We included treatment groups within and outside the licensed dose range according to international drug approval agencies.We evaluated the risk of bias
Second-generation antidepressants are the first-line option for pharmacological management of depression.We have aimed to summarise the currently available best evidence to inform this clinical question.Methods: We did a systematic review and dose-response meta-analysis of double-blind, randomised controlled trials that examined fixed doses of five selective serotonin reuptake inhibitors (SSRIs; citalopram, escitalopram, fluoxetine, paroxetine, and sertraline), venlafaxine, or mirtazapine in the acute treatment of adults (aged 18 years or older) with major depression, identified from the Cochrane Central Register of Controlled Trials, CINAHL, Embase, LILACS, MEDLINE, PsycINFO, AMED, PSYNDEX, websites of drug licensing agencies and pharmaceutical companies, and trial registries.We imposed no language restrictions, and the search was updated until Jan 8, 2016.Trials of antidepressants for patients with depression and a serious concomitant physical illness were excluded.Funding: Japan Society for the Promotion of Science, Swiss National Science Foundation, and National Institute for Health Research.
in generation of allocation sequence, allocation concealment, masking of study personnel and participants, masking of outcome assessor, attrition, and selective outcome reporting.Studies were classified as having low risk of bias if none of these domains was rated as high risk of bias and three or fewer were rated as unclear risk; moderate if one was rated as high risk of bias or none was rated as high risk of bias, but four or more were rated as unclear risk; and all other cases were assumed to have high risk of bias.9,At least two independent reviewers selected the studies, extracted data, and assessed risk of bias.We included the following outcomes after 8 weeks of treatment: treatment response, dropouts due to adverse effects, and all-cause dropouts.For treatment response, we prioritised the Hamilton Depression Rating Scale,11 then Montgomery-Åsberg Depression Rating Scale,12 and if neither was used, any other validated observer-rating scale.9,When this outcome was not reported, we calculated response using a validated imputation method.13,We set the number of patients who were randomly assigned as the denominator for all outcomes, assuming that patients lost to follow-up had dropped out without experiencing response or dropout due to adverse effects.Dose equivalence can be defined and calculated via several methods.14,One method assumes the optimum doses found in double-blind, flexible-dose trials to be equivalent.15,In the main analyses, we used the most recent and comprehensive review of dose equivalence of antidepressants based on this method.16,Previous studies on dose dependency of antidepressants used similar conversion algorithms.5,7, "Where no empirical data for dose conversion were available, we assumed the daily defined dose to be equivalent.Another method assumes the average prescribed doses in the real world for the indication to be roughly equivalent.For this purpose, we used the nationally representative Medical Expenditure Panel Survey in USA.18,19,The dose conversion algorithms are provided in table 1.We first estimated the dose dependency for the three primary outcomes by synthesising studies of all SSRIs."In this analysis we converted doses to fluoxetine equivalents using Hayasaka and colleagues'16 method, supplemented by the daily defined dose method.We fitted a single-stage, random-effects meta-analysis of dose-outcome model20 using the dosresmeta package in R.21 The approach estimates the association between the dose and the logarithm of risk ratio for each outcome within and across studies in a single model.We used flexible restricted cubic splines with knots at 10 mg, 20 mg, and 50 mg to have comparable numbers of studies in each quartile between placebo, knots, and the maximum dose.We also did separate analyses for individual SSRIs, and for venlafaxine and mirtazapine.We did the following sensitivity analyses to examine the robustness of the main findings: setting a different number of knots and at different doses; using the conversion algorithm in the previous study by Jakubovski and colleagues,7 or the conversion algorithm based on average doses actually prescribed for major depression;18,19 limiting the included studies to those at low risk of bias; and taking remission as the outcome.Remission was as per the original studies, which typically defined it as scoring seven or less on the Hamilton Depression Rating Scale or ten or less on the Montgomery-Åsberg Depression Rating Scale.The data and the analysis R code that generated the results and figures can be 
found online.The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.We identified 24 524 published records through electronic search, manual search, or personal communication, and 4030 unpublished records through industry and regulatory agency websites, contact with authors, and trial registries.561 published and 121 unpublished full-text records were assessed for eligibility, and we included 77 studies examining various fixed doses of the included drugs: 27 studies based on published articles only, 21 studies based on unpublished records only, and 29 based on both published and unpublished records.The 77 studies included 201 treatment groups: placebo, citalopram, escitalopram, fluoxetine, mirtazapine, paroxetine, sertraline, and venlafaxine.The study year ranged between 1986 and 2013.The full references and the characteristics of the included studies are presented in the appendix.The 77 studies included 19 364 participants.Their mean age was 42·5 years, and 7156 of 11 749 participants for whom gender was reported were women.The median length of trial was 8 weeks.Nine studies had five treatment groups, 29 had four treatment groups, 30 had three treatment groups, and nine had two treatment groups.40 were done in North America, 13 in Europe, and ten were cross-continental.18 took place in secondary or tertiary care, four in primary care, six in both primary and secondary care, and 48 studies did not specify their study settings.The results of the risk of bias assessment are provided in the appendix.Many domains were rated as unclear and the overall risk of bias was rated as low in 21 studies, moderate in 55, and high in one.The dose-outcome relationship for treatment response, dropout due to adverse effects, and dropout for any reason for all SSRIs after the dose equivalence conversion are presented in figure 2.The RR for efficacy gradually increased from 1·0 for placebo, to 1·24 for 20 mg, 1·27 for 40 mg, and then showed a flat to decreasing trend through the higher doses.Above 50 mg only a few doses were examined, resulting in less precise estimates with wider CIs in the upper dose range.The association between the dose and the dropouts due to adverse effects was linear to exponential,
Findings: 28 554 records were identified through our search (24 524 published and 4030 unpublished records).561 published and 121 unpublished full-text records were assessed for eligibility, and 77 studies were included (19 364 participants; mean age 42.5 years, SD 11.0; 7156 [60.9%] of 11 749 reported were women).These results were robust to several sensitivity analyses.
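As a rough illustration of the spline-based dose-response modelling described above, the following Python sketch fits a restricted cubic spline (knots at 10, 20, and 50 mg, as in the paper) to hypothetical study-level log risk ratios by ordinary least squares. It is only a simplified, fixed-effect analogue: the actual analysis was a one-stage random-effects model fitted with the dosresmeta package in R, and the dose/log-RR values below are invented for demonstration.

```python
# Simplified, fixed-effect sketch of a restricted-cubic-spline dose-response fit.
# Doses are in fluoxetine equivalents (mg/day); the log-RR values are hypothetical.
import numpy as np

def rcs_basis(dose, knots=(10.0, 20.0, 50.0)):
    """Restricted cubic spline basis (Harrell's parameterisation) with 3 knots:
    returns the linear term plus one non-linear term for each dose."""
    t1, t2, t3 = knots
    def pos3(v):
        return np.maximum(v, 0.0) ** 3
    nonlin = (pos3(dose - t1)
              - pos3(dose - t2) * (t3 - t1) / (t3 - t2)
              + pos3(dose - t3) * (t2 - t1) / (t3 - t2)) / (t3 - t1) ** 2
    return np.column_stack([dose, nonlin])

dose = np.array([0, 10, 20, 30, 40, 50, 60, 80], dtype=float)
log_rr = np.array([0.00, 0.15, 0.22, 0.24, 0.24, 0.23, 0.21, 0.20])  # hypothetical

X = rcs_basis(dose)                       # no intercept: curve anchored at placebo
beta, *_ = np.linalg.lstsq(X, log_rr, rcond=None)

grid = np.linspace(0, 80, 9)
rr_curve = np.exp(rcs_basis(grid) @ beta)
for d, rr in zip(grid, rr_curve):
    print(f"{d:5.1f} mg fluoxetine-equivalent -> predicted response RR {rr:4.2f}")
```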
increasing from 1·0 for placebo, to 1·94 at 40 mg, and to 3·73 at the upper limit of the licensed range.The association between the dose and the dropouts for any reason, reflecting dropouts for lack of efficacy and low tolerability, indicates optimal acceptability in the lower range between 20 mg and 40 mg.The point estimates and their 95% CIs of RRs and risk differences for response, dropouts due to adverse effects, and dropouts for any reason at 10–80 mg of fluoxetine equivalents of SSRIs are presented in table 2.The relationships between the dose and the three outcomes for each SSRI separately are presented in the appendix.The dose-outcome curves and their 95% CIs substantially overlapped, suggesting little heterogeneity among the SSRIs in their dose-outcome relationships.The dose-outcome relationships for venlafaxine and mirtazapine are presented in figures 3 and 4, respectively.The efficacy of venlafaxine increased fairly steeply up to around 75–150 mg and more modestly with higher doses, whereas the efficacy of mirtazapine increased up to a dose of 30 mg and then decreased.Dropouts due to adverse effects increased steeply with increasing doses for both drugs, resulting in a dose-acceptability curve that was convex at the lower licensed range, which approximately corresponded with 20–40 mg of fluoxetine equivalents in both cases.However, studies for each individual drug were few, and the 95% CIs of the spline curves remained wide.We examined various knots when drawing spline curves for the dose-outcome curves for SSRIs.All curves overlapped with our primary analyses.When we examined different dose-equivalence calculations among SSRIs or limited to low risk of bias studies, all the results were similar to the primary results.The dose-remission relationship showed the same tendencies as the dose-response for SSRIs, venlafaxine, and mirtazapine.We did a comprehensive dose-response meta-analysis of the most commonly prescribed antidepressants, based on the largest pool of antidepressant trials for major depression.2,For SSRIs, the probability of response increased up to doses between 20 mg and 40 mg of fluoxetine equivalents, with no further increase or even a slight decrease at higher doses within the licensed dose range up to 80 mg.Dropouts due to adverse effects showed a steep, linear to exponential, increase with increase in dose.Consequently, dropouts from all causes, including for lack of efficacy and for adverse effects, were lowest between 20 mg and 40 mg.Venlafaxine and mirtazapine showed slightly different dose-efficacy relationships; however, all drugs showed optimal acceptability towards the lower end of their licensed dose range.These results are in line with psychopharmacological investigations.Using PET scans, Meyer and colleagues22 showed that approximately 80% serotonin transporter occupancy occurs at minimum therapeutic doses of citalopram, fluoxetine, paroxetine, sertraline, or venlafaxine, and that further increments in dose resulted in only small increases in transporter occupancy: a hyperbolic relationship typical of ligand-receptor binding.23,Our data also suggest that increases in transporter occupancy above 80% do not result in greater treatment efficacy.Similarly, conventional antipsychotic drugs require dopamine D2 receptor occupancy of about 70% to achieve a therapeutic effect; greater D2 occupancy does not increase efficacy, but raises the incidence of side-effects.24,Venlafaxine is a serotonin and noradrenaline reuptake inhibitor; however, functional 
noradrenaline reuptake blockade might become apparent only at higher doses.25,This dual action might be responsible for the observed increase in efficacy with higher doses for venlafaxine.Mirtazapine has more complex psychopharmacological properties, and the precise mechanisms underpinning its antidepressant action are less well understood.Mirtazapine is thought to increase noradrenaline and serotonin release through antagonism of central α2-adrenergic autoreceptors and heteroreceptors.It also exhibits antagonism to some serotonin receptor subtypes, while overall increasing tonic activation of post-synaptic 5-HT1A receptors.26,Earlier reviews examining the clinical dose-efficacy relationship of antidepressants have produced variable results; some of this confusion might be due to arbitrary categorisation of doses and their inconsistent naming.Based on 33 studies of new-generation and older-generation antidepressants, Bollini and colleagues5 used four categories for dose and concluded that the response rate showed a gradual increase up to 21–40 mg, with no further increase for the two high-dose categories.The inclusion of older tricyclic antidepressants limits applicability of their review to modern practices.Hieronymus and colleagues6 analysed individual patient data from 11 trials of citalopram, paroxetine, and sertraline and found that “doses below or at the lower end of the recommended dose range were superior to placebo” and “inferior to higher doses”, suggesting a linearly increasing dose-efficacy relationship for SSRIs.Within their high-dose categories corresponding with citalopram 40–60 mg, paroxetine 20–40 mg, or sertraline 100–200 mg, they found no indication of dose dependency.However, their category of low dose included subtherapeutic doses as well as therapeutic doses, which might explain why the low-dose category did less well than the high-dose category in their analysis.In 2016, Jakubovski and colleagues7 found a statistically significantly greater response for high doses in a meta-regression analysis of 40 studies comparing SSRIs with placebo; an accompanying editorial27 concluded, “there is a modest but clear dose-response effect for SSRIs.,However, when the findings were categorised into less than 20 mg, 20–39 mg, 40–50 mg, and more than 50 mg fluoxetine equivalents, the authors found the 40–50 mg dose category offered the greatest efficacy.Moreover, a majority of their included studies were flexible-dose studies and they took the maximum of the flexible range as the dose representing the treatment groups.Our dose-dependency curves were based on models using splines, which treated dose as a continuous variable, and shed new light on the existing evidence.Our results for dose-efficacy relationships for SSRIs are largely in line with Bollini and colleagues.5, "Hieronymus and colleagues'6 observation that
For SSRIs (99 treatment groups), the dose-efficacy curve showed a gradual increase up to doses between 20 mg and 40 mg fluoxetine equivalents, and a flat to decreasing trend through the higher licensed doses up to 80 mg fluoxetine equivalents.The relationship between the dose and dropouts for any reason indicated optimal acceptability for the SSRIs in the lower licensed range between 20 mg and 40 mg fluoxetine equivalents.Venlafaxine (16 treatment groups) had an initially increasing dose-efficacy relationship up to around 75–150 mg, followed by a more modest increase, whereas for mirtazapine (11 treatment groups) efficacy increased up to a dose of about 30 mg and then decreased.
there was a stepwise increase in efficacy from placebo, to low dose, to higher dose of SSRIs might be considered compatible with our finding of dose response up to 20–40 mg of fluoxetine equivalents; however, their categorisation of doses did not capture the change points identified in our analyses. "Jakubovski and colleagues'7 conclusion that 40–50 mg of fluoxetine equivalents offered the greatest efficacy might be overstated if we take into account that most of their included studies used flexible-dose regimens and the prescribed doses could be significantly lower than 40–50 mg.Two studies examined the relationships between doses and side-effects.Bollini and colleagues5 found a monotonic increase in adverse event rates, from 20% to 50%, as the dose increased, from zero to more than 50 mg fluoxetine equivalents.Jakubovski and colleagues7 found a monotonic dose relationship for dropouts due to side-effects, but an almost flat relationship for all-cause dropouts.These findings might be confounded by the inclusion of both fixed-dose and flexible-dose studies.By contrast, by focusing on fixed-dose studies and using flexible spline curves, we found a linear to exponential relationship between dose and dropouts due to side-effects, and a curvilinear relationship between dose and all-cause dropouts for SSRIs.Our study is not without limitations.First, the findings mainly pertain to patients with major depression who were judged eligible for placebo-controlled trials.For patients who suffer from physical comorbidities or for older patients, the optimal dose might be lower than 20–40 mg fluoxetine equivalents.In this context, the positive dose-efficacy association through zero to 30 mg of fluoxetine equivalents is clinically relevant.Second, the fixed-dose regimen might be regarded as not reflecting clinical practice, especially when no or rapid titration scheme is used, and might overestimate withdrawal due to adverse effects.Tolerability might then confound efficacy because interventions with high dropouts are likely to show lower endpoint efficacy because the majority of patients leave the study early and therefore have less time to improve; however, strict examination of dose dependency is possible only with fixed-dose studies.Third, there is some uncertainty about how best to calculate dose equivalency among antidepressants.We used the most comprehensive empirically derived conversion algorithms,16 which were similar to the ones used in previous systematic reviews on this topic.5,7,We tested our results via sensitivity analyses that used alternative conversion algorithms and confirmed the primary results.Fourth, spline curves for individual drugs were based on a few studies, resulting in wide CIs.We therefore meta-analysed all SSRIs because they share a key therapeutic mechanism.The curves for individual SSRIs provided little evidence of heterogeneity.For venlafaxine and mirtazapine, the numbers of available trials were few and the resultant wide CIs, particularly regarding tolerability and acceptability, warn against overinterpreting the results.Fifth, we used dropouts for side-effects from the acute-phase treatment as an index of tolerability: these numbers do not necessarily reflect rare but severe adverse events, nor do they include long-term side-effects, including withdrawal symptoms.Lastly, the search date for the relevant studies could be considered old; however, an update search for eligible trials in PubMed on March 21, 2019 revealed only one additional eligible study with a small 
sample size,28 which would not change the results of our analysis with 19 364 patients.Our study has a number of strengths.First, we used state-of-the art dose-response meta-analysis, and treated dose as a continuous variable, allowing greater resolution of change points and avoiding misleading categorisation of doses.Second, we examined dose dependency not only for efficacy but also for tolerability and acceptability.Third, our study is based on the largest and most comprehensive dataset of fixed-dose, double-blind, randomised controlled trials, published or unpublished, of second-generation antidepressants in the acute-phase treatment of major depression.The included numbers of studies and participants were two to eight times larger than the previous analyses.We analysed not only SSRIs, but also two widely prescribed non-SSRI antidepressants, venlafaxine and mirtazapine.Our findings will have clinical implications, especially for practitioners or countries where fluoxetine is routinely prescribed at doses of 40 mg or more.In conclusion, our analyses showed dose dependency in efficacy up to around 20–40 mg fluoxetine equivalents, beyond which there is no further increase in efficacy for SSRIs or mirtazapine, but a possibly slight increase for venlafaxine; a clear dose dependency in dropouts due to adverse effects for all drugs through the examined dose range; and the overall acceptability of treatments appears to be optimal towards the lower end of the licensed range.We therefore conclude that for the majority of patients receiving an SSRI, venlafaxine, or mirtazapine for the acute-phase treatment of their major depressive episode, the lower range of the licensed dose will probably achieve the optimal balance between efficacy, tolerability, and acceptability.Clinical guidelines need to incorporate these findings.
Background: Depression is the single largest contributor to non-fatal health loss worldwide. Optimising the use of antidepressants is crucial in reducing the burden of depression; however, debate about their dose dependency and their optimal target dose is ongoing. Doses of SSRIs were converted to fluoxetine equivalents. The main outcomes were efficacy (treatment response defined as 50% or greater reduction in depression severity), tolerability (dropouts due to adverse effects), and acceptability (dropouts for any reason), all after a median of 8 weeks of treatment (range 4–12 weeks). We used a random-effects, dose-response meta-analysis model with flexible splines for SSRIs, venlafaxine, and mirtazapine. Dropouts due to adverse effects increased steeply through the examined range. Both venlafaxine and mirtazapine showed optimal acceptability in the lower range of their licensed dose. Interpretation: For the most commonly used second-generation antidepressants, the lower range of the licensed dose achieves the optimal balance between efficacy, tolerability, and acceptability in the acute treatment of major depression.
Although it is generally assumed that farmers in rural areas of developing countries are risk averse, little is known about the actual form of their risk preferences. When economists attempt to measure risk preferences, they typically assume that risk preferences follow the constant relative risk aversion (CRRA) utility function (see Delavande et al. or Hurley for recent reviews of the literature). However, the consequences of simply making this assumption without testing it are unclear. Few studies actually test risk preferences in the field without making the CRRA assumption. An important exception is Holt and Laury, who consider a more flexible parameterization of the utility function, although they do so in a laboratory experiment setting. Furthermore, it is likely that risk preferences among farmers in developing countries are important constraints that keep farmers from reaching their productive potential. Smallholders in developing countries face risk at several points in the production process. Dercon and Christiaensen explicitly show that Ethiopian farmers are constrained in technology adoption by risk. Furthermore, Boucher et al. argue theoretically that a class of farmers is risk-rationed in Peru; that is, due to risk, some farmers will not try to access the formal credit market, even if it would raise their productivity and income levels. Overcoming such risk-related barriers, then, could help farmers in developing countries improve their livelihoods along several dimensions. Understanding the heterogeneity of risk preferences and the implications of making specific assumptions about the form of risk preferences may have consequences as programs are designed to help farmers in developing countries overcome several different potential sources of risk. Several impact evaluations have recently been conducted on pilot projects related to weather insurance, with mixed success. Cole et al.
test the importance of the insurance contract price on take up in India by randomizing price offers, and find that average take up in participating villages is around 25%, though almost no one takes up insurance in neighboring villages that did not receive a visit from insurance agents. Hill and Robles find similar take up in a pilot project in southern Ethiopia that offered small amounts of insurance, rather than attempting to insure the farmer's entire production. Additional information about the type and distribution of risk preferences among farmers might be important in informing the design of weather insurance contracts, to improve take up. In this paper, we use experimental data collected in rural Mozambique to elicit risk preferences of farmers participating in an agricultural program that promoted orange-fleshed sweet potatoes (OFSP). The experiment to elicit risk preferences was framed around the adoption of sweet potato varieties and consisted of presenting a menu of ordered lottery choices over hypothetical gains to the farmers. The data were collected in the final survey of a randomized evaluation designed to evaluate an intervention that provided farmers with OFSP vines, information about how to grow OFSP, and the relative nutritional benefits of consuming orange rather than white sweet potatoes, particularly for women of child-bearing age and children under five years old. One unique aspect of the experiment is that it was conducted separately with both the household head and spouse when both were present. It was therefore conducted with 682 farmers from a total of 439 households. Within households in which both head and spouse were present, we examine the correlation between the husband's and the wife's preferences. We use the data to consider and test several models of risk preferences against one another. We initially compare two contending models of choice under uncertainty, Expected Utility Theory (EUT) and Rank Dependent Utility (RDU). Quiggin proposed a Rank Dependent Utility framework that can be considered a generalization of EUT. Under RDU, subjective probabilities are not constrained to be equal to objective probabilities, as in EUT. Instead, agents are allowed to make their choices under uncertainty according to a nonlinear probability weighting function.1 We then consider a general class of value functions that explicitly allows for variation in relative risk aversion, relaxing the assumption of constant relative risk aversion that is often made in the literature. Our primary contribution to the literature is that we use data collected in a lab-in-the-field experiment to nest different potential models of risk preferences, and then we develop and test these models against one another. We are also able to examine risk preferences among the head and the spouse, and to consider whether they predict one another's risk preferences within this hypothetical context. We further construct a model that allows for heterogeneity in the theoretical basis for risk preferences; namely, EUT or RDU. Our experiment is related to the lab experiment conducted by Andersen et al., who elicit both risk preferences and subjective probabilities among 150 subjects, using real payoffs. In general, our finding is relatively consistent with both Kahneman and Tversky and Andersen et al.; we find that RDU dominates EUT, and we generally reject the hypothesis of CRRA, regardless of the form of preferences. We then show the magnitude of errors that take place if one assumes CRRA
preferences.We find that farmers who are less risk averse are more susceptible to mischaracterization under the CRRA assumption than more risk averse farmers, based on the results of our model.Furthermore, we find that the risk premium implied by RDU is substantially higher than that of EUT, suggesting that one explanation for low take ups of rainfall insurance in developing countries may be a mischaracterization of risk preferences.The paper proceeds as follows.The next section will discuss the literature on the measurement of risk preferences, both in the laboratory and in field experiments.The third section describes the
Although farmers in developing countries are generally thought to be risk averse, little is known about the actual form of their risk preferences.In this paper, we use a relatively large lab-in-the-field experiment to explore risk preferences related to sweet potato production among a sample of farmers in northern Mozambique.A unique feature of this experiment is that it includes a large subsample of husband and wife pairs.After exploring correlations between husband and wife preferences, we explicitly test whether preferences follow the constant relative risk aversion (CRRA) utility function, and whether farmers follow expected utility theory or rank dependent utility theory in generating their preferences.If we make the common CRRA assumption in our sample, we poorly predict risk preferences among those who are less risk averse.
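To make the distinction between the two decision models concrete, the sketch below evaluates a single binary lottery under expected utility with CRRA and under rank-dependent utility with a nonlinear probability weighting function. The weighting-function form (the Tversky–Kahneman inverse-S form) and all parameter values and payoffs are illustrative assumptions, not the specification or estimates used in the paper.

```python
# Illustrative sketch of EUT (CRRA) versus RDU for a binary lottery.
# Parameters r, gamma and the payoffs are hypothetical, for demonstration only.
import math

def crra_utility(x, r):
    """CRRA utility; r is the coefficient of relative risk aversion."""
    return math.log(x) if abs(r - 1.0) < 1e-9 else x ** (1.0 - r) / (1.0 - r)

def prob_weight(p, gamma):
    """Inverse-S probability weighting function (Tversky & Kahneman form)."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

def eut_value(p_high, x_high, x_low, r):
    """Expected utility: objective probabilities weight the utilities."""
    return p_high * crra_utility(x_high, r) + (1.0 - p_high) * crra_utility(x_low, r)

def rdu_value(p_high, x_high, x_low, r, gamma):
    """Rank-dependent utility: the decision weight on the better outcome is a
    transformed probability, so RDU nests EUT as the special case gamma = 1."""
    w = prob_weight(p_high, gamma)
    return w * crra_utility(x_high, r) + (1.0 - w) * crra_utility(x_low, r)

# A 50/50 lottery over two hypothetical harvest values.
p, hi, lo = 0.5, 400.0, 100.0
print("EUT value:", eut_value(p, hi, lo, r=0.5))
print("RDU value:", rdu_value(p, hi, lo, r=0.5, gamma=0.7))
```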
association with the strands.Close apposition of targeted mitochondria to ER domains was also seen by super-resolution microscopy.SIM imaging of cells treated with IVM and stained for ATG13, WIPI2, mitochondria, and ER showed that the engulfed mitochondrial fragments were encased within ER strands where the autophagy and mitophagy machinery was also assembled.To address the possibility that such structures were formed only when mitophagy proceeded to completion, we used HEK293 cells lacking ATG13.In these cells, FIP200 and ubiquitin still respond to IVM but downstream events are blocked.SIM analysis of those cells showed that FIP200 and NDP52 were still capable of translocating to ubiquitin-marked mitochondrial fragments associating with ER, although these structures did not appear fully formed.We obtained similar results with all of the KO lines that responded either partially or fully to IVM: the formed structures were associated with fragmented mitochondria.Examples of tight association between the ER and forming mitophagosomes are evident in some previous publications using Parkin overexpression, suggesting a generalized ER involvement for mitophagy.To explore this further, we used live imaging after OA treatment.Under this alternative mitophagy induction, we showed that ATG13 structures translocated and rotated on mitochondrial fragments akin to the IVM response.Importantly, translocation of ATG13 to forming mitophagosomes was on ER regions and the full engulfment by ATG13 was on fragments encased by the ER.These results suggest that the basic characteristics of the mitophagy pathway we have described are maintained across more than one induction protocol.To examine the spatial relationship between mitophagosomes and the ER at higher resolution, we used a combination of live microscopy followed by EM first described for our autophagy work.Cells expressing ATG13 and a mitochondrial marker were treated with IVM during live imaging, fixed on stage, and prepared for EM serial sectioning or tomography.We found examples where ATG13 structures surrounding mitochondria were recognized after EM preparation in the form of double membrane phagophores.The double membrane phagophore surrounded the mitochondrion very tightly.In this particular example, three separate phagophores are evident, and interestingly, the space devoid of phagophores was tightly occupied by a membrane cistern resembling ER.A reconstruction of this event shows a very tight association between the ER, the phagophore, and the targeted mitochondrion.Additional examples of the engulfment process are shown in Figure S7.In addition to proximity to the ER and phagophore, other vesicles can be seen in the surrounding region, and in one example, a mitochondrion is targeted by ER strands without the double-membrane phagophore being visible.This is a particularly informative event because the ATG13-positive structure was very early based on live imaging, and it is likely that a double-membrane phagophore had not yet formed.Another such example is shown in Figures 6M–6P in more detail.The ATG13 particle formed around the mitochondrial fragment was clearly identified in the live imaging, but no phagophore was apparent in the EM tomogram.Instead, the targeted mitochondrion was surrounded by ER strands and a few vesicles.We hypothesize that this may be one of the earliest visible intermediates in the engulfment process where elements of the early autophagy machinery, such as proteins of the ULK complex and ATG9 vesicles, associate with the 
targeted mitochondrion and with the ER before a proper phagophore begins to form.The last few frames are a reconstruction of a bigger area showing the various components in sequence of appearance.With respect to the ubiquitination step, super-resolution microscopy revealed a close association of the ubiquitin signal with the targeted mitochondria and with both autophagy and mitophagy components.The ubiquitin layer on the targeted mitochondria was also in tight apposition to the ER.The temporal relationship of the ubiquitinated mitochondrial fragments with the ER was complicated.When fragments first became ubiquitinated, they interacted with ER strands but were not encased by them.Several minutes later, the ubiquitin and the ER signal overlapped significantly, with the ER surrounding the ubiquitinated structures.Once such an overlap was established, it lasted for over 10 min, and it was evident as the mitochondrial fragments moved around the cell.Association of ubiquitinated mitochondria with the ER could take place in the presence of a mitophagy signal but in the absence of functional autophagy and mitophagy machineries: MEFs deleted for FIP200 and treated with BX-795, the TBK1 inhibitor, still showed that the ubiquitinated mitochondria were encased in ER strands.Longer IVM treatments produced more examples of ER-encased ubiquitinated structures, consistent with the rest of our work showing that the interaction with the ER is a relatively later step after ubiquitination.During ubiquitin-dependent mitophagy, several protein complexes participate in the targeting of damaged mitochondria for degradation: ubiquitination machinery, receptors that recognize damaged mitochondria, and the autophagic machinery creating the double membrane to engulf them.How these machineries are coordinated is unknown.An additional unknown is whether this process depends on a pre-existing membrane, as is the case for the ER during non-selective autophagy, or is it exclusively triggered on the surface of the damaged mitochondria.In this work, we have provided an integrated view of the sequence of steps involved in making a mitophagosome together with the dynamics of the pathway as seen by live imaging.Mechanistically, IVM reduces oxygen consumption in agreement with previous work and fragments mitochondria prior to the induction of mitophagy.Following fragmentation, mitochondria become ubiquitinated, and this was inhibited by inactivation of the DNM1L GTPase that is responsible for mitochondrial fission.Ubiquitination does not involve the PINK1/Parkin proteins but depends on the ubiquitin E3 ligases CIAP1, CIAP2, and TRAF2.These proteins are frequently found in complex and have been linked to some form of autophagy in the
Also unknown is whether mitophagy depends on pre-existing membranes or is triggered on the surface of damaged mitochondria.Targeted ubiquitinated mitochondria are cradled by endoplasmic reticulum (ER) strands even without functional autophagy machinery and mitophagy adaptors.We propose that damaged mitochondria are ubiquitinated and dynamically encased in ER strands, providing platforms for formation of the mitophagosomes.
Hydrocarbon-based fuels are the primary resource for the world's energy supplies. With increasing demand for global energy, however, there has been rapid growth in the field of energy development with a view towards the generation of an alternative energy source to provide global sustainability. Extensive focus is now being given to alternative sources such as wind, biomass, tidal, nuclear, solar, CO2 recycling and H2 production. Research into photocatalytic water splitting technology has become abundant, with emphasis on both catalyst and system design. Systematically, the concept of clean-energy-driven, large-scale photocatalytic H2O splitting relies on two factors: the efficient delivery of natural photons and the transfer of target species to photocatalyst surfaces. Despite the simplistic concept of photocatalytically driven H2 production, there are limited publications that investigate reactor engineering for increased H2 production. A number of more recent publications have focused on pure water splitting in which a two-compartment or 'H-shaped' reactor is used in conjunction with a membrane to yield separate production streams of H2 and O2. This approach generally adopts a Z-scheme process, which utilises two photocatalysts to overcome the stringent requirements faced by a single catalyst. The single-catalyst approach typically utilises a one-compartment reactor and focuses on increasing hydrogen production through maximising mass transfer and light distribution. In view of this there are a number of advantages in developing a fluidised photocatalytic system, including improved mass transfer, excellent irradiation distribution and a high photocatalyst surface-area-to-volume ratio. Fluidisation is achieved as a result of the gravitational force of the catalyst particles becoming equal to the aerodynamic drag. Traditional methods of fluidisation utilise an upward stream of gas or fluid; however, previous examples have reported the use of external fields such as sound- and magnetic-assisted fluidisation methods. Examples of fluidised photocatalytic reactors have been reported using a range of model compounds including toluene, trichloroethylene, formaldehyde, NOx and oxalic acid. There are a limited number of fluidised systems presented in the literature for water splitting or water reduction; however, in 2012 Reilly and colleagues reported the construction of a UV-irradiated fluidised bed for H2 production. The design concept of the reactor employed a low-power internal illumination source in a recirculating, annular-style fluidised system, which produced an optimum H2 rate of 198 μmol h−1 over Pt-loaded TiO2. At the critical point of fluidisation, the light penetration of a system can be increased as the dense particle bed becomes suspended and shows fluidic characteristics. In an attempt to utilise light efficiently, previous studies have used internal illumination sources, 'light guiding' particles and optical fibres that can direct light deeper into the catalyst bed. Kuo and colleagues found that the addition of SiO2 particles to their system increased light scattering and thus toluene removal. In this investigation, a propeller-stirred fluidised system is adopted which creates cavitation of the liquid reaction medium and thus maximises light penetration. Irradiation characteristics must be carefully considered in any photocatalytic system, particularly in a fluidised system, to ensure a more efficient method for future scale-up. Moreover, if photocatalytic production of H2 is to
be utilised as an alternative fuel, the development of photocatalytic reactors must also focus towards systems which are capable of utilising solar radiation. A common approach used in a number of the systems reported, however, involves high-power lamps, which are positioned above a reactor to provide illumination through a quartz window, typically with an area of ∼15 cm2. These systems utilise medium- and high-power lamps in the range of 300–450 W. These lamps have been used in conjunction with novel catalysts to produce a range of H2 production yields. In 2012, Liu and Syu reported a production rate of 14.9 μmol g−1 h−1 of H2 under irradiation from a 450 W xenon lamp. Increased rates of H2 have been shown by Khan and Qureshi and Jeong et al. Khan and Qureshi produced H2 at a rate of 180 μmol/h over BaZr0.96Ta0.04O3 under irradiation from a 300 W xenon lamp. While there are numerous examples of systems that use high-power lamps in the visible region, examples of reactors being deployed for H2 production under natural irradiation are limited. Previous publications have used compound parabolic concentrator (CPC)-based units to generate H2 from H2O, activated sludge and H2S. The system presented here utilises low-power artificial irradiation sources that are easily scalable in an array structure. An alternative approach to CPC designs was adopted by utilising a solar telescope to concentrate natural illumination. This paper reports the development of a novel fluidised photo reactor suitable for use with UV–Visible and natural solar illumination. The performance of the reactor has been evaluated through monitoring the evolution of H2 over two photocatalysts, Pt-C3N4 and NaTaO3·La. H2 production using sustainable sources is a desirable approach to achieving clean fuel production. The application of low-powered visible lamps was examined with a view towards incorporation into large-scale array structures. The feasibility of solar-driven photocatalysis for H2 production was demonstrated using the George Ellery Hale Telescope. This investigation aimed to demonstrate that photo reactor properties can significantly enhance the activity of a photocatalytic system, which can aid the applicability of photocatalytic technology for industrial applications. The catalysts graphitic carbon nitride, platinised carbon nitride and lanthanum-doped NaTaO3 were all synthesised by collaboration partners for this study. The full details of the catalyst development along with the methodology can be found in the following publications. Catalyst g-C3N4 was synthesized via solid state synthesis by thermal condensation of
With advancements in the development of visible light responsive catalysts for H2 production frequently being reported, photocatalytic water splitting has become an attractive method as a potential 'solar fuel generator'.This is particularly important as many reactor configurations are mass transport limited, which in turn limits the efficiency of more effective photocatalysts in larger scale applications.This paper describes the performance of a novel fluidised photo reactor for the production of H2 over two catalysts under UV-Visible light and natural solar illumination.
TiO2 catalyst developed by Reilly et al. was restricted to UV light.If photocatalytic production of H2 is to be utilised as an alternative fuel it is essential that the development of photocatalytic reactors focuses towards systems which are capable of converting solar energy.Solar driven photocatalysis has been reported in the literature primarily for the applications of water detoxification , CO2 reduction and H2 production .To date, however, there are no examples of solar H2 production from the use of a solar telescope to concentrate the stream of photons.Jing et al. discussed a number of parameters which restrict the application of solar driven photocatalysis including the inability to efficiently utilise natural light.One such limitation of solar driven photocatalysis is the requirement to first concentrate the stream of photons, which can result in a change in temperature and subsequently a change in pressure.In using the solar telescope these limitations are eliminated which prevents a thermal reaction taking place over that of a photocatalytic one.Solar H2 production over NaTaO3·La in the PFPR is shown in Table 4.Under entirely natural solar light a total concentration of 4.6 μmol H2 was generated which gave a total production rate of 8 μmol h−1 g−1.The evolution of H2 from the corrosion of the propeller was also monitored simultaneously in a second PFPR unit and was found to be lower at 1.7 μmol, 63.0% lower than when NaTaO3·La was present.Therefore, a calculated photocatalytic H2 evolution rate of 5 μmol h−1 g−1 was achieved under concentrated solar illumination.The total production of H2 over NaTaO3·La and the photocatalytic production of H2 under UV–Visible and natural solar light are also summarised in Table 3.While H2 production is lower under solar light, the results remain promising.In order for photocatalysis to progress there is a need for both solar activated materials and reactors that can maximise their efficiency.The PFPR quartz body provided a large surface area for solar irradiation whilst the propeller increased mass transfer to increase the activity of the catalyst.While the production of H2 under solar illumination was lower than under artificial light, there are factors which must be considered; the electromagnetic spectrum of natural and artificial light, the photon delivery system of the solar irradiation source and variable nature of solar irradiation itself.The solar telescope used in this preliminary investigation continually monitored and tracked sun light during the day and concentrated it through a series of gold plated mirrors and illumination shafts.This allowed for a concentrated stream of solar photons to be guided to the tubular body of the PFPR.It has been reported by Jing et al. 
(2009) that the concentration of solar light is essential, stating that 'a solar reactor cannot be designed without first concentrating natural sunlight in an attempt to enhance the intensity'. As solar light has a broad electromagnetic spectrum in comparison to the narrow range emitted by the high-power lamps typically adopted in photocatalytic systems, intensification of the illumination source is essential. The concentration of natural light in this case was achieved while maintaining atmospheric conditions and room temperature, which ensured that the production of hydrogen was photocatalytic and not thermally activated. While the mirrors and shafts of the solar telescope concentrated the irradiation stream, the light path was subject to contaminants. Bickley et al. stated that the number of particles and contaminants between the source of exposure and the catalyst should be kept to a minimum. Under UV–Visible light, the source of illumination is typically immersed or externally positioned in close proximity to the reactor, which prevents contaminants from entering the light path. In comparison, the light path of the George Ellery Hale Telescope is approximately 40 ft and is exposed to various environmental contaminants. The weather dependency and variable nature of solar light as an irradiation source is also a key factor. Cloud coverage and unexpected weather can reduce and restrict the delivery of light through the telescope, which can result in dark periods that reduce the photocatalytic efficiency. In summary, the production of H2 over Pt-C3N4 and NaTaO3·La under UV–Visible light and natural solar irradiation was achieved in a novel fluidised photo reactor. The reactor properties were also shown to enhance the photocatalytic activity of the system, with an increase from 24 μmol h−1 g−1 at 0 rpm to 89 μmol h−1 g−1 at 1035 rpm. At increased speeds of 1384 and 1730 rpm, the rate of H2 production decreased, indicating that mass transport limitations were no longer the predominant constraint at speeds greater than 1035 rpm. A limitation of the current design of the PFPR is the uncontrolled production of H2 as a result of stainless steel corrosion by oxalic acid. The corrosion was localised to the propeller and was reduced by operating the propeller at 1035 rpm and applying a PTFE coating to the surface. The evolution of H2 over NaTaO3·La under solar illumination was also achieved, with a production rate of 5 μmol h−1 g−1. These initial results demonstrate the potential of solar water reduction for H2 production from a sustainable source. This is a key area of focus for current ongoing work.
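The background correction behind the reported solar rates is simple arithmetic; the sketch below (Python) makes it explicit. The catalyst mass and run duration are not given in this excerpt, so the values used are hypothetical and chosen only so that their product reproduces the reported 8 and 5 μmol h−1 g−1 figures.

```python
def photocatalytic_rate(total_umol, background_umol, catalyst_mass_g, duration_h):
    """Background-corrected H2 evolution rate in umol h^-1 g^-1."""
    corrected_umol = total_umol - background_umol
    return corrected_umol / (catalyst_mass_g * duration_h)

# Reported values: 4.6 umol H2 in total, 1.7 umol from propeller corrosion alone.
# The reported rates (8 and 5 umol h^-1 g^-1) imply mass x time ~= 0.58 g.h;
# the split below is hypothetical and chosen only to reproduce that product.
print(photocatalytic_rate(4.6, 1.7, catalyst_mass_g=0.2, duration_h=2.875))  # ~5 umol h^-1 g^-1
```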
The development of novel photo reactors which can enhance the potential of such catalysts, however, is rarely reported. Catalysts Pt-C3N4 and NaTaO3·La were dispersed in the reactor and the rate of H2 evolution was determined by GC-TCD analysis of the gas headspace. Reactor properties such as propeller rotational speed were found to enhance the photoactivity of the system by eliminating mass transport limitations and increasing light penetration. The optimum conditions for H2 evolution were found to be a propeller rotational speed of 1035 rpm and 144 W of UV–Visible irradiation, which produced a rate of 89 μmol h−1 g−1 over Pt-C3N4.
Generative Adversarial Networks have made data generation possible in various use cases, but in the case of complex, high-dimensional distributions they can be difficult to train because of convergence problems and the appearance of mode collapse. Sliced Wasserstein GANs, and especially the application of the Max-Sliced Wasserstein distance, made it possible to approximate the Wasserstein distance during training in an efficient and stable way and helped ease the convergence problems of these architectures. This method transforms sample assignment and distance calculation into sorting the one-dimensional projections of the samples, which results in a sufficient approximation of the high-dimensional Wasserstein distance. In this paper we demonstrate that approximating the Wasserstein distance by sorting the samples is not always the optimal approach, and that greedy assignment of the real and fake samples can result in faster convergence and a better approximation of the original distribution.
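To make the two assignment strategies concrete, the following minimal NumPy sketch contrasts the rank-based (sorting) estimate of the one-dimensional Wasserstein distance with a greedy nearest-match assignment on the same projected samples. It is an illustration of the idea only, with made-up data and function names, not the training procedure used in the paper.

```python
import numpy as np

def sliced_w1_sorted(real_proj, fake_proj):
    # Classic sliced estimate: sort both 1-D projections and pair samples by rank.
    return np.mean(np.abs(np.sort(real_proj) - np.sort(fake_proj)))

def sliced_w1_greedy(real_proj, fake_proj):
    # Greedy alternative: each fake sample is matched to the nearest unused real sample.
    real = list(np.sort(real_proj))
    cost = 0.0
    for f in fake_proj:  # the visiting order of fake samples matters for a greedy scheme
        i = int(np.argmin([abs(f - r) for r in real]))
        cost += abs(f - real.pop(i))
    return cost / len(fake_proj)

rng = np.random.default_rng(0)
theta = rng.normal(size=64)
theta /= np.linalg.norm(theta)                      # random projection direction
real = rng.normal(size=(256, 64)) @ theta           # projected "real" samples
fake = rng.normal(loc=0.5, size=(256, 64)) @ theta  # projected "fake" samples
print(sliced_w1_sorted(real, fake), sliced_w1_greedy(real, fake))
```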
We apply a greedy assignment to the projected samples, instead of sorting, to approximate the Wasserstein distance.
values in all three cases. The highest pixel values are present in case B, where the reflected sun-disk is visible on the luminaire with a maximum luminance of ≈850,000 cd m−2. The glare assessment of the three cases considers not only the opaque surfaces of the office interior, but also the sky as visible through the fenestration systems. Reflections by specular surfaces such as the luminaire in the suspended ceiling can contribute to glare if they reflect directional light. The results of the evaluation employing evalglare are listed in Table 8. Detected glare sources are colored in Fig. 19. Since a task area cannot be defined for the given view, a fixed threshold of 2000 cd m−2 was applied to the pixel values to identify glare sources, and a threshold of 1,000,000 cd m−2 to extract peaks such as reflections of the sun disk. Discomfort glare as predicted by the DGP is extremely high for case B. The predicted value of 0.896 exceeds the defined range of the metric and is clearly above the upper limit of tolerable glare, defined as 0.45. For cases A and C, the computed DGPs are in the valid range of 0.2–0.8. The prediction for case A is below, and that for case C just above, the threshold of 0.35 for perceptible glare. According to the DGP, good visual comfort conditions in terms of discomfort glare are maintained by both the diffuse and the retro-reflective coating, but the latter is preferable. The assessment based on the DGI contradicts the predictions of the DGP metric. The results for cases A and C are in the acceptable range of 22–24, with case C achieving a marginally better result. Case B is clearly higher and must be considered intolerable according to this metric. A novel extension to a scanning gonio-photometer for the measurement of retro-reflection has been developed. The applicability and validity of the approach, which employs two beam-splitters to compensate for wavelength-dependent transmission and reflection properties, were demonstrated. Based on these initial tests, a fully functional setup shall be developed that reduces error due to misalignment compared with the presented prototype. The evaluated coating achieves a highly directional, retro-reflective effect. This property is confirmed in both evaluated wavelength ranges (Vis and NIR) and over a wide range of incident directions θi = 5–70°. Compiled from the measured BSDF, the data-driven reflection model in Radiance is capable of accurately replicating all characteristic features of the sample. Since Radiance implements an advanced algorithm for interpolation but has no means to extrapolate, the applicability of the model is limited to the range of measured incident directions. Based on the results of this work, the presented apparatus to measure retro-reflection shall be modified so that a wider range of incident directions can be covered. Yet the measurement of reflection for incident directions close to grazing is inherently limited. The implementation of an extrapolation algorithm that predicts peaks either in the forward mirror direction or in the direction of ideal retro-reflection, based on a given set of measurements, remains a challenge to overcome the limitations of the data-driven model. Since the retro-reflective effect is achieved independently of the profile geometry, the coating allows the development of Venetian blinds with low profile height. Effective sun-shading could be demonstrated in comparison with diffuse and specular blinds, even with flat, horizontal slats. Since most incident sunlight is directionally reflected toward the outside, visible light is blocked and solar
gains are minimized. The application of the coating in future Venetian blind assemblies promises to achieve high performance as a sun-shading device while maintaining a view to the outside. This study focuses on the effect of the coating rather than the performance of a complete fenestration system. Further research shall assess the performance of the coating when applied in realistic cases with optimized blind profiles. Different strategies to set the inclination angle of the blinds to maximize view-through and daylight supply shall be tested over extended time-spans employing CBDM. This research was supported by the doctoral funding scheme of Lucerne University of Applied Sciences and Arts, by the Swiss Commission for Technology and Innovation CTI within the SCCER FEEB&D, and by the Swiss Federal Office of Energy SFOE as part of the project "High Resolution Complex Glazing Library". Sole responsibility for content and conclusions lies with the authors. Pellini S.p.A., as a collaborator within the BIMSOL project, provided the sample and funding to cover the measurements.
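As a side note to the glare evaluation above, the DGP banding applied there can be expressed as a small Python helper using only the thresholds quoted in the text (0.35 for perceptible glare, 0.45 as the upper limit of tolerable glare, validity range 0.2–0.8). The case A and C values in the demo call are illustrative placeholders, since their exact DGPs are not restated here.

```python
def classify_dgp(dgp):
    """Coarse banding of Daylight Glare Probability using the thresholds quoted above."""
    note = "" if 0.2 <= dgp <= 0.8 else " (outside the metric's defined 0.2-0.8 range)"
    if dgp < 0.35:
        band = "below the threshold of perceptible glare"
    elif dgp <= 0.45:
        band = "perceptible but within the tolerable limit"
    else:
        band = "intolerable"
    return band + note

for case, dgp in {"A": 0.30, "B": 0.896, "C": 0.36}.items():  # A and C values are illustrative
    print(case, classify_dgp(dgp))
```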
Retro-reflective coatings applied to blinds of reduced geometric complexity promise to provide view to the outside while effectively controlling solar gains and glare.To characterize the reflection characteristics of such coatings over the entire solar spectrum, a novel extension to a scanning gonio-photometer is developed.The extended instrument is tested and applied to measure a coating's Bidirectional Reflection Distribution Function including the region of the retro-reflected peak.The results indicate the potential of the coating to effectively shade direct sunlight even if applied on blinds with minimalistic geometries.
Deep Convolutional Networks have been shown to be sensitive to Universal Adversarial Perturbations: input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
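A minimal sketch of how such a procedural perturbation can be generated is given below: a sparse sum of randomly placed, oriented Gabor kernels, scaled to a small perturbation budget. The parameter values and the L∞ budget are illustrative assumptions, not the configuration used in the work described above.

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """Single Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def gabor_noise(shape=(224, 224), n_kernels=60, size=23, sigma=4.0, freq=0.15, theta=0.7, seed=0):
    """Sparse-convolution Gabor noise: sum of randomly placed, randomly signed kernels."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros(shape)
    k = gabor_kernel(size, sigma, freq, theta)
    for _ in range(n_kernels):
        i = rng.integers(0, shape[0] - size)
        j = rng.integers(0, shape[1] - size)
        canvas[i:i + size, j:j + size] += rng.choice([-1.0, 1.0]) * k
    return canvas / (np.abs(canvas).max() + 1e-12)  # normalise to [-1, 1]

# Scale to a small L_inf budget (e.g. 8/255) and add to each input image (assumed in [0, 1]).
perturbation = (8 / 255) * gabor_noise()
```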
Existing Deep Convolutional Networks in image classification tasks are sensitive to Gabor noise patterns, i.e. small structured changes to the input cause large changes to the output.
was corroborated using other methods.Both a viscometry assay and a low-speed centrifugation approach were used to determine the effect of wild-type or mutant Vps1 on the extent of actin crosslinking.As shown in Figure 5E, Vps1 addition greatly retarded the passage of a ball through a capillary containing polymerized actin.Reduction in viscosity at the highest Vps1 concentration is possibly indicative of greater bundling that generates channels in the filament mix thus allowing less impeded movement of the ball .The mutant protein did not affect the rate of fall in a capillary compared with actin alone indicating that it was unable to crosslink the filaments.While single actin filaments are pelleted at speeds of ∼300,000 × g, only crosslinked or bundled actin can be pelleted at speeds of 10–15,000 × g.As shown in Figures 5F and 5G, both actin and wild-type Vps1 were able to pellet at low speeds, while neither actin nor the vps1 RR-EE mutant showed an increase in the pellet fraction when co-incubated.Taken together, these data indicate that actin filaments can induce or stabilize a higher-order dynamin structure, and that dynamin in turn is able to interact with multiple filaments and generate a crosslinked network.The RR-EE mutation reduces actin binding, and consequently an actin network is unable to form.In this study, we sought to determine whether the yeast endocytic dynamin Vps1 was able to bind to actin and whether binding was required for all or just a subset of Vps1 functions in membrane trafficking.Strikingly, the ability of Vps1 to bind actin is conserved and appears to involve the same region as in the mammalian dynamin-1 protein .Most importantly, mutational analysis reveals that altering the Vps1-actin interaction causes specific defects in endocytosis but not in other functions of Vps1.An important finding is that Vps1, like its mammalian counterparts, can form a ring structure.The ring is slightly smaller than that shown for dynamin-1 but at the level of EM appears remarkably similar .While incubation with actin appears to induce or stabilize a double-ring structure, the organization of Vps1 within this structure is not currently known and awaits further, structural analysis.The importance of the ring structure for Vps1 function may explain why attempts to localize Vps1 tagged with GFP as the sole form of Vps1 in cells have failed to observe it at endocytic sites, while studies co-expressing tagged and untagged protein have detected such localization .The presence of a large C-terminal tag on all Vps1 molecules might well be expected to impact negatively on ring formation.Analysis of Vps1 mutants identified a key site as critical for both actin interaction and endocytic function, while two other mutations led to defects affecting all functions of Vps1 in addition to endocytosis.The RR-EE mutant protein is unable to bundle actin filaments in vitro and causes a defect in scission in vivo.Together with earlier data, we propose that Vps1 is recruited to the invaginated membrane where it binds directly to actin.This then triggers increased recruitment and stabilization of Vps1 oligomeric structures, potentially equivalent to the increase in dynamin localization observed just before scission in mammalian clathrin-mediated endocytosis .In addition, the Vps1 rings bind and bundle actin filaments at its site of localization.This re-organization of actin may facilitate a switch in the site of force transduction from the tip of the invagination where the force was used to drive the 
inward membrane movement , to the point at which scission is required, on the side of the invagination.Our data provide evidence for the previously postulated idea that force on the membrane itself could be an important contributory factor in the vesicle scission step .If the interaction between Vps1 and actin is reduced, as in the vps1 RR-EE mutant, such that the actin network can no longer be re-organized, force will not be transduced to the appropriate scission site, thus precluding release of the vesicle.This model also suggests that a scission defect would only be predicted to occur in cell types that require the force generated by actin for endocytosis due to membrane tension.Further studies in cell types requiring actin of endocytosis will allow the importance of the direct dynamin-actin interaction to be examined further in mammalian cells.In light of the recent reports of the interplay between these proteins from live-cell analyses, it seems likely that a dynamin-actin interaction will form part of the functional protein network that forms prior to vesicle scission.Overall, these data provide strong evidence for a mechanism for endocytic vesicle scission in which dynamin brings about a re-organization of the actin network at endocytic sites to allow the force from actin polymerization to drive membrane rearrangement culminating in scission.Unless stated otherwise, chemicals were obtained from Sigma-Aldrich, Fisher Scientific, or Formedium.Yeast strains and plasmids used in this study are listed in the Supplemental Information.Point mutations in VPS1 were generated using site directed mutagenesis with template plasmids pKA677 and pKA850.All strains carrying tags have growth properties similar to control strains.Bimolecular fluorescence complementation assays used strains carrying Venus constructs crossed for co-expression of both N- and C-terminal halves of Venus.Expression levels of Vps1 were measured from western blots of whole-cell extracts using anti-Vps1 antibody .Carboxypeptidase Y processing was analyzed from cell extracts .Pre-cleaned CPY antibodies were used at 1:100 dilution and GAPDH antibodies at 1:5,000.Wild-type VPS1 and vps1 mutants were expressed in E. coli SOLOs) as His tag fusions and were purified .Vps1 was either imaged directly for EM analysis, actin bundling assays, and circular dichroism analysis or
However, questions remain as to how force generated through actin polymerization is transmitted to the plasma membrane to drive invagination and scission. Here, we reveal that the yeast dynamin Vps1 binds and bundles filamentous actin. In vitro analysis of the mutant protein demonstrates that, like wild-type Vps1, it is able to form oligomeric rings but, critically, has lost its ability to bundle actin filaments into higher-order structures. A model is proposed in which actin filaments bind Vps1 during invagination, and this interaction is important for transducing the force of actin polymerization to the membrane to drive successful scission.
pre-spun at 90,000 × g, 15 min at 4°C.Unless stated, purified Vps1 for in vitro assays was from the resulting supernatant, the concentration of which was usually between 1 and 5 μM.G-actin was purified from rabbit muscle and yeast lysates .Purified rabbit or yeast F-actin was polymerized for 1 hr at room temperature with 10% KME and left overnight at 4°C.Purified Vps1 and F-actin were mixed and incubated for 15 min then spun 90,000 × g. Supernatant and pellet were separated by SDS-PAGE.Quantification was by densitometry.KD was calculated using a single site binding curve equation in GraphPad Prism 6.KD from three or more independent experiments was averaged.Falling-ball assays were performed with rabbit actin .To analyze Vps1 and actin using electron microscopy, 1 μM Vps1 was added to 1.5 μM F-actin and incubated for 15 min.5 μl of a 1:10 dilution of mix was adsorbed on carbon-coated grids, and proteins were visualized by negative staining with uranyl formate.Electron micrographs were recorded using Gatan MultiScan 794 charge-coupled device camera on Philips CM100 electron microscope.Epifluorescence microscopy was performed using Olympus IX-81 microscope with DeltaVision RT Restoration Microscopy with 100× 1.40 numerical aperture oil objective and Photometrics Coolsnap HQ camera.Imaging and image capture was performed using SoftWoRxTM.Experiments were carried out at 21°C.For uptake of FM4-64, 0.25 μl of 16 mM FM4-64 was added to 500 μl culture for 90 min.Following washing, z stack images were collected with step sizes of 0.2 μm.For lucifer yellow uptake, cells were incubated with lucifer yellow for up to 90 min.Cells were washed in buffer before imaging .For peroxisomal fission, cells were transformed with GFP-PTS1.For live-cell imaging, cells were visualized in synthetic medium.Time-lapse live-cell imaging of GFP-tagged Sla2 and Rvs167 was performed with 1 s time-lapse and 0.5 s exposure.For cells with both Sla1-GFP and Abp1-mCherry time lapse was 1.5 s with 0.25-s exposure for both.For Abp1mCherry alone microscopy was performed using Nikon Eclipse Ti microscope with 100× oil objective and Andor Zyla sCMOS camera.Imaging and image capture was performed using NIS Elements 4.20.01 software.Images were 60 ms exposure with frame interval 0.11 s for total 60 s. TIRF microscopy used Nikon TIRF with 100× oil objective and Photometrics Evolve EMCCD camera.Image capture used NIS Elements 4.20.01 software.Images were 60 s with 2-s interval.All image data sets were deconvolved.Statistical analysis of lifetimes was performed using GraphPad Prism.Preparation and analysis of yeast for electron microscopy was performed as described previously .Cells were harvested and frozen in a Leica EMPACT high-pressure freezer followed by freeze substitution.Following processing, sections were stained.Cells were viewed at 100 kV in a Hitachi H7600 transmission electron microscope.Lengths of invaginations were measured using ImageJ software.S.E.P. designed and performed experiments and data analysis for Figures 1, 2, 3, 5, and S1; I.I.S.-d.R. performed experiments and analysis for Figures 3, S3, and S4; C.J.M. designed, performed, and analyzed data for Figures 5 and S5; E.G.A. made initial findings on which the project was based and was involved in subsequent experimental design; R.M., S.J., and M.W.G. performed experiments and data analysis for Figure 4.K.R.A. undertook experimental design and data analysis for Figures 2, 5, and S2 and wrote the manuscript.
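The KD quoted above comes from fitting a single-site binding curve to the co-sedimentation densitometry data; a minimal Python/SciPy sketch of such a fit is shown below with hypothetical data points (the study itself used GraphPad Prism 6, and the concentrations and fractions here are illustrative only).

```python
import numpy as np
from scipy.optimize import curve_fit

def single_site(conc, bmax, kd):
    # One-site specific binding: fraction of actin pelleted as a function of Vps1 concentration.
    return bmax * conc / (kd + conc)

# Hypothetical densitometry read-outs (fraction of actin in the pellet) at a range of Vps1 concentrations (uM).
vps1_uM = np.array([0.1, 0.25, 0.5, 1.0, 2.0, 4.0])
bound   = np.array([0.10, 0.21, 0.33, 0.47, 0.58, 0.66])

(bmax, kd), _ = curve_fit(single_site, vps1_uM, bound, p0=[1.0, 1.0])
print(f"Bmax = {bmax:.2f}, KD = {kd:.2f} uM")
```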
Actin is critical for endocytosis in yeast cells, and also in mammalian cells under tension.Ultrastructural analysis of yeast cells using electron microscopy reveals a significant increase in invagination depth, further supporting a role for the Vps1-actin interaction during scission.
potency of GalNAc-LNPs with low PEG and high PEG density.These data show very similar, dose-dependent efficacy for both LNP1.5 and LNP5, indicating that limitations in efficacy due to increased PEG shielding can successfully be overcome by incorporating exogenous targeting ligands.The findings described in this article emphasize the importance of the PEG-lipid shield on the physicochemical and functional properties of LNPs.We modulated LNP PEG-lipid density by utilizing a two-stage formulation process whereby additional PEG-lipid could be titrated on the surface of preformed LNPs of a defined particle size range, resulting in similarly sized particles for LNP1.5, LNP5, and LNP10.Previous data by Bao et al.26 demonstrated that increasing the PEG-C-DMA to lipid ratio did not alter the hepatic distribution of LNPs but reduced hemolytic activity and negatively affected gene silencing efficacy.However, the LNPs used in these studies were assembled using the conventional formulation technique of mixing all the components at a single stage, which results in smaller particles at higher PEG-DMA concentrations due to faster stabilization kinetics.Specifically, the LNPs described by Bao et al. varied in final particle size from 103 nm for 1% PEG-DMA to 65 nm for 10% PEG-DMA.In contrast, our formulation method produced similarly sized particles for LNP1.5 and LNP10.Keeping the final particle size consistent is critical for evaluation of the functional properties of the LNPs since particle size may impact pharmacokinetics, biodistribution, and cellular uptake, which can in turn affect both gene silencing efficacy and immunostimulation.Our data demonstrate that increasing the LNP PEG density shield from 1.5 to 5% or 10% resulted in a concomitant decrease in particle surface charge and hemolytic activity.These data are in agreement with the previously observed reduction in hemolytic activity with increased PEG-C-DMA to lipid ratio which was assumed to be due to charge shielding.26,However, our detailed analysis of particle surface charge titration with pH combined with hemolytic activity indicated that the physical steric barrier effect provided by high PEG density appeared to be more significant than the charge shielding effect in inhibiting membrane fusion and disruption; LNP10 was observed to have a very low hemolytic activity as compared to LNP1.5, even at low pH where LNP10 was found to have positive surface charge.Interaction of LNP particles with cellular and humoral components of the immune system is also affected by surface charge and composition.6,35,36,In a study using gold nanoparticles, size and surface chemistry were found to control particle serum protein binding and subsequent uptake by macrophages.14,Increasing PEG grafting density decreased the total serum protein adsorption, including the major opsonin Complement protein 3, and reduced particle macrophage uptake.14,Our study showed that increased PEG shielding reduced overall cytokine production both in a murine in vivo model and a human in vitro model.All the cytokines that were measured can be produced by a variety of cells, including phagocytes, endothelial cells and hepatocytes.IL-6 is an acute phase response protein that plays a role in fever induction while G-CSF promotes granulocyte maturation.Both were also observed in clinical application at higher doses.24,IP-10 is a chemokine that is induced as part of the antiviral IFN response and has also been observed in preclinical and clinical studies at higher doses.24,MCP-1 and KC 
are pleiotropic chemokines that attract phagocytes such as monocytes and neutrophils.Interestingly, TNFα induction was not observed, which is in agreement with previous findings demonstrating that a TNF-α response to unmodified siRNA can be successfully abrogated through chemical modification of the siRNA.23,37,Because the liver is the main organ of drug distribution,24,26 it is likely that the liver phagocytes and potentially the hepatocytes contribute to the observed immune response.26,38,Taken together, our data suggest that increasing PEG density is a valuable strategy to reduce undesirable interactions between the particle and the immune system.The effect of PEG density on LNP hepatic delivery and RNAi efficacy appears to be dependent on the mode of hepatic targeting.Increased PEG density negatively impacted the efficacy of LNPs, likely by inhibiting the endogenous ApoE-mediated targeting mechanism.Our observations further support the hypothesis that a high PEG lipid shield has the potential to reduce the interaction of serum ApoE with LNPs.In fact, the LNP systems developed for hepatic siRNA delivery have been designed to have minimal PEG shielding that is rapidly de-shielded in vivo.39,The rate of deshielding of the LNP is determined by the exchangeability of the PEG-lipid into serum lipoproteins, which is controlled by dialkyl chain length of the lipid to which the PEG is conjugated.40,Specifically, PEG-lipids with a dialkyl chain length of 14 carbons are rapidly exchanged out of the LNP, whereas those with a dialkyl chain length of 18 carbons are stably associated with the LNP.However, our data show that this apparent limitation of increased PEG shielding may be successfully overcome by using an ApoE-independent delivery strategy.Incorporation of an exogenous targeting ligand containing a multivalent N-acetylgalactosamine-cluster, which binds to the asialoglycoprotein receptor on hepatocytes, successfully circumvented the requirement for ApoE association.In summary, increasing PEG density shields LNP surface charge and reduces immunostimulatory potential, which may contribute to improved safety and reduced clearance by the mononuclear phagocyte system.The negative impact of increased PEG density of LNP efficacy may be successfully overcome by incorporating exogenous targeting ligands, thereby circumventing the requirement for ApoE association.Therefore, these studies provide useful information for the rational design of LNP-based siRNA delivery systems with an optimal safety and efficacy profile.Lipid nanoparticle formulation.LNPs were formulated using the rapid precipitation method.Organic lipid solutionbutanoate30: distearoylphosphatidylcholine: cholesterol in a molar ratio of 51:10:39 in ethanol) was mixed with
Increased PEG density not only shielded LNP surface charge but also reduced hemolytic activity, suggesting the formation of a steric barrier.This effect could be overcome by incorporating an exogenous targeting ligand into the highly shielded LNPs, thereby circumventing the requirement for ApoE association.Therefore, these studies provide useful information for the rational design of LNP-based siRNA delivery systems with an optimal safety and efficacy profile.
aqueous siRNA solution via a T-junction.The total lipid to siRNA ratio was 9.75,and the siRNA concentration was 0.6 mg/ml in the solution after mixing.Polyethylene glycol-dimyristolglycerol solution in ethanol was added to the mixed solution via a T-junction to provide steric stabilization to formed nanoparticles.Nanoparticle solution was instantly transferred for overnight dialysis at room temperature against 1× phosphate buffer saline to remove ethanol.Zetasizer was used to determine the particle size and zeta potential.Zeta potential was measured on particles after suspending them in deionized water at pH = 5.Hemolysis assay.Whole blood from C57BL/6 mice was collected into EDTA containing vacutainer tubes.Whole blood was centrifuged to harvest RBC and washed three times with 150 mmol/l saline.RBC were then suspended into 100 mmol/l phosphate buffer at various pH. For the hemolysis assay, RBC were diluted 1:10 in desired pH maintained phosphate buffer.0.3 ml of the diluted RBC solution was mixed with 0.1 ml LNP or with triton X-100, and incubated at 37 °C for 20 minutes.The mixed solution was then centrifuged and the supernatant was collected for heme-absorbance analysis at 541 nm.TNS assay.This assay was adapted from Heyes et al.32 Briefly, MES and HEPES were used to make buffers in the pH range of 4–9.The assay was performed in 96-well plates and per well, 157 µl of buffer with varying pH was added and mixed with 10 µl of LNP and 16.7 µl of 0.01 mmol/l TNS.The fluorescence was measured at λem = 445 nm with λex =321 nm.Human whole blood cytokine assay.Whole blood from four anonymous, healthy donors was collected in Sodium Heparin Vacutainer tubes, diluted 1:1 in 0.9% Saline, and plated in 96-well flat bottom tissue culture plates at 180 μl/well.Whole blood was treated with Opti-MEM or 300 nmol/l siRNA formulated in LNP1.5, LNP5, or LNP10.Treated whole blood was incubated at 37 °C, 5% CO2 for 24 hours after which plasma was collected and stored at −80 °C until analysis.Cytokine levels were measured using a custom Bio-Plex Pro Magnetic Cytokine Assay, analytes included: IL-1β, IL-1RA, IL-6, IL-8, IL-10, IL-12, IP-10, G-CSF, IFN-γ, MCP-1, MIP-1α, MIP-1β, and TNF-α.Mouse in vivo cytokine assay.All animal procedures were conducted in accordance with the protocol approved by the Alnylam Institutional Animal Care and Use Committee.Male CD1 mice were obtained from Charles River Laboratories.siRNA formulated in LNP1.5, LNP5, or LNP10 was diluted with sterile phosphate-buffered saline and animals were administered a total dose of 10 mg/kg or saline control via intravenous bolus injection using a 1 ml syringe and 27G needle at a dose volume of 10 ml/kg.After 4 hours, blood was collected via cardiac puncture into serum separator tubes and serum was stored at −80 °C until analysis.Cytokine analysis was done using a custom Milliplex MAP Mouse Cytokine/Chemokine Magnetic Bead Panel and the analytes included were G-CSF, TNF, IL-6, IP-10, KC, and MCP-1.
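For reference, hemolysis read-outs of this kind are commonly normalised to the Triton X-100 complete-lysis control; the sketch below shows one such normalisation with hypothetical absorbance values, and is not necessarily the exact calculation used in this study.

```python
def percent_hemolysis(a_sample, a_buffer, a_triton):
    """Heme absorbance at 541 nm, normalised to the Triton X-100 (complete lysis) control."""
    return 100.0 * (a_sample - a_buffer) / (a_triton - a_buffer)

# Hypothetical absorbance readings for one LNP at a given pH.
print(percent_hemolysis(a_sample=0.42, a_buffer=0.05, a_triton=1.60))  # ~23.9 %
```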
This study systematically evaluated the effect of polyethylene glycol (PEG) density on LNP physicochemical properties, innate immune response stimulation, and in vivo efficacy.
the Leader around the circumference of the circle in a way that would be reflected in the correction gains.Larger correction gains were expected for the Follower relative to the target hand providing the timing cue on the side nearer the Leader than the correction gain computed with respect to any other participant.To complete the circle, an individual designated the Integrator, sat between the ends of the two chains and observed movements from the two participants on their left and right sides.A primary finding was that participants in the Leader and Follower positions used a strategy of minimising their asynchrony variance whereas, in the Integrator position, participants switched to a strategy that minimised their own movement variability.However, from the point of the present review, particularly interesting was the finding of an asymmetry in the correction gains that reflected access to visual information.Thus Leader-F1 and F1 — F2 gains were consistently larger than gains estimated with respect to all other pairings of group members.Thus, correction gain was influenced by the spatial constraints of the task.In summary, we have reviewed how event-based linear models for a single performer tapping with a fixed metronome can generalise into testable accounts of interpersonal timing.We have taken a linear systems approach in which behaviour, and underlying generative mechanisms, are treated as stable over time.However, instability over repetition is often observed in cyclic movement tasks.Non-linear dynamical systems approaches to timing focus specifically on such instabilities and there is a growing body of work taking this perspective focused on two-person timing interactions.Within the linear approach described here, a number of further questions might be explored in future research, some of which we highlight here.In the above, we have not examined mean asynchrony, but it is interesting to ask, for instance, whether mean asynchrony of the leader in a group increases with group asynchrony variance, which would be useful in aiding a listener to detect the timing lead.In thinking about asynchrony variance, it is pertinent to investigate if variance reduction, for example with practice, is more determined by sensory, cognitive, or motor constraints.More specifically, are reductions in asynchrony variance related to familiarity between players or to the musical material being performed?,We might also consider if correction gain values are consciously adjusted higher or lower by the group to influence the players’ experience of group timing.We note one further potential area for research and that involves brain mechanisms underlying synchronisation processes.For example, group timing effects on brain activation might be investigated in neuroimaging studies following seminal work by Keller and colleagues on brain activations when synchronising with an adaptive metronome .Taking all these questions together, we therefore feel confident in claiming that here is a fertile area for quantitative models of social interaction in areas of activity that impact on many aspects of cultural life and, as such, this is an area which merits further research blending empirical study with theoretical tools of the kind described in this review.This work was supported by grants from the Engineering and Physical Sciences Research Council, UK and the Economic and Social Research Council, UK.The funder had no role in: the study design; the collection, analysis and interpretation of data; in the writing of the 
report; and in the decision to submit the article for publication.
A linear phase correction model has been shown to accurately reflect the corrective processes involved in synchronising motor actions to an external rhythmic cue.The model originated from studies of finger tapping to an isochronous metronome beat and is based on the time series of asynchronies between the metronome and corresponding finger tap onsets, along with their associated intervals.Over recent years the model has evolved and been applied to more complex scenarios, including phase perturbed cues, tempo variations and, most recently, timing within groups.Here, we review the studies that have contributed to the development of the linear phase correction model and the associated findings related to human timing performance.The review provides a background to the studies examining single-person timing to simple metronome cues.Finally, recent studies investigating inter-personal synchronisation between groups of two or more individuals are discussed, along with a brief overview on the implications of these studies for social interactions.We conclude with a discussion on future areas of research that will be important for understanding corrective timing processes between people.
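A minimal simulation of the first-order linear phase correction process described above is sketched below. The update rule is a commonly used simplified form (the asynchrony is carried forward, corrected by a gain α, and perturbed by timekeeper and motor noise); all parameter names and values are illustrative, not taken from any particular study in this review.

```python
import numpy as np

def simulate_asynchronies(n_taps=500, metronome_ms=500, alpha=0.5,
                          timekeeper_sd=20, motor_sd=10, seed=0):
    """First-order linear phase correction to an isochronous metronome (simplified form).

    Each produced interval is the metronome interval minus a fraction alpha of the last
    asynchrony, plus timekeeper noise and the difference of successive motor delays;
    asynchronies accumulate from the produced intervals.
    """
    rng = np.random.default_rng(seed)
    a = np.zeros(n_taps)
    motor_prev = rng.normal(0, motor_sd)
    for n in range(n_taps - 1):
        motor_next = rng.normal(0, motor_sd)
        produced = (metronome_ms - alpha * a[n]
                    + rng.normal(0, timekeeper_sd) + motor_next - motor_prev)
        a[n + 1] = a[n] + produced - metronome_ms
        motor_prev = motor_next
    return a

# Asynchrony variability depends on the correction gain.
for gain in (0.25, 0.5, 1.0):
    print(gain, simulate_asynchronies(alpha=gain).std())
```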
certification has been present for a while in developed organic markets such as the European Union, where regulation on organic production of agriculture already exists since 1991.In less developed organic markets, organic certification is at a much earlier stage and may still be developing.Organic farming affects the whole farming system and is not an adoption of a single technique – in developed countries, converting can mean a significant change to established farming practices, and can result in failure due to risks in the transition period.Farmers in poor areas who cannot afford such experimentation could, therefore, be less likely to convert.Evidence from developing countries, however, also suggests that conversion to organic farming might be a smaller obstacle, as agriculture can be organic by default, but not yet certified.Conversion to organic farming in such conditions can be merely a continuation of existing cropland management, fertilization and pest control practices.Nevertheless, optimization of farm operations, implementation of biological plant protection and fertilization, with potential changes to crop types would be necessary to achieve a successful conversion to organic.This is especially the case when there is poor cropland management due to lower accessibility to agricultural inputs.In this study, we focused on a set of socio-economic variables that can explain a considerable part of the spatial distribution of organic crop farmers.However, explaining the socio-economic processes that limit or drive conversion is complex.Organic farmers are a heterogeneous group with a variety of attitudes towards the choice of farming method, including changes to lifestyle and environmental values.Moreover, there are large differences in the spatial determinants for certification of different crop types.The data collected for this study did not allow distinguishing producers of different crop types.Other potential obstacles for certification we were unable to address are related to bureaucracy and the required financial resources, lower education and insufficient information, and organizational support.Our inventory shows that significant efforts for improving the accessibility of data on organic farmers are necessary.Only then will it be possible to identify location characteristics that drive and limit certified organic farming in developing countries in more detail as well.To achieve this, certifiers and national institutions need to work together to establish common databases of publicly accessible information on organic certification.Due to unavailability and inaccessibility of data, we were so unable to map most farmers in Africa and Asia.Certifier reports, however, indicate that our collection of certificates includes considerably more organic crop farmers than the number of mapped certificates suggests.In some regions one certificate often presents a group association.Exact numbers of farmers in such groups are often unknown.In Peru, 40% of our records are cooperatives.Membership data are available for only 29 of these cooperative certificates, but these certificates cover 41,000 producers.These cooperatives also differ, to some extent, from individual organic crop farmers.They are more likely found in areas with higher poverty levels, and lower access to markets.This suggests, that such institutional support, either from governments or collective associations can help with certification in areas with less favorable socio-economic conditions, that can otherwise be less likely 
to convert to organic. Other studies support our results and show that farmers in Latin America are more likely to adopt organic agriculture if they are part of a cooperative. We can therefore assume that we covered considerably more organic crop farmers in these Latin American countries than the numbers of mapped locations suggest. Assuming that our records cover all certificates for Central and Latin America, our dataset would include 346,000 additional crop farmers and 12 more countries would have representative data. Organic farming is promoted as a way to provide food more sustainably, reducing the environmental impacts of agriculture. Our results indicate that, particularly in countries where organic agriculture is less developed, organic crop farmers are present in areas with relatively favorable socio-economic conditions. To sustain the trend of increasing organic production in these countries, efforts are needed to support access to certification for farmers in poorer regions and to provide better market access. In developing countries, organic certification often implies a continuation of existing cropland management. Therefore, it is mainly certification and access to value chains to reach consumers that is hampering a further expansion of certified organic production. Targeting areas with less favorable socio-economic conditions can indeed pose a higher risk for establishing a steady and successful supply of organic products, due to a potentially higher rate of certification failures and problems in establishing value chains. Nevertheless, in this way organic farming can become a tool for improving farmers' livelihoods while at the same time limiting the input of artificial fertilizers and pesticides. The process can, therefore, be seen as a component of sustainable intensification strategies. The outcomes of this study can help identify areas with a high potential for organic crop production and potential increases in the number of organic producers in the future. Most importantly, our results are a step forward towards providing support for more efficient certification to farmers in economically less developed and poorly connected areas. To achieve this, certifiers, national institutions and collective associations need to work together to improve access to certification, reduce its costs, and target areas where accessing markets is too difficult or costly for individual farmers. The datasets analyzed in this study are publicly available from the sources listed in the article or the Supplementary material. All data prepared in this paper on the approximate locations of organic farmers are available freely at https://dataverse.nl/dataverse/BETA.
Organic farming has been proposed as a feasible way to reduce the environmental impacts of agriculture, provide better products to consumers, and improve farmers' income.How organic farmers are distributed worldwide, however, remains unknown.Within developed countries, the locations of organic crop farmers often do not differ significantly from the locations of conventional crop farmers.In developing countries, there are, however, larger differences and organic crop farmers concentrate in the more accessible and developed regions.Our results suggest that crop farmers in poor areas may not have sufficient access to certification and markets.To promote the spread of organic farming, certification and other incentives could target farmers in areas with lower market access and higher levels of poverty which could improve value chains for organic products in these areas.
Electricity generation currently produces around 25% of global greenhouse gas emissions, or about 12.4 GtCO2e/year.More than two-thirds of the electricity generated is consumed by commercial and industrial users.To reduce electricity consumption-related emissions effectively at the level of individual firms, it is essential that they are measured accurately and that decision-relevant information is provided to managers, consumers, regulators and investors.The compilation and public reporting of corporate GHG inventories, ostensibly for this purpose, is becoming mainstream business practice.However, an emergent ‘market-based’ method for quantifying emissions associated with electricity consumption, which allows reporting entities to purchase and claim the GHG attributes associated with renewable generation, is not aligned with reducing emissions or providing accurate or relevant GHG information.This issue is highly topical as recently published reporting guidance from the GHG Protocol has endorsed the market-based approach, while the forthcoming update of ISO 14064-1 for corporate GHG inventories provides an opportunity to establish a more robust approach.This perspective article aims to inform the development of GHG accounting practice and international standards by providing a synthesis of existing studies, with additional analysis on the implications for GHG accounting.Following a brief introduction to corporate GHG accounting practice and the quantification methods for purchased electricity, we set out two interrelated problems with the market-based method, and then explore why, despite these problems, the market-based approach has been accepted by many stakeholders.We conclude with recommendations for a more robust accounting method, and briefly reflect on the applicability of the lessons learned for GHG accounting more broadly.The first internationally recognised guidance for corporate GHG inventory reporting was published by the GHG Protocol in 2001, with a corresponding ISO standard published in 2006.The GHG Protocol has since published revisions and other standards, including guidance for emissions associated with purchased electricity, termed ‘scope 2’ emissions."‘Scope 2’ denotes the point-of-generation emissions from purchased grid electricity, while ‘scope 1’ covers direct emissions from facilities and machinery owned by a reporting company, and ‘scope 3’ includes any other indirect emissions associated with a reporting company's broader value chain, such as business travel or the disposal of waste.Standard practice is to estimate emissions using activity data for each source and GHG.For example, if a small diesel car is used for business travel, then the CO2 emissions associated with the car should be calculated using data on its actual fuel consumption or distance travelled, multiplied by a technology-specific emission factor, such as 0.1448 kgCO2/km travelled in a small diesel car.If the specific source is not known then an average emission factor may be used instead, e.g. 
the emission factor for an average car, 0.1856 kgCO2/km travelled.One feature of purchased electricity from a public distribution grid, which makes it difficult from an accounting perspective, is that it is not possible to trace the electricity consumed by an entity back to any particular grid-connected power plant.To address this physical reality, it has been standard practice to use a grid average emission factor to estimate scope 2 emissions or eGRID), which is derived by dividing the total emissions from all the generation sources supplying a defined transmission and distribution grid area by the total amount of electricity supplied over a given period.This approach is termed the ‘locational’ or ‘grid average’ method, as it reflects the average emissions for the location in which the consumption occurs.An alternative accounting method is the ‘market-based’ or ‘contractual’ approach, which permits a reporting company to apply an emission factor associated with electricity from a specific generation facility, such as a wind farm, with which the reporting company has a contractual agreement to claim the associated emissions attributes.In the case of most renewable technologies, the point-of-generation emissions are zero, and so the reporting company will claim a zero emission factor for its purchased electricity.Contractual arrangements can take place through various instruments, such as Renewable Energy Certificates, Guarantees of Origin, utility green tariffs, or power purchase agreements.It is worth emphasizing that these contractual arrangements do not entail any changes to how electricity from a renewable facility is physically delivered or consumed.The only thing transacted is a claimed right to use the emission factor associated with a certain amount of generation from a particular renewable energy facility."The GHG Protocol's Scope 2 Guidance, published in 2015, requires that companies use both the locational grid average method and the market-based method to report scope 2 emissions.However, the guidance also allows companies to choose a single method for meeting their reduction targets and for reporting their supply chain emissions.The same guidance has been adopted by CDP, formerly the Carbon Disclosure Project.In its current form, the guidance does not require the electricity associated with any purchased emission factor to be additional: in other words, it could be from a long-established facility, or one which receives government or other subsidies sufficient to justify its operation already, in the absence of the contractual arrangement.This section sets out two interrelated problems with contractual emission factors and the market-based accounting method: purchasing contractual emission factors does not influence or affect the amount of renewable electricity generated; and the market-based accounting method fails to provide accurate or relevant information in GHG reports.Lack of additional renewable energy generation,There are structural reasons for expecting that markets for contractual emission factors will fail to influence renewable energy supply.In many countries there are now large amounts of renewable generation available, because of government subsidies, legacy investments or because renewables are already economically viable.The attributes associated with some of this electricity
Electricity generation accounts for approximately 25% of global greenhouse gas (GHG) emissions, with more than two-thirds of this electricity consumed by commercial or industrial users.To reduce electricity consumption-related emissions effectively at the level of individual firms, it is essential that they are measured accurately and that decision-relevant information is provided to managers, consumers, regulators and investors.We identify two interrelated problems with the market-based method: 1. purchasing contractual emission factors is very unlikely to increase the amount of renewable electricity generation; and 2. the method fails to provide accurate or relevant information in GHG reports.
the absence of an intervention.In contrast, attributional inventories of emissions, such as corporate GHG inventories, only need to allocate total emissions between reporting entities without double-counting.However, to be accurate and relevant, GHG inventories must reflect the emissions caused by the reporting entity.The fundamental issue with the contractual method is that it does not represent any causal relationship between the reporting entity and the emissions reported.Another example of conceptual confusion is the argument, again present in the Scope 2 Guidance, that the market-based method reflects choices companies make about their electricity products.However, the choice in question relates only to the purchase of contractual emission factors and is not about the physical delivery or generation of electricity, so the argument is equivalent to claiming that contractual emission factors are justified because they reflect the decision to purchase contractual emission factors.The resulting accounts only reflect the accounting arrangements themselves, and do not provide any decision-relevant information about actual emissions.To be useful, environmental accounts must represent something other than their own accounting rules.The above list of explanatory factors demonstrates the range and interaction of issues at play in obscuring the two problems with the market-based method, and indicates a considerable opportunity for further social, political, critical and normative research.Further empirical analysis of the effects of voluntary contractual arrangements with respect to causing increased renewable electricity generation in specific markets, and whether these markets could be improved, would also be useful.Focusing for now on the practical implications, the following section provides a number of recommendations for the treatment of purchased electricity within GHG inventories.ISO 14064-1 for organisational GHG inventories is currently under revision, with the updated standard expected to be published in 2018.We strongly recommend that this ISO standard, and also a revised version of the current GHG Protocol Scope 2 Guidance, should adopt the following approach:The locational grid average method should be the only method used to calculate and report scope 2 emissions.This is not to suggest that the locational method is perfect, nor that locational emission factors cannot be improved.For instance, ideally grid average emission factors should be specific to the time at which consumption takes place, e.g. 
using smart meters, but currently such high temporal resolution emission factors are not commonly available.In addition, the boundary for calculating the grid average should be based on the grid balancing area in which electricity supply is balanced with demand, whereas at present many grid average factors are often based on more arbitrary national or regional jurisdictional boundaries.Notwithstanding these issues, the locational grid average is the best available method for reflecting the emissions caused by companies’ contribution to aggregate demand on the grid.Actions that genuinely result in additional grid-connected renewable energy generation should be quantified using a consequential accounting method, and reported separately to the corporate GHG inventory."For example, if a company enters into a long-term power purchase agreement that enables investment in new renewable energy generation capacity that would not otherwise have been viable, the emission reductions caused by that action could be quantified using methods such as ISO 14064-2 or the GHG Protocol's Guidelines for Quantifying Reductions from Grid-Connected Electricity Projects, and reported as additional information to the corporate GHG inventory.In addition to the standards for corporate GHG accounting, the draft text for ISO 14067 for product carbon footprinting also endorses the use of contractual emission factors for grid electricity.Exactly the same problems arise with contractual emission factors at the product-level, and the same recommended approach, above, should therefore be applied within ISO 14067.In summary, how companies measure and report the GHG emissions arising from their consumption of electricity is important not only for those companies, but also for consumers, regulators, investors, and society as a whole.The generation of electricity for commercial and industrial consumption makes a significant contribution to global GHG emissions, in the order of 8 GtCO2e/year.Misrepresenting responsibility for these emissions through using the market-based method therefore has the potential to significantly undermine global climate change mitigation efforts.As a guiding principle, we recommend that all environmental inventories must reflect the impacts caused by the reporting entity, otherwise they are simply not useful for managing environmental impacts.This applies to corporate-level inventories, but also any other types of inventory, such as product-level, city-level, or building-level inventories.We also hope that the case of contractual emission factors will provide governments, UNFCCC negotiators, and corporate managers with a useful cautionary tale against similar creative accounting methods that distort rather than measure reality.
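To make the recommended locational calculation concrete, the sketch below computes a grid average emission factor as total generation emissions divided by total electricity supplied, and applies it to a consumer's purchased electricity. The plant-level figures are hypothetical and serve only to illustrate the arithmetic.

```python
def grid_average_factor(plant_emissions_tCO2e, plant_generation_MWh):
    """Locational (grid average) emission factor: total emissions over total electricity supplied."""
    return sum(plant_emissions_tCO2e) / sum(plant_generation_MWh)  # tCO2e per MWh

def scope2_emissions(consumption_MWh, grid_factor_tCO2e_per_MWh):
    """Scope 2 emissions from purchased grid electricity under the locational method."""
    return consumption_MWh * grid_factor_tCO2e_per_MWh

# Hypothetical grid balancing area with three generation sources (e.g. gas, coal, wind).
factor = grid_average_factor([400_000, 900_000, 0], [1_000_000, 1_000_000, 500_000])
print(factor)                             # 0.52 tCO2e/MWh
print(scope2_emissions(10_000, factor))   # 5,200 tCO2e for a consumer purchasing 10 GWh
```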
However, an emergent GHG accounting method for corporate electricity consumption (the ‘market-based’ method) fails to meet these criteria and therefore is likely to lead to a misallocation of climate change mitigation efforts.The case is important given the magnitude of emissions attributable to commercial/industrial electricity consumption, and it also provides broader lessons for other forms of GHG accounting.
correlated, suggesting that some of the adjacent time points convey similar information.To discard the noninformative data collection points, we performed stochastic evolutionary search through a genetic algorithm, which retained 8 informative time points, and 6 were dropped as uninformative.Comparing the clustering of the models using 8 time points with the clustering from the model using the full data set showed a satisfactory level of agreement, with a Rand index and ARI of 82 and 64%, respectively.To understand how the trajectories and estimated phenotypes changed over a sequence of increasing numbers of classes and how children move from one class to another in models with increasing numbers of classes, we developed 3 LCA models with 4, 5, and 6 classes.Persistent and never/infrequent wheezing classes had similar patterns in all 3 models, with a slight decrease in estimated prevalence from a 4- to 6-class solution.With the addition of a fifth latent class, transient early wheezing was divided into 3 remitting classes, whereas late-onset wheezing remained almost identical.The addition of a sixth class resulted in division of late-onset wheezing into 2 similarly sized subgroups.We then assigned participants to the most likely phenotype based on the maximum membership probability and calculated transition probabilities reflecting the proportion of participants moving from one phenotype to another when the number of phenotypes increased from 4 up to 6.Fig 3 shows whether members of distinct phenotypes remained in the same phenotype or shift into another one with increasing numbers of phenotypes.The figure also demonstrates from where the intermediate phenotypes arise and which phenotypes become separated or remain undivided with increasing numbers of latent classes."The results based on analysis of participants with incomplete reports of wheezing did not materially differ from those obtained among children with a complete data set and are presented in Figs E1 and E3-E6 in this article's Online Repository at www.jacionline.org.Of 3797 participants who attended follow-up at age 23 to 24 years, 1492 had complete reports of wheezing, of whom 240 reported current asthma; 1345 had valid lung function.The proportion of subjects with current asthma was greatest in the persistent wheeze phenotype.In the 2 early-onset transient phenotypes, the proportion of asthmatic patients was significantly greater in midchildhood-remitting compared with the preschool-remitting phenotypes.In the 2 late-onset phenotypes the proportion of asthmatic patient was significantly greater in the school-age onset compared with late-childhood onset phenotypes."Prebronchodilator and postbronchodilator lung function differed significantly across phenotypes and was significantly less in the persistent wheezing and early-onset preschool remitting wheeze phenotypes compared with the never/infrequent wheeze phenotype, with little evidence of differences between other phenotypes.The preschool-onset remitting phenotype mostly overlapped with no asthma, but prebronchodilator and postbronchodilator lung function at age 24 years was significantly less in this class compared with the never/infrequent wheeze phenotype.Our results suggest that the number and nature of wheeze phenotypes from infancy to adolescence identified by using LCA are dependent on several factors, including sample size, frequency, timing and distribution of data collection time points, model dimensionality, and the combination of these factors.Transition analysis 
revealed that subjects assigned to the never or persistent wheeze phenotypes tended to stay in these phenotypes, whereas most of the switching occurred in the intermediate classes. Given the strong interplay between the birth cohort design and the optimal number of phenotypes identified by means of developmental trajectory modeling, care should be taken when interpreting wheeze phenotypes emerging from small studies with few data collection points. When the sample size is small, a wheeze phenotype that exists in the population might be unidentifiable, whereas excessive data collection can result in identification of trivial or clinically irrelevant phenotypes. In general, increasing data collection frequency helps detect more complex structure and larger numbers of phenotypes by capturing less frequently observed subgroups. However, it also increases the risk of violating the fundamental assumption of LCA modeling that indicator variables are independent of each other. When frequent data collection and large sample sizes are not obtainable, collecting data at critical time points might help counterbalance the effects of suboptimal conditions. In our study the time points that proved most informative in distinguishing wheeze phenotypes were months 18, 42, 57, 81, 91, 140, 157, and 166. There are several limitations to our findings. Despite latent models' usefulness in disentangling disease complexity, 1 unresolved issue in the application of LCA is that there is not one commonly accepted statistical indicator for deciding on the number of subgroups in a study population. A limitation of this study is that we do not know how many true phenotypes there are, and we assumed that the classification obtained on the largest sample and using all time points corresponded to the best available approximation of the “true classification.” In the absence of clear statistical requirements for identifying clinically important groups of small size, validation of the phenotypes with late asthma outcomes provides the only clues about their clinical relevance. However, we acknowledge that in our study information on asthma and lung function measures at age 23 to 24 years was available for approximately 45% of participants used to derive wheeze phenotypes. Another limitation is that we could only vary conditions using the sampling framework that was available to us, which was fixed by the study design, and therefore this analysis has limited direct application to other studies that have used different sampling frames. We also acknowledge that the definition of current wheezing, which we used in our models, is based on parental reporting using validated questionnaires and that this might lead to overestimation of
Methods: We used LCA to derive wheeze phenotypes among 3167 participants in the ALSPAC cohort who had complete information on current wheeze recorded at 14 time points from birth to age 16½ years.A variable selection algorithm identified 8 informative time points (months 18, 42, 57, 81, 91, 140, 157, and 166).The proportion of asthmatic patients at age 23 to 24 years differed between phenotypes, whereas lung function was lower among persistent wheezers.
the true prevalence.28,As most previous studies, we used information on current wheeze for our modeling.It is possible that a more holistic examination of other features and/or other symptoms22 and lung function29 might allow better distinction of the underlying pathophysiologic mechanisms.The key advantage of our study is the large sample size with complete data on wheezing collected frequently and prospectively.Another advantage is that participants were followed from birth to late adolescence, covering a longer period compared with most prior studies.1,13,18,19,30,Finally, it is worth noting that subtypes discovered by using data-driven methods are not observed but are latent by nature and ideally should not be referred to as “phenotypes”.However, because the term “phenotype” has been used in this context for more than a decade, we have maintained this nomenclature."A number of previous studies embarked on identifying wheeze phenotypes from birth to mid–school age.However, the inconsistency of findings has led to a debate on the validity and clinical value of phenotyping studies,10,31,32 hampering the discovery of pathophysiologic endotypes and translation into clinically actionable insights.The 4 phenotypes of persistent, never, transient early, and late-onset wheezing have been long postulated in descriptive2 and data-driven33 studies.We found that when the sample size is relatively small, a particular wheeze phenotype that exists in the population might be undetectable.Therefore relatively smaller sample size in some studies might have contributed to the inability to detect intermediate wheeze phenotypes with a relatively low prevalence.Using more time points allowed identification of less common phenotypes by increasing possible response patterns.When the data collection was frequent, a sample size of approximately 2500 was found to be sufficiently large to distinguish 6 phenotypes.However, even a larger sample size of 3167 might be insufficient to detect uncommon phenotypes under certain conditions.Our findings suggest that increasing data collection frequency might help compensate for a modest sample size in phenotype identification.In line with this finding, Depner et al30 identified an intermediate phenotype in the PASTURE cohort that existed during the first 6 years of life by using a similar sample size but more data collection points than those used in the TCRS.2,However, the selection of follow-up points needs careful thought.Our analyses have shown that although adding more time points to the latent model increased the number of identified phenotypes with distinguishable interpretations, in some cases the same number of randomly selected data collection points resulted in a different optimal solution.This suggests that the timing and distribution of follow-up is important and that there might be critical data collection points that are more informative than others.A variable selection method that we applied to the data identified 6 time points that were not carrying additional useful information.The proportion of asthmatic patients was greatest in the persistent wheeze phenotype, and subjects in this phenotype had diminished prebronchodilator and postbronchodilator lung function compared with all other phenotypes.The proportion of asthmatic patients differed between intermediate phenotypes.These findings suggest that all phenotypes are distinct and that this might be a true classification.However, we acknowledge that the observed associations might also be a proxy of 
severity.The preschool-onset remitting phenotype mostly overlapped with no asthma, but the prebronchodilator and postbronchodilator lung function at age 24 years was significantly lower in this class compared with the never/infrequent wheeze phenotype.Although this can be seen as a contradiction, we would stress that diminished lung function does not equate to asthma.29,There is evidence that early transient wheezing is associated with low lung function34-37; as lungs/airways grow, the symptoms regress, but lung function impairment can persist.In TCRS the lowest infant lung function test values were associated with low lung function at 22 years,38 and therefore early wheezing that remits might be a marker of low lung function in early life that persists to adulthood but without the development of airway inflammation or asthma.In conclusion, our findings add to the understanding of childhood wheeze phenotypes by extending the knowledge on potential causes of variability in classification of wheezing.Sample size, frequency, and timing of data collection have a major influence on the number and type of phenotypes identified by using data-driven techniques.Our results, which include information on the most informative follow-up points, are important to interpret existing studies and to inform better design of future cohorts.However, we wish to note that these data collection points should not be regarded as absolute; rather, they should be treated as relative values with respect to our population and considerations for investigators when designing future studies.The number and nature of wheeze phenotypes identified by using LCA are dependent on the sample size, frequency, timing and distribution of data collection time points; model dimensionality; and combinations of these factors.Not all data collection points carry useful information in distinguishing wheeze phenotypes.
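To make the two quantitative steps described in this section concrete — comparing the clustering from the reduced set of time points against the full model, and tabulating transition probabilities as the number of classes grows — the following Python sketch shows one common way these quantities are computed. It uses scikit-learn's rand_score and adjusted_rand_score together with a simple cross-tabulation; the class assignments, the number of participants and the 20% disagreement rate are synthetic placeholders, not the ALSPAC data.

```python
import numpy as np
from sklearn.metrics import rand_score, adjusted_rand_score

# Hypothetical hard class assignments for the same participants under two models:
# one fitted on all time points, one on the reduced set of informative time points.
rng = np.random.default_rng(0)
labels_full = rng.integers(0, 6, size=500)            # 6-class solution, all time points
labels_reduced = labels_full.copy()
flip = rng.random(500) < 0.2                           # pretend 20% of assignments differ
labels_reduced[flip] = rng.integers(0, 6, size=flip.sum())

print("Rand index:", rand_score(labels_full, labels_reduced))
print("Adjusted Rand index:", adjusted_rand_score(labels_full, labels_reduced))

# Transition probabilities when moving from a 4-class to a 5-class solution:
# entry (i, j) = proportion of members of 4-class phenotype i assigned to
# 5-class phenotype j (each row sums to 1).
labels_4 = rng.integers(0, 4, size=500)
labels_5 = rng.integers(0, 5, size=500)
counts = np.zeros((4, 5))
for a, b in zip(labels_4, labels_5):
    counts[a, b] += 1
transition = counts / counts.sum(axis=1, keepdims=True)
print(transition.round(2))
```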
Background: Latent class analysis (LCA) has been used extensively to identify (latent) phenotypes of childhood wheezing. However, the number and trajectory of discovered phenotypes differed substantially between studies. Objective: We sought to investigate sources of variability affecting the classification of phenotypes, identify key time points for data collection to understand wheeze heterogeneity, and ascertain the association of childhood wheeze phenotypes with asthma and lung function in adulthood. We examined the effects of sample size and data collection age and intervals on the results and identified the most informative time points. We examined the associations of derived phenotypes with asthma and lung function at age 23 to 24 years. Results: A relatively large sample size (>2000) underestimated the number of phenotypes under some conditions (eg, number of time points <11). Increasing the number of data points resulted in an increase in the optimal number of phenotypes, but an identical number of randomly selected follow-up points led to different solutions. Conclusions: Sample size, frequency, and timing of data collection have a major influence on the number and type of wheeze phenotypes identified by using LCA in longitudinal data.
We present a large-scale empirical study of catastrophic forgetting in modern Deep Neural Network models that perform sequential learning.A new experimental protocol is proposed that takes into account typical constraints encountered in application scenarios.As the investigation is empirical, we evaluate CF behavior on the hitherto largest number of visual classification datasets, from each of which we construct a representative number of Sequential Learning Tasks in close alignment to previous works on CF.Our results clearly indicate that there is no model that avoids CF for all investigated datasets and SLTs under application conditions.We conclude with a discussion of potential solutions and workarounds to CF, notably for the EWC and IMM models.
We check DNN models for catastrophic forgetting using a new evaluation scheme that reflects typical application conditions, with surprising results.
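The abstract above names EWC (Elastic Weight Consolidation) among the models examined. As background, the sketch below illustrates the standard EWC quadratic penalty on a toy PyTorch model; the network, the random data, the regularisation strength, and the crude one-batch diagonal Fisher estimate are all illustrative assumptions, not the protocol used in the study.

```python
import torch
import torch.nn as nn

# Minimal sketch of the EWC penalty on a toy model. The "data" are random; in practice
# the Fisher information is estimated on task A before training on task B.
torch.manual_seed(0)
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

x_a = torch.randn(256, 10)
y_a = torch.randint(0, 2, (256,))

# Crude diagonal Fisher estimate: squared gradients of the task-A loss (one batch).
model.zero_grad()
loss_fn(model(x_a), y_a).backward()
fisher = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
theta_star = {n: p.detach().clone() for n, p in model.named_parameters()}

def ewc_penalty(model, fisher, theta_star, lam=100.0):
    """lam/2 * sum_i F_i * (theta_i - theta*_i)^2, summed over all parameters."""
    penalty = 0.0
    for n, p in model.named_parameters():
        penalty = penalty + (fisher[n] * (p - theta_star[n]) ** 2).sum()
    return 0.5 * lam * penalty

# Training on task B would then minimise: task-B loss + EWC penalty.
x_b = torch.randn(256, 10)
y_b = torch.randint(0, 2, (256,))
model.zero_grad()
total_loss = loss_fn(model(x_b), y_b) + ewc_penalty(model, fisher, theta_star)
total_loss.backward()
print(float(total_loss))
```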
the versatility of the T-cell library method, a CD4+ T-cell library, was generated from a healthy HLA-DR1+ donor, and simultaneously screened via IFNγ ELISpot for reactivity against two HLA-DR1-restricted peptide pools.The first peptide pool contained three putative peptides from HA of flu, and the second peptide pool contained five putative peptides from 5T4 oncofetal protein.Positive wells from the screen were enriched based on IFNγ production in response to peptide, and then expanded once with PHA and irradiated allogeneic feeder cells to produce lines.The lines were subsequently screened against individual peptides in an IFNγ ELISpot, and then cloned to the single-cell level.From this, three 5T4-clones were generated and tested against decreasing doses of peptide via MIP-1β ELISA, in order to establish their sensitivity to the corresponding epitope.Thus, these data indicate that our T-cell library strategy can also successfully produce CD4+ T-cells with desired specificities.It is noteworthy that autologous EBV immortalised B-cell lines were initially used for the screening of CD4+ libraries, but these induced high numbers of positive wells, presumably because of T-cells with reactivity against EBV.Finally, we reasoned that this method of T-cell clone generation could also be used to generate peptide-specific T-cell clones from vaccinated individuals.Therefore, a CD8+ T cell library was generated from the PBMC of a healthy HLA-A2+ donor, who had previously participated in a clinical trial for an EBOV DNA vaccine.The library was screened via IFNγ ELISpot for reactivity against a pool of three predicted HLA-A2-restricted EBOV-Z nucleoprotein epitopes.Positive wells from the screen were pooled, subjected to IFNγ/TNFα dual enrichment, and then cloned to the single-cell level.Six EBOV-Z-specific clones were generated, all reactive to the EBOV-Z-NP150–158 peptide, as determined by MIP-1β ELISA.Peptide dose–response curves for three of the clones are shown as an example.These data demonstrate the ability of this T-cell library method to rapidly produce viral-specific T-cell clones from the blood of a vaccinated donor.Modern advances in cell sorting, using fluorescence or magnetic beads, have allowed the generation of T-cell clones following physical isolation with pMHC multimers, or functional detection using antibodies specific for cellular activation markers.Although these techniques have worked well in our laboratory for some antigens, we have failed to grow robust clones using these standard methodologies more often than we have succeeded.In order to circumvent this difficulty, we developed the T-cell library strategy described here.Previous studies have applied a T-cell library approach to study T-cell frequencies, but instead used PHA in combination with irradiated allogeneic feeder cells for T-cell expansion.The CD3/CD28 beads used in our strategy have been shown to better preserve the TCR repertoire during in vitro expansion.Nevertheless, while this methodology maintains the general TRBV families and dominant antigen-specific T-cell responses faithfully, it remains possible that extremely rare clones are lost during this expansion phase.Using the methodology we describe here, we have been able to simultaneously generate many hundreds of peptide-specific T-cell clones, with at least one being grown from each library.T-cell libraries have become the method of choice for generating monoclonal T-cells in our laboratory, as they avoid the need for pMHC multimers, ample donor material, or 
time-consuming DC production.Furthermore, we consider it an advantage to have the T-cells already adapted to in vitro culture prior to screening, and also to avoid repeated exposure to antigenic peptide, which can often lead to T-cell exhaustion.Importantly, we have found T-cell clones to be extremely advantageous for improving pMHC multimer staining protocols, T-cell epitope identification, defining T-cell cross-reactivity, obtaining monoclonal TCRs, and peptide vaccine development.In summary, we have developed an efficient and reproducible library-based strategy for the successful detection and isolation of peptide-specific human T-cell clones from polyclonal CD8+ or CD4+ T-cell populations.By introducing a degree of clonality at the start of culture, and by coupling this with effective cytokine-mediated enrichment strategies, our methodology permits the relatively rapid generation of fully validated clones in as little as 6 weeks.Overall, T-cell libraries provide a useful tool for the T-cell immunologist, as they can be used for the simple parallel generation of multiple T-cell clones with numerous research applications in infectious disease, autoimmunity and cancer.
Isolation of peptide-specific T-cell clones is highly desirable for determining the role of T-cells in human disease, as well as for the development of therapies and diagnostics.However, generation of monoclonal T-cells with the required specificity is challenging and time-consuming.Here we describe a library-based strategy for the simple parallel detection and isolation of multiple peptide-specific human T-cell clones from CD8+ or CD4+ polyclonal T-cell populations.T-cells were first amplified by CD3/CD28 microbeads in a 96U-well library format, prior to screening for desired peptide recognition.T-cells from peptide-reactive wells were then subjected to cytokine-mediated enrichment followed by single-cell cloning, with the entire process from sample to validated clone taking as little as 6 weeks.Overall, T-cell libraries represent an efficient and relatively rapid tool for the generation of peptide-specific T-cell clones, with applications shown here in infectious disease (Epstein-Barr virus, influenza A, and Ebola virus), autoimmunity (type 1 diabetes) and cancer.
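The clone-validation step described above reports peptide dose-response curves (for example, MIP-1β ELISA readouts across decreasing peptide doses). As an illustration of how such titrations are often summarised, the sketch below fits a four-parameter logistic curve to estimate an EC50 using SciPy; the dose and response values are invented placeholders, not data from this study, and this is not necessarily the fitting software used by the authors.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(log_dose, bottom, top, log_ec50, hill):
    """Four-parameter logistic dose-response curve, parameterised in log10(dose)."""
    return bottom + (top - bottom) / (1.0 + 10 ** ((log_ec50 - log_dose) * hill))

# Hypothetical peptide titration (molar) and MIP-1beta readout (pg/mL).
dose = np.array([1e-9, 1e-8, 1e-7, 1e-6, 1e-5])
response = np.array([40.0, 120.0, 900.0, 2400.0, 2600.0])

popt, _ = curve_fit(four_pl, np.log10(dose), response,
                    p0=[response.min(), response.max(), -7.0, 1.0])
bottom, top, log_ec50, hill = popt
print(f"EC50 ~ {10 ** log_ec50:.2e} M, Hill slope ~ {hill:.2f}")
```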
men. Suicide deaths in 2016 accounted for 3·07% of all deaths in the high ETL state group, 3·02% in the higher-middle ETL state group, 2·35% in the lower-middle ETL state group, and 1·74% in the low ETL state group. The high and higher-middle ETL state groups had significant declines in age-standardised SDR for women from 1990 to 2016, but no significant change for men. The low and lower-middle ETL state groups did not have a significant change in SDR during this period among women or men. In 2016, the high and higher-middle ETL state groups continued to have higher SDRs than the low and lower-middle ETL state groups for both women and men. However, there were variations between the states within the ETL state groups. Significant declines in age-standardised SDR were observed for women from 1990 to 2016 only in Tamil Nadu in the high ETL group, in Haryana, Jammu and Kashmir, and West Bengal in the higher-middle ETL group, and in Uttarakhand in the lower-middle ETL group. No state had a significant change in age-standardised SDR for men during this period. The age-standardised SDR of 14·7 per 100 000 women in India in 2016 was 2·1 times higher than the global average, and the observed-to-expected ratio was 2·74. The highest SDR among women in 2016 was in the states of Tamil Nadu and Karnataka, followed by West Bengal, Tripura, Andhra Pradesh, and Telangana. These six states had age-standardised SDRs of more than 18 per 100 000 women; only three countries in the world had SDRs higher than this level among women in 2016. The SDR in women ranged ten-fold between the states of India. Even within the group of eight states in the northeast region of India, this ranged eight-fold. The observed-to-expected ratio of SDR among women ranged from 0·45 to 4·54 in 2016. Six states with 25·7% of India's population had an observed-to-expected ratio of more than three, ten states with 48·8% of India's population had an observed-to-expected ratio of between two and three, and six states with 11·8% of India's population had an observed-to-expected ratio of between 1·5 and two. The numbers of suicide deaths among women in each state are shown in the appendix. The age-standardised SDR of 21·2 per 100 000 men in India in 2016 was 1·4 times higher than the global average, and the observed-to-expected ratio was 1·31. The eight states with the highest SDR among men in 2016 included the six states that had the highest SDR among women as well as Kerala and Chhattisgarh. The crude SDR in men ranged six-fold between the states of India. The highest and the lowest crude SDR for men were within the group of eight states in the northeast region of India. The observed-to-expected ratio of SDR in men in 2016 among the states of India ranged from 0·40 in Nagaland to 2·42 in Tripura. Eight states with 33·9% of India's population had an observed-to-expected ratio of between 1·5 and three, ten states with 48·0% of India's population had an observed-to-expected ratio between one and 1·5, and 12 states with 17·7% of India's population had an observed-to-expected ratio of less than one. The numbers of suicide deaths among men in each state are shown in the appendix. The age-specific SDR among girls and women decreased significantly from 1990 to 2016 for those aged 10–34 years, but increased significantly for those older than 80 years. Among men, the only significant change during this period was an increase in the SDR for those older than 80 years. In 2016, the highest SDR among younger women was in the age group of
15–29 years, and SDR among older women increased from 20·9 per 100 000 women in the age group of 75–79 years to 40·6 per 100 000 women in those 95 years or older.Among men, the SDR in 2016 was in a similar range between ages 20–74 years, and then increased in the older age groups to as high as 80·8 per 100 000 men in those 95 years or older.The age distribution of SDR was generally similar in the ETL groups in 2016 for both men and women.A large proportion of the suicide deaths in India in 2016 was in the age groups 15–39 years in both women and men, although these proportions were somewhat lower than those in 1990 among both women and men.Suicide was the leading cause of death for those in the age groups of 15–29 years and 15–39 years in India in 2016 for both sexes combined.Suicide was the leading contributor to deaths in these age groups among women, and the second leading contributor among men in India.Suicide deaths ranked first among all causes of death in women aged 15–29 years in 26 of the 31 states, and in women aged 15–39 years in 24 states; for men, suicide was the leading cause of death in nine states for those aged 15–29 years and ten states in those aged 15–39 years.The men-to-women ratio of crude SDR increased from 0·96 in 1990 to 1·34 in 2016, which was due to the decrease in SDR among women during this period especially in the younger age groups.Even with this decrease, the men-to-women ratio of crude SDR continued to be less than one up to 24 years of age in 2016, but was more than one in the older age groups.In 1990, the men-to-women ratio of crude SDR was similar in all ETL state groups, but this ratio was smaller in the lower-middle
Age-standardised SDR among women in India reduced by 26.7% from 20.0 (95% UI 16.5–23.5) in 1990 to 14.7 (13.1–16.2) per 100 000 in 2016, but the age-standardised SDR among men was the same in 1990 (22.3 [95% UI 14.4–27.4] per 100 000) and 2016 (21.2 [14.6–23.6] per 100 000).The highest age-specific SDRs among women in 2016 were for ages 15–29 years and 75 years or older, and among men for ages 75 years or older.
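The state-level comparisons above rest on two simple calculations: a directly age-standardised rate (a weighted sum of age-specific rates using standard-population weights) and an observed-to-expected ratio (the observed SDR divided by the SDR expected for the state's level of development, as in GBD analyses). The Python sketch below illustrates both; the rates, weights and expected value are placeholders, not the GBD standard population or the study's data.

```python
import numpy as np

# Direct age-standardisation: ASR = sum_i w_i * r_i, where r_i are age-specific rates
# (per 100 000) and w_i are standard-population weights that sum to 1.
age_specific_rates = np.array([3.0, 25.0, 18.0, 15.0, 16.0, 21.0, 40.6])   # per 100 000 (placeholder)
standard_weights   = np.array([0.28, 0.25, 0.20, 0.13, 0.08, 0.04, 0.02])  # placeholder weights

asr = float(np.dot(standard_weights, age_specific_rates))
print(f"Age-standardised rate: {asr:.1f} per 100 000")

# Observed-to-expected ratio: observed SDR divided by the SDR expected for the state's
# level of development (placeholder expectation shown here).
observed_sdr = 18.5
expected_sdr = 6.8
print(f"Observed-to-expected ratio: {observed_sdr / expected_sdr:.2f}")
```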
High throughput deep RNA-sequencing was done to generate a de novo reference sequence for red drum using cDNA libraries constructed from liver, testis and head kidney tissues. Sequencing on the HiSeq 2500 generated a total of 128.9 million paired-end reads with an average length of 100 bp. Trinity de novo assembly produced 161,438 transcripts and 116,036 genes with a contig N50 of 2224 bp. Size distribution of the 116,036 genes showed that 70% of the de novo assembled genes were greater than 1000 bp. Gene ontology analysis of the 161,438 transcripts was done using the Blast2GO® blastx program against the non-redundant vertebrata database of NCBI with the default cutoff e-value of 10^-3. This analysis generated 79,927 transcripts with blast hits to known proteins, of which 34,841 transcripts were assigned at least one functional GO term. The annotated transcripts were categorized by GO distribution level with the top 20 categories of one of three main levels: biological process, molecular function, and cellular components. KEGG pathway analysis was done to understand the biological processes of the annotated transcripts for each tissue. The number of transcripts assigned enzyme codes to run the KEGG pathway analysis was 769 for liver, 817 for head kidney, and 860 for gonad. Tissue distribution of all identified transcripts across the three tissues showed that the majority were expressed in all three tissues. Red drum were reared from captive spawned broodstock and obtained from Mote Aquaculture Research Park. Liver, testis and head kidney tissues were aseptically dissected from two adult red drum males, euthanized using MS-222 at 300 ppm for 15 min. At sacrifice, tissues were rinsed with PBS and placed immediately in RNAlater™. Total RNA was extracted from 30 mg of tissue using Tri-Reagent® following the manufacturer's instructions. RNA quantity was evaluated by the Qubit 3.0 Fluorometer and quality with the 2100 Bioanalyzer. Only samples with an RIN >7.0 were used for sequencing. One μg of total RNA from each tissue sample was sent for RNA sequencing. Construction of the RNA-seq libraries, Illumina RNA sequencing and de novo assembly were carried out by Omega Bioservices. Briefly, paired-end 100 bp sequencing was performed on the Illumina HiSeq 2500 sequencer. The sequencing quality was assessed using FastQC. Data were filtered using Trimmomatic v0.30: primer and adaptor sequences were removed, sequence reads with both paired-end qualities < 25 were truncated, and sequence reads not maintaining an average quality of 25 over a 4 bp sliding window were truncated, based on the phred algorithm. All reads were combined to assemble a de novo transcriptome for red drum using Trinity. The trimmed reads from each sample were aligned to the assembled transcriptome separately. The expression abundance for genes/transcripts was calculated using eXpress. Expression levels of genes/transcripts were normalized across samples using the trimmed mean of M-values (TMM) normalization method. Gene ontology and Kyoto Encyclopedia of Genes and Genomes pathway analyses were performed using Blast2GO® PRO version 4.1.7. Tissue distribution of the transcripts was evaluated using the open-access Orange software, version 3.4.5.
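The assembly statistics above include a contig N50 of 2224 bp. For readers unfamiliar with the metric, the short sketch below shows how an N50 is typically computed from a list of contig lengths; the lengths used here are invented for illustration, not taken from the red drum assembly.

```python
def n50(lengths):
    """Return the N50: the contig length L such that contigs of length >= L
    contain at least half of the total assembled bases."""
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if running >= total / 2:
            return length

# Hypothetical contig lengths (bp) from a de novo assembly.
contigs = [300, 450, 800, 1200, 2200, 2500, 3100, 5000]
print("N50 =", n50(contigs), "bp")
```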
Red drum (Sciaenops ocellatus) is an estuarine Sciaenid with high commercial value and recreational demand.During the past 50 years, overfishing has caused declines in the population that resulted in the development of red drum commercial and stock enhancement aquaculture fisheries.Despite the potential high economic value in both wild and aquaculture commercial fisheries the availability of transcriptomic data for red drum in public databases is limited.The data here represents the transcriptome profiles of three tissues: liver, testis and head kidney from red drum.The data was generated using Illumina high-throughput RNA sequencing, Trinity for de novo assembly and Blast2GO for annotation.Six individual libraries were pooled for sequencing of the transcriptome and the raw fastq reads have been deposited in the NCBI-SRA database (accession number SRP11690).
study the variability of the microstructure of the Lower Bowland shale was highlighted from the cm to μm scale using the traditional techniques of microscopy combined with XRD and TOC measurements.The samples show a high variability of: microtexture types, mineralogy, TOC, organic-matter particles, and fractures which are organized in a complex network.The majority of samples are quartz-rich and high TOC.Some samples have low overall TOC but their microstructure shows local cyclicity between organic-rich and organic-poor laminae.This confirms the presence of narrow and periodic, qualitatively organic-rich deposits in Bowland shale as has been previously suggested in the literature.The low clay content, the high detrital and cemented quartz content, and the presence of a complex and multi-scale fracture network support the developing interest in the Lower Bowland shale as a potential hydrocarbon resource.The planar geometry of the vertical calcite veins means that the rock was already compacted and lithified before the carbonate-sealed fractures could form.Subsequently, mineral replacements and authigenic growths of clay minerals and secondary carbonate grains formed.Later, horizontal bitumen-bearing fractures provided routes for hydrocarbon migration and may have formed as maximum principal stress became horizontal as a result of stress relaxation during creep coupled with erosional offloading.The vertical bitumen-bearing fractures were interpreted as being due to the influence of weak joints generated by the previous carbonate veining episode, which increases the brittleness of the shale.The microstructural and mineralogical hetereogeneities in areas heavily affected by veins may have preferentially facilitated formation of open fractures during specimen recovery and handling.The identification of various microtexture types and their heterogeneities in terms of mineralogy and structure will aid the selection of specific types of samples for 2D and 3D high-resolution imaging in the cement and clay lenses, and for geomechanical characterisation.In the same way, the description of the various organic matter particles should guide the selection of key particles for characterization of the pore network within kerogen and bitumen-filled fractures.Evidence of a range of geological episodes such as carbonate-veining, feldspar alteration to kaolinite, ankerite neoformation and bitumen-driven fracturing highlight periods of fluid passage or circulation within the Bowland sequence and temperature-driven reactions occurring during burial, for over the temperature range between approximately 30 and 150 °C.The evidence presented here provides new aspects to aid understanding of the geological history of the Lower Bowland sequence, which should aid in the development of a more generalized understanding of the sequence through future studies across larger sample quantities.In future studies of Lower Bowland Shale microstructure, a regular separation distance between the samples should be taken and further observations of the cement should be made to evaluate the connectivity of the various fractures, especially the numerous horizontal micrometre-scale empty fractures F4, and to make a better estimation of mineralogy and modal fraction of clays present.
Variability in the Lower Bowland shale microstructure is investigated here, for the first time, from the centimetre to the micrometre scale using optical and scanning electron microscopy (OM, SEM), X-Ray Diffraction (XRD) and Total Organic Carbon content (TOC) measurements. A significant range of microtextures, organic-matter particles and fracture styles was observed in rocks of the Lower Bowland shale, together with the underlying Pendleside Limestone and Worston Shale formations encountered in the Preese Hall-1 Borehole, Lancashire, UK. Organic matter particles are classified into four types depending on their size, shape and location: multi-micrometre particles with and without macropores; micrometre-size particles in cement and between clay minerals; multi-micrometre layers; and organic matter in large pores. Fractures are categorized into carbonate-sealed fractures; bitumen-bearing fractures; resin-filled fractures; and empty fractures. We propose that during thermal maturation, horizontal bitumen-bearing fractures were formed by overpressuring, stress relaxation, compaction and erosional offloading, whereas vertical bitumen-bearing, resin-filled and empty fractures may have been influenced by weak vertical joints generated during the previous period of veining. For the majority of samples, the high TOC (>2 wt%), low clay content (<20 wt%), high proportion of quartz (>50 wt%) and the presence of a multi-scale fracture network support the increasing interest in the Bowland Shale as a potentially exploitable oil and gas source. The microtextural observations made in this study highlight preliminary evidence of fluid passage or circulation in the Bowland Shale sequence during burial.
upon exposure to easily soluble Cu salt.It can also be noted that since the ROS levels were increased by the presence of Cu NPs in PBS, ROS could also contribute to the membrane damage.Precipitation of Cu-complexes can also play a role for the induced toxicity in PBS, as at least a non-soluble Cu–phosphate compound is likely to form that possibly may interact with the cells.Speciation studies of Cu released from Cu NPs in biomolecule-containing media, e.g., the commonly used cell medium DMEM, showed essentially no free Cu2+ ions in solution, as released Cu was completely complexed to biomolecules.This observation is valid up to a high investigated particle loading of 100 μg/mL, effectively showing that the released fraction, including effects of free Cu2+ ions, will not take any part for the membrane damage induced by the Cu NPs.The complexation of released Cu also inhibited ROS production.The membrane damage in the biomolecule-containing media was instead linked to corrosion reactions taking place at the surface of the Cu NPs that resulted in ROS formation.Corrosion was accelerated by the destabilizing effect of the biomolecules on the surface oxide, which dissolved during the first hours in the biomolecule-containing media followed by relative constant corrosion rates.The absence of biomolecules in PBS resulted in minor corrosion and ROS formation compared with corresponding effects in media containing biomolecules.Instead, released Cu from Cu NPs in PBS formed labile Cu-complexes that were able to induce cellular damage directly.The role of Cu–phosphate precipitates cannot be excluded.The different mechanisms of induced membrane damage of Cu NPs in PBS and biomolecule-containing media highlight the important influence of the buffer and medium used in tests of Cu NP toxicity.In all, the benefit of using corrosion and speciation investigations for understanding acutely induced membrane damage by Cu NPs is highlighted.It is suggested that an array of techniques is used in nanotoxicological studies, e.g., when screening NPs, to assess the correlation between particle properties and induced toxicity.
Copper nanoparticles (Cu NPs) are increasingly used in various biologically relevant applications and products, e.g., due to their antimicrobial and catalytic properties.This inevitably demands for an improved understanding on their interactions and potential toxic effects on humans.The aim of this study was to investigate the corrosion of copper nanoparticles in various biological media and to elucidate the speciation of released copper in solution.Furthermore, reactive oxygen species (ROS) generation and lung cell (A549 type II) membrane damage induced by Cu NPs in the various media were studied.The used biological media of different complexity are of relevance for nanotoxicological studies: Dulbecco's modified eagle medium (DMEM), DMEM+ (includes fetal bovine serum), phosphate buffered saline (PBS), and PBS + histidine.The results show that both copper release and corrosion are enhanced in DMEM+, DMEM, and PBS + histidine compared with PBS alone.Speciation results show that essentially no free copper ions are present in the released fraction of Cu NPs in neither DMEM+, DMEM nor histidine, while labile Cu complexes form in PBS.The Cu NPs were substantially more membrane reactive in PBS compared to the other media and the NPs caused larger effects compared to the same mass of Cu ions.Similarly, the Cu NPs caused much more ROS generation compared to the released fraction only.Taken together, the results suggest that membrane damage and ROS formation are stronger induced by Cu NPs and by free or labile Cu ions/complexes compared with Cu bound to biomolecules.
iron than the resident magma thus was more primitive and had a greater density.This contributed to pooling of the injected magma between the resident magma and the cumulate pile and to allow downwards percolation into the underlying cumulates.The lack of indicators of magma flow throughout most of the kakortokite series and the planar nature of the unit boundaries reflect an exceptionally quiescent resident magma throughout the development of the layered series, except during the roof collapse event.This reduces the potential for mixing of the injected magma with the resident magma, allowing for the formation of the basal layer.No evidence is found in the present study for xenocrysts within the kakortokites, indicating that the replenishing magma was aphyric.Thus, the injecting magma is inferred to have been saturated in each of the key components as well as being enriched in volatile elements.Whilst it is near impossible to determine accurately the thickness of the basal layer that developed each unit, we estimate the scale to be similar to the units.Since the units have a mean thickness of 7 m, we infer that they developed from a basal layer of magma ~ 10 m thick.It should however be noted that we observe variations in unit thicknesses from 2.5 m to 17 m and this may correspond to variations in the volume of injected magma.The resident magma is inferred to have always separated the developing kakortokite sequence from the naujaite, but the magma chamber is inferred to have undergone inflation during development of the rock sequences, thus the resident magma may have had a vertical thickness of a few hundred metres.The sharp boundary between Unit − 1 and Unit 0 formed from combined thermal and chemical erosion of a semi-rigid crystal mush as shown by the embayed alkali feldspar crystals at the unit boundary.Previous authors have noted structures within the layered kakortokites described as slumps.A full description of these rocks and their genesis is outside the scope of the present study, but they do not petrologically or chemically correspond to the kakortokites.Thus we infer that Unit − 1 was semi-rigid at the time of development of Unit 0, although the roof autolith in Unit + 3 indicates that ~ 20 m of crystal mush was unconsolidated enough to undergo compaction.The initial high concentration of halogens, as indicated by the Cl-enriched eudialyte crystals in black kakortokite, will inhibit nucleation of all mineral phases, resulting in supersaturation of each in the magma.As the basal layer of magma cools, due to thermal equilibration, arfvedsonite is the first phase to nucleate as it can crystallise at higher concentrations of volatile elements than the other phases.This main nucleation event of arfvedsonite primarily occurred in situ at the crystal mush–magma interface, potentially enhanced by epitaxial effects, whereas a smaller number of arfvedsonite crystals grew in suspension in the basal layer.Rapid growth, followed by settling of these crystals provided a crystal population that is notably larger than the crystals that grew in situ.These combined processes developed the black kakortokite layer of Unit 0 above the boundary with subordinate crystallisation of the minor phases.Crystallisation of the black kakortokite results in a decrease in the concentration of fluorine in the basal magma layer, as crystallisation of arfvedsonite takes F from the magma during crystallisation.Minor processes of upwards loss of halogens along concentration gradients, which develop as a result of 
the quiescent state of the resident magma and the coeval crystallisation of the sodalite-rich roof rocks, would also reduce the halogen concentration.This would facilitate the nucleation of eudialyte and a continuous change in halogen concentrations and desaturation in arfvedsonite developed the gradational boundary between the black and red kakortokites.Enhanced nucleation of eudialyte primarily occurred in situ at the crystal mush–magma interface, with minor nucleation of the other phases, and developed the red kakortokite.The concentration of chlorine decreased throughout the formation of red kakortokite.This is due to gradual equilibration of the basal magma layer with the resident magma, through loss of volatile elements due to crystallisation of the Cl-rich eudialyte and minor upward loss along a concentration gradient associated with sodalite crystallisation at the roof of the chamber.This, combined with desaturation of the melt in eudialyte, would allow alkali feldspar and nepheline to nucleate, both in suspension in the magma and in situ, developing the white kakortokite above a gradational boundary.This change in primary accumulation method is inferred to be occurring as the basal magma layer equilibrates with the resident magma body.At this stage nucleation occurred in a relatively halogen-poor magma with a density equivalent to the resident magma.The control on the order of nucleation may be directly related to the halogen content of the magma.High concentrations of halogens depolymerise silicate melts, reducing the length of the silicate chains that can crystallise.Arfvedsonite has a chain structure; eudialyte a ring structure; whereas alkali feldspar and nepheline are tectosilicates, thus the silicate connectivity of the mineral structure increases upwards through the unit.As arfvedsonite has the least complex structure, it may nucleate at high concentrations of halogens that inhibit crystallisation of the other phases.Arfvedsonite will take up fluorine from the magma during crystallisation, which would allow for crystallisation of more complex silicates, i.e. eudialyte.Crystallisation of eudialyte will take up chlorine from the melt, which again combined with upwards loss of volatile elements, would then allow alkali feldspar and nepheline to nucleate.Although Unit 0 is particularly well developed, the model here can be applied to the entire layered sequence.The present
The peralkaline to agpaitic Ilímaussaq Complex, S. Greenland, displays spectacular macrorhythmic (> 5 m) layering via the kakortokite (agpaitic nepheline syenite), which outcrops as the lowest exposed rocks in the complex.This study applies crystal size distribution (CSD) analyses and eudialyte-group mineral chemical compositions to study the marker horizon, Unit 0, and the contact to the underlying Unit − 1.Unit 0 is the best-developed unit in the kakortokites and as such is ideal for gaining insight into processes of crystal formation and growth within the layered kakortokite.The findings are consistent with a model whereby the bulk of the black and red layers developed through in situ crystallisation at the crystal mush–magma interface, whereas the white layer developed through a range of processes operating throughout the magma chamber, including density segregation (gravitational settling and flotation).Primary textures were modified through late-stage textural coarsening via grain overgrowth.An open-system model is proposed, where varying concentrations of halogens, in combination with undercooling, controlled crystal nucleation and growth to form Unit 0.Our observations suggest that the model is applicable more widely to the layering throughout the kakortokite series and potentially other layered peralkaline/agpaitic rocks around the world.
The data describe the cell migration speed and the dynamical behavior of myosin regulatory light chain of wild-type human fibroblasts and AAVS1-modified cells. Immunofluorescent micrographs and time series of light microscopy data for WT cells and cells with a green fluorescent protein gene knocked-in at the AAVS1 locus were analyzed and compared. Representative immunofluorescent images of phosphorylated myosin regulatory light chain in WT and KI cells are shown in Fig. 1. KI cells showed both peripheral and interior P-MRLC fibers. However, WT cells showed interior P-MRLC fibers. WT and KI cell migration was observed using phase contrast and fluorescent microscopy. Cells migrated to the margin of the glass substrate (Supplementary Figs. 1 and 2). Representative cell migratory trajectories are shown in Fig. 2. We analyzed the mean square displacement from the cell tracking raw data and plotted MSD as a function of the time interval. The natural logarithm of MSD and t was plotted and fitted by least-squares regression to clarify the directionality of cell migration. For the two data sets, the power indices were 1.8 and 1.6, respectively. These data indicate that the manner of cell migration of each cell type was mono-directional rather than random. To analyze cell migration speed, the MSD and t data were fit to the theoretical equation of cell migration, and cell migration speed was obtained as one of the fit parameters. Statistical analysis of cell speed from four independent experiments is shown in Fig. 3. Although the migration speed of WT and KI cells was not significantly different, as analyzed by Student's t-test, WT cells tended to migrate faster than KI cells. Dynamics of the cytoskeletal protein were observed using confocal microscopy. Fluorescent protein-tagged myosin regulatory light chain-transfected WT and KI cells were observed before and after local photobleaching (Supplementary Figs. 3 and 4). The time course of the averaged intensity around the photo-bleached region was plotted (Supplementary Data 3 and 4). Development of KI cells was described in the previous publication. We compared cell migration and dynamics of the cytoskeletal protein in KI and WT cells. The human fibroblast cell line MRC-5 SV1 TG1 was purchased from the RIKEN Cell Bank. GFP gene knocked-in cells were established previously. These cells were cultured in low-glucose Dulbecco's modified Eagle's medium supplemented with 10% bovine serum and 1% antibiotics in 5% CO2 at 37 °C. Plasmids were transfected using Lipofectamine 2000. A plasmid for Kusabira Orange-tagged MRLC expression was constructed as follows. The GFP-coding region of pAcGFP-N3 was removed by BamHI–NotI digestion and replaced with the PCR product from the monomeric Kusabira Orange 2-encoding plasmid. The PCR product of wild-type non-muscle MRLC was obtained from the MRC-5 cDNA pool and inserted into the EcoRI–KpnI site of pKusabiraOrange-N3. Cells were fixed with 4% formaldehyde/PBS and permeabilized with 0.5% Triton X-100/PBS. Phosphorylated MRLC was detected using the anti-P-MRLC antibody and Alexa Fluor-594 rabbit IgG. Filamentous actin was stained with Alexa Fluor-488 phalloidin. Fluorescence images were obtained with a confocal laser-scanning microscope. WT and KI cells were cultured on a glass substrate with a polydimethylsiloxane (PDMS) barrier. After formation of a cell monolayer on the glass substrate, the PDMS barrier was peeled off and cells began to migrate into the empty space. Cell nuclei, stained with Hoechst 33342, were used as cell tracking markers. Time-lapse imaging was performed with an inverted microscope equipped with a digital CMOS camera. Cell tracking was analyzed with the TrackMate plugin in FIJI image analysis software. Data sets for WT and KI cell tracking are attached in the Supplemental data. Kusabira Orange-tagged MRLC-expressing WT and KI cells were cultured on a glass substrate, and temporal changes in the fluorescent signal were observed using a confocal microscope with a 60× objective lens. During observation, the scanning area was narrowed to the cell, which formed a photo-bleached spot, and then the scanning area was resized to the original.
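The trajectory analysis described above computes the mean square displacement (MSD), fits ln(MSD) against ln(t) to obtain the power index (values near 1 indicate random motion, values near 2 directed motion), and extracts migration speed from a model fit. The Python sketch below reproduces the first two steps on a synthetic drifting trajectory; the track, frame interval and drift are assumptions for illustration, not the tracked-nuclei data from this study.

```python
import numpy as np

def msd(track, max_lag):
    """Mean square displacement of a single 2-D trajectory (N x 2 array)
    as a function of the time interval (lag), in frames."""
    out = []
    for lag in range(1, max_lag + 1):
        disp = track[lag:] - track[:-lag]
        out.append(np.mean(np.sum(disp ** 2, axis=1)))
    return np.array(out)

# Synthetic, slightly directional trajectory (positions in micrometres).
rng = np.random.default_rng(1)
steps = rng.normal(0.0, 0.5, size=(200, 2)) + np.array([0.3, 0.0])  # drift along x
track = np.cumsum(steps, axis=0)

lags = np.arange(1, 51)
m = msd(track, 50)

# Fit ln(MSD) = alpha * ln(t) + c; alpha near 1 => random walk, near 2 => directed motion.
alpha, c = np.polyfit(np.log(lags), np.log(m), 1)
print(f"power index alpha ~ {alpha:.2f}")
```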
This data article describes cellular dynamics, such as migration speed and mobility of the cytoskeletal protein, of wild-type human fibroblast cells and cells with a modified adeno-associated virus integration site 1 (AAVS1) locus on human chromosome 19.Insertion of exogenous gene into the AAVS1 locus has been conducted in recent biological researches.Previously, our data showed that the AAVS1-modification changes cellular contractile force (Mizutani et al., 2015 [1]).To assess if this AAVS1-modification affects cell migration, we compared cellular migration speed and turnover of cytoskeletal protein in human fibroblasts and fibroblasts with a green fluorescent protein gene knocked-in at the AAVS1 locus in this data article.Cell nuclei were stained and changes in their position attributable to cell migration were analyzed.Fluorescence recovery was observed after photobleaching for the fluorescent protein-tagged myosin regulatory light chain.Data here are related to the research article "Transgene Integration into the Human AAVS1 Locus Enhances Myosin II-Dependent Contractile Force by Reducing Expression of Myosin Binding Subunit 85" [1].
fragment.Stable transgenic Xenopus laevis animals were generated with the 5.1 kb fragment as described in the Materials and Methods.As the main aim for generating this transgenic Xenopus laevis was to label the migrating neural crest, and because the expression of GFP in the Sox10-GFP is weak in the pre-migratory neural crest, we focus our description of this new transgenic line on the neural crest migratory stages, where the expression of Sox10-GFP is stronger.Sox10 is normally expressed in all streams of migratory neural crest and in the otic vesicle.This neural crest pattern is reproduced in the Sox10-GFP stable transgenic embryos, showing clear GFP fluorescence in the cephalic neural crest streams at all migratory stages.The expression of Sox10 in the otic vesicle is only partially recapitulated in this transgenic frog, suggesting that not all of the regulatory elements for the Sox10 gene are present in the 5.1 kb fragment.However, the expression of GFP in the neural crest is strong enough to allow direct visualization of GFP fluorescence and to perform time-lapse imaging of the migrating neural crest.Histological sections of Sox10-GFP stable transgenic Xenopus laevis embryos followed by immunostaining against GFP show that GFP is expressed in the mandibular, hyoid, branchial and trunk neural crest.Supplementary material related to this article can be found online at doi:10.1016/j.ydbio.2018.02.020.The following is the Supplementary material related to this article Movie 1.Time lapse imaging of a Sox10-GFP stable transgenic embryo.Images were taken every 6 minutes over 12 hours using a Leica M205 FA Stereo fluorescence microscope.These results show that this new Sox10-GFP stable transgenic Xenopus laevis could be used to study neural crest migration in amphibians.This new tool will allow researchers to perform live imaging time lapse microscopy and in addition, could be used to specifically label neural crest cells for FACS-sorting and purification for further molecular studies.The Sox10-GFP stable transgenic line is maintained at the European Xenopus Resource Centre and at the National Xenopus Resource at the Marine Biology Laboratory where they are available upon request.We have generated both Pax3-GFP and Sox10-GFP transgenic Xenopus laevis frog lines in which most of the endogenous embryonic expression of Pax3 and Sox10 are recapitulated.Pax3-GFP could be used for neural crest induction studies, whereas Sox10-GFP would be ideal to study neural crest migration.Thus, these two transgenic lines represent invaluable new tools to mechanistically study these two distinct processes of neural crest development.
The neural crest is a multipotent population of cells that originates a variety of cell types.Many animal models are used to study neural crest induction, migration and differentiation, with amphibians and birds being the most widely used systems.A major technological advance to study neural crest development in mouse, chick and zebrafish has been the generation of transgenic animals in which neural crest specific enhancers/promoters drive the expression of either fluorescent proteins for use as lineage tracers, or modified genes for use in functional studies.Unfortunately, no such transgenic animals currently exist for the amphibians Xenopus laevis and tropicalis, key model systems for studying neural crest development.Here we describe the generation and characterization of two transgenic Xenopus laevis lines, Pax3-GFP and Sox10-GFP, in which GFP is expressed in the pre-migratory and migratory neural crest, respectively.We show that Pax3-GFP could be a powerful tool to study neural crest induction, whereas Sox10-GFP could be used in the study of neural crest migration in living embryos.
identify what is required of process models to be useful for on-line control, and to evaluate how existing approaches meet these requirements. Section 3.2 presented a generalised mathematical form for the closed-loop control of metal forming processes. If on-line solution of the optimisation problem takes a time T, then this defines the sample time of the controller, so the dynamic bandwidth of the control system is limited to variations which occur at frequencies lower than 1/(2T). In some processes this is useful. For example, in a large batch deep-drawing process producing the same product over many days, it may be useful to solve the optimisation problem and adjust the actuator settings during the batch. However, in general, full solution is too time-consuming, so some form of model approximation is required. The term for model error Δ in Section 3.1 provides a basis for evaluating the trade-off between model accuracy and speed. Given a range of approaches to developing a process model, including different choices about model resolution and simplification, there will be some function that relates model solution time to model error, say T_model = S(Δ). The sample time of the control actions T must be greater than this, so for each model design it would be possible to solve a form of Eq. to allow prediction of the maximum magnitude of the disturbances that can be controlled. Alternatively, given knowledge of the disturbances, it would be possible to specify the required trade-off between model solution time and error, S(Δ). As yet this analysis has not been developed, but it would be a useful contribution to the design of future closed-loop control systems, and the approach could be extended to incorporate the value of increasingly accurate sensor feedback in improving the quality of model predictions at a given solution speed. Nevertheless, a broad range of modelling approaches are available, so the next
This uncertainty leads to a degradation of product properties from customer specifications, which can be reduced by the use of closed-loop control.This leads to a survey of current and emerging applications across a broad spectrum of metal forming processes, and a discussion of likely developments.
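The discussion above links the on-line model solution time T to the controllable disturbance bandwidth (below 1/(2T)) and proposes a trade-off function S(Δ) between solution time and model error. The sketch below illustrates that bookkeeping with an invented, monotonic trade-off table; the timings and error values are placeholders, not measured solver performance.

```python
import numpy as np

# Hypothetical trade-off: model error (fraction of nominal output) vs. solution time (s).
# S is assumed monotonically decreasing: slower, higher-resolution models are more accurate.
solution_time = np.array([0.1, 0.5, 1.0, 5.0, 20.0])      # s
model_error   = np.array([0.20, 0.10, 0.06, 0.03, 0.01])  # fraction of nominal output

# If the on-line optimisation takes T seconds per solve, the controller sample time is T,
# so only disturbances at frequencies below f_max = 1 / (2 T) can be rejected.
for T, err in zip(solution_time, model_error):
    f_max = 1.0 / (2.0 * T)
    print(f"T = {T:5.1f} s -> controllable bandwidth < {f_max:6.2f} Hz, model error ~ {err:.0%}")
```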
extended follow-up was not part of the original ISREP trial protocol and thus ethical approval was sought and granted to recontact and reconsent study participants.Participants who had consented to take part in the ISREP study were contacted by letter and telephone to invite them to take part in the follow-up assessment.Following informed consent, assessments were conducted by trained research assistants who were blind to treatment allocation.Where possible, assessments were conducted using face-to-face interviews and this occurred in 75% of cases.However, the primary outcome measure could also be administered via telephone or discussions with care co-ordinators.We first report frequencies for engagement in competitive employment, voluntary work, and education at 2-year follow-up for participants with affective and non-affective early psychosis and descriptive statistics for secondary outcomes.Chi-square tests are used to test for any significant differences in engagement in work, education, and voluntary work between the treatment and control group."Where the expected count was < 5 for > 20% of the cells, Yates' corrections were employed.Analysis of Covariance models were used to test the significance of differences on secondary outcome variables between the treatment and control groups.For each ANCOVA, outcome at the 2 year follow-up was used as the dependent variable; allocation to treatment, centre, and diagnosis were used as fixed factors; and three key variables assumed to be associated with outcome and predictive of drop out were used as covariates.Non-significant interactions were removed before final testing for main effects.Frequency of engagement in work, education, and voluntary work at 2 years is shown in Table 1.Descriptive statistics for other outcome variables are given in Table 2.These are broken down by treatment and diagnostic group.In the combined sample of individuals with affective and non-affective psychosis, more individuals in the SRT + TAU group had engaged in paid work over the 15 months since the end of the intervention period compared to the TAU alone group.However, there were no significant differences between the SRT + TAU and TAU alone groups in terms of engagement in work, education or voluntary work.The 9 individuals from the SRT + TAU group who had engaged in work reported having done so for an average of 305.39 h over the follow-up period.Data on hours spent in paid work was available for 4 of the 6 individuals from the TAU group.In the non-affective psychosis TAU group, 0 out of 24 participants had engaged in paid employment in the year following the end of the intervention period, compared with 5 out of 20 participants in the non-affective psychosis SRT + TAU group."This difference was found to be significant using a chi-square test with Yates' correction, χ2 = 4.52, p = 0.03.The 5 individuals who had engaged in work reported having done so for an average of 162 h over the follow-up period.There was no difference between the non-affective SRT + TAU and TAU groups in terms of engagement in education or voluntary work.There were no significant differences between the SRT + TAU and TAU alone groups for those with affective psychosis in terms of frequency of engagement in paid work.The 4 individuals with affective psychosis from the SRT + TAU group who had engaged in paid work reported having done so for an average of 484.63 h.Data on hours spent in paid work over the follow-up period was available for 4 of the 6 individuals with affective psychosis from 
the TAU group.There was no difference between the affective SRT + TAU and TAU groups in terms of engagement in education or voluntary work.Both the TAU and SRT + TAU groups showed a gradual reduction in symptoms over the study period.At 2-year follow-up there was a strong trend suggesting an allocation by diagnosis interaction for hopelessness, with the non-affective psychosis treatment group scoring lower on the BHS than individuals in the non-affective psychosis control group = 3.39, p = 0.08).However, ANCOVAs revealed no main effects of treatment on symptoms in the total sample or in the affective or non-affective psychosis subgroups.The follow up data for the ISREP trial provide supportive evidence for longer term gains in the use of SRT in young people with early non-affective psychosis.Fifteen months after the end of the intervention, 25% of participants in the SRT + TAU group had engaged in paid work compared to none of the TAU group.In addition to this there was no worsening of symptoms, despite increased engagement in activity; and there was also a suggestion that improvements in hope were maintained.Engagement in other types of activity was equivalent for the SRT + TAU and TAU non-affective psychosis groups with over 50% of both groups engaging in education and voluntary work.This is positive and suggests that some improvement in functioning may take place naturally over time.However, in order to meet longer-term goals in relation to engagement in paid work, targeted intervention is likely to be necessary.As with the post-intervention data for ISREP reported by Fowler et al., the positive effects of SRT seem to be specific to individuals with non-affective psychosis, with no superiority of treatment being shown for the affective psychosis sub-group.Indeed, individuals with non-affective psychosis demonstrated relatively good outcomes with over 40% engaging in education and voluntary work, irrespective of whether or not they received treatment.This replicates literature highlighting better outcomes in individuals with bipolar disorder as compared to individuals with schizophrenia, possibly due to a return to good functioning between episodes.Individuals with affective psychosis may also have different barriers to functional
Method: In the ISREP study 50 participants (86%) were followed up at 2 years, 15 months post treatment.Results: 25% of individuals with non-affective psychosis in the treatment group had engaged in paid work at some point in the year following the end of therapy, compared with none of the control group.Data from the PANSS and BHS suggested no worsening of symptoms and an indication that gains in hope were maintained over the 15 month period following the end of therapy.
recovery which require a different intervention.However, it must be remembered that the affective psychosis subgroup in this study was small and this impacts upon our ability to draw definitive conclusions.This study adds to the growing evidence base for the use of psychological interventions to target social and functional disability following psychosis.Other interventions include supported employment, Social Skills Training, and Cognitive Remediation.However, whereas other interventions tend to focus on individual barriers to recovery, SRT uses an individualised formulation combined with assertive outreach techniques to understand and target a range of barriers and comorbidity.It is also appropriate for individuals who may be ambivalent about change and who demonstrate a pattern of disengagement.As such, our study includes individuals who may not currently be considered suitable for psychological therapy.In addition, SRT differs from traditional CBT for psychosis due to its wider focus on functioning and an emphasis on the use of behavioural techniques.It is difficult to compare the results of the current study with other interventions due to the use of different outcome measures.A review of supported employment studies in individuals with first episode psychosis reports an employment rate of 49% for those receiving supported employment interventions compared to 29% of individuals receiving standard early intervention service provision.Similarly, a meta-analysis of the international evidence for supported employment for people with severe mental illness suggests that individuals in receipt of supported employment interventions are more than twice as likely to find competitive work than those receiving standard care.Although the employment rates in the current study are not quite as high as those from some supported employment trials, it should be remembered that supported employment is generally designed for individuals who are motivated to find work.SRT may be suitable for more chronic and complex cases that may not be ready to engage with supported employment.Indeed, the rates of employment were very low in the TAU group in the current study.This suggests that without targeted intervention, such individuals are likely to remain unemployed and socially disabled.Moreover, some of the reported challenges to implementing supported employment, including fears around relapse from family members and mental health team staff, may be addressed by the systemic components of our SRT intervention.Although all participants in the trial were accessing secondary mental health services and therefore were in regular contact with mental health professionals as part of TAU, there was no control condition.Future studies should aim to compare SRT to a control intervention matched in terms of frequency of contacts and other non-specific factors.It was also not possible to follow-up all participants who were initially entered into the ISREP study and thus the effect of drop-out is not known.However, we did manage to follow-up 86% of participants, which is comparable to many other RCTs.It would have been interesting to look at time spent in a broader range of activities, such as structured leisure and sports activities.Indeed, the TUS was specifically developed to do this.However, this would have required all participants to have engaged with a face-to-face follow-up assessment.The decision was taken to focus on a more limited assessment of functioning which could be assessed via the telephone and from 
informants in order to maximise follow-up rates. Overall, evidence for the use of SRT with young people with complex social recovery problems associated with non-affective psychosis is growing. This is a highly challenging group to work with: they are difficult to engage and present with complex and comorbid difficulties. However, as these are the cases with the worst prognosis, it is highly important to target this group, since otherwise the likelihood of long-term social disability is high. SRT shows good promise. The SUPEREDEN3 study shows definitive evidence of a gain in activity as a result of treatment at 9 months. Benefits over the longer term are suggested by modelling of the SUPEREDEN3 study at 6 months post-intervention and by the ISREP follow-up data presented here. Research has suggested that social disability may precede the onset of psychosis. As such, we are in the process of conducting a trial of SRT with individuals with At Risk Mental States who have social recovery problems. Findings from the PRODIGY trial will suggest whether or not these gains can be replicated in individuals at an earlier stage of illness. Further research is also necessary to explore whether SRT could be effective for individuals at a later stage of illness, outside of Early Intervention Services.
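For readers who want to verify the subgroup result quoted above, the chi-square test with Yates' correction on the reported counts (5 of 20 in the SRT + TAU group versus 0 of 24 in TAU alone) can be reproduced directly; a minimal sketch using scipy, with only the cell counts taken from the text:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x2 table for the non-affective psychosis subgroup reported above:
# rows = allocation (SRT + TAU, TAU alone), columns = (worked, did not work)
table = np.array([[5, 15],
                  [0, 24]])

# correction=True applies Yates' continuity correction, as in the paper
chi2, p, dof, expected = chi2_contingency(table, correction=True)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, dof = {dof}")
# -> chi2 = 4.52, p = 0.03, matching the values reported in the text
```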
Background: Social Recovery Therapy (SRT) is a cognitive behavioural therapy which targets young people with early psychosis who have complex problems associated with severe social disability.This paper provides a narrative overview of current evidence for SRT and reports new data on a 2 year follow-up of participants recruited into the Improving Social Recovery in Early Psychosis (ISREP) trial.Conclusion: Social Recovery Therapy is a promising psychological intervention which may improve social recovery in individuals with early psychosis.
Demand for mobile wireless services continues to explode."Cisco's latest Visual Networking Index reports that “global mobile data traffic grew 70% in 2012,” driven by average connection speeds that more than doubled from 248 kbps to 526 kbps.Further, Cisco estimates that by 2017, global mobile data traffic will exceed its 2012 level by a factor of 12.6.While in many parts of the world, significant portions of expansion in mobile wireless network capacity will continue to be due to expansions in the geographic coverage of wireless data networks, in developed countries such as the U.S., advanced mobile broadband networks already cover 98.5% of potential subscribers.Thus, the network expansions necessary to accommodate demand growth in developed countries will focus most greatly on deepening network capacities.Technically and economically, this presents a different set of challenges from simply expanding coverage scope – a topic that has been addressed extensively in universal service research.1,The purpose of this paper is to describe and quantify the challenges particularly associated with wireless network deepening.This includes an analysis of the technical issues concerning what techniques for capacity deepening are feasible, and also consideration of the costs of these techniques to determine the economic capability of these techniques to keep up with growing demand.While certain parties have suggested that improvements in technology to increase throughput capacity per megahertz of spectrum and increased geographic reuse of spectrum will be adequate to address wireless demand growth in the U.S. over the next five to ten years, this analysis finds otherwise.2,Although methods to improve throughput capacity per MHz or increase spectrum reuse may be technologically feasible, and are expected to be used intensively by wireless providers, by themselves they are likely to be inadequate or to become uneconomic absent significant increases in mobile wireless pricing.Thus, even under the conservative modeling assumptions of this paper, substantial quantities of additional spectrum will need to be deployed for mobile wireless use if currently forecasted demand is to be satisfied over the next decade without significant service quality degradation or rationing from price increases.This finding is consistent with the conclusions developed by several other studies in the literature that have examined the ability of current or expected spectrum assignments and technologies to accommodate forecasted demand.3,The analysis presented in this study will differ from these prior efforts both by improving on the accuracy of their analyses and by projecting certain enhancements in the ability of evolving wireless technologies to carry more mobile traffic.This paper begins by describing the basic techniques that may be used to expand mobile wireless capacity.These include increasing raw amounts of available radio spectrum, increasing the absolute carrying capacity of each MHz of spectrum, reducing the bandwidth required to carry popular applications, and increasing the utilization of each MHz of spectrum or unit of infrastructure through cell-splitting, sharing or multiple use.In Section 3, this history of technological evolution is contrasted with both the growth in available mobile wireless spectrum and the growth in mobile wireless demand.The paper goes on to catalog the possible forward-going capabilities and economics of several of the most well-known potential Fourth Generation Long Term Evolution 
wireless technology innovations, including innovations whose effects remain highly speculative.By comparing the joint capacity expansion capabilities of these new and old techniques with demand growth estimates, it is possible to evaluate their ability to accommodate demand growth and to reduce upwards pressure on current wireless pricing."In the end, the analysis demonstrates that by themselves, these methods will be inadequate to accommodate fully expected demand growth at today's prices.Thus, increased assignment of radio spectrum to mobile wireless will be essential.This is in contrast to suggestions from certain parties that spectrum scarcity should not be a terribly significant concern for government policymakers.Methods for expanding mobile network capacity divide into three general categories: the deployment of more radio spectrum; more intensive geographic reuse of spectrum; and increasing the throughput capacity of each MHz of spectrum within a given geographic area.The carrying capacity of a mobile wireless system is the total amount of data or voice traffic that the system is able to transfer to and from customers.4,Wireless data are carried by modulating or distorting radio waves.The quantity of waves a wireless system is allowed to modulate each second is called its bandwidth, and is measured in hertz.Everything else equal, a signal with a higher bandwidth can carry more data per second than a signal of lower bandwidth.The total amount of data that a network may transfer over a given period of time relates closely to the rate at which it transfers data bytes.All things equal, a faster network will transfer more bytes than a slower network.Rates of data transfer are measured in terms of bits per second.5,Note, however, that in addition to raw transmission speed, the total amount of data transfer will be higher on a network that operates as a higher usage/fill factor.This can be achieved if a network has traffic offered more uniformly to it over the measurement period – either because the network serves multiple users whose patterns for offering traffic to the network are less than perfectly correlated, or because the network employs packet scheduling protocols that efficiently divide traffic into different queues based on the immediacy of their need for transmission.6,Perhaps the most well-known way for cellular networks to increase the amount of data they carry is by dividing or splitting cells to reduce cell size, and thus increase the number of cells serving a
As demand for mobile broadband services continues to explode, mobile wireless networks must expand greatly their capacities.This paper describes and quantifies the economic and technical challenges associated with deepening wireless networks to meet this growing demand.Methods of capacity expansion divide into three general categories: the deployment of more radio spectrum; more intensive geographic reuse of spectrum; and increasing the throughput capacity of each MHz of spectrum within a given geographic area.The paper describes these several basic methods to deepen mobile wireless capacity.The paper then describes the capabilities of 4G LTE wireless technology, and further innovations off of it, to further improve network capacity.
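The categories summarised above combine multiplicatively: deliverable traffic scales with the amount of deployed spectrum, the throughput obtained from each MHz, the number of cells over which the spectrum is reused, and the achievable fill factor. A minimal illustrative calculation follows; every figure in it is an assumption made for the sketch, not a value from the paper.

```python
def network_capacity_tb_per_month(spectrum_mhz, bps_per_hz, cells,
                                  utilization, seconds_per_month=2.63e6):
    """Aggregate deliverable traffic for one operator's market, in terabytes.

    spectrum_mhz : deployed downlink spectrum (MHz)
    bps_per_hz   : average spectral efficiency (bit/s per Hz per cell)
    cells        : number of cells reusing that spectrum in the market
    utilization  : average fill factor over the month (0-1)
    """
    bits = spectrum_mhz * 1e6 * bps_per_hz * cells * utilization * seconds_per_month
    return bits / 8 / 1e12   # bits -> terabytes

# Hypothetical market: 50 MHz deployed, LTE-like 1.4 bit/s/Hz, 200 cells, 15% fill
base = network_capacity_tb_per_month(50, 1.4, 200, 0.15)

# Doubling spectrum, improving spectral efficiency by 30%, and splitting cells once
deepened = network_capacity_tb_per_month(100, 1.4 * 1.3, 400, 0.15)

print(f"baseline:  {base:,.0f} TB/month")
print(f"deepened:  {deepened:,.0f} TB/month  ({deepened / base:.1f}x)")
```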
2014; and by 2020 demand will be more than double available capacity.While beginning to add 300 MHz of additional spectrum in 2014 narrows somewhat the capacity gap, even this is inadequate to keep the U.S. out of deficit beyond 2017.Indeed, to keep the U.S. out of deficit through 2022, the modeling suggests that total spectrum additions over the 2014–2022 period need to amount to roughly 560 MHz – for a total of 1108 deployed MHz.46,The index figures underlying Fig. 8 are provided in Table 6.While this analysis suggests that evolving demand for U.S. mobile wireless services is likely to be stymied by inadequate capacity growth, it is also useful to consider possible reasons why this may fail to occur – or if it does occur, how the market will equilibrate.One possibility is that the forecasted demand figures are wrong and mobile service usage will not expand at the rates forecasted by Cisco."While this is certainly a possibility, it should be noted that until 2012, Cisco's updates of its Global Mobile Data Traffic Forecast generally found actual Total Global Mobile Data Traffic to be larger than what its forecast from the previous year predicted.While Cisco’s 2013 forecast was a revision down from its earlier forecasts, it is also this more conservative forecast that is used by the analysis to project future demand.47,Another possibility is that far greater load-shifting, and thus improved network packing, will take place than suspected.To the extent that disproportionate amounts of new customer demand are for mobile services at times-of-day and in geographical locations where network capacities are not at their limit, it is possible that networks could absorb increased traffic without requiring proportional capacity reinforcements.Absent very granular network traffic data, it is impossible to know the extent of this possible mitigating effect.On the other hand, it is more likely that reductions in service quality or price increases will end up being the principal forces for equilibrating the market.When wireless capacities are tight, data connections will slow down.Either customers will accept slower performance of their mobile applications, or they will discontinue their use, or transfer their use to off-peak periods or locations.It is also possible that they will eschew particularly data-hungry applications in favor of less-desirable substitute applications that have the virtue of reduced data use.Either way, the effective service quality that customers receive will be reduced.Further, it is quite certain that prices will also be a major equilibrator of the market."This is because Cisco's demand forecasts are predicated on customer adoption and use trends that assume a continuation of today's price trajectory – which has been sharply declining prices per byte of traffic. 
"As such, Cisco's VNI should be considered to provide forecasts of notional demand.To the extent that capacities go into deficit, this will attenuate, and possibly reverse, current downward price trajectories.If this occurs, future demand will be repressed to match more closely to available supply.Expanding mobile wireless capacity in an economic and effective manner requires multifaceted efforts.The analysis presented in this paper has demonstrated the techniques that have been used to deepen mobile wireless networks.These include the allocation of additional spectrum; the development of more spectrally-efficient wireless technologies and the migration of customers to these technologies; increased reuse of available radio frequencies enabled both by cell site splitting and LTE-A support for enhanced small cell and Wi-Fi integration; and by tighter packing of offered data into available transmission capacity.But even though the analysis has tried to be generous in its projections of capacity growth and cautious in its projections of demand growth, it appears that U.S. mobile wireless markets will face capacity deficits possibly as early as 2014, and after 2016 these may be significant.In order to keep these deficits to manageable proportions, it will be essential for U.S. regulatory authorities to allocate quickly all of the 300 MHz of increased spectrum that was proposed in the National Broadband Plan for exclusive mobile use.If less than this sum is allocated, or if allocations are delayed until towards the end of the decade-long prospective study period, shortfalls will be especially severe.Indeed, in order to keep U.S. wireless markets fully on their current trajectory of virtuous growth, the presented model suggests that 560 MHz of additional spectrum will need to be deployed over the 2014–2022 period.
It goes on to measure the contribution of each of these methods to historical capacity growth within U.S. networks.Without significantly increasing current spectrum allocations by 560 MHz over the 2014-2022 period, the presented model suggests that U.S. wireless capacity expansion will be inadequate to accommodate expected demand growth.This conclusion is in contrast to claims that the U.S. faces no spectrum shortage.
is known only from the sandveld on the coastal plain, typically 10–20 km inland, between Lepelfontein and Hondeklip Bay, a total range of about 200 km.Although the species may be locally common it is highly localised, and has been recorded from only four widely separated localities.The species occurs at an altitude of 100–200 m, in firm, acid coastal sand, usually with underlying granite.The habitat is well drained, and the species is not found on the prominent dune fields in the area but rather in the interdune flats.It usually grows together with the Restio T. bachmannii and the shrub Elytropappus rhinocerotis.The species is a true Namaqualand Sand Fynbos endemic.This is the only species of Elegia on the Namaqualand coastal plain, and the specific epithet makes reference to this.E. namaquense is known from only four populations distributed over a range of 200 km.Each population is estimated to support between 500 and 1000 plants, and thus the total number of individuals is likely to be less than 4000 plants.Given the large range over which it occurs it is possible that not all the populations have been discovered, and the total population may thus be larger.The species grows in a habitat that it often targeted for cultivation, and two of the four known populations have suffered loss of an estimated 30% of the plants.This loss has been within the last two years at the latter site.The proposed mineral sand mines in the region pose a further threat to the species.A portion of one of the populations occurs within the Namaqua National Park, which protects about 500 plants.The extent of occurrence is estimated to be 600 km2, and the area of occupancy about 20 km2.We suggest a conservation status of EN B1 ab + 2ab.South Africa.Northern Cape Province.Garies: Gemsbokvley 479, 20 km east of Hondeklipbaai, 11 Aug. 2014, S. Todd s.n.; near Loerduin, de Klipheuvel 435, 35 km west of Garies, 4 Aug. 2014, N.A. Helme 8416; about 12 km north-east of Groen River mouth, southwest of Rietkop, in Namaqua National Park, 4 Aug. 2014, N.A. Helme 8391; about 16 km north-east of Groen River mouth, Roode Heuvel 502; southwest of Rietkop, 2 July 2013, N.A. Helme 8386; near Lepelfontein, SW of Kotzesrus, 6 Nov. 2014, H.P. Linder 8073; farm Witwater south-west of Kotzesrus, east of Bakensduin, 3 Jun. 2009, R.C. Turner 2247.
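The suggested listing can be sanity-checked against the published IUCN Red List criterion B thresholds (extent of occurrence for B1, area of occupancy for B2); the small sketch below simply encodes those standard thresholds and is purely illustrative, since a formal assessment also requires the subcriteria (number of locations, continuing decline) noted in the text.

```python
# Standard IUCN Red List criterion B thresholds (v3.1), in km^2.
EOO_THRESHOLDS = {"CR": 100, "EN": 5_000, "VU": 20_000}   # criterion B1
AOO_THRESHOLDS = {"CR": 10, "EN": 500, "VU": 2_000}       # criterion B2

def category(value_km2, thresholds):
    """Most threatened category whose threshold the value falls under."""
    for cat in ("CR", "EN", "VU"):
        if value_km2 < thresholds[cat]:
            return cat
    return "not met"

eoo, aoo = 600, 20   # estimates reported above for E. namaquense
print("B1 (EOO):", category(eoo, EOO_THRESHOLDS))   # -> EN
print("B2 (AOO):", category(aoo, AOO_THRESHOLDS))   # -> EN
# Listing as EN additionally requires at least two of subcriteria a-c,
# e.g. <= 5 locations (four are known) and continuing decline (habitat loss).
```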
The new species Elegia namaquense is similar to the widespread Elegia tectorum but differs strikingly in its well-developed rhizomes, smaller flowers and sparser female inflorescences, and in the indehiscent unilocular fruit (nutlet).It is the only species of Elegia found on the Namaqualand coastal plain and the only Restionaceae species endemic to Namaqualand Sand Fynbos.
The World Health Organisation estimates that 10 million people worldwide require surgery to prevent corneal blindness as a result of trachoma, with a further 4.9 million suffering from total blindness due to corneal scarring.Even with adequate numbers of prospective cornea donors, a considerable discrepancy exists between the supply and demand of transplantable corneas.The unmet clinical need for cornea donors has led to increasing effort in the development of artificial corneal substitutes, which must meet specific criteria if they are to functionally mimic the native cornea.The cornea serves as the protective, outermost layer of the eye and is responsible for the transmission and refraction of incident light beams that are in turn focused onto the retina by the lens.Its near-perfect spherical anterior surface, together with the index of refraction change at the air/tear film interface, account for approximately 80% of the total refractive power of the human eye.The ability to recapitulate the rotational symmetric curvature necessary for optical refractive power is therefore fundamental to the design framework that exists for engineering functional corneal substitutes.It is the distinct arrangement of collagen lamellae in the corneal stroma that is responsible for maintaining the strength and shape of the cornea.The corneal stroma comprises somewhere between 200 and 250 lamellae that are assembled heterogeneously throughout its depth, with the lamellae in the mid to posterior stroma lying in parallel while those in the anterior stroma are interwoven with one other.The complexity of corneal microstructure presents an ongoing challenge when using traditional tissue engineering methods that focus on assembling corneal extracellular matrix in vitro.As such, the replication of human corneal geometry remains to be fully realised within the context of tissue engineering.The ability to construct biosynthetic corneal models would be useful for a number of applications, and this has been achieved in recent years where, for example, corneal models have been required for the characterisation of corneal cellular regeneration as well as for modelling corneal fibrosis.In these instances, corneal models were assembled using different techniques; the study by Li et al. made use of plastic contact lens molds into which hybrid collagen hydrogels were injected and crosslinked, while Karamichos et al. 
plated human corneal fibroblasts onto six-well plates bearing porous polycarbonate membrane inserts that were left in culture over a period of weeks.The former method enabled the fabrication of a curved corneal surface onto which corneal epithelial cells could be seeded, cultured and eventually implanted, while the latter relied upon cell proliferation and ECM secretion to render an in vitro 3D model that could be used to study fibrosis reversal in the cornea.3D bioprinting is a technique that has garnered notable interest for tissue engineering applications for its ability to direct the hierarchical assembly of 3-dimensional biological structures for tissue construction.Since its advent, 3D bioprinting has made possible the layer-by-layer deposition of biological materials in a prescribed pattern corresponding with the anatomy of an organotypic model.This model is usually acquired from clinical images such as CT and MRI scans and is used as a means by which to generate the fundamental printing paths on which 3D bioprinting depends; these are expressed in the form of a unique G-code that can be computed automatically by 3D printing software at resolutions specified by the chosen print parameters.The ability to replicate features such as concavity, undercuts and convoluted patterns is therefore a function of the complexity of two-dimensional figures, such as points, lines and circles.The final, post-printing stage involves the cell-mediated remodelling of the printed biological construct in the presence of appropriate physiological cues to ensure that it develops suitable biomechanical, structural and functional properties.In this study, we examined the feasibility of generating complex 3D bioprinted corneal stroma equivalents using pneumatic 3D extrusion bioprinting.Printed constructs were anatomically analogous to a human corneal model derived from the topographic data of an adult human cornea, acquired in situ post-refractive surgery.Several low viscosity bio-ink combinations were tested for their printability prior to cell incorporation.Printing accuracy was evaluated by quantifying central and peripheral thickness of the corneal construct and the viability of encapsulated corneal keratocytes was evaluated on days 1 and 7 post-printing.Overall, our study provides a basis for further research into the use of 3D bioprinting for the generation of artificial, biological corneal structures for regenerative medicine applications.A patient-specific digital corneal model constructed using a rotating Scheimpflug camera with a Placido disk and discretised by the Finite Element Method was used.The vertical and horizontal diameters of the model measured 12.377 mm × 12.385 mm, respectively, while its thickness measured approximately 500 μm at the centre and 823 μm towards the periphery.The corneal model was used as a template with which to build a digital support structure on AutoCAD 2017 in order to facilitate the 3D bioprinting process.This was made possible by sealing the rim of the model cornea with a planar circle such that it then resembled a dome; the modified model was then subtracted from the centre of one of the square faces of a digital cuboid that was designed to sit neatly inside a 35 mm Petri dish.The resulting support structure was exported as an STL file and 3D-printed at a resolution of 100 μm with white Acrylonitrile Butadiene Styrene using a CEL Robox 3D printer.The 3D printing software Slic3r was configured with an INKREDIBLE bioprinter.A stereolithography file of the corneal 
model was imported into Slic3r, from which versions of G-code were subsequently exported. Printing speed was set at 6 mm s⁻¹.
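As a rough geometric check on the printed dome, it can be approximated as a spherical cap: the model's ~12.38 mm diameter, combined with an assumed anterior radius of curvature (the typical human value of about 7.8 mm is used here, not the patient-specific figure from the study), gives the sagittal height and the approximate number of 100 μm slicing layers. A small sketch under those assumptions:

```python
import math

diameter_mm = 12.38           # corneal model diameter reported above
radius_of_curvature_mm = 7.8  # assumed spherical anterior radius (typical value,
                              # not taken from the patient-specific model)
layer_height_mm = 0.1         # 100 um resolution, as used for the support structure

half_chord = diameter_mm / 2
# sagittal height of a spherical cap: h = R - sqrt(R^2 - a^2)
sag_mm = radius_of_curvature_mm - math.sqrt(radius_of_curvature_mm**2 - half_chord**2)
layers = math.ceil(sag_mm / layer_height_mm)

print(f"approximate dome height: {sag_mm:.2f} mm")
print(f"approximate number of {layer_height_mm * 1000:.0f} um layers: {layers}")
```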
Due to its limitations, a concerted effort has been made by tissue engineers to produce functional, synthetic corneal prostheses as an alternative recourse.We applied this to the area of corneal tissue engineering in order to fabricate corneal structures that resembled the structure of the native human corneal stroma using an existing 3D digital human corneal model and a suitable support structure.
we observed the gelatine slurry gradually migrate towards the centre of the support structure to make way for the construct being printed.We also noted that, in the absence of a suitable support structure, corneal structures did not retain their curvature after aspiration of the slurry and that this likely occurred due to the rapid dissolution of the small volumes required to hold the printed constructs in place during printing.A bespoke support structure with the inverse shape to the cornea model was therefore constructed and used in conjunction with and after bioprinting.Consequently, the gelatine solution could be aspirated and replaced with growth medium and/or cross-linked while the constructs were still being supported.Collagen constitutes a major component of corneal ECM and therefore presents a natural choice for generating bioengineered corneal structures.Low concentrations of collagen do not however possess the necessary stiffness to enable the fabrication of robust corneal structures via an extruding 3D bioprinter, where precise control over microarchitecture presents a major challenge.Hydrogels have been extensively used for tissue engineering applications for their structural resemblance to the ECM, their low toxicity and tuneable biophysical properties, including use in the cornea.We undertook to compose a range of composite bio-inks comprising both collagen and alginate in an effort to combine the tensile strength of collagen with the biomechanical properties of alginate for the formation of printable corneal structures.Constructs printed with bio-ink Coll-1 in particular exhibited enhanced mechanical stability, which increased with the proportion of incorporated alginate.The peripheral and central measurements acquired from tissue sectioning revealed the effect nozzle diameter had on print accuracy.Reduction in nozzle diameter resulted in a lengthening of the 3D printing path, giving rise to additional points of fusion between deposited filaments and an increase in mechanical stability.With all other print parameters remaining constant, a greater volume of bio-ink is invariably dispensed.Corneal constructs printed with a nozzle diameter of 200 μm appeared more robust, but better print fidelity was observed from those printed with a nozzle diameter of 300 μm.Thus, careful adjustment of print parameters such as printing speed, needle diameter and bio-ink viscosity can ensure both mechanical stability and print accuracy are achieved.The all-encompassing challenge in 3D bioprinting concerns the ability to print intricate structures in which cells are able to retain viability without printability being compromised.Interpenetrating networks of alginate and collagen have previously been shown to provide a favourable environment suited to cell growth, where cells have manifested varying morphologies depending on material stiffness.In this study, high initial viability of encapsulated corneal keratocytes in composite collagen and alginate bio-inks and noticeable spreading were observed.One of the limitations of extrusion 3D bioprinting is the generation of shear stress-induced cell deformation at the needle wall and which is diminished by the use of low viscosity bio-inks to which low air pressures can be applied.The prevention of dehydration due to the presence of the gelatine slurry during printing and the thinness of the printed tissue also likely contributed to initial cell viability.The conservation of high cell viability of encapsulated cells 7 days after bioprinting points to 
the potential of composite collagen and alginate bio-inks for 3D bioprinting applications.The future success of corneal 3D bioprinting will ultimately depend on the ability of encapsulated cells to mediate ECM remodelling in order to establish tissue functionality.A distinctive feature of corneal fibroblasts presents itself when they are seeded at the base of a curved surface where they have recently been shown to migrate in lattice formation and align collagen in a way that closely resembles its arrangement in the cornea.A significant advantage proffered by the present work is therefore the ability to reproduce curved corneal geometry which is now known to directly influence cell migration and collagen alignment.Thus, cells seeded at the base of a scaffold bearing a close resemblance to corneal anatomy would potentially be capable of remodelling the ECM in a way that is presently unachievable with non-curved geometries.Successful translation of this current proof-of-concept study will first require further analysis of stromal cell phenotype, the biocompatibility of the construct following transplantation, its capacity to support epithelial cell growth and, critically, its ability to impart a functional corneal replacement."A significant benefit of this approach to corneal tissue engineering is that both structural and biochemical components of the equivalent can be rationally designed, allowing, in theory, the printing of anatomical features such as the limbal zone and Bowman's layer replete with appropriate soluble and insoluble factors.In conclusion, our study provides a proof-of-concept for the use of 3D bioprinting as a rapid and effective method by which to fabricate human corneal substitutes from low viscosity bio-inks.Successful realisation of this method presently relies upon a sustained effort towards facilitating long-term matrix remodelling in order to validate clinical suitability.In all, these findings demonstrate great promise for the application of 3D bioprinting for corneal tissue engineering applications.The authors report no conflicts of interest.
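The shear-stress point made above can be given an order of magnitude with the Hagen–Poiseuille wall-shear relation for a cylindrical nozzle, τ_w = 4ηQ/(πr³). Treating the bio-ink as Newtonian, assuming a viscosity of 0.1 Pa·s, and taking the extrusion velocity to be roughly the 6 mm/s print speed are all simplifications made only for this sketch:

```python
import math

def wall_shear_stress_pa(viscosity_pa_s, flow_rate_m3_s, radius_m):
    """Hagen-Poiseuille wall shear stress for laminar flow of a Newtonian
    fluid through a cylindrical nozzle: tau_w = 4*eta*Q / (pi*r^3)."""
    return 4 * viscosity_pa_s * flow_rate_m3_s / (math.pi * radius_m**3)

viscosity = 0.1          # Pa.s -- assumed value for a low-viscosity bio-ink
print_speed = 6e-3       # m/s  -- 6 mm/s, as stated in the methods
for nozzle_um in (200, 300):          # nozzle diameters discussed in the text
    r = nozzle_um / 2 * 1e-6          # radius in metres
    q = print_speed * math.pi * r**2  # assume extrusion velocity ~ print speed
    tau = wall_shear_stress_pa(viscosity, q, r)
    print(f"{nozzle_um} um nozzle: ~{tau:.0f} Pa wall shear stress")
```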
Corneal transplantation constitutes one of the leading treatments for severe cases of loss of corneal function.However, successful translation of these therapies into the clinic has not yet been accomplished.These were 3D bioprinted from an in-house collagen-based bio-ink containing encapsulated corneal keratocytes.We established 3D bio-printing to be a feasible method by which artificial corneal structures can be engineered.
This report is associated with the research article investigating the effect of weight loss due to laparoscopic gastric plication (LGP), a new bariatric surgical procedure, on the human serum proteome. A total of 288 proteins were identified in a shotgun label-free proteomics experiment, of which 224 proteins were quantifiable with at least two unique peptides in 70% of samples or more. The raw mass spectrometry data have been deposited to the ProteomeXchange Consortium via the PRIDE partner repository with the dataset identifier PXD010528. The list of proteomics raw files submitted to ProteomeXchange and the corresponding sample names is shown in Table 1. Significantly regulated proteins between pre- and post-surgery samples are discussed in detail in Savedoroudi et al. Gene ontology enrichment analyses for biological process and molecular function of differentially regulated proteins at T1 and T2 are presented in Table 2 and Table 3, respectively. In Table 4, molecular networks of differentially regulated proteins are shown. A total of 16 obese subjects undergoing LGP were investigated at three timepoints: pre-surgery, 1–2 months post-surgery (T1), and 4–5 months post-surgery (T2). The detailed characteristics of the patients are given in Savedoroudi et al. The six most abundant serum proteins were depleted from the serum samples using the Agilent Multiple Affinity Removal column according to the manufacturer's instructions. Then, the filter-aided sample preparation protocol was used to prepare samples as described previously. Written consent was obtained from all participants, and the institutional review board and the ethics committee of Shahid Beheshti University approved the study protocol with respect to human rights. Initially, the peptides were resuspended in 2% acetonitrile, 0.1% FA and 0.1% trifluoroacetic acid. LC-MS/MS analysis was carried out on a UPLC-nanoESI MS/MS setup with a Dionex RSLC nanopump. The system was coupled online, via an emitter for nanospray ionization, to a Q Exactive HF mass spectrometer. The samples were analyzed in random order, in triplicate. The peptide material was loaded onto a 2 cm C18 trapping column and separated using a 75 cm C18 reversed-phase column. Both columns were kept at 60 °C. The peptides were eluted with a gradient of 98% solvent A and 2% solvent B, which was increased to 8% solvent B on a 5-min ramp gradient and subsequently to 30% solvent B on a 45-min ramp gradient, at a constant flow rate of 300 nL/min. The mass spectrometer was operated in positive mode using a Top15 data-dependent MS/MS scan method. A full MS scan in the 375–1500 m/z range was acquired at a resolution of 120,000 with an AGC target of 3 × 10⁶ and a maximum injection time of 50 ms. Fragmentation of precursor ions was performed by higher-energy C-trap dissociation with a normalized collision energy of 27. MS/MS scans were acquired at a resolution of 15,000 with an AGC target of 2 × 10⁵; the maximum injection time was 100 ms.
Dynamic exclusion was set to 5s.Mass spectrometry data were analyzed in MaxQuant version 1.6.0.1 and searched against the Uniprot human reference FASTA database .Label-free protein quantitation algorithm was performed with a minimum ratio count of 1.Standard settings in MaxQuant were employed, including carbamidomethylation of cysteines as a fixed modification, and acetylation of protein N-terminals, oxidation of methionine, and deamidation of asparagine and glutamine as variable modifications.A maximum of two tryptic missed cleavages was allowed.The false discovery rate of identified proteins and peptides was set to a maximum of 1%, using a target-decoy fragment spectra search strategy.Hereby, high confidence identifications were ensured.The “match between runs” feature was enabled to transfer peptide identifications across LC-MS/MS runs, based on accurate retention time and mass-to-charge.The output from MaxQuant, containing the list of proteins identified below 1% FDR, was further filtered and processed in Perseus version 1.6.0.2 .All reverse hits and proteins identified only by site were removed from further analysis, and the data were log 2-transformed.At least two unique peptides were required for a protein quantitation.Additionally, the unique peptides were required to be quantifiable in at least 70% of samples.Differentially regulated proteins between pre- and post-surgery subjects were functionally categorized based on gene ontology classification using WEB-based GEne SeT AnaLysis Toolkit .Identification of networks was performed with Ingenuity Pathway Analysis software.Gene symbols and the corresponding protein fold change were imported to IPA software using core analysis.Standard settings in IPA were employed, including: direct and indirect relationships between focused molecules with default settings of 35 molecules/network, based on experimentally observed data were considered.All sources of data from human, mouse and rat studies in the Ingenuity Knowledge Base were included.
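The filtering steps described above — removing reverse hits and proteins identified only by site, log2-transforming the intensities, and requiring at least two unique peptides and quantification in at least 70% of samples — can be reproduced outside Perseus with a few lines of pandas. The column names below follow the usual MaxQuant proteinGroups.txt conventions and may need adjusting to the actual output:

```python
import numpy as np
import pandas as pd

# Load the MaxQuant output; column names below follow the usual
# proteinGroups.txt conventions and may need adjusting to the actual file.
df = pd.read_csv("proteinGroups.txt", sep="\t", low_memory=False)

# Remove reverse (decoy) hits and proteins identified only by site
df = df[df["Reverse"] != "+"]
df = df[df["Only identified by site"] != "+"]

# Require at least two unique peptides per protein
df = df[df["Unique peptides"] >= 2]

# Log2-transform LFQ intensities, treating zeros as missing values
lfq_cols = [c for c in df.columns if c.startswith("LFQ intensity ")]
lfq = np.log2(df[lfq_cols].replace(0, np.nan))

# Keep proteins quantified in at least 70% of samples
valid_fraction = lfq.notna().mean(axis=1)
quantifiable = df.loc[valid_fraction >= 0.70]
print(f"{len(quantifiable)} proteins pass the filters")
```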
Bariatric surgery is an effective treatment for morbid obesity with a sustained weight loss and improvements in metabolic syndrome.We present a label free quantitative shotgun proteomics approach to analyze the serum proteome of obese people who underwent Laparoscopic Gastric Plication (LGP) as a new bariatric surgery.Pre-surgery serum samples of obese individuals were compared with the serum of the same subjects 1–2 months post-surgery (T1) and 4–5 months post-surgery (T2).The data provide a list of 224 quantifiable proteins with at least two unique peptides that were quantifiable in at least 70% of samples.Gene ontology biological processes and molecular functions of differentially regulated proteins between pre- and post-surgery samples were investigated using WebGestalt online tool.In addition, molecular networks of differentially abundant proteins were determined through Ingenuity Pathway Analysis (IPA) software.This report is related to the research article entitled “Serum proteome changes and accelerated reduction of fat mass after Laparoscopic Gastric Plication in morbidly obese patients” (Savedoroudi et al.[1]).Proteomics data have been deposited to the ProteomeXchange Consortium (http://proteomecentral.proteomexchange.org) via the PRIDE partner repository through the identifier PXD010528.
The data from the hemolysis analysis of C@ZVI, P@ZVI-Cdts and Cdts are shown in Fig. 1. Blood cell aggregation study data are provided in Fig. 2, and the cytotoxicity study data using normal fibroblast cells (L929) are given in Fig. 3. The fluorescence imaging data of P@ZVI-Cdts acquired using an optical imaging system are shown in Fig. 4. The colour-coded MRA data and the dorsal and lateral oblique view data of the magnetic resonance imaging of the blood vessels are shown in Figs. 5 and 6. The samples were checked for their blood compatibility by examining the hemolysis and blood cell aggregation responses. For the hemolysis assay, human blood collected from volunteers was centrifuged at 700 rpm for 10 min. The RBC fraction thus obtained was washed and diluted with saline. Then 100 µl of C@ZVI nanoparticles and P@ZVI-Cdts at different concentrations, dispersed in saline, were added to 100 µl of washed RBC, mixed gently and incubated for 2 h at 37 °C. The supernatant was collected and read at 541 nm on a UV spectrophotometer. Saline was used as the negative control and distilled water as the positive control. The RBC, WBC, and platelet aggregation studies were carried out to rule out aggregation of the blood cells when in contact with a foreign body intended for intravenous injection as a contrast agent, as well as for optical imaging in vivo. For this, 200 µg of the C@ZVI nanoparticles and P@ZVI-Cdts were added to 100 µl of the diluted blood cells and incubated for 30 min at 37 °C. Here saline was used as the negative control. Aggregation, if any, was detected through a phase contrast microscope at 20X magnification. In order to test the cellular response, the in vitro cytotoxicity of the nanoparticles was evaluated on mouse fibroblast cell lines using the MTT assay as per the ISO 10993-5:2009 standard. The cells were cultured in MEM medium supplemented with 10% fetal bovine serum and kept at 37 °C in a humidified atmosphere of 5% CO2. Cells were seeded in 24-well plates at a density of 0.5 × 10⁵ cells per well and incubated for 24 h with 25, 50, 100 and 200 mM concentrations of nanoparticles. The medium containing test material was removed, MTT was added to each well at a concentration of 100 µg/well, and the well plates were incubated at 37 °C for 4 h. The absorbance was measured using a plate reader. Percentage viability was calculated from the ratio of the absorbance of the test sample to the absorbance of the negative control. Cells cultured in MEM medium with 10% FBS were used as the negative control. To check the potential of the Cdts for imaging applications, the fluorescence imaging efficacy of P@ZVI-Cdts was tested at different concentrations using an optical imaging system. Concentration-dependent variation in the photoluminescence was observed at an excitation wavelength of 540 nm and an emission wavelength of 640 nm. Lower-energy wavelengths were chosen to suit the imaging applications and to avoid interference from autofluorescence in view of in vivo imaging. Wistar rats weighing 200–250 g were used for MR angiographic studies using C@ZVI nanoparticles. The institutional ethics committee approved all animal experiments. MRA was performed using a 1.5 T MRI scanner equipped with a 12-channel head and neck coil. Animals were anesthetized with ketamine, xylazine and atropine and imaged before and after intravenous tail vein administration of the C@ZVI nanoparticles. After running the routine MRI sequences for acquiring the localizer and T1-weighted sequence, the MR angiogram was performed using a 3D-FLASH dynamic contrast-enhanced MRA 
sequence. Imaging parameters for 3D-FLASH were a TR of 9 ms, a TE of 3.27 ms, a 30° flip angle, a 256 × 232 matrix, a 250 mm FOV, and a single slab with 17 sections of 1.00-mm effective thickness and a 20% interslice gap. The MR angiography was obtained immediately after the bolus intravenous administration of the iron nanoparticles followed by a saline flush. Colour-coded MRA images of the C@ZVI-administered animal, along with the dorsal and lateral views, are shown in Fig. 5 and Fig. 6, respectively.
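The percentages behind Figs. 1 and 3 follow directly from the control-normalised absorbances: hemolysis is conventionally expressed relative to the saline (negative) and distilled-water (positive) controls, and MTT viability as the ratio of test to negative-control absorbance, as stated in the methods above. A small sketch with made-up absorbance readings, purely to show the arithmetic:

```python
def hemolysis_percent(a_sample, a_negative, a_positive):
    """Percent hemolysis relative to saline (negative) and distilled water
    (positive) controls, using absorbance read at 541 nm."""
    return 100 * (a_sample - a_negative) / (a_positive - a_negative)

def viability_percent(a_sample, a_negative_control):
    """MTT viability as the ratio of test-sample to negative-control
    absorbance, as described in the methods."""
    return 100 * a_sample / a_negative_control

# Made-up absorbance readings purely to illustrate the calculation
print(f"hemolysis: {hemolysis_percent(0.08, 0.05, 1.20):.1f} %")   # ~2.6 %
print(f"viability: {viability_percent(0.86, 0.90):.1f} %")         # ~95.6 %
```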
Though nanoparticles are being used for several biomedical applications, the safety of the same is still a concern.It is very routine procedure to check the preliminary safety aspects of the particles intended for in vivo applications.The major tests include how the material reacts to a normal cell, how it behaves with the blood cells and also whether any lysis take place in the presence of these materials.Here we present these test data of two novel nanomaterials designed for its use as contrast agent for magnetic resonance imaging and a multimodal contrast agent for targeted liver imaging.On proving the biosafety, the materials were tested for Magnetic Resonance Angiography using normal rats as model.The data of the same were clear identification of the prominent vascular structures and is included as the colour coded MRI image.Lateral and oblique view data are also presented for visualizing other major blood vessels.
An inguinoscrotal hernia is defined as giant if descending below the midpoint of the inner thigh of a patient in upright position .It is an uncommon condition and rarely encountered in clinical practice.Severe complications such as duodenal and gastric perforation have been reported with high morbidity and mortality rate.In line with the SCARE criteria , we are presenting a case report of a gastric perforation in a giant inguinoscrotal hernia which was managed surgically in a two-step approach.A 50-year old male, visiting Dubai from his home country in South Africa was admitted through the emergency room with sudden severe abdominal pain and swelling.He had as well a large amount of hematochezia.The patient had a history of chronic giant inguinoscrotal hernia for over 10 years reaching just above the knee level.A computed tomography scan of the abdomen revealed the presence of free air and free fluid in the abdomen.The patient was taken to the operating room the same day and a mid-line laparotomy was performed revealing the presence of a large amount of gastric content and pus in the abdominal cavity.A 4 cm tear in the anterior wall of the stomach was identified.The stomach was being pulled down by the large hernia, which had caused the tear.The stomach was released by transecting the omentum at the level of the greater curvature allowing a repair of the perforation without any tension.The 4 cm tear was repaired using 2-0 Vicryl running suture.The abdomen was irrigated with several liters of normal saline solution and suctioned until the irrigation fluid was clear.A drain was left in the subhepatic area.The fascia was closed using #1 PDS running suture and the skin was closed using staples.The patient was transferred back to the intensive care unit.He was kept NPO, started on parenteral nutrition and placed on Meropenem and Fluconazole IV.After initial signs of sepsis, the patient’s condition improved with the decrease of the fever and the C-Reactive Protein.On post-operative day 6, he started having again fever and increased CRP.A CT scan of the abdomen showed a large fluid collection in the scrotal hernia sac.He underwent an ultrasound guided drainage of the collection.On post-operative day 7, the patient was transferred to the ward.The bowel function had returned to normal.His nasogastric tube was removed and he was started on oral diet.Continuous drainage was done until the infection had cleared and the CRP normalized.The patient was discharged home on post-operative day 18 with the plan to perform a delayed hernia repair.An inguinoscrotal support device was custom made to prevent the pulling of the organs and further complications.Since the patient was on a visit to Dubai, he preferred to fly back to his home country for the hernia repair.The operation was performed 3 months later through an inguinal incision and was uneventful.He recovered without complications or signs of recurrence on his 6 months follow- up.Giant inguinal hernia complicated by gastric perforation is quite a challenging situation.In addition to the closure of the perforation the surgeon is faced with the repair of the giant hernia which adds complexity and severe morbidity to the operation.In our case, soon after the abdominal cavity was explored and the stomach and bowel were inspected, the mechanism of perforation became more obvious.In fact, the gastric rupture was the result of the downward traction exerted by the heavy content of the hernia on the stomach and not secondary to an incarceration.The bowel was not 
strangulated as well.Ishii et al. describe a similar mechanism in their report of a duodenal perforation secondary to a giant inguinal hernia.This was probably as well the case in the report of gastric perforation by Fitz et al where the entire bowel was found to be viable.Most morbidities and mortalities reported in the literature were related to the complications that are secondary to the acute increase in intra-abdominal pressure after placing the hernia content back in the abdomen during a concomitant hernia repair .In our patient we opted for a two-step approach.The initial phase consisted in the repair of the perforation, the control of the infection and the stabilization of the patient.The delayed hernia repair was performed in the second phase.By doing so, we were able to avoid an initial repair of a complex hernia in an unstable patient with a severely contaminated abdomen where the intra-abdominal pressure is already increased secondary to the peritonitis and the ileus.In a giant inguinal hernia with gastric perforation, delaying the hernia repair when possible can decrease the complexity of the procedure and most likely its morbidity and mortality.The authors certify that we have no affiliations with or involvement in any organization or entity with any financial interest or non-financial interest in the subject matter or materials discussed in this manuscript.For this case report, our institute exempted us to take ethical approval as this is not a research study.Written informed consent was obtained from the patient for publication of this case report and accompanying images.A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.Paul Sayad performed the exploratory laparotomy to control the bleeding and repair of gastric perforation, revised the paper and made the literature review.Audrey Zhen Tan drafted the manuscript and consolidated the source materials for this case report.The manuscript has been read and approved by all the authors and is not under consideration for publication elsewhere.Not commissioned, externally peer-reviewed.
Introduction: An inguinoscrotal hernia is defined as giant if descending below the midpoint of the inner thigh of a patient in upright position.It is an uncommon condition and rarely encountered in clinical practice that can lead to severe complications such as gastric perforation.Presentation of case: We present a case of a 50-year old male suffering from a gastric perforation in a giant inguinoscrotal hernia that was managed in a two-step approach.Discussion: In our patient, we opted for a two-step approach.The initial phase consisted in the release of the stomach, repair of the perforation, the control of the infection and the stabilization of the patient.The repair of the hernia was performed uneventfully three months later in the second phase.Conclusion: In a giant inguinal hernia with gastric perforation, delaying the hernia repair when possible can decrease the complexity of the procedure and most likely its morbidity and mortality.
L. by Cupido and Nelson.Our observations on hopliine pollination in annual Wahlenbergia species confirm the association of this pollination system with dark floral markings documented by Van Kleunen et al.The unique, large grey stigma lobes in Wahlenbergia capensis appear to mimic the color and texture of visiting hopliine species, suggesting co-adaptation between plant and pollinators in this species.Beetle species recorded visiting W. capensis include Peritrichia pseudursa Schein, to which we now add observations of visits by P. bipilosa sp. nov.and Dicranocnemus sulcicollis Wiedem., from the Cape Peninsula, and by Lepithrix lineata Fabricius and P. pseudopistrinaria near Aurora.These beetles were observed carrying a dense covering of the grey-blue pollen of W. capensis, which adheres to the hairs on the beetle’s prothorax and elytra during feeding and mating.Species of Peritrichia and Lepithrix are typically attracted to blue and white flowers, where they have been observed to feed on pollen and nectar.Examination of gut contents and mouthpart morphology has confirmed the feeding preferences of these species for pollen and nectar.We have also observed and captured hopliines on Wahlenbergia annularis at Leipoldtville, notably Peritrichia pseudopistrinaria but also individuals of Lepithrix longitarsis and Pachynema crassipes Fabricius.The beetles behaved as described by Goldblatt et al., using the flowers as a platform to assemble, engage in agonistic behavior and copulation, and not infrequently flying from flowers of one plant to another.Their behavior, matched in hopliines visiting W. melanops flowers, results in their becoming visibly covered in pollen which they passively transport to other flowers.The flowers of W. annularis are moderately sized, 15–20 mm in diameter, and pale blue with dark longitudinal marks or streaks around the edge of the bowl.We also recorded visits to flowers of W. annularis by worker honey bees which collected pollen but also acquired dorsal deposits of Wahlenbergia pollen.Gess noted honey bees as opportunistic visitors of several Wahlenbergia species and considered them as unreliable pollinators.This was not supported by observations on two Wahlenbergia species in the KwaZulu-Natal Drakensberg, in which solitary halictid bees and social honey bees were frequent and effective pollinators.We observed that the beetles landed on the flower and then immediately crawled to the base of the style with their heads downward, sometimes proceeding to move around the base of the style.This suggests to us that the beetles were feeding on the nectar held at the base of the style but we are unable to confirm this.We saw no evidence of destructive feeding by the beetles, nor were they observed to feed on the pollen during the male phase of the flowers.Two pollination systems were documented among southwestern Cape Wahlenbergia species by Gess.Species with deeply campanulate flowers, including W. paniculata A.DC., were pollinated by masarine wasps, and those with shallowly campanulate flowers, e.g., W. annularis and W. namaquana Sond., were visited by as many as 19 species of bees and wasps but pollinated primarily by melittid bees.We conclude that the pollination strategy of W. capensis and W. melanops represents a third pollination type that conforms to the specialized hopliine system in southern Africa, i.e., shallowly campanulate flowers with dark central markings darker band of color.W. 
annularis appears to belong to the class of more generalist flowers but in this species transfer by hopliine beetles is also important.South Africa, Western Cape, 3118: road from Lamberts Bay to Vredendal, red sand dunes, 2 Sept. 1961, Van Breda 1270; Farm Kliphoek, 12 km inland from Doringbaai, 29 Aug. 1970, Hall 3798; 6 Sept. 1970, Hall 3812; 11 Sept. 2008, Goldblatt & Porter 13107; 13 Sept. 2008.Cupido 371; east of Vredendal, Fonteintjie, 15 km SE of Doringbaai, 25 Aug. 2006, Helme 4292.Without precise locality: In region trans Olifantsrivier, Zeyher SAM40417.
Plants have alternate leaves scattered along the stem and are otherwise distinguished in part by narrow calyx lobes and the presence of stylar glands at the base of the stigma lobes. Flowers of W. androsacea, W. capensis, and W. melanops conform to a specialized hopliine pollination system, and we present observations documenting hopliine pollination for these three species. This confirms a third pollination system in western South African Wahlenbergia species, in addition to one specialized system utilizing masarine (pollen) wasps and another, possibly more generalist, system using melittid bees.
other tested peptides. While we were never able to unequivocally establish whether or not the TCR-HLA-E*01:01/PM9 interaction was responsible for stimulation of these CD8+ STCL cells, it prompted us to test the PM9 peptide for binding to HLA-E. Binding of PM9 to both HLA-E*01:01 and -E*01:03 alleles was assessed using an HLA-E/peptide cell-surface stabilization assay. Thus, murine RMA-S cells deficient in the transporter associated with antigen processing were stably transfected with one of the two HLA-E alleles together with human β2-microglobulin. Then, stabilization of HLA-E/PM9 and HLA-E/VL9 complexes was compared in peptide-titration analyses using peptide concentrations ranging from 0.01 μg/ml to 100 μg/ml by growing transfected RMA-S cells overnight at 26 °C, pulsing them with peptide for 1 h and transferring them to 37 °C for 3 additional hours. Weak half-maximal effective concentrations of over 100 μM were indicated for the interaction of PM9 with the two HLA-E*01:01 and E*01:03 molecules. To further confirm PM9 interaction with the MHC-E molecules, we employed an assay based on cell-surface stabilization of single-chain peptide–β2-microglobulin–MHC-E trimers. Briefly, a DNA fragment coding for the tested peptide, here PM9, was inserted downstream of an HLA-E leader peptide and was followed by a flexible linker, β2-microglobulin and the Y84A point-mutated MHC-E heavy chain, which accommodated the flexible linker. The corresponding plasmid was transiently transfected into HEK 293T cells, and a successful peptide-MHC-E interaction resulted in an increased cell-surface expression of assembled single-chain trimers. Thus, using HLA-E*01:01, HLA-E*01:03 and Mamu-E*02:04, expression of the PM9 peptide increased the mean fluorescent intensity relative to a non-binding peptide TVCVIWCIH for all three employed MHC-E molecules. The highest increase was observed for E*01:03 and reached approximately half of the VL9 MFI. From these experiments, we concluded that the PM9 peptide bound to HLA-E molecules, and stabilized and increased the HLA-E expression on the cell surface. STCL were expanded from PBMC of vaccine recipient 409 using peptides KQVDRMRIRTWKSLVK and MRIRTWKSLVKHHLT. These HC102 and HC103 STCLs were then stimulated with LCL721.221 cells expressing individually all six of the volunteer’s HLA class I molecules and pulsed with the two ‘parental’ long peptides. The experiments indicated that peptide-pulsed LCL721.221/HLA-A*03:01 stimulated 5.31% and 13.7% of the corresponding CD8+ HC102 and HC103 STCLs, respectively, yielding approximately 2-fold higher frequencies than those induced by peptide-pulsed untransfected and other HLA-expressing LCL721.221 cells, and the HLA-E cell-surface stabilization was peptide dependent. Again, we were never able to prove that this half-stimulation of CD8+ cells was triggered by the TCR-HLA-E*01:01/peptide interaction, but we did establish that peptide RIRTWKSLV, shared between the HC102 and HC103 parental peptides, was capable of stabilizing HLA-E*01:03 on LCL721.221 cells stably overexpressing the HLA-E*01:03 molecule and did so weakly, but more avidly than the majority of other tested HIV-1 peptides. Weak binding of RV9 to HLA-E*01:03 was also marginally indicated in the single-chain peptide–β2-microglobulin–MHC-E trimer assay. Next, we decided to screen systematically 401 overlapping 15-mer peptides derived from highly conserved regions of HIV-1 proteins employed in the second-generation, conserved-region HIV-1 vaccines for binding to the HLA-E*01:01 molecule on the human LCL721.221
cells, and 17 candidates were identified. These 17 15-mer peptides were retested for specific binding to HLA-E*01:01- and -E*01:03-transfected murine RMA-S cells, whereby most of the HLA-E*01:03-stabilizing peptides also increased HLA-E*01:01 expression, but their binding index was approximately 2-fold lower. We reasoned that HLA-E likely binds 9-mer peptides with higher affinities relative to the parental 15-mers, perhaps through a better fit into the peptide-binding groove of the HLA-E molecules, and that, therefore, using 9-mers in the binding assay may result in more pronounced surface stabilization of the HLA-E*01:03 molecules. For these binding studies, we employed LCL721.221 cells strongly overexpressing HLA-E*01:03. The 9-mer peptides were designed based on the 15-mer results above. As positive and negative controls, we used the VL9 peptide and no added peptide, which yielded mean ± SD HLA-E MFIs of 14265 ± 512 and 3380 ± 10, respectively. The MFIs for the 32 tested HIV-1-derived 9-mers ranged from 2481 ± 93 to 10132 ± 241, with 7 peptides yielding MFI above 4000. Eight 9-mers were also incorporated into the single-chain trimer, of which 5 peptides RMYSPVSIL, TALSEGATP, RIRTWKSLV, YFSVPLDEG and MYSPVSILD showed increased HLA-E*01:03 surface stabilization. In the LCL721.221/HLA-E*01:03 assay, the mean ± SD MFIs of these 5 trimer-stabilizing peptides were 10132 ± 249, 6702 ± 537, 4514 ± 467, 3966 ± 10 and 3313 ± 269, respectively. All surface stabilization data are summarized in Table 3. Thus, altogether in this work, we identified 4 novel, previously undescribed HIV-1-derived peptides TP9, RL9, PM9 and RV9 capable of stabilizing HLA-E molecules on the cell surface both in the SCT assay and, with MFI above 4000, in the LCL721.221-HLA-E*01:03 assay, and a further 5 possible candidate binder peptides NR9, QF9, YG9, EI9 and MD9, which provided a positive stabilization signal in at least one of the two employed assays. HLA-E is a relatively understudied molecule of the immune system. It can potently inhibit NK killing by presentation of peptides derived from leader sequences of other HLA class I molecules, but much less is known about its other, more subtle roles involving CD8+ T cells and indeed its peptide repertoire. Immunity is a highly efficient defence system evolved to protect higher organisms against invading microorganisms and cancer, and as such it must be very finely tuned between uncompromising protective responses and self-harm; HLA-E is likely a significant candidate component in maintaining this fine balance, as suggested for example by M. tuberculosis challenge experiments in genetically modified mice. In the HIV-1 vaccine field, Mamu-E presentation came into the spotlight through its as-yet-unproven involvement in conferring protection for half of the rhesus macaques challenged with
Thus, following an initial screen of over 400 HIV-1-derived 15-mer peptides, 4 novel 9-mer peptides PM9, RL9, RV9 and TP9 derived from 15-mer binders specifically stabilized surface expression of HLA-E*01:03 in two separate assays, and 5 other binding candidates EI9, MD9, NR9, QF9 and YG9 gave a binding signal in only one of the two assays.
pathogenic SIVmac239 and subsequent complete clearance of SIVmac239 from the body. By inference, HLA-E likely plays an important role in balancing the immune responses to HIV-1 infection. However, to date, there has been only one HIV-1-derived epitope, AISPRTLNA, reported to bind HLA-E and inhibit NK cell attack. Here, to start building a toolbox for deciphering the likely multiple intertwined roles of HLA-E in the anti-HIV-1 response, we have identified 4 novel HLA-E-stabilizing epitope peptides RMYSPVSIL, PEIVIYDYM, TALSEGATP and RIRTWKSLV, and 5 other candidate HLA-E-binding peptides EKIKALVEI, MYSPVSILD, NEEAAEWDR, QMAVFIHNF and YFSVPLDEG positive in only one of the two assays used, with the potential to provide specific signals to effector cells of the immune response. Two identified HLA-E-binding peptides were optimal epitope peptides for other canonical HLA class I molecules. Thus, PEIVIYDYM is an optimal epitope for HLA-B*18:01 and RIRTWKSLV is restricted by HLA-A*03:01. A quick database search of HIV-1-derived CD8+ T-cell epitopes revealed that the most avidly HLA-E-binding peptide of the new HIV-1 derivatives, RMYSPVSIL, is a human A2 epitope, predicted to bind TAP and HLA-A*02/A*24/A*31/B*39/B*52. We previously reported activation of CD8+ T cells by the IHNFKRKGG peptide in two vaccine recipients, but were unable to identify the restricting HLA class Ia element; there is a possibility that, in this case, CD8+ T cells were activated through the CD94/NKG2C stimulatory receptor present on about 10% of circulating CD8+ T lymphocytes. As for the other identified HLA-E binders, none matched precisely as a 9-mer any already known optimal epitope, although several near matches of known epitopes were found. Although similarity in peptide-binding motifs between HLA-E and A*02:01 was once suggested, the significance of these observations is presently unclear. For LCL721.221 cells, there is a possibility that IFN-γ production by the HC092 STCL and HC102/HC103 STCLs could have been induced through signaling by the HLA-C*01:02 molecule, which is also expressed on these cells. However, this was not the case for the murine RMA-S cells, hence supporting the HLA-E involvement. The strongest HLA-E-binding peptides VL9 are strictly 9-mers and the relatively strongly binding non-VL9 peptides are also 9-mers. With the caveat of the possibility of co-purification rather than HLA association, up to 17-amino-acid-long self-peptides were eluted from soluble HLA-E*01:01 and HLA-E*01:03 molecules and sequenced by nanoflow liquid chromatography tandem mass spectrometry. In the past, we similarly eluted longer peptides from classical HLA class Ia molecules. In the same study above, Celik et al.
found that HLA-E*01:01 and HLA-E*01:03 alleles did not share the same peptidome, nevertheless presented peptides, which were derived from pools of closely related protein subtypes such as histones and ribosomal proteins .The MHC-E peptide-binding groove is more open relative to that on canonical class Ia molecules and thus can readily accommodate longer peptides.This also concurs with our identification of 17 HLA-E-stabilizing 15-mer peptides during the initial screen for HIV-1-derived binders.A diversity of SIV-derived 15-mer and optimal 9-mer peptides was presented by Mamu-E*02:03, -E*02:11 and –E*02:20 in RhCMV68-1/SIV-vaccinated macaques, whereby on average 4 Mamu-E-binding peptides were identified per every 100 amino acids of SIV protein length as well as ‘supertopes’ recognized by all vaccinated animals .In the present study, the tHIVconsvX immunogens were 873 amino acids long and 9 binding 15-peptide pairs were identified.This was approximately a 4-fold lower frequency of HLA-E binders in humans compared to RhCMV68-1/SIV vaccinated macaques, which may reflect the generally broader epitope presentation by rhesus macaques relative to humans.In the same macaque study above, search for amino acids enriched in certain 9-mer positions of 11 optimal mapped epitopes and 551 Mamu-E-eluted, LC–MS/MS sequenced peptides failed to provide any obvious binding motif.As the authors commented, this was unexpected given the limited polymorphism of the Mamu-E molecule .So far, we were unable to confirm existence of vaccine- or natural HIV-1 infection-elicited CD8+ T lymphocytes recognizing HIV-1-derived peptides through their TCR and this is not for lack of trying.Our search for PM9-specific HLA-E-restricted CD8+ T-cell responses in natural HIV-1 infection identified one possible responder out of 32 patients with a barely detectable response.Since humans infected with M. tuberculosis have readily detectable CD8+ T cells specific for HLA-E-presented M. tuberculosis peptides, perhaps search for such HLA-E-restricted HIV-1-specific CD8 T cells in dually HIV-1- and M. tuberculosis co-infected individuals might yield some interesting results.HIV-1 vaccine is the best solution to ending the AIDS epidemic.Induction of HLA-E-restricted protective CD8+ T-cell responses by vaccination has several attractions such as limited HLA-E polymorphism, which would help deal with HIV-1 global variability and immune evasion.Given the non-human primate protection and a complete clearance of an immunodeficiency virus from the body, HLA-E-specific responses may have superior efficacy to regular HLA class Ia-restricted T cells.Because these responses are not dominantly utilized during natural infection, HIV-1 has not adapted to them.In the present work, we have expanded the knowledge of HIV-1-derived target peptides stabilizing HLA-E cell surface expression.Authors declare no competing interests.
The non-classical class Ib MHC-E molecule is becoming an increasingly interesting component of the immune response. It is involved in both the adaptive and innate immune responses to several chronic infections including HIV-1 and, under very specific circumstances, likely mediated a unique vaccine protection of rhesus macaques against pathogenic SIV challenge. Despite being recently in the spotlight for HIV-1 vaccine development, to date there is only one reported human leukocyte antigen (HLA)-E-binding peptide derived from HIV-1. In an effort to help start understanding the possible functions of HLA-E in HIV-1 infection, we determined novel HLA-E-binding peptides derived from HIV-1 Gag, Pol and Vif proteins. These peptides were identified in three independent assays, all quantifying cell-surface stabilization of HLA-E*01:01 or HLA-E*01:03 molecules upon peptide binding, which was detected by an HLA-E-specific monoclonal antibody and flow cytometry. Overall, we have expanded the current knowledge of HIV-1-derived target peptides stabilizing HLA-E cell-surface expression from 1 to 5, thus broadening inroads for future studies. This is a small, but significant, contribution towards studying the fine mechanisms behind HLA-E actions and their possible use in the development of a new kind of vaccine.
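The surface-stabilization readouts above are reported as raw MFI values alongside the VL9 positive control and the no-peptide negative control. A minimal sketch of how such readouts could be normalized into a dimensionless binding index is given below; the normalization formula is an assumption (the authors' exact index is not stated here), while the example MFI values are those quoted in the text.

```python
# Hedged sketch: normalize HLA-E surface-stabilization MFIs against the VL9 positive
# control and the no-peptide negative control. The index formula is an assumption,
# not the authors' stated method; the MFI figures come from the text above.

def binding_index(mfi_peptide: float, mfi_no_peptide: float, mfi_vl9: float) -> float:
    """Fraction of the VL9-induced stabilization achieved by a test peptide.

    ~0 means no stabilization above background, ~1 means stabilization equal to VL9.
    """
    return (mfi_peptide - mfi_no_peptide) / (mfi_vl9 - mfi_no_peptide)


if __name__ == "__main__":
    mfi_neg, mfi_vl9 = 3380.0, 14265.0   # control MFIs reported in the text
    test_peptides = {                    # MFIs reported for the trimer-stabilizing 9-mers
        "RMYSPVSIL": 10132.0,
        "TALSEGATP": 6702.0,
        "RIRTWKSLV": 4514.0,
        "YFSVPLDEG": 3966.0,
        "MYSPVSILD": 3313.0,
    }
    for name, mfi in test_peptides.items():
        idx = binding_index(mfi, mfi_neg, mfi_vl9)
        # The text uses MFI > 4000 as an informal cut-off; flag peptides above it.
        flag = "above 4000 MFI" if mfi > 4000 else "below 4000 MFI"
        print(f"{name}: MFI {mfi:.0f}, binding index {idx:.2f} ({flag})")
```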
ions that showed competitive metal ion enhancement, such as Cd2+, Pb2+, and Hg2+; especially Zn2+ strongly increased the fluorescent intensity of 6QOD + Fe3+. In order to gain insightful information about the binding behaviors of 6QOD to Hg2+ and 6QOD to Fe3+, a 1H NMR experiment was conducted to seek more evidence to support the conformation of both complexes at 1:1 equiv. In the case of 6QOD + Hg2+, at the beginning we found that 6QOD hardly dissolved in D2O, but after addition of Hg2+ the solution became more homogeneous, as indicated by a much better baseline in the 1H NMR spectrum. In the same manner, the complex of 6QOD + Fe3+ was tested in CD3CN, revealing an obvious change in the chemical shifts of the protons involved in the binding. In both cases, the protons of DPA were downfield-shifted, demonstrating the definite binding to three nitrogen atoms of the DPA unit. However, the chemical shifts of the oxazolidinone proton and the methylene proton in the case of Hg2+ were clearly downfield-shifted by 0.3–0.5 ppm, implying a much closer position of the oxazolidinone ring to Hg2+ compared to that in the Fe3+ complex. These splitting and shift patterns of both 6QOD + Hg2+ and 6QOD + Fe3+ were similar to those of EAD + Hg2+ and EAD + Fe3+, respectively, indicating that 6QOD loosely binds with Hg2+ and tightly binds with Fe3+. In order to clearly understand the actual structures of the complexes between fluorophore 6QOD and both metal ions, DFT studies were introduced. The optimized structures of the 6QOD + Hg2+ complex in water and the 6QOD + Fe3+ complex in acetonitrile are shown in Fig. 9. The 6QOD + Hg2+ was found to be a bidentate complex in which two N-donor atoms of the ligand coordinate with the Hg2+ ion, and two bond distances of 2.39 and 2.43 Å were obtained. For the 6QOD + Fe3+ complex, the three N-donor atoms of the ligand coordinate with the Fe3+ ion, with bond distances of 1.91, 1.92 and 2.01 Å, respectively, as shown in Fig.
9b. In conclusion, the 6QOD probe is considered to be useful for the detection of Hg2+ and Fe3+ with low detection limits of 1.01 and 0.22 μM, respectively, and both complexes show 1:1 stoichiometric binding, as confirmed by Job's plot, 1H NMR, and MS data. This fluorescent enhancement could also be visualized by a fluorescent color change from colorless to green under blacklight. The 1H NMR results also suggest that 6QOD might form bidentate binding, loosely with Hg2+ and much more firmly with Fe3+, as evidenced by the chemical shifts of the protons of the DPA unit and, in the case of Hg2+, of the oxazolidinonyl protons, probably due to the effect of the much closer distance between Hg2+ and the oxazolidinone ring supported by the DFT studies. We, therefore, propose the probable binding structures as shown in Scheme 3. In summary, we have designed and synthesized three oxazolidinonyl dipicolylamine-containing quinoline derivatives: 3QOD, 4QOD, and 6QOD. Probe 6QOD showed high selectivity and sensitivity toward Hg2+ in aqueous media and Fe3+ in CH3CN by fluorescent enhancement involving the inhibition of the PET process. The 6QOD sensor operates at the micromolar level, with detection limits of 1.01 μM for Hg2+ and 0.22 μM for Fe3+, in a complexation ratio of 1:1 between probe and sensing metal ion, consistent with the optimized structures of both complexes obtained by DFT. The sensing mechanism is considered to be static quenching through suppression of the PET process, based on 1H NMR experiments and HRMS data. The design idea reported here might provide a new avenue to develop new fluorescence-enhancement probes for mercury detection in aqueous media and Fe3+ in acetonitrile.
One of the quinoline derivatives, 6QOD, shows a remarkable fluorescent enhancement in both Hg2+ in aqueous media and Fe3+ in CH3CN over other metal ions with a detection limit of 1.01 and 0.22 μM, respectively.Moreover, the complexation was proved to be a 1:1 stoichiometric binding by Job's plot and MS data.6QOD was confirmed to form a bidentate binding with Hg2+ (Ka = 6556 M−1) and Fe3+ (Ka = 27,700 M−1) as evidenced by the chemical shifts in 1H NMR experiments of the DPA protons and the oxazolidinonyl protons in an excellent agreement with the most stable complex structures for both metal ions revealed by the DFT study.Both sensing mechanisms probably involve PET inhibition between the DPA unit and quinoline.The advantage of this 6QOD probe is that it can effectively be applied as a selective Hg2+ and Fe3+ chemosensor by adapting the proper dual-sensing system.
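The detection limits and association constants quoted above are the kind of figures typically extracted from fluorescence titrations. A hedged sketch is given below of two common ways such numbers are obtained: the 3σ/slope rule for the detection limit and a Benesi-Hildebrand fit for a 1:1 association constant Ka. The paper's exact data-reduction procedure is not stated here, and the titration numbers in the example are hypothetical, not values from the study.

```python
# Hedged sketch (assumptions): (1) detection limit via the 3*sigma/slope rule on a
# linear calibration; (2) a 1:1 association constant Ka from a Benesi-Hildebrand fit,
#     1/(F - F0) = 1/(Ka*(Fmax - F0)*[M]) + 1/(Fmax - F0).
# All numerical inputs below are illustrative, not data from the paper.

import numpy as np


def detection_limit(conc_uM: np.ndarray, intensity: np.ndarray, sigma_blank: float) -> float:
    """LOD (same units as conc) from the 3*sigma/slope convention."""
    slope, _ = np.polyfit(conc_uM, intensity, 1)
    return 3.0 * sigma_blank / slope


def benesi_hildebrand_ka(conc_M: np.ndarray, F: np.ndarray, F0: float) -> float:
    """1:1 association constant Ka (M^-1) from a Benesi-Hildebrand linear fit."""
    x = 1.0 / conc_M
    y = 1.0 / (F - F0)
    slope, intercept = np.polyfit(x, y, 1)
    return intercept / slope  # for the 1:1 model, Ka = intercept / slope


if __name__ == "__main__":
    # Hypothetical calibration for the LOD estimate.
    conc_uM = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
    intensity = 5.0 + 12.0 * conc_uM + np.array([0.3, -0.2, 0.1, -0.4, 0.2])
    print(f"LOD ~ {detection_limit(conc_uM, intensity, sigma_blank=4.0):.2f} uM")

    # Hypothetical 1:1 binding isotherm for the Ka estimate.
    conc_M = conc_uM * 1e-6
    F0, Fmax, Ka_true = 5.0, 200.0, 3.0e4
    F = F0 + (Fmax - F0) * Ka_true * conc_M / (1.0 + Ka_true * conc_M)
    print(f"Ka ~ {benesi_hildebrand_ka(conc_M, F, F0):.3g} M^-1")
```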
Mass production in manufacturing puts greater emphasis on real-time asset location monitoring: pausing a production line to locate an asset can lead to significant losses when, for example, an engine is produced every 30 s at the Ford factory in Dagenham.Additionally, manufacturing agility and resilience increases if supply chain tracking can extend the scope of trusted suppliers and foundries.When location information can be associated with monitored contextual information, e.g. machine power usage and vibration, it can be used to provide smart monitoring information, such as which components have been machined by a worn or damaged tool.The corollary of this is that sensor data without timely positioning has limited use.Indoor positioning systems have not taken a semantic web or future-proofed IoT approach: services and algorithms are embedded on devices and limited to unconventional architectures that are inflexible to business requirements and technological progress.To ensure that accuracy and precision are not trade-offs for scalability and interoperability, semantic technologies offer service-agnostic resource sharing.The IoT supports resource sharing through a global network infrastructure of interconnected, intelligent devices for environmental, healthcare, military and industrial monitoring, often in the form of low-cost and easily deployed WSN.These necessitate solutions to integrate common functionality and the large amounts of data output by these devices.This can be met by both the use of structured, machine processable data and standardised, semantic access to algorithms and toolsets used to sanitise noisy, incomplete datasets.WSN positioning exploits the capabilities of fixed anchor devices, or nodes, to locate mobile ones.The communications of fixed devices, used for contextual IoT monitoring, can be reused to track nearby moving assets.A stimulus, such as the detection of movement with accelerometers, initiates communication with the purpose of localising an asset.Ranging data are collated from multiple nodes and passed to filtering and positioning algorithms.Frequent communications, calls to computationally expensive recursive functions e.g. 
Particle Filters, and storage directly affect system timeliness and usefulness. Large-scale manufacturing also puts greater emphasis on non-intrusive technologies, as pausing a production line to maintain or scale infrastructure represents high productivity losses. Low-powered WSN devices with long lifetimes can meet industrial user requirements of scalability and limited intrusion on supply chains and existing communications infrastructure. Systems considered state of the art in precision tracking, such as UWB, intrude on industrial power and network infrastructure, rely on a high granularity of anchors and limited re-usability of software processes and functions, and are susceptible to errors at long range and in the presence of metal obstacles. Validation of indoor industrial positioning approaches must be undertaken subject to realistic spatially, temporally and frequency-varying multipath propagation effects of wireless signals that occur in the presence of mobile, dense and metallic obstacles and the consequently dynamic floor-plans of industry. Resulting signal corruption increases the likelihood that the direct-path, LoS signal is lost and that ranging is derived from an NLoS signal. LoS signals are characterized by LoS propagation as well as the absence of obstacles, with dimensions larger than the signal wavelength, within the 1st Fresnel volume. Obstacles may occupy the 2nd Fresnel volume. An NLoS signal is characterized by obstacles completely obstructing the direct path between transmitter and receiver, but leaving a clearance zone inside the first Fresnel area. However, existing network validation approaches do not address issues of complex and harsh environments that are typical in industry, and many are still untested in real indoor environments. The highest measurement accuracy shown by WSN ranging approaches tested indoors has been 4–6 m in office environments with static floor-plans and no significant obstacles, and on a small scale with toy car tracking. These approaches have used software, as opposed to precision hardware acknowledgement and time stamping, and large numbers of ranging measurements (up to 500 kbps). Economical communication is a critical consideration for IoT devices, which should be capable of monitoring in real-time without overloading shared network resources. A novel representation-ontology-driven IIoT solution comprising four contributions is proposed in this paper: A hybrid positioning approach using short-range RSSI ranging and long-range ToF ranging data, based on clock errors and the increase in RSSI path loss exponent with distance, as discussed in Sections 3 and 2.1; measurement uncertainty is reduced using hardware interrupt metering and calibration of RTT and propagation delay, RSSI and distance. Design and development of a scalable semantic service and IIoT network architecture that supports distributed, intelligent services and communications in indoor environments. Implementation of the architecture in a reasoned RDF/OWL ontology defining service inputs, outputs, requirements and relationships of each component of the architecture, and the middleware for annotated abstraction of cross-layer parameters. Demonstration of the capabilities of the architecture through deployment and remote querying of a linked dataset for real-time positioning. Ranging in a real industrial environment showed 5.1–6 m RMS accuracy, and large-scale positioning showed 12.6–13.8 m mean accuracy, comparable with the accuracy of state-of-the-art systems evaluated in Section 2. Positioning consists of assorted stages aimed
at improving the accuracy of measured distance data, and converting this to a location coordinate.Filtering, based on probability, can be used to reduce errors in results and positioning is conducted with lateration, angulation or location fingerprinting.Industrial environments are characterised by lack of reliable existing infrastructure and detrimental wireless dynamics as a result of metal obstacles and competing RF signals.For both RSSI and ToF measurements multipath propagation effects introduce ranging errors, while clock precision and bias impact on the usefulness of ToF measurements.Thus, an accurate ranging measurement phase is key in reducing error propagation into subsequent phases.Based on a number of considerations we chose to use an ontology to investigate the development of a semantic architecture, as it
Real-time asset tracking in indoor mass production manufacturing environments can reduce losses associated with pausing a production line to locate an asset. Complemented by monitored contextual information, e.g. machine power usage, it can provide smart information, such as which components have been machined by a worn or damaged tool. Although sensor-based Internet of Things (IoT) positioning has been developed, there are still key challenges when benchmarked approaches concentrate on precision, using computationally expensive filtering and iterative statistical or heuristic algorithms, as a trade-off for timeliness and scalability. Wireless, self-powered sensors are integrated in this paper, using a novel, communication-economical RSSI/ToF ranging method in a proposed semantic IIoT architecture.
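The hybrid RSSI/ToF ranging idea described above (RSSI at short range, ToF beyond a threshold, here taken as the 10 m value used in the positioning experiments) can be illustrated with a minimal sketch. The path-loss exponent, reference power and processing delay below are hypothetical placeholders, not calibration values from the paper.

```python
# Minimal sketch of hybrid RSSI/ToF ranging, under assumptions: RSSI ranges come from
# inverting a log-distance path-loss model, ToF ranges from round-trip time after
# subtracting a calibrated responder processing delay, and the final estimate switches
# between them at a threshold distance. All parameter values are hypothetical.

C = 299_792_458.0  # speed of light, m/s


def rssi_range(rssi_dbm: float, p0_dbm: float = -45.0, n: float = 2.7, d0: float = 1.0) -> float:
    """Distance from the log-distance path-loss model RSSI = P0 - 10*n*log10(d/d0)."""
    return d0 * 10 ** ((p0_dbm - rssi_dbm) / (10.0 * n))


def tof_range(rtt_s: float, processing_delay_s: float) -> float:
    """Distance from round-trip time-of-flight after removing the responder delay."""
    return max(0.0, rtt_s - processing_delay_s) * C / 2.0


def hybrid_range(rssi_dbm: float, rtt_s: float,
                 processing_delay_s: float, threshold_m: float = 10.0) -> float:
    """Use RSSI ranging at short range and ToF ranging beyond the threshold."""
    d_rssi = rssi_range(rssi_dbm)
    d_tof = tof_range(rtt_s, processing_delay_s)
    # RSSI becomes less informative as path loss grows with distance, while ToF is
    # dominated by clock error at short range, hence the switch-over.
    return d_rssi if d_rssi < threshold_m else d_tof


if __name__ == "__main__":
    # Hypothetical measurements from one fixed anchor at a true range of 40 m.
    rtt = 230e-9 + 2 * 40.0 / C  # 40 m flight time plus a 230 ns responder delay
    print(f"RSSI-only: {rssi_range(-88.0):.1f} m")
    print(f"ToF-only : {tof_range(rtt, 230e-9):.1f} m")
    print(f"Hybrid   : {hybrid_range(-88.0, rtt, 230e-9):.1f} m")
```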
protocols into error recovery. Again, the longer paths had a greater impact on the RSSI accuracy, resulting in an RMS error of 5.5 m, comparable to previous experiments, notwithstanding the improved multipath conditions. In Table 4 the ranging system is compared to contemporary systems that have been validated in indoor environments. The approaches evidently produce varying results according to the range of office test environments. However, the proposed system produces a comparable RMS error to contemporary RSSI and ToF ranging solutions, while demonstrating robustness to a harsh industrial environment. Obtaining accurate positioning information on the basis of three or more sets of ranging data is key to real-time tracking of assets. While ranging can provide a circular or spherical area within which an asset can be located, positioning produces a specific coordinate location. The positioning approach was validated on a different day from the ranging test; three nodes were deployed in the open sports area and a fourth was carried across an eight-point trajectory between these anchors, as indicated in Figs. 13–14. Figs. 13–14 demonstrate the system estimation versus actual position of the nodes. Table 3 compares the hybrid approach to positioning with that based purely on RSSI or ToF data. RMS error shed little light on the benefit of the hybrid approach over the other two. The maximum errors for the ToF and hybrid approaches were comparable at 21.7 and 21.2 m, but the RSSI error was significantly higher at 30.4 m. With the ToF and hybrid approaches the algorithm converged on a coordinate for all locations. In this experiment nodes were subject to the shortest internodal distances of only 28 m. However, RSSI ranging still performed poorly and the positioning algorithm could not converge on a location in all cases. Ranging errors from each of the fixed nodes propagated into the subsequent positioning phase, resulting in large positioning errors. The highest level of positioning performance was gained by combining RSSI measurements for short range and ToF for long range, with a threshold set at 10 m. RMS, mean and maximum errors were the lowest with this approach. Table 5 shows that the mean error of the proposed tracking system is comparable to the mean error of contemporary RSSI and ToF tracking systems reported in the literature. A second bank of ranging tests was conducted, across varying obstacles for the same internodal distance. Results were collected in five conditions: both nodes in LoS, 5 m depth of metal obstacles with LoS, 5 m depth of metal obstacles with NLoS, 10 m depth of metal obstacles with LoS and 10 m depth of metal obstacles with NLoS. The obstacles as seen from both nodes are illustrated in Fig. 15. For all LoS cases, obstacles were present in the second Fresnel volume and for NLoS cases in the first Fresnel volume. This test area was surrounded by metal racks, beams and columns, thus the overall RMS error was significantly higher than in previous environments, exceeding 10 m with all ranging methods. However, actual ranging accuracy varied in each location and decreased, as expected, with the depth and density of the obstacles, as shown in Fig.
15. The ToF distance was within 1 m of the true range for the full LoS case and for LoS with a 5 m depth of metal obstacles. This accuracy reduced to 2.5 m with NLoS and 5 m of obstacles. The RSSI ranging achieved a more consistently increasing error with obstacle density than ToF. However, with 10 m of full racks and NLoS between devices, errors of more than 20 m occurred for all approaches. Evidently, both ToF and RSSI ranging can offer a reasonable degree of ranging accuracy with up to 5 m of metal obstacles in the first Fresnel zone. Assuming that some LoS signals are received, filtering can be used to ignore distance estimates with high variance. This paper has presented the design, implementation and proof-of-concept evaluation of an industrial, semantic Internet of Things positioning architecture, using low-power embedded wireless sensors. The proposed architecture deployed wired nodes at known locations in order to compute real-time locations of moving assets, without reliance on high processor and power overheads. Performance evaluation has shown comparable RMS ranging accuracy to existing systems tested in non-industrial environments and a 12.6–13.8 m mean positioning accuracy. Future work will extend the architecture to further develop the middleware into a full semantic service bus, with its interfaces linked to additional services to access information from accelerometer and gyroscope drivers and to correct ranging results using a complementary filter. The SPARQL endpoint will be used for rapid ontology reasoning. The positioning system is also to be tested in the industrial environment, with devices attached to mobile automotive racks and a fourth anchor in a different plane to increase the capability of the trilateration algorithm to converge. Finally, the positioning system will be tested under NLoS conditions, similar to those considered for ranging in Section 4.3.
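The positioning step that converts three or more anchor ranges into a coordinate, and the RMS error used to score a trajectory, can be sketched as below. The paper's exact trilateration algorithm is not reproduced; this is a generic iterative least-squares lateration with hypothetical anchor positions and ranges.

```python
# Hedged sketch: least-squares (Gauss-Newton) lateration from anchor ranges, plus the
# RMS position error over a trajectory. Anchor layout and ranges are hypothetical.

import numpy as np


def laterate(anchors: np.ndarray, ranges: np.ndarray, iters: int = 50) -> np.ndarray:
    """2-D position estimate from anchor coordinates (N x 2) and measured ranges (N)."""
    pos = anchors.mean(axis=0)                 # start from the anchor centroid
    for _ in range(iters):
        diffs = pos - anchors                  # N x 2 vectors from anchors to estimate
        dists = np.linalg.norm(diffs, axis=1)  # predicted ranges
        residuals = dists - ranges
        jacobian = diffs / dists[:, None]      # d(range)/d(position)
        step, *_ = np.linalg.lstsq(jacobian, residuals, rcond=None)
        pos = pos - step
        if np.linalg.norm(step) < 1e-6:
            break
    return pos


def rms_error(estimates: np.ndarray, truths: np.ndarray) -> float:
    """RMS of the Euclidean position errors along a trajectory (M x 2 arrays)."""
    return float(np.sqrt(np.mean(np.sum((estimates - truths) ** 2, axis=1))))


if __name__ == "__main__":
    anchors = np.array([[0.0, 0.0], [28.0, 0.0], [14.0, 24.0]])  # hypothetical layout
    truth = np.array([10.0, 8.0])
    noisy_ranges = np.linalg.norm(anchors - truth, axis=1) + np.array([1.5, -2.0, 0.8])
    est = laterate(anchors, noisy_ranges)
    print(f"estimate: {est.round(2)}, error: {np.linalg.norm(est - truth):.2f} m")
    print(f"RMS over 1-point trajectory: {rms_error(est[None, :], truth[None, :]):.2f} m")
```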
Deployed at a working indoor industrial facility the system demonstrated comparable RMS ranging accuracy (ToF 6 m and RSSI 5.1 m with 40 m range) to existing systems tested in non-industrial environments and a 12.6–13.8 m mean positioning accuracy.
and 14/40 for SAS. The third patient was an EM on paroxetine who received 6 mg risperidone daily, and scored 4/14 for BAS, 1/40 for SAS and gained 4 kg. These patients, in theory, should have experienced the worst risperidone-induced movement disorders, but this was not the case. Similarly, a different patient was co-prescribed hydroxyzine, which is listed as an inhibitor of CYP2D6. This patient received 1 mg risperidone per day, was an EM and had a higher degree of movement disorders than the previous three patients, scoring 21/40 for AIMS, 9/14 for BAS and 4/40 for SAS. In this case it would appear that hydroxyzine could have been contributing to perceived risperidone ADRs, but this was the only example in the cohort. Finally citalopram, listed as an inhibitor of CYP2D6, which was discounted previously due to lack of supporting documentation, did not appear in this study to increase movement disorders when co-prescribed with risperidone, although this should be investigated in a larger cohort. The majority of the patients who gained more than 5 kg while on risperidone were not receiving concomitant medication, except one patient who gained 45 kg and who was co-medicated with citalopram. Other genes that affect risperidone efficacy and cause ADRs have previously been considered. Genes affecting metabolism, drug or dopamine clearance as well as drug receptor variability could be responsible for risperidone movement disorders and may be important as pharmacogenetic markers. For example, CYP3A5 and ABCB1 have been shown to influence the risperidone active moiety. Genetic variability of phase II metabolism by a glutathione S-transferase enzyme coded for by GSTM1 has been associated with dyskinesia experienced in risperidone-treated patients. The Ser9Gly mutation in the DRD3 dopamine receptor gene is associated with increased risk of dyskinesia in risperidone-treated patients. Weight gain as a result of risperidone treatment has been linked to polymorphisms in 5-HT2A, 5-HT2C, 5-HT6 and BDNF. We are aware that caution needs to be exercised when interpreting these results, as the numbers are relatively small and there are confounding factors that have been included in this naturalistic cohort. Although the aim of this study was to identify PMs from patients experiencing ADRs, the sample size may not be sufficient to have identified such a relationship. In some cases large cohorts are needed to find a significant association. Perhaps if the cohort size is increased an association will be found, but the question is whether this will be a strong pharmacogenetic marker to reduce ADRs, particularly in view of the important environmental influences including concomitant medication. Although no statistical evidence was found, concomitant medication may have confounded the results in our study and this will need to be considered in future studies of larger cohort size. Measuring blood concentrations of risperidone and its active metabolite in future studies may help to clarify the role of potential intermediaries in the hypothesised connection between genetic and clinical phenotypes. CYP2D6 polymorphisms appeared not to associate with risperidone ADRs in this pilot cohort of risperidone-treated South African patients. Weight gain and movement disorders appeared not to be experienced simultaneously when patients were treated with risperidone, but this might have been a peculiarity of the cohort. A novel CYP2D6 mutation, which may have clinical relevance, was identified in this cohort. A larger cohort would be valuable to confirm the
results of this study.The authors declare that they have no competing interests.TMD carried out the molecular analysis and drafted the manuscript.AE assisted with the sequencing.CM saw the patients and collected patient data.JLR and CWvS saw the patients and collected patient data, and participated in the design and coordination of the study.CWvS and MSP edited and finalised the manuscript.MSP conceived the study, was responsible for its overall coordination and raised the funding.All authors read and approved the final manuscript.
Background: Contradictory information exists regarding the influence of CYP2D6 polymorphisms on adverse drug reactions (ADRs) (extrapyramidal symptoms (EPS) and weight gain) related to risperidone treatment.This prompted us to evaluate the influence of CYP2D6 genetic variation in a cohort of South African patients who presented with marked movement disorders and/or weight gain while on risperidone treatment.Methods: Patients who were experiencing marked risperidone ADRs were recruited from Weskoppies Public Psychiatric Hospital.Sequencing.Results: No statistically significant association was found between CYP2D6 poor metabolism and risperidone ADRs.An inverse relationship between EPS and weight gain was however identified.Conclusion: CYP2D6 variation appeared not to be a good pharmacogenetic marker for predicting risperidone-related ADRs in this naturalistic South African cohort.Evaluation of a larger cohort would be needed to confirm these observations, including an examination of the role of potential intermediaries between the hypothesised genetic and clinical phenotypes.
carbon stocks in East African protected areas since 1951. Wekesa et al. reported improvement in forest condition compared with previous studies, for example in Chawia forest, which has been a model of community-based forest conservation. As land use change has been identified as one of the major sources of anthropogenic carbon emissions, studies quantifying these changes should also be linked to the reasons behind such change. Assessing carbon stock change with spatially explicit data allows for further identification of possible drivers in time and space. Application of airborne laser scanner data, together with field measurements, provides a powerful method for assessing carbon stocks, but adapting the land-cover-specific carbon values for land cover classes mapped with satellite imagery is challenging. The average carbon density was the highest for indigenous montane forests compared to exotic forests and woodlands, suggesting that preserving montane forests is important for carbon sequestration. Croplands above 1220 m a.s.l. had much higher carbon density compared to croplands in the lowlands, showing the important role of agroforestry practiced in the hills for carbon sequestration. The highest carbon stocks were in croplands due to the largest land cover area of croplands in the study area. Croplands have been expanding rapidly in the Taita Hills and its surrounding lowlands, but since 2003 a slowing trend in the process is recognized. Moreover, changes identified between 2003 and 2011 show that while shrublands are still cleared for croplands in the lowlands, in the hills croplands are recently converted to shrublands and woodlands. Land cover change has a direct impact on carbon stocks, as clearance of forest and shrublands leads to decreased carbon sequestration. A slight increase in carbon stocks from 2003 to 2011 is recognized, which follows the forest transition model, from native forests through logging and clearance for croplands to reforestation. The underlying reasons for the increase of shrublands and woodlands in the hills and increased carbon stocks in croplands are related to forests' role in conservation and increasing biodiversity, providing ecosystem services such as water harvesting and storage, economic reasons in making land-use choices between cropland and woodland, and governmental legislation supporting trees on farms.
Land cover change takes place in sub-Saharan Africa as forests and shrublands are converted to agricultural lands in order to meet the needs of a growing population. Changes in land cover also impact carbon sequestration in vegetation cover, with an influence on climate on a continental scale. The impact of land cover change on tree aboveground carbon stocks was studied in Taita Hills, Kenya. The land cover change between 1987 and 2011 at four points in time was assessed using SPOT satellite imagery, while the carbon density in various land cover types was assessed with field measurements, allometric biomass functions and airborne laser scanning data. Finally, the mean carbon densities of land cover types were combined with land cover maps, resulting in carbon stock values for given land cover types for each point in time studied. Expansion of croplands has been taking place since 1987 and before, at the cost of thickets and shrublands, especially on the foothills and lowlands. Due to the land cover changes, the carbon stock of trees was decreasing until 2003, after which there has been an increase. The findings of the research are supported by the forest transition model, which emphasizes increasing awareness of forests' role in providing ecosystem services (such as habitats for pollinators, and water harvesting and storage), together with economic considerations in making land-use choices between cropland and woodland and governmental legislation supporting trees on farms.
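The final aggregation step described above, combining class-wise mean carbon densities with mapped class areas for each point in time, amounts to a simple sum of products. A minimal sketch is given below; the bookkeeping and all density and area values are hypothetical placeholders, not figures from the study.

```python
# Hedged sketch (assumption about the exact bookkeeping): total aboveground carbon
# stock per time point as the sum over land-cover classes of mean carbon density
# (Mg C/ha, e.g. from field plots and airborne laser scanning) times mapped class
# area (ha, from classified satellite imagery). All numbers are hypothetical.

from typing import Dict


def carbon_stock_Mg(densities_Mg_per_ha: Dict[str, float],
                    areas_ha: Dict[str, float]) -> float:
    """Sum of class-wise carbon stocks; classes missing an area count as zero."""
    return sum(d * areas_ha.get(cls, 0.0) for cls, d in densities_Mg_per_ha.items())


if __name__ == "__main__":
    density = {"montane forest": 120.0, "exotic forest": 70.0,
               "woodland": 35.0, "cropland": 15.0, "shrubland": 8.0}
    area_2003 = {"montane forest": 1_000, "exotic forest": 2_500,
                 "woodland": 6_000, "cropland": 40_000, "shrubland": 20_000}
    area_2011 = {"montane forest": 1_000, "exotic forest": 2_600,
                 "woodland": 7_500, "cropland": 41_000, "shrubland": 17_500}
    s03 = carbon_stock_Mg(density, area_2003)
    s11 = carbon_stock_Mg(density, area_2011)
    print(f"2003: {s03:,.0f} Mg C, 2011: {s11:,.0f} Mg C, change: {s11 - s03:+,.0f} Mg C")
```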
in the recruitment of the mutant protein to the plasma membrane. In the present study, the recruitment of the dynamin 2 K562E mutant to endocytic pits was drastically decreased, supporting previous findings. Considering that actin remodeling is implicated in clathrin-mediated endocytosis, perturbation of actin dynamics by the K562E mutation may contribute to its inhibitory effect on endocytosis. The 555Δ3 mutation was shown to cause instability of tubulin dynamics but not actin dynamics. In the present study, 555Δ3 mutant-expressing cells showed defective lamellipodia formation with no obvious change in actin distribution. The inhibitory effect on lamellipodia formation could be attributed to tubulin disarray. U2OS and HeLa cells were used in this study to assess the effect of the dynamin CMT mutants on actin dynamics. These cells are widely utilized in studies of dynamin-dependent cellular functions. In addition, we found that NG108-15 cells, a mouse neuroblastoma × rat glioma hybrid cell line, showed similar aberrant dynamin and actin localization. However, a cell-type-dependent effect of dynamin CMT mutants on clathrin-mediated endocytosis has been observed. Analyses of other cell types, including neurons and Schwann cells, are required to investigate the causal role of CMT mutations in future studies. In conclusion, we show for the first time that expression of the dynamin 2 CMT mutant K562E leads to aberrant F-actin distribution as well as defective lamellipodia formation. These results indicate that disturbance of actin dynamics in dynamin 2 CMT mutants should be considered as a potential mechanism involved in the pathogenesis of CMT.
Specific mutations in dynamin 2 are linked to Charcot-Marie-Tooth disease (CMT), an inherited peripheral neuropathy.However, the effects of these mutations on dynamin function, particularly in relation to the regulation of the actin cytoskeleton remain unclear.Here, selected CMT-associated dynamin mutants were expressed to examine their role in the pathogenesis of CMT in U2OS cells.Ectopic expression of the dynamin CMT mutants 555δ3 and K562E caused an approximately 50% decrease in serum stimulation-dependent lamellipodia formation; however, only K562E caused aberrations in the actin cytoskeleton.Immunofluorescence analysis showed that the K562E mutation resulted in the disappearance of radially aligned actin bundles and the simultaneous appearance of F-actin clusters.Live-cell imaging analyses showed F-actin polymers of decreased length assembled into immobile clusters in K562E-expressing cells.The K562E dynamin mutant colocalized with the F-actin clusters, whereas its colocalization with clathrin-coated pit marker proteins was decreased.Essentially the same results were obtained using another cell line, HeLa and NG108-15 cells.The present study is the first to show the association of dynamin CMT mutations with aberrant actin dynamics and lamellipodia, which may contribute to defective endocytosis and myelination in Schwann cells in CMT.
literature, the term is generally used for Tis, T1, T2 lesions as a group.According to literature,18–22 our study showed that the oncologic results of laser surgery for selected patients in the treatment of Tis–T1 laryngeal cancer are equivalent to those achieved with open partial laryngectomy with less morbidity and usually without the need for tracheostomy.The current literature is now concentrating on the comparison of laser surgery and radiotherapy.Our study focused on margin status and a prognostic role was proven in both group of patients.Concerning management of patients with positive or close margins, nowadays there is no consensus about post-operative strategies.Some authors recommended biopsy;23 it is not unusual that final histological analysis is less favorable than the extemporaneous analysis, discovering non-negative margins.The problem for the clinician is then to decide between surveillance, surgical revision and radiation therapy.24,Some studies found that positive margins after careful resection in macroscopically healthy tissue are not a pejorative factor for overall or recurrence-free survival in T1a patients endoscopically treated.25–27,Therefore, adjuvant treatments, such as radiation therapy or surgical revision, do not seem indicated.In case of macroscopically negative, but microscopically positive margins, some authors recommend endoscopic control with targeted biopsy under general anesthesia 10 weeks after surgery.28–30,Other authors observed that positive margins after tumor resection are associated with a higher rate of local recurrences.31–33,Ansarin et al. found that when the margins were positive, the incidence of local recurrence was higher and DFS was lower compared to patients with free margins.These findings indicate that additional treatment should always be given if positive margins are found.34,In our study positive margins were found in 24 patients; 17 of them underwent adjuvant RT while 5 were treated with surgery.Two patients were managed with watchful waiting approach because of anesthesiological problems and radiation therapy refusal.According to literature, local recurrence rate was higher in patients with positive margins.35,We did not find statistical differences in local recurrence rate between laser and open surgery.In 2 patients of Group A and 3 patients of Group B definitive histological exam was negative for carcinoma.Beyond oncologic results, other evaluated outcomes in literature are morbidity, vocal function, hospitalization length and costs.When performing cordectomy by laryngofissure, the thyroid cartilage and endolaryngeal soft tissues are divided.Sometimes after surgery there could be a compromise of the airways and therefore a need for temporary tracheotomy.With endoscopic resection, tracheostomy is very rarely indicated.Avoiding tracheotomy and preserving the prelaryngeal muscles can facilitate a quick, safe recovery of swallowing.36,Functional results with TLC are generally better than those of conventional open surgery, in terms of time needed to restore swallowing, tracheotomy rates, incidence of pharyngeal fistulae and shorter hospital stays.37,38,These functional benefits may be attributed to the more conservative nature of the endoscopic technique, since normal tissues are not interrupted during the procedure.36,In fact, in transoral laser cordectomies, the functional sequelae are exclusively voice-related.Difficulties in swallowing liquids after the procedure are temporary and resolve spontaneously in a few days.39,Our 
results confirmed the data reported in literature regarding need for tracheostomy and swallowing function recovery.In literature and in our study, the use of CO2 laser surgery was associated with a shorter hospital stay and earlier return to work than laryngofissure cordectomy.40,For these reasons, CO2 laser cordectomy resulted as a cost-effective treatment modality if compared to open cordectomy or radiotherapy.41–43,In particular, Cragle and Mandeburg observed that CO2 laser cordectomy was almost 58% cheaper than radiotherapy with the same oncologic results.In 1994, a study of Myers obtained a similar result: CO2 surgery is 70% cheaper than radiotherapy.The costs included hospital admission and stay, materials and surgical time, as well as healthcare and non-healthcare personnel associated with the procedure.Specifically, it indicated that transoral laser cordectomy was less expensive than laryngofissure cordectomy.Furthermore, open cordectomy costs increase because of the later return to work.CO2 laser cordectomy and open cordectomy afford optimal oncologic radicality for early glottic cancer.Besides cure, compared to laryngofissure, CO2 laser cordectomy offers different advantages.The absence of need for feeding tube or tracheotomy after CO2 laser procedure eliminates two of the great stigmas regarding laryngeal cancer treatment.Furthermore, a more conservative approach guarantees a shorter hospitalization and lower costs.Finally transoral approach is related to a lower risk of complications.Margin status has an important prognostic role both in open cordectomy and in CO2 laser cordectomy.Therefore additional treatment should be considered in case of positive margins; in order to reduce recurrence rate and consequent need of more aggressive surgery.Concerning management of patients with close margins, further studies are necessary to obtain a consensus about post-operative strategies.The authors declare no conflicts of interest.
Introduction: Cordectomy by laringofissure and transoral laser surgery has been proposed for the treatment of early glottic cancer.Results: Margin status is related to recurrence rate in both groups (p < 0.05) without significant differences between open and laser cordectomy (p > 0.05).Lower tracheostomy rate, earlier recovery of swallowing function and shorter hospital stay were observed in Group A (p < 0.05).Conclusions: Margin status has a prognostic role in T1a–T1b glottic cancer.Transoral laser surgery showed similar oncologic results of open cordectomy, with better functional outcomes.
of the environment by doubling the lifetime of existing buildings, thereby directly minimising the drain on non-renewable natural resources required for the construction of new buildings, but also prevents demolition waste.In addition, the proposed retrofitting solutions can improve safety and eliminate the social risks and alleviate fuel poverty in thermal deficient buildings.Following a brief state-of-the-art review on energy and seismic retrofitting materials and techniques, a novel concept for the simultaneous seismic and energy retrofitting of the RC and masonry building envelopes was explored in this paper, combining TRM jacketing and thermal insulation materials or systems.Apart from achieving cost effectiveness, the hybrid structural-plus-energy retrofitting solutions examined, which are based on inorganic materials, are addressing also the building envelope critical requirement for fire resistance.The overall effectiveness of the combining energy efficiency and seismic retrofitting was demonstrated via a case study of a five stories old-type RC building.Moreover by proposing a common approach based on the expected annual loss, it was possible to evaluate the financial feasibility and benefits of the combined approach.It was shown that the payback of the retrofitting intervention can be significantly reduced when seismic is applied concurrently with energy retrofitting by combining advanced construction materials, thanks to large savings related to the labour costs.The author believes that TRM jacketing may be combined effectively with thermal insulation materials or systems, which can be fire-resistant too, providing promising solutions for the simultaneous seismic and energy retrofitting of the RC and masonry building envelopes.Future research is needed to verify the effectiveness of the proposed solutions experimentally on full-scale prototypes, exploring in this way also their applicability.Moreover, the proposed evaluation method of the combined seismic and energy retrofitting should be systematically studied to cover a series of parameters for RC buildings but also for masonry buildings that have the highest vulnerability under earthquake loading.In addition, new research efforts should be concentrated on the optimization of the proposed techniques but also on the development of new composites materials that integrate multiple functions to provide fast and cost effective solutions for the upgrading of the existing building stock.Finally the development of new anchorage systems for adding prefabricated sandwich panels to the building envelopes could considerably boost the combined building upgrading.
This paper explores innovative techniques by combining inorganic textile-based composites with thermal insulation for the simultaneous seismic and energy retrofitting of the existing old buildings.A brief state-of-the-art review on energy and seismic retrofitting materials and techniques is initially made, followed by the introduction of a novel concept for the simultaneous seismic and energy retrofitting of the Reinforced Concrete (RC) and masonry building envelopes, combining Textile Reinforced Mortar (TRM) jacketing and thermal insulation materials or systems.The hybrid structural-plus-energy retrofitting solutions examined are based on inorganic materials providing both cost effectiveness and fire resistance for the building envelope.The overall effectiveness of the combined energy and seismic retrofitting is demonstrated via a case study on a five stories old-type RC building.Moreover by proposing a common approach based on the expected annual loss (of consumed energy or expected seismic loss), it is possible to evaluate the financial feasibility and benefits of the proposed combined retrofitting approach.It was shown that the proposed concept is economically efficient as the payback period of the intervention (return of the retrofitting investment) can be significantly reduced for seismic zones when energy is applied concurrently with seismic retrofitting by exploiting advanced construction materials, thanks to large savings related to the labour costs.
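The expected-annual-loss framing above implies a straightforward payback calculation: the intervention cost is recovered through the yearly reduction in energy cost plus the yearly reduction in expected seismic loss, and the combined intervention saves costs that the two separate works would otherwise each pay (scaffolding, labour, access). A minimal sketch follows; all monetary figures are hypothetical placeholders, not values from the case study.

```python
# Hedged sketch of the payback comparison implied by the expected-annual-loss framing.
# Costs and annual benefits below are hypothetical, not taken from the case study.

def payback_years(intervention_cost: float, annual_saving: float) -> float:
    """Simple (undiscounted) payback period in years."""
    return intervention_cost / annual_saving


if __name__ == "__main__":
    # Hypothetical annual benefits for a single building.
    energy_saving = 6_000.0          # currency units/year saved on space conditioning
    seismic_eal_reduction = 4_000.0  # currency units/year reduction in expected annual seismic loss

    # Hypothetical costs: separate works each carry their own labour and scaffolding,
    # while the combined intervention shares them, so its cost is less than the sum.
    cost_energy_only = 60_000.0
    cost_seismic_only = 80_000.0
    cost_combined = 110_000.0

    print(f"energy only : {payback_years(cost_energy_only, energy_saving):.1f} years")
    print(f"seismic only: {payback_years(cost_seismic_only, seismic_eal_reduction):.1f} years")
    print(f"combined    : "
          f"{payback_years(cost_combined, energy_saving + seismic_eal_reduction):.1f} years")
```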
subsequent exodus of the poor rather than of inclusive economic growth. This case therefore serves to caution both the research and policy community that what may appear as straight-forward and sometimes convenient causal relationships between economic growth and poverty reduction may at closer scrutiny involve more complexly related processes of elite capture, dispossession, and migration. The final paper, by Berdegué, Escobal and Bebbington, presents a synthesis of the main findings of the program regarding the determinants of territorial dynamics and the implications for public policies. On the basis of a systematic comparison of the 19 case studies of rural territories, the DTR program identified five “bundles of factors”, in which the agents–institutions–structures interaction takes place: agrarian structures and, more generally, the governance of natural resources; the relationship of territories with dynamic markets; the productive structure of the territory; the relationship of territories with nearby urban centers; and the governance of public investments. The paper highlights that inclusive and sustainable economic growth does not emerge spontaneously, but requires the concerted or at least tacitly coordinated action of a diversity of social actors, or “transformative social territorial coalitions”. The paper describes each factor and explains how the interactions among them can lead to more successful cases of territorial development.
This article is the introduction to a volume containing findings from a program conducted over five years in 11 Latin America countries, to answer three questions: (1) Are there rural territories that have experienced simultaneous economic growth, poverty reduction, and improved distribution of income?; (2) What factors determine these territorial dynamics?, and, (3) What can be done to stimulate and promote this kind of territorial dynamics?The article outlines the analytical and policy issues and the methodology, summarizes the remaining 10 papers in the collection, and presents a conceptual framework that itself is one of the results of the program.
and validation procedures are under development in the project.A database of problems for verification of codes and models against analytical solutions and a Model Validation Database of experiments for validation of simulations covering a range of phenomena relevant to FCH safety are under construction.How changes in model parameters affect the results is evaluated in the sensitivity study.Model predictions may be sensitive to uncertainties in input data, to the level of rigour employed in modelling relevant physics and chemistry, and to the adequacy of numerical treatments.The sensitivity analysis methodology allows the dominant variables in the models to be highlighted, defining the acceptable range of values for each input variable and therefore informing and cautioning any potential users about the level of care to be taken in selecting inputs and running model.Relevant model parameters include the computational mesh, the time step, the numerical scheme, the boundary conditions, and the domain size for semi-confined, vented and open configurations.In the statistical analysis of the comparison between experimental data and simulation results, Statistical Performance Measures provide a measure of the error and bias in the predictions, i.e. the spread in the predictions such as the level of scatter from the mean and the tendency of a model to over/under-predict.In the validation procedure acceptable numerical ranges for the SPM are going to be defined as quantitative assessment criteria.The key-target variables are identified for each phenomenon that is relevant for hydrogen safety: release, mixing and dispersion, self-ignition, fires, deflagrations, detonations and deflagration to detonation transition.For example, hydrogen concentration and flammable mass are specific key-target variables for the release and mixing of hydrogen leaking from a tank of a vehicle in a private garage, while ignition location and time are target variables for the ignition, and over-pressures and flame velocity are target variables for deflagrations.The final step in the HYMEP is to prepare an assessment report that includes information and data about each stage of the protocol for the specific model that has been evaluated.In the project the content and the level of details required for the report will be defined.In order to support the CFD practitioners in the use of the HYMEP and the implementation of the protocol stages, the SUSANA consortium prepared 4 complementary documents:a review of the state of the art in CFD modelling of hydrogen safety issues, “The state of the art in physical and mathematical modelling of safety phenomena relevant to FCH technologies” .a critical analysis of the CFD modelling for hydrogen safety issues where the suitability of CFD approaches for real-scale applications, existing bottlenecks and model deficiencies are identified and described, “Critical analysis and requirements to physical and mathematical models” .a guide to best practice in numerical simulations with the purpose of supporting the correct application of CFD methods to each relevant phenomenon by the CFD users, “Best practice in numerical simulation” .a report with verification and validation procedures to help practitioners in the hydrogen safety CFD area to determine the fidelity of modelling and simulation processes, “Final report on verification and validation procedures” .The first version of the model evaluation database is available on the project website and currently it includes about 30 
experiments. Hydrogen safety-related experiments that are available in the literature and in reports are considered for the database. An evaluation of the quality of the experiments and their suitability for the validation process is carried out in the project. All experiments are entered by one group of experts and then evaluated by another group. In the evaluation stage the reviewers assess whether the experiment can be used for validation and judge the quality of the experimental procedure, the facility, and the related measurements. The experiments are grouped according to the relevant phenomena: release and dispersion, ignition, deflagrations, detonations and DDT. Experiments with fires will be added in the next version. A brief description of the experimental set-up and procedure, the objective of the experiment, the experimental data and references are included for each experiment. In Tables 1–5, the lists of experiments that have been identified as suitable for the model validation database in the first part of the project are described for each physical phenomenon. In the release and dispersion section, several experiments are available in the database, as shown in Table 1. Different relevant configurations are investigated in the selected experiments: indoors and outdoors, small enclosures and garage facilities, and vented configurations. Most of the experiments are performed with gaseous hydrogen and only one with liquid hydrogen. In the ignition and fires section only one set of experiments is currently available, as described in Table 2. The aim of these experiments is to investigate the self-ignition of gaseous hydrogen in a pressurized tube at different pressures with a T-shaped pressure relief device. In the deflagration section 14 experiments are available. Several relevant configurations are considered in the experiments: different hydrogen concentrations, an open environment, a closed or vented box, and the presence of obstacles. For example, there are deflagration experiments in the large-scale RUT facility, in an obstructed closed and vented tube, in a mock-up of a hydrogen refuelling station, and in a completely open environment. Two sets of DDT experiments are available in the database. In the first set, DDT with hydrogen is studied in straight pipes of three different diameters and with different gas concentrations, while in the second set explosions in an obstructed 12 m long tube are carried out with a 15% hydrogen–air mixture. In the detonation section three
The protocol covers all aspects of safety assessment modelling using CFD, from release, through dispersion, to combustion (self-ignition, fires, deflagrations, detonations, and Deflagration to Detonation Transition - DDT), and aims not only to enable users to evaluate models but also to inform them of the state of the art and best practices in numerical modelling.
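As an illustration of the statistical performance measures mentioned above, the following sketch computes a few SPM commonly used in model evaluation: fractional bias, normalized mean square error, geometric mean bias, geometric variance, and the fraction of predictions within a factor of two of the observations. This is a minimal, hypothetical example assuming paired experimental and predicted values of a key target variable such as hydrogen concentration; the specific SPM set and the acceptance ranges are defined within the project, not here.

```python
import numpy as np

def statistical_performance_measures(observed, predicted):
    """Compute common statistical performance measures (SPM) for paired
    observed (experimental) and predicted (simulated) values.

    Acceptance ranges for each measure are a project-level choice and
    are not encoded here.
    """
    o = np.asarray(observed, dtype=float)
    p = np.asarray(predicted, dtype=float)

    fb = 2.0 * (o.mean() - p.mean()) / (o.mean() + p.mean())   # bias (over/under-prediction)
    nmse = ((o - p) ** 2).mean() / (o.mean() * p.mean())       # scatter about the mean
    mg = np.exp(np.log(o).mean() - np.log(p).mean())           # geometric mean bias
    vg = np.exp(((np.log(o) - np.log(p)) ** 2).mean())         # geometric variance
    fac2 = np.mean((p >= 0.5 * o) & (p <= 2.0 * o))            # fraction within a factor of two

    return {"FB": fb, "NMSE": nmse, "MG": mg, "VG": vg, "FAC2": fac2}

# Hypothetical paired values, e.g. hydrogen concentration (% vol) at sensor locations.
observed = [4.1, 2.8, 1.5, 6.3, 0.9]
predicted = [3.6, 3.1, 1.2, 7.0, 1.1]
print(statistical_performance_measures(observed, predicted))
```

Note that the log-based measures (MG, VG) assume strictly positive values, which holds for concentration-like variables but not for signed quantities such as pressure differences.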
The cyclic fevers typically associated with malaria are caused by repeated cycles of Plasmodium multiplication inside host erythrocytes.During a cycle, which lasts 24–72 hr depending on the Plasmodium species, the merozoite form of the parasite invades an erythrocyte inside a vacuole where it transforms into 10–30 new merozoites that eventually egress from the host erythrocyte.Instead of multiplying, internalized merozoites can also transform into sexual stages, the gametocytes, which do not divide and circulate until they are ingested by an Anopheles mosquito.In the mosquito midgut lumen, gametocytes become activated and transform into gametes that rapidly egress from erythrocytes.After fertilization and parasite development in the mosquito, a process that takes 2–3 weeks, invasive sporozoites form and are transmitted to a new mammalian host where they transform, inside hepatocytes, into first-generation merozoites.To complete its life cycle, the parasite needs to be motile and to actively invade host cells.With the exception of flagellum-based motility used by male gametes, Plasmodium locomotes via a substrate-dependent type of motility called gliding.The ookinete stage glides in the mosquito midgut lumen and crosses its epithelium, while the sporozoite glides in the mosquito salivary system as well as in the skin and liver of the mammalian host.The parasite also needs to invade host cells.Host cell invasion is a process by which the parasite actively enters the target cell inside a parasitophorous vacuole created by the invagination of the host cell membrane.Only the merozoite and sporozoite forms invade host cells—the erythrocytes and hepatocytes, respectively.Gliding motility and host cell invasion are both active processes powered by an actomyosin motor.The motor is located in the space that separates the parasite plasma membrane and a layer of flattened vesicles called inner-membrane complex or alveoli.The motor comprises a single-headed unconventional myosin of the apicomplexan-specific XIV class, called MyoA, bound to the IMC, and dynamic filaments of actin located underneath the plasma membrane.A number of structural proteins called gliding-associated proteins appear to tether MyoA to the IMC as well as hold the PPM and the IMC together.Finally, transmembrane proteins link the submembrane motor to the extracellular environment.Their stable interaction with the matrix/host cell surface constitutes an anchor on which myosins pull to move the parasite forward.To date, the parasite transmembrane proteins that have been identified as links between the parasite motor and the extracellular milieu all belong to the thrombospondin-related anonymous protein family of proteins.These proteins are type I transmembrane proteins that share a functionally conserved cytoplasmic tail that binds actin, and an ectodomain exposing various ligand-binding modules including a thrombospondin type I repeat.They are specific to the apicomplexan phylum of protists, being expressed, among human pathogens, in Plasmodium, Toxoplasma, Babesia, and Cryptosporidium.In Plasmodium, the sporozoite stage expresses three members of the family—TRAP; TRAP-related protein, also called S6; and TRAP-like protein—which all play a role in sporozoite gliding on substrates and within tissues.The ookinete stage expresses a single member, called circumsporozoite protein and thrombospondin-related anonymous protein-related protein, which is essential for ookinete gliding motility.Merozoite TRAP is a TRAP family member that was 
reported as expressed in the merozoite, which invades erythrocytes but does not exhibit gliding motility.The mtrap gene is conserved and syntenic among Plasmodium species.In P. falciparum, mtrap could not be disrupted, in agreement with the view that MTRAP might be involved in merozoite invasion of erythrocytes.Biochemical approaches found that the P. falciparum MTRAP ectodomain bound to the GPI-linked protein semaphorin-7A on human erythrocytes.In this interaction, two MTRAP monomers were proposed to interact via their tandem TSRs with the Sema domains of a Semaphorin-7A homodimer.More recently, the MTRAP cytoplasmic tail was shown to be sufficient to polymerize actin.These data all favor a role for MTRAP during merozoite invasion of erythrocytes, possibly acting as a bridge between the motor and the erythrocyte surface.Here we address the role of MTRAP using rodent-infecting P. berghei and human-infecting P. falciparum parasites.Results indicate that MTRAP is not critical for merozoite invasion of erythrocytes but is crucial for gamete egress from the PV membrane and thus parasite transmission to mosquitoes.We first investigated the role of MTRAP using the rodent-infecting P. berghei model.mtrap knockout clones, B4 and R8, were derived from WT P. berghei ANKA by replacing the full mtrap coding sequence by two cassettes expressing resistance to pyrimethamine or the red fluorescent protein mCherry.Intravenous injection of PbMTRAPKO or WT parasites in mice resulted in identical parasite growth curves, i.e., an ∼10-fold daily increase in parasitemia during exponential multiplication.The absence of any detectable effect of mtrap deletion on blood stage parasite growth thus raised the hypothesis that MTRAP does not function at that stage.Isolated blood stages were then analyzed by immunofluorescence assays using a polyclonal antibody generated against a peptide sequence from the cytoplasmic tail of P. berghei MTRAP.In WT parasites, only a proportion of merozoites, defined by positive staining of apical membrane antigen 1, displayed a positive MTRAP signal, differently from previous findings in P. falciparum, in which all merozoites are MTRAP positive.MTRAP staining was predominantly associated with sexual stages of the parasite.Isolated P. berghei gametocytes, identified by staining male development-1/protein of early gametocyte 3 in osmiophilic bodies, exhibited a punctate MTRAP staining with asexual trophozoite stages serving as negative controls.The punctate MTRAP staining in nonactivated P. berghei gametocytes did not colocalize with MDV-1/PEG3, and after activation of the gametocytes for 10 min it became more diffuse and mostly peripheral.MTRAP was not detected in any PbMTRAPKO parasite population by immunofluorescence or by western blot.To test whether MTRAP might play a role
Surface-associated TRAP (thrombospondin-related anonymous protein) family proteins are conserved across the phylum of apicomplexan parasites.Blood stage forms of the malaria parasite Plasmodium express a TRAP family protein called merozoite-TRAP (MTRAP) that has been implicated in erythrocyte invasion.Using MTRAP-deficient mutants of the rodent-infecting P. berghei and human-infecting P. falciparum parasites, we show that MTRAP is dispensable for erythrocyte invasion.
membrane rupture.However, our PPLP2 stainings do not favor this view, since PPLP2 is secreted in the MTRAPKO.This also suggests that secretion of EM lysis effectors is not dependent on successful PVM rupture.While the dispensability of MTRAP for asexual growth in the blood excludes MTRAP as a valid target for antimalarial vaccines aimed at preventing merozoite invasion of erythrocytes, MTRAP might still retain potential as a transmission-blocking vaccine.MTRAP function is essential before gamete egress; therefore antibodies will not have access to the target before function.Nonetheless, since MTRAP remains on the surface of egressed gametes, it might serve as a target where bound antibodies might allosterically block gamete function or induce complement-mediated killing.It also remains a possibility that MTRAP might contribute not just to PVM rupture by gametes but also in subsequent steps of zygote formation.Future studies are needed to determine whether specific antibodies to MTRAP can block parasite transmission to mosquitoes.P. berghei WT ANKA strain and MTRAPKO, were maintained in 3-week-old female Wistar rats or 3-week-old female Swiss mice.Mice or rats were infected with P. berghei parasites by intraperitoneal or intravenous injections.Parasitemia was followed daily by blood smears or FACS analysis.Anopheles stephensi mosquitoes were reared at the Centre for Production and Infection of Anopheles at the Pasteur Institute.All experiments using rodents were performed in accordance with the guidelines and regulations of the Pasteur Institute and are approved by the Ethical Committee for Animal Experimentation.P. falciparum 3D7 and NF54 strains were maintained in RPMI-based media containing O+ human erythrocytes at 4% hematocrit and 0.5% AlbuMAX II or 10% A+ pooled human serum, according to established methods.To generate the targeting sequence to knockout MTRAP in P. berghei, the mtrap 5′UTR and 3′UTR were used as homology sequences flanking the hDHFR and mCherry cassettes.The MTRAP complementing plasmid was generated by cloning, in a plasmid bearing the P. berghei centromeric sequence CEN-core, the 5′UTR, and coding sequence of MTRAP, followed by a heterologous 3′UTR from trap.To generate the transfection plasmids for P. falciparum, regions of the N-terminal and C-terminal mtrap coding sequence, including part of the 5′ and 3′ UTRs, were used as homology regions, which were cloned into the pL6-eGFP CRISPR plasmid on either side of the hDHFR selection cassette.The guide DNA sequence was cloned into the same plasmid using the BtgZI-adaptor site, resulting in the completed PfMTRAPKO-pL7 plasmid.P. berghei genetic manipulation was performed as described.P. falciparum genetic manipulation was performed as described.All primers used for PCR amplification, molecular clonings, and genotyping are described in the Supplemental Information.P. berghei gametocytes and merozoites were obtained directly from infected mice blood using a Nycodenz gradient.Samples were fixed with 4% paraformaldehyde and 0.0075% glutaraldehyde, permeabilized with 0.1% Triton X-100, and blocked with BSA 3% prior to stainings.P. 
falciparum parasites were obtained from in vitro cultures of the 3D7 or NF54 strains. Synchronous production of gametocyte stages was achieved as described. Nonactivated and activated parasites in ookinete medium were spread on glass slides and fixed with ice-cold methanol. All antibodies and dilutions used for stainings are described in the Supplemental Information. For analysis of WT and MTRAPKO gametocytes, sexual stages were isolated directly from infected mouse blood with at least 0.5% gametocytemia after leucocyte removal using a Nycodenz 48% gradient at 37°C. The cells were fixed with 4% PFA and 1% glutaraldehyde immediately after isolation or after activation in ookinete medium. A detailed description of specimen treatment for EM is provided in the Supplemental Information. Conceptualization, D.Y.B. and R.M.; Methodology, D.Y.B., J.B., G.P., C.L., and R.M.; Validation, T.T. and A.C.; Investigation, D.Y.B., S.T., C.T., A.F.C., A.R., F.H., A.L., U.S., T.T., G.P., and C.L.; Resources, P.S., S.S., T. Tsuboi, C.C., and P.A.; Writing – Original Draft, D.Y.B. and R.M.; Writing – Review and Editing, D.Y.B., P.A., J.B., G.P., C.L., and R.M.; Visualization, D.Y.B.; Project Administration, D.Y.B. and R.M.
Instead, MTRAP is essential for gamete egress from erythrocytes, where it is necessary for the disruption of the gamete-containing parasitophorous vacuole membrane, and thus for parasite transmission to mosquitoes.
likely are sufficient to provide protection.In addition, the adjuvant in Circovac®, TS6 Immuneasy™, may have a critical role in protection especially if it can enhance cellular immunity.Cellular immunity was not tested in this study, but in a previous dam vaccination study, Circovac® induced a strong maternally-derived cellular immune response in the offspring of vaccinated sows .One of the VAC + CHAL pigs developed clinical PDNS during this study.To our knowledge this is the first report of PDNS in pigs vaccinated and experimentally infected with PCV2.It has been suggested that excessive PCV2 antibody titers may trigger the development of PDNS .In addition, other etiological agents such as PRRSV and torque teno virus or PCV3 have also been proposed as triggers or causative agents in PDNS.The PDNS-affected pig in this study had no detectable PCV2 antibody titer at challenge and the titer increased towards a low positive level over the following weeks.It also has been suggested that PDNS pigs may have a misdirected, excessive immune response towards a decoy epitope called CP which is located in ORF2 of PCV2 .Tests to detect antibodies against CP were not available.While PDNS in the past often occurred a few weeks following outbreaks of systemic PCVAD, it appears that PDNS became rare after large-scale introduction of PCV2 vaccination.This suggests that PCV2 vaccination prevents development of PDNS.The reasons why the PCV2 vaccinated pig in this study developed PDNS remain unknown but could include an elevated anti-PCV2 IgM response or failure to appropriately vaccinate the pig.Alternatively, the gastric ulceration could perhaps have acted as a predisposing factor for a septic event.While PRRSV was not present in the pigs, another un-recognized co-infecting agent could have contributed to the development of PDNS.PCV2 is ubiquitous, very difficult to remove from a farm and easily transmissible to naïve pigs .In this study, PCV2 in feces collected at dpc 21 was transmissible to naïve pigs from non-vaccinated pigs but not from vaccinated pigs indicating that vaccination reduces PCV2 transmission.This is important considering that routine cleaning procedures on a farm prior to getting a new batch of pigs may not always be sufficient to remove PCV2 and reduction of virus loads by vaccination could assist in preventing transmission of PCV2.Under the conditions of this study, PCV2a vaccination reduced PCV2d viremia, PCV2d tissue loads and PCV2d shedding via nasal and fecal routes.In addition, PCV2a-vaccinated and PCV2d-challenged pigs did not transmit PCV2d to naïve contact pigs whereas non-vaccinated PCV2d infected pigs did.PCV2a vaccination was effective against PCV2d challenge.This study was funded by Merial, although the funder had no influence on the experimental design of the study.Additional funding was provided by the Biotechnology and Biological Sciences Research Council Institute Strategic Programme Grant awarded to the Roslin Institute.The authors declare no financial and personal relationships with other people or organizations that could inappropriately influence this work.
Porcine circovirus type 2 (PCV2) vaccination has been effective in protecting pigs from clinical disease and today is used extensively.Recent studies in vaccinated populations indicate a major PCV2 genotype shift from the predominant PCV2 genotype 2b towards 2d.The aims of this study were to determine the ability of the commercial inactivated PCV2a vaccine Circovac® to protect pigs against experimental challenge with a 2013 PCV2d strain and prevent transmission.Thirty-eight pigs were randomly divided into four groups with 9–10 pigs per group: NEG (sham-vaccinated, sham-challenged), VAC (PCV2a-vaccinated, sham-challenged), VAC + CHAL (PCV2a-vaccinated and PCV2d-challenged), and CHAL (sham-vaccinated, PCV2d-challenged).The CHAL and VAC + CHAL groups were challenged with PCV2d at 7 weeks of age and all pigs were necropsied 21 days post-challenge (dpc).The VAC-CHAL pigs seroconverted to PCV2 by 21 days post vaccination (dpv).NEG pigs remained seronegative for the duration of the study.Vaccination significantly reduced PCV2d viremia (VAC + CHAL) at dpc 14 and 21, PCV2d fecal shedding at dpc 14 and 21 and PCV2d nasal shedding at dpc 7, 14 and 21 compared to CHAL pigs.Vaccination significantly reduced mean PCV2 antigen load in lymph nodes in VAC + CHAL pigs compared to CHAL pigs.When pooled serum or feces collected from VAC + CHAL and CHAL pigs at dpc 21 were used to expose single-housed PCV2 naïve pigs, a pooled fecal sample from CHAL pigs contained infectious PCV2 whereas this was not the case for VAC + CHAL pigs suggesting reduction of PCV2d transmission by vaccination.Under the study conditions, the PCV2a-based vaccine was effective in reducing PCV2d viremia, tissue loads, shedding and transmission indicating that PCV2a vaccination should be effective in PCV2d-infected herds.
run at minimum cost and be able to generate a quick view of sustainability performance in the early phases of product development, which is essential in a normal automotive product-development environment. This paper provides helpful clues for researchers interested in exploring full cost accounting by reviewing, analysing and synthesising the broad range of relevant sources from diverse fields in this topic area. A comprehensive literature review of 4381 papers related to FCA methods was undertaken. It used a systematic approach to extract ten important FCA methods, and these were: the SAM, FFF's sustainability accounting, monetised LCA, SV concept, E P&LA, extended LCC, CWRT, Ontario Hydro, ExternE and USEPA's method. Based on a careful examination and critical analysis of each approach and existing automotive sustainability measures, the SAM developed by British Petroleum and Aberdeen University has been proposed as a well-developed and potentially practical tool for application in an automotive setting. The SAM can be used by both academics and practitioners to translate a range of conflicting sustainability information into a monetary unit score. This is an effective way of communicating trade-offs and outcomes for complex and multi-disciplinary sustainability decisions in the automotive sector. It measures a broad range of economic, environmental, resource and social effects, a capability currently lacking within the automotive industry. Its other strengths are the ability to provide monetary metrics together with physical metrics for sustainability assessment, its flexibility and the ability to combine multiple sustainability dimensions. The original SAM was developed for the oil and gas industry; therefore, applying this method in the automotive context will require the development of a new set of assessment criteria. Neither Volvo's EPS nor Ford's PSI method offers complete coverage of the sustainability metrics. Consequently, future research should focus on developing a framework for the automotive SAM that will contain comprehensive and complete coverage of impact categories for the sustainability assessment of an automobile. Assessment criteria mentioned in this paper are only proposals and cannot be considered complete and exhaustive. The people who developed the original SAM suggest that sustainability metrics should be developed with the assistance of experts. Hence, specialists in the automotive industry should be consulted to refine and select sustainability assessment criteria, which can be used as a framework for the construction of the automotive SAM.
Full cost accounting has been applied in many industrial settings that include the oil and gas, energy, chemical and waste management industries.Presently, it is not known how it can be applied in an automotive industry context.Therefore, the objective of this paper is to review existing full cost accounting methods and identify an appropriate approach for the automotive sector.This literature review of 4381 papers extracted ten full cost accounting methods with a diverse level of development and consistency in application.Based on a careful examination and critical analysis of each approach and existing automotive sustainability measures, the Sustainability Assessment Model developed by British Petroleum and Aberdeen University has been proposed as a well-developed and potentially practical tool for automotive applications.The Sustainability Assessment Model can be used by both academics and practitioners to translate a range of conflicting sustainability information into a monetary unit score.This is an effective way of communicating trade-offs and outcomes for complex and multi-disciplinary sustainable decisions in the automotive sector.It measures a broad range of economic, environmental, resource and social effects (internal and external), which is currently lacking in existing automotive systems.Its other strengths are the ability to provide both monetary and physical metrics for sustainability assessment, its flexibility and the ability to combine multiple sustainability dimensions.Furthermore, this paper provides helpful clues for researchers interested in exploring full cost accounting in the future by reviewing, analysing and synthesising the broad range of relevant sources from diverse fields in this topic area.
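To illustrate the kind of translation the SAM performs, the sketch below aggregates hypothetical economic, environmental, resource and social impacts of a product into a single monetary score by multiplying each physical quantity by a monetization factor. The categories, quantities and unit values are invented placeholders; an actual automotive SAM would require assessment criteria and monetization factors developed with industry experts, as argued above.

```python
# Minimal sketch of a full-cost-accounting style monetization, assuming
# hypothetical impact categories and unit values (currency per physical unit).
# A real Sustainability Assessment Model would derive these with domain experts.

impacts = {
    # category: (quantity, physical unit, monetization factor in EUR per unit)
    "CO2 emissions":         (1200.0, "kg CO2-eq", 0.05),
    "Energy use":            (4500.0, "kWh",       0.02),
    "Water consumption":     (300.0,  "m3",        1.50),
    "Occupational injuries": (0.002,  "cases",     50000.0),
}

def monetized_score(impacts):
    """Sum monetized impacts into a single monetary unit score (EUR)."""
    total = 0.0
    for category, (quantity, unit, unit_cost) in impacts.items():
        cost = quantity * unit_cost
        print(f"{category:22s} {quantity:10.3f} {unit:10s} -> {cost:10.2f} EUR")
        total += cost
    return total

print(f"Total monetized impact: {monetized_score(impacts):.2f} EUR")
```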
Cardiac remodeling refers to changes in the structure and morphology of the heart, such as changes in myocardial cell size, myocardial fibrosis, and increased collagen deposition. It is an adaptive reaction process for the repair of the body's lesions and the overall compensation of the ventricles. The pathological changes are mainly manifested as left ventricular myocardial hypertrophy, and the pathological changes of cardiac hypertrophy caused by increased cardiac pressure load are mainly myocardial interstitial fibrosis with collagen deposition in the myocardial wall. Studies have found that cardiac remodeling is a reversible pathophysiological process if the stimulation of these factors can be removed. Therefore, reducing the incidence of heart failure and mortality by reversing cardiac remodeling has become the key to the clinical treatment of heart failure. The development of new anti-cardiovascular remodeling drugs is one of the hotspots in the field of cardiovascular research. In pressure-overload cardiac remodeling, some drugs with clear anti-cardiac remodeling effects were found to act through nitric oxide synthase. Studies have shown that an appropriate amount of NO produced by endothelial nitric oxide synthase (eNOS) can dilate blood vessels, enhance myocardial relaxation and contraction, and reduce myocardial damage. The regulation of eNOS activity in the eNOS-NO pathway is more pronounced in hypertension and cardiovascular remodeling, and phosphorylation/dephosphorylation is one of the main ways to regulate eNOS activity after translation. In addition, phosphoinositide 3-kinase (PI3K) and the serine/threonine protein kinase Akt are important upstream regulators of the eNOS-NO signaling pathway and play an important role in regulating cardiomyocyte growth, cardiac hypertrophy, and heart failure. When PI3K is activated by an extracellular signaling molecule, the activated PI3K produces PIP2 by phosphorylation of PI and further phosphorylates it to produce PIP3. PIP3 binds to the PH domain of Akt, locates it on the cell membrane, and the Akt conformation changes by phosphorylation. Akt regulates downstream signaling pathways by phosphorylation/dephosphorylation, and Akt-mediated signaling pathways are dysregulated under pathological conditions, such as myocardial fibrosis, leading to the development and progression of cardiac hypertrophy. Activated Akt acts on its downstream substrates, promoting cell proliferation, inhibiting apoptosis, and regulating the cell cycle. Millettia pulchra Kurz var-laxior Z.
Wei, a wild-growing plant of the family Fabaceae, is known to possess multifarious medicinal properties. 17-Methoxyl-7-hydroxy-benzene-furanchalcone is a flavonoid monomer extracted from its root, which has been used in traditional Chinese medicine, with a long history as a remedy for hypertension and cardiovascular remodeling. In our previous study, we demonstrated that the cardioprotective effect of MHBFC on myocardial ischemia/reperfusion injury was associated with inhibition of apoptosis and excessive autophagy, and that these protective processes required the activation of the PI3K/Akt pathway. In addition, MHBFC could reverse cardiac remodeling in rats. The molecular mechanism was related to its regulation of the endothelial nitric oxide synthase/nitric oxide signaling pathway. However, the mechanism by which MHBFC activates the eNOS-NO pathway is unknown. Therefore, in this study, the phosphorylation level of eNOS and the expression of phosphorylated PI3K (p-PI3K) and phosphorylated Akt (p-Akt) proteins upstream of the eNOS-NO signaling pathway were measured to answer this question. MHBFC was isolated from Millettia pulchra Kurz var. laxior Z. Wei, characterized by ultraviolet, infrared, electrospray ionization mass spectrometry, nuclear magnetic resonance, and single-crystal X-ray diffraction, and diluted with 0.5% DMSO-DMEM at the appropriate concentrations as needed. Nω-nitro-L-arginine methyl ester (L-NAME) was purchased from Sigma-Aldrich. The NO detection kit was purchased from Nanjing Jiancheng Bioengineering Institute. The eNOS ELISA detection kit was purchased from Shanghai Yuanye Biological Technology Co., Ltd. The In Situ Cell Death Detection Kit was purchased from Roche Diagnostics. Antibodies recognizing Akt, p-Akt, eNOS, and p-eNOS were purchased from Cell Signaling Technology, Inc. PI3K and p-PI3Kp85 antibodies were purchased from OriGene Technologies, Inc. Sixty male Sprague–Dawley rats weighing 130–160 g were purchased from Guangxi Medical University Laboratory Animal Center. The feeding environment was well-ventilated, the room temperature was 18–25 ℃, the relative humidity was 40–70%, and the daily illumination was 12 h. The rats had free access to standard feed and water. The study was approved by the Ethics Committee for the Experimental Use of Animals at Guangxi Medical University and was performed in accordance with the Guide for the Care and Use of Laboratory Animals. The rats were divided into 6 groups, and all rats were given the corresponding drugs from the fourth day after surgery for 6 weeks: Sham group, in which the rats were subjected to a similar procedure without abdominal aortic banding (AAB) and were intragastrically administered distilled water; Model group, in which the rats underwent pressure overload induced by AAB above the renal arteries and were intragastrically administered distilled water; MHBFC 6 group, in which the rats were intragastrically administered MHBFC after undergoing AAB; MHBFC 12 group, in which the rats were intragastrically administered MHBFC after undergoing AAB; L-NAME group, in which the rats were intragastrically administered L-NAME after undergoing AAB. MHBFC and L-NAME were diluted with distilled water to the appropriate concentrations.
The sixth group was the MHBFC 12 + L-NAME group, in which the rats were intragastrically administered MHBFC plus L-NAME after undergoing AAB. The rats were anesthetized with 2% pentobarbital sodium by intraperitoneal injection. Then, a pressure-overload model was implemented in rats as described previously. Under sterile conditions, the rats were confined to the right lateral decubitus position, subjected to a longitudinal incision at the lower left rib arch, and the abdominal aorta was exposed above the kidneys. A small round rod was placed parallel to the abdominal aorta in this segment and ligated together with it using a 4-0 thread. Then, the small rod was pulled out quickly, and the abdominal aorta was narrowed to an external diameter of 0.7 mm. A similar procedure was performed
Millettia pulchra Kurz var-laxior (Dunn) Z. Wei, a wild-growing plant of the family Fabaceae is known to possess multifarious medicinal properties.17-Methoxyl-7-hydroxy-benzene-furanchalcone (MHBFC) is a flavonoid monomer extracted from its root, which has been used in traditional Chinese medicine, with a long history as a remedy of hypertension and cardiovascular remodeling.
.Similarly, finding a signaling pathway that blocks MMVEC apoptosis and protects cell function is important for reversing the regulation of cardiac remodeling.Morphological observation by electron microscopy is the most classical and reliable method to determine apoptosis and is regarded as the gold standard for determining apoptosis.In this study, electron microscopy showed that MHBFC could play a role in antagonizing apoptosis.After the eNOS selective blocking agent L-NAME was used alone, the organelles of MMVECs dissolved and formed apoptotic bodies, and the apoptosis rate increased.However, the above symptoms were improved, and the apoptotic rate decreased in the MHBFC + L-NAME group, indicating that apoptosis of MMVECs was closely related to the function of NOS.The apoptosis of MMVECs is inversely proportional to the content of NO and the activity of eNOS.It may be concluded that MHBFC is involved in the fight against apoptosis and protects MMVEC by activating the eNOS-NO signaling pathway.In summary, the results of this study imply that MHBFC can increase eNOS protein phosphorylation by increasing PI3K and Akt protein phosphorylation, and activate the eNOS-NO signaling pathway, increasing eNOS enzyme activity, catalyzing the generation of protective NO, and counteracting MMVEC apoptosis, thereby protecting against myocardial damage and reversing cardiac remodeling.
The present study was conducted to further investigate the regulatory mechanisms of MHBFC based on the endothelial nitric oxide synthase-nitric oxide (eNOS-NO) signaling pathway.The abdominal aorta of the male Sprague–Dawley rats was narrowed to induce cardiac remodeling, and the rats were given corresponding drugs for 6 weeks after operation.At the end of the experiment, the relevant indexes were detected.The results showed that Nω-nitro-L-arginine methyl ester (L-NAME) could increase the myocardial cell cross-section area, myocardial fibrosis, and the cardiac collagen volume fraction.The serum NO and eNOS levels and the expression of p-eNOS, p-PI3K and p-Akt protein were decreased, and myocardial microvascular endothelial cell (MMVEC) apoptosis increased.However, the above changes were reversed after treatment with MHBFC.These results indicated that MHBFC could increase eNOS protein phosphorylation by increasing PI3K and Akt protein phosphorylation, and activated the eNOS-NO signaling pathway, increased eNOS enzyme activity, catalyzed the generation of protective NO, and counteracted MMVEC apoptosis induced by cardiac remodeling, thereby protecting against myocardial damage and reversing cardiac remodeling.
unaffected by the NMDA receptor antagonist, MK-801.Thus, in PD, the loss of dopamine input to the striatum produces a sensitization that allows D1Rs to stimulate JNK/cJun signaling independently of NMDA receptor transmission.The D1R-driven stimulation of JNK occurs specifically in dSPN.Activation of these neurons suppresses the inhibitory control exerted by the basal ganglia on thalamo-cortical neurons, thus promoting motor activity.In dSPN, dopamine via D1Rs activates postsynaptic signaling cascades that increase synaptic strength.This postsynaptic potentiation counteracts the presynaptic inhibition of glutamate release by adenosine A1 receptors, thereby creating a dynamic balance between LTD and LTP mechanisms.The present results show that, in PD mice, JNK activation contributes to the D1R-mediated effects at corticostriatal synapses on dSPN, pointing to a role for JNK signaling in the regulation of striatal synaptic plasticity.Notably, this regulation is absent in naïve mice and occurs only in a PD model, characterized by the loss of dopaminergic input to the striatum.Dopamine depletion has been linked to the development of sensitization at D1R.The present study indicates that this pathological condition acts as a functional switch by conferring on JNK abnormal signaling properties, potentially linked to the actions of anti-parkinsonian medications.D1R sensitization and loss of synaptic downscaling are regarded as causal factors in the development of aberrant motor responses produced by L-DOPA, such as dyskinesia.Therefore, the attenuation of D1R transmission produced by JNK inhibitors and their permissive action on LTD may help to control this serious complication.In line with this possibility, it has been shown that overexpression in dSPN of ΔcJun, a truncated form of cJun which lacks transcriptional activity, attenuates L-DOPA-induced dyskinesia.Recent studies show that administration of L-DOPA increases spine volume and postsynaptic density length, in the dSPN of dyskinetic mice.Interestingly, D1R-mediated activation of JNK may participate in this phenomenon, since JNK has been shown to increase the synaptic content of the PSD-95.In conclusion, this study supports the involvement of the JNK signaling pathway in dopamine transmission.In particular, we show that depletion of dopamine, a pathological hallmark of PD, confers on dopaminergic drugs the ability to activate JNK.This effect occurs in a well-defined population of SPNs and is causally linked to synaptic responses produced by D1R activation."These findings shed new light on the mechanisms by which anti-Parkinson's drugs modify basal ganglia transmission and provide critical information for the development of novel approaches for the treatment of PD.Mice with a unilateral injection of 6-OHDA in the MFB were killed by decapitation and the brains rapidly removed.Coronal slices were prepared using a vibratome and punches corresponding to the dorsal striatum were dissected out from the control, or the 6-OHDA-lesion side.Two punches were placed in individual 5-ml polypropylene tubes containing 2 ml of Krebs-Ring bicarbonate buffer containing 118 mM NaCl, 4.7 mM KCl, 1.3 mM CaCl2, 1.5 mM MgSO4, 1.2 mM KH2PO4, 25 mM NaHCO3 and 11.7 mM glucose, aerated with 95% O2 and 5% CO2.The samples were equilibrated at 30 °C for 30 min, followed by incubation with either SP600125 or JNK-IN-8 for 15 and 45 min, respectively.SKF38393 was added for 5 min with or without JNK inhibitors.After incubation, the tissue punches were removed and sonicated in 
1% SDS for Western blotting analysis.C57BL/6J mice with a unilateral 6-OHDA lesion were injected with either saline or L-DOPA and changes in the phosphorylation of JNK and cJun were determined after 15, 30, 60, or 120 min.Fig. 1A shows the loss of dopaminergic innervation produced by 6-OHDA, assessed as lack of TH-immunoreactivity in the DLS ipsilateral to the lesion.The 6-OHDA-lesion per se did not produce any modification in the state of phosphorylation of JNK and cJun.Administration of L-DOPA increased the number of cells immunoreactive for phosphorylated JNK and cJun, in the DLS ipsilateral to the 6-OHDA-lesion.The effects of L-DOPA on both P-JNK and P-cJun peaked 15 min after injection and returned to near baseline after 120 min.In contrast, L-DOPA had no effect in the intact DLS, contralateral to the 6-OHDA-lesion.
In a mouse model of Parkinson's disease (PD), we show that the pharmacological activation of dopamine D1 receptors (D1R) produces a large increase in JNK phosphorylation.This effect is secondary to dopamine depletion, and is restricted to the striatal projection neurons that innervate directly the output structures of the basal ganglia (dSPN).Electrophysiological experiments on acute brain slices from PD mice show that inhibition of JNK signaling in dSPN prevents the increase in synaptic strength caused by activation of D1Rs.Together, our findings show that dopamine depletion confers to JNK the ability to mediate dopamine transmission, informing the future development of therapies for PD.
spend a lot of time and energy uncovering, understanding and keeping up-to-date with massive amounts of contingent details. The downside, which we hope to mitigate, is that useful congruencies between cases – a precondition for the ability to compare and learn – risk drowning in these details along with insufficiently elucidated differences in terminology and types of goals. Theoretically, the SOS mapping suggests commonalities among meta-level generating processes of ontological categories: if self-organization is the causal origin of complexity and assembly the causal origin of complicatedness, then innovation would be the origin of wickedness. This points toward a possible unifying theme among many emerging approaches to sustainability. This theme is their recognition of the vanity of trying to predict, control or plan away wickedness, and their shift of focus to embracing and harnessing these troublesome qualities of wickedness instead. This also means a shift towards seeing humans increasingly as fallible agents and knowers – the future becomes a historical process where problems, and the tools at our disposal for tackling them, are constantly changing as part of wider societal innovation dynamics. Innovation is essentially unpredictable and cannot be understood in the same way as we may understand systems where the rules of the game remain fixed, such as in the design of a technological artifact. Realistically, we may however hope to understand innovation and wickedness on a meta-level, similarly to how evolution is understood. For example, even if we cannot know what the consequences of our actions will be, we may understand what types of consequences may arise, and we may use this knowledge to build mechanisms for detecting, learning about, and handling them specifically as they arise. We see establishing a theoretical connection between innovation and wickedness as a promising future direction of research. In closing, we propose six rough mappings of typical generating processes, governance approaches, directionalities of design and governance, and types of organization into the SOS diagram. These mappings are based on the preceding analysis in this paper, and they can all bear elaboration and debate. Indeed, they are there as much to stimulate thinking about how they can be revised and refined as to communicate conclusions about “how things work”.
Traditional scientific policy approaches and tools are increasingly seen as inadequate, or even counter-productive, for many purposes. In response to these shortcomings, a new wave of approaches has emerged based on the idea that societal systems are irreducibly complex. The new categories that are thereby introduced – like “complex” or “wicked” – suffer, however, from a lack of shared understanding. We here aim to reduce this confusion by developing a meta-ontological map of types of systems that have the potential to “overwhelm us”: characteristic types of problems, attributions of function, manners of design and governance, and generating and maintaining processes and phenomena. This permits us, in a new way, to outline an inner anatomy of the motley collection of system types that we tend to call “complex”. Wicked problems here emerge as the product of an ontologically distinct and describable type of system that blends dynamical and organizational complexity. The framework is intended to provide systematic meta-theoretical support for approaching complexity and wickedness in policy and design. We also point to a potential causal connection between innovation and wickedness as a basis for further theoretical improvement.
The horror of airborne infectious diseases subsided substantially in the 20th century in developed nations, largely due to implementation of hygiene practices and the development of countermeasures such as vaccination and antimicrobials. The recent emergence of zoonotic pathogens such as avian influenza A viruses and coronaviruses (SARS CoV and MERS CoV) raises the specter of future pandemics with unprecedented health and economic impacts if these pathogens gain the ability to spread efficiently between humans via the airborne route. While cross-species barriers have helped avoid a human pandemic with highly pathogenic avian influenza A viruses, a limited number of mutations in circulating avian H5N1 viruses would be needed for the acquisition of airborne transmissibility in mammals. A global pandemic by SARS CoV was averted largely by fast identification, rapid surveillance and effective quarantine practices. However, not all emerging pathogens can be contained, due to a delay in initial detection, an inability to properly assess pandemic risk, or an inability to contain an outbreak at the point of origin. Before 2009, widely circulating H1N1 swine viruses were largely thought to pose little pandemic risk but, despite early attempts to limit spread, pH1N1 caused the first influenza virus pandemic of the 21st century. Implementation of suitable countermeasures is hampered by our limited capability to anticipate the sequence of events following the initial detection of a novel microorganism in an animal or human host. In the immediate future, the occurrence of, the detection of, and the awareness of novel epidemic agents will likely increase qualitatively and quantitatively. Updating of emergency preparedness plans in an evidence-guided process requires an interdisciplinary concept of research and public health efforts taking into account the multifactorial nature of the problem to aid policy formulation. Here we build on a conceptual framework for the classification of drivers of human exposure to animal pathogens and suggest a framework of drivers determining the efficiency of human-to-human transmission involving the airspace. The airborne transmission of pathogens occurs through ‘aerosol’ and ‘droplet’ means. In a strict sense, airborne transmission refers to aerosols that can spread over distances greater than 1 m, while droplet transmission is defined as the transfer of large-particle droplets over a shorter distance. Here, we consider airborne transmission of infectious agents in a broader sense as any transmission through the air, which consists of four steps: firstly, the pathogen is associated with either liquid droplets/aerosols or dust particles when traveling directly from donor to recipient, but may also be deposited on a surface and re-emerge into the air later; secondly, the pathogen is deposited in the recipient, usually by inhalation, resulting in infection of the respiratory tract; thirdly, the pathogen is amplified, either in the respiratory tract or in peripheral tissues; and finally, the pathogen emerges at the site of shedding in sufficient loads and capable of expulsion. In the process of transmission, the recipient becomes a donor when microbial replication and subsequent pathophysiological events in the host result in release of the pathogen. Airborne transmission of microbes can follow different aerodynamic principles, and some microorganisms are suspected or proven to spread by more than one route. Moreover, the mode of transmission and anisotropic delivery of a pathogen into the
recipient contributes to disease severity. There are no substantive differences in droplet-size distribution between expulsive methods like sneezing, cough with mouth closed, cough with mouth open, and speaking loudly one hundred words; however, the number of respiratory droplets that likely contain pathogens can differ. After expulsion, successful transmission requires that the pathogen remains infectious throughout airborne movement, with or without an intervening deposition event. Drivers influencing the success of such a process are those that define the chemico-physical properties of both the air mass and the vehicle or carrier, including temperature, ultraviolet radiation, relative and absolute humidity, and air ventilation or air movement. Their interplay ultimately determines pathogen movement and stability. Pathogen survival is also influenced by pathogen structure; for example, enveloped viruses are less stable outside the host than non-enveloped viruses. Among Chlamydia pneumoniae, Ch. trachomatis LGV2, Streptococcus pneumoniae, S. faecalis, Klebsiella pneumoniae, and cytomegalovirus, the survival of Ch. pneumoniae in aerosols was superior. Variation in RH might influence not only the environmental stability of the pathogen but also the droplet size, which in turn defines the deposition rate. Eighty percent of droplets emitted from a cough deposit within 10 min, and the highest deposition rates for all droplet-nuclei sizes occur within a 1 m horizontal distance. Pathogens like influenza virus can persist in the environment for hours to days and have been found on surfaces in healthcare settings. UV radiation is the major inactivating factor for influenza viruses in the outdoor environment. Pathogen-containing large particles deposit predominantly in the upper airway, medium-sized particles mainly in central and small airways, and small particles predominantly in the alveolar region of the lungs. In general, airborne pathogens tend to have a relatively low infectious dose 50% (ID50) value. At any specific site of deposition within a host, the ID50 of a pathogen is determined by factors such as local immune responses and the cellular and tissue tropism defined by the distribution of receptors and/or adherence factors, tissue temperature, pH, polymerase activity of the pathogen, and activating proteases. Co-infections may alter immune responses and factors that govern tropism. Pathogens amplify either at the site of initial deposition in the respiratory tract or in peripheral tissues. For influenza virus or human respiratory syncytial virus, this is the site of initial entry, whilst other pathogens have either distinct secondary amplification sites or replicate both locally and systemically, for example, Measles virus,
Airborne pathogens — either transmitted via aerosol or droplets — include a wide variety of highly infectious and dangerous microbes such as variola virus, measles virus, influenza A viruses, Mycobacterium tuberculosis, Streptococcus pneumoniae, and Bordetella pertussis.Emerging zoonotic pathogens, for example, MERS coronavirus, avian influenza viruses, Coxiella, and Francisella, would have pandemic potential were they to acquire efficient human-to-human transmissibility.In particular, we propose a framework of drivers facilitating human-to-human transmission with the airspace between individuals as an intermediate stage.
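The deposition behaviour discussed above is governed largely by droplet size; as a rough quantitative illustration, the sketch below uses Stokes' law to estimate the still-air settling velocity of respiratory droplets of different diameters and the time they would take to fall from an assumed emission height. This is a simplified calculation with assumed parameter values; evaporation, humidity and air currents, all of which matter in practice, are neglected.

```python
# Rough Stokes-law estimate of settling velocity for respiratory droplets
# in still air. Evaporation, humidity and air currents are neglected, so
# this is an order-of-magnitude illustration only.

G = 9.81               # gravitational acceleration, m/s^2
RHO_DROPLET = 1000.0   # droplet density (approx. water), kg/m^3
RHO_AIR = 1.2          # air density, kg/m^3
MU_AIR = 1.8e-5        # dynamic viscosity of air, Pa*s
EMISSION_HEIGHT = 1.5  # assumed release height, m

def settling_velocity(diameter_m):
    """Terminal settling velocity of a small sphere in air (Stokes' law)."""
    return (RHO_DROPLET - RHO_AIR) * G * diameter_m ** 2 / (18.0 * MU_AIR)

for d_um in (1, 5, 10, 50, 100):
    d = d_um * 1e-6
    v = settling_velocity(d)
    print(f"{d_um:3d} um droplet: v = {v:.2e} m/s, "
          f"time to fall {EMISSION_HEIGHT} m ~ {EMISSION_HEIGHT / v:.0f} s")
```

For a 10 um droplet this gives roughly 10 minutes to settle 1.5 m, broadly consistent with the deposition times quoted above, while micron-sized droplet nuclei remain airborne far longer; note that Stokes' law is only strictly valid for small particle Reynolds numbers.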
for example, in terms of health and hygiene standard and age distribution.Examples are tuberculosis in the working class in the age of industrialization and the Spanish flu during World War I.More distal factors are relevant at the country level including host population genetics, demography, public health strategies for treatment or vaccination or access to medical care, which are likewise, to some extent, governed by socio-economic drivers.Air pollution, land use, urbanization and socio-economic changes are important drivers of emergence of airborne infections at the supranational level.Although human population densities have continued to rise and reach unprecedented levels, airborne diseases of public concern in developed countries in the second half of the 20th century have typically comprised relatively self-limiting or preventable diseases like the common cold, seasonal flu and MeV.Continuous developments like agglomeration of settlement areas in developing countries, along with urbanization and rural depopulation, and exponentially increasing human movement in numbers and distances on a global scale, however, may outpace contemporary achievements in disease prevention.Climatic changes also occur at a global level, which could have serious impact on infectious diseases in humans and animals .Extreme weather conditions alter seasonal patterns of emergence and expansion of diseases even though direct proof for the influence of climate change on regional, national, supranational or global level on the emergence of new or frequency of established infections is difficult to obtain.To mirror the complexity of the problem we suggest a concise and weighted framework of drivers taking into consideration different levels, from cell and tissue through to global scale but also the multitude of influences between these levels.While some drivers at an outer level may only imprint on one driver in the level below, other drivers can impact several lower level drivers and in sum may be equally important to drivers considered to act from an outer level.Classification of drivers as acting at a more proximal or more distal level also is not exclusive and inverted imprinting may occur under several circumstances as outlined above.Despite decades of research on ‘airborne transmission factors’, surprisingly few quantitative data is available for factors impacting the majority of infectious diseases that transmit via this route.Furthermore, for some microorganisms, for example, for coronaviruses, epidemiological or experimental evidence that transmission of the pathogens via the airborne route is successful or even contributes importantly to epidemic or pandemic spread of the agent remains weak.A large body of work is focused on influenza viruses and the indoor environment, likely because perturbations of the indoor pathogen transmission ecosystem are easier to generate and quantify."Published studies assessing viable pathogen counts directly from subject's respiratory maneuvers are restricted to a few respiratory pathogens.Evaluation of factors underlying the highly variable levels of pathogen shedding, for example, by long-term examination of individuals to determine how pathogen load changes during infection of the respiratory tract with different viruses and bacterial species are urgently needed.Current technical developments may open novel experimental opportunities .When designing such studies, consideration of zoonotic and human-specific pathogens as well as delineating strategies employed by both 
viruses and bacterial pathogens will help to identify commonalities in the strategies followed by successful airborne pathogens. In particular, investigation in suitable animal models of organisms assumed to have no capacity for human-to-human transmission via the airborne route will help explain why some pathogens, despite having a very low infectious dose, are likely not directly transmitted from infected persons. Examples would be the human-to-human transmission of Yersinia pestis versus Francisella tularensis, or the assessment of why Legionella pneumophila transmission can occur over long distances from artificial sources, but usually not inter-personally. Beyond representing an ever-increasing ethical and economic burden, the recent more frequent occurrence of zoonotic pathogens in the human population also offers a unique opportunity to further our knowledge of the prerequisites for airborne pandemic spread. Genomics-based methods have already allowed a significant advance in our understanding of the evolution and spread of bacterial pathogens. In the developed world, whole genome sequencing is being established for routine use in clinical microbiology, both for tracking transmission and spread of pathogens and for prediction of drug-resistance profiles, allowing rapid outbreak detection and analysis in almost real time, as evolution occurs in the wild. Except for influenza A viruses, genetic correlates of the ability of a zoonotic pathogen to efficiently overcome the interspecies barrier and allow rapid spread within the human population are poorly defined. Any emergence of novel molecular patterns in microorganisms results from an evolutionary process driven by factors not encoded in genomes and determined by the frequencies of genome alterations occurring under natural conditions. The broad introduction of 'omics' technologies, advances in global data exchange capabilities and the advent of informatic tools allowing processing of large data collections have put at our disposal the technical capacity to integrate phenotypic data from clinical, epidemiological and experimental studies in vitro and in vivo, with relevant target species like livestock in particular, and allow genome-wide association studies. Deploying the categorization of drivers and the relative level of their impact as suggested herein will allow for a weighting of the different drivers in the specific framework for any particular pathogen, and help to predict the pandemic potential of airborne pathogens.
Here, we synthesize insights from microbiological, medical, social, and economic sciences to provide known mechanisms of aerosolized transmissibility and identify knowledge gaps that limit emergency preparedness plans.The model is expected to enhance identification and risk assessment of novel pathogens.
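The weighting of drivers across levels described above could, in principle, be operationalized as a simple scoring exercise. The sketch below is purely illustrative: the driver names, levels, scores and weights are hypothetical placeholders, not values proposed by the framework, and any real assessment would require expert elicitation and pathogen-specific evidence.

```python
# Illustrative weighted scoring of transmission drivers across levels
# (cell/tissue, host, population, supranational). Driver names, scores
# (0-1) and weights are hypothetical placeholders for demonstration only.

drivers = [
    # (driver, level, score for a given pathogen, relative weight)
    ("receptor tropism in upper airway", "cell/tissue",   0.8, 3.0),
    ("pathogen load at shedding site",   "host",          0.6, 2.5),
    ("environmental stability",          "population",    0.4, 2.0),
    ("population density / movement",    "supranational", 0.7, 1.5),
]

def weighted_driver_score(drivers):
    """Return a normalized 0-1 composite score from weighted driver scores."""
    total_weight = sum(w for *_, w in drivers)
    return sum(score * w for _, _, score, w in drivers) / total_weight

print(f"Composite transmissibility score: {weighted_driver_score(drivers):.2f}")
```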
There is a growing realization that adipocytes, once believed to act merely as local reservoirs of energy and to provide mechanical and thermal insulation, also have numerous other roles in various tissues in health and disease.These range from systemic metabolic and immune regulation through to key functions in tissue development and cancer progression.In the context of skin, there is a clear link between initial seeding of adipocyte precursors and subsequent dermal differentiation and hair follicle growth.However, rather little is known about the potential function of adipocytes in tissue repair.After skin wounding myofibroblasts have been shown to transdifferentiate into adipocytes.Furthermore, adipocyte precursor cells are known to differentiate into mature adipocytes and these appear to contribute to repair because blocking their differentiation leads to defects in fibroblast migration and matrix deposition.Other known functions of adipocytes include antimicrobial activities, since Staphylococcus aureus infection of otherwise healthy skin leads to rapid proliferation of dermal adipocytes, and impaired adipogenesis results in increased skin infections.The Drosophila fat body is considered to be equivalent to both the vertebrate adipocytes and liver, and is known to play many diverse systemic roles throughout all insect life stages.It regulates metabolism by actively sensing nutritional conditions and accordingly storing or releasing energy in the form of lipids, glycogen, and protein.Importantly, fat storage in intracellular lipid droplets, and the mechanisms and key components responsible for stored-fat mobilization in the Drosophila fat body and mammalian adipocytes, appear to be evolutionarily conserved.In addition to storing energy, the fat body also plays a central role in regulating systemic growth in response to nutrition.Upon sensing dietary amino acids, the fat body secretes several humoral factors, which control systemic growth of the animal.This is achieved, in part, by the regulated secretion of insulin-like peptides by the insulin-producing cells of the brain.Furthermore, the fat body is known also to play a crucial role in systemic immunity.Bacterial and fungal infections activate the Toll and IMD pathways in the fat body, resulting in the systemic expression and secretion of several antimicrobial peptides, including Attacin.While major efforts have been made over the last few decades to elucidate the roles of the fat body in regulating metabolism, growth, and immunity, its potential role in wound repair has not been studied to date.Using live imaging of pupal epithelial wounds we show for the first time that pupal fat body cells are motile cells that actively migrate to wounds.We find that these giant cells move through the hemolymph toward the wound using an adhesion-independent, actomyosin-driven, peristaltic mode of motility.Once they have reached the wound, FBCs assist hemocytes in clearing the wound of cell debris as well as sealing the epithelial wound gap and locally releasing AMPs to repair the wound and fight infection.To investigate the potential functions of FBCs during wound healing, we first studied their location and potential behaviors following tissue injury in pupae, since this developmental stage has proven ideal for live imaging of other wound healing events.We found that 16-hr-old pupae contain large numbers of giant polyploid, dissociated FBCs that populate the body cavity.To study the behaviors of FBCs following tissue injury by live imaging, we 
used a laser to induce small epithelial wounds in the ventral thorax of pupae, an area sparsely populated by FBCs. Nuclei were labeled with a Histone-red fluorescent protein (RFP) fusion and FBCs were labeled with GFP. Strikingly, we found that FBCs, previously thought to be immotile, were actually highly dynamic and migrated rapidly toward wounds. Once at the wound site, these cells remained tightly associated with the wound until closure, when they detached and actively migrated away. When we compared small, medium, and large wounds, we found that the frequency of FBC recruitment to wounds, as well as the number of wound-associated FBCs, positively correlated with the size of the wound: for small wounds, a single FBC generally plugged the wound, whereas in larger wounds up to 5 FBCs approached and associated with the wounded area. The time of FBC arrival at the wound was variable, depending on their initial distance from the wound; some FBCs arrived after 10 min, with the average arrival time being around 1 hr after wounding, irrespective of wound size. Once FBCs started contacting the wound area, they usually remained associated until reepithelialization was complete, resulting in a longer period of FBC-wound association in larger wounds with longer closure times. To test whether the recruitment of FBCs to wounds was driven by true directed migration and not just a random walk or passive fluid flow, we tracked individual cells in wounded and unwounded pupae and measured the directional persistence of the resulting tracks. Further analysis of these tracks showed an increase in the meandering index and a decrease in the angle of migration, together suggesting that wound-recruited FBCs responded to the wounds with high directional persistence. The movement of FBCs to wounds is not due to passive flow of hemolymph toward the wound; this possibility has previously been ruled out by bead-tracking experiments following epithelial wounding in pupae, where we saw no such flow. Moreover, we see no hemolymph leakage from wounds, since laser wounding generally results in cuticular holes of <0.5 μm in diameter. Next, we measured the speed of FBCs and found that they did not accelerate toward the wound; their meandering index was increased, but their speed remained the same as in unwounded pupae until they reached the wound, when they decelerated and stopped. Once the wound became fully occupied by one or more FBCs, late-arriving cells appeared unable to gain direct
Adipocytes have many functions in various tissues beyond energy storage, including regulating metabolism, growth, and immunity. However, little is known about their role in wound healing. We find that pupal fat body cells are not immotile, as previously presumed, but actively migrate to wounds using an unusual adhesion-independent, actomyosin-driven, peristaltic mode of motility. Once at the wound, fat body cells collaborate with hemocytes (Drosophila macrophages) to clear the wound of cell debris; they also tightly seal the epithelial wound gap and locally release antimicrobial peptides to fight wound infection. Adipocytes and their fly equivalent, fat body cells, have been considered immotile, but Franz et al. now show that the latter actively migrate to wounds. Once there, they multitask to clear wound cell debris, plug the epithelial gap, and upregulate AMPs to prevent infection.
access because this space was occupied by earlier-arriving FBCs, but they often remained in the vicinity and circulated at the periphery. Interestingly, both wound-recruited and late-arriving FBCs initially showed an increase in their meandering index, suggesting that both cell populations respond equally to wound attractants. Given our observation that Drosophila FBCs can actively migrate, we used live imaging of the actin cytoskeleton to understand the mechanism by which these cells power their migration. Most cells, whether in tissue culture or in vivo within tissues, migrate by adhering to, and crawling over, a substratum, often using actin-rich lamellipodia at their leading edges. By contrast, FBCs are not adherent to any epithelial surface; rather, they reside within the hemolymph. To our surprise, live imaging of FBCs expressing the actin reporter GMA revealed that these cells were constantly undergoing actin-based contractile waves that initiated from the cortex at the cell center and extended to the rear of the cell, propelling them in the opposite direction in a peristaltic fashion. These waves occurred constantly within FBCs in unwounded pupae but upon wounding became highly directed with respect to the wound. Using markers of the actin regulatory proteins Fimbrin, Ena, and Fascin, we saw no sign of the more standard lamellipodial structures (observed, for example, in Drosophila macrophages as they migrate to wounds) while FBCs "swam" toward the wound. However, once they had reached the wound, FBCs started to form lamellipodia that extended around the wound margin. In order to test whether the motility of FBCs is indeed actomyosin driven, we expressed a dominant-negative version of Zipper (non-muscle myosin II heavy chain) tagged with YFP specifically in FBCs. During early pupal development, FBCs normally undergo extensive remodeling, characterized by the dissociation of the fat body into single cells followed by their redistribution in the body cavity. This redistribution leads to the translocation of some cells into the anterior head capsule, which has previously been suggested to be driven by abdominal muscular contractions. Interestingly, we found that expression of dominant-negative Zipper-YFP only in FBCs led to a complete failure in FBC redistribution within the body cavity and translocation into the head. Moreover, when we imaged and tracked these cells in the dorsal abdomen, we found that their general motility was strongly reduced. This suggests that the developmental process of FBC redistribution and translocation in pupae is not driven passively by muscular body contractions but is instead an active process driven by actomyosin-dependent migration of FBCs. Similarly, expression of dominant-negative Zipper in FBCs completely blocked their ability to migrate to wounds in the ventral thorax. Together, these data suggest that pupal FBCs are indeed motile cells, which migrate using an adhesion-independent, actomyosin-driven, peristaltic mode of motility during both their developmental dispersal and their recruitment to wounds. Previous studies have shown that hemocytes, the equivalent of macrophages in Drosophila, are actively drawn to wound sites in embryos and pupae, much as innate immune cells are drawn to wounds in vertebrates. Interestingly, larval hemocytes have been shown to collaborate and even communicate with FBCs through cytokine release in response to bacterial infections, leading to a scenario whereby hemocytes phagocytose bacteria while FBCs produce AMPs systemically, but these AMP levels are significantly reduced in the absence
of hemocytes. To investigate whether hemocytes and FBCs interact with one another during wound healing, we wounded pupae in which both hemocytes and FBCs were labeled with cytosolic GFP and nuclear RFP. Both cell types migrated at approximately the same speed, 2.5–3.5 μm/min, although in general, due to their proximity to the wound and greater numbers, hemocytes often arrived before FBCs. We saw the same when these two lineages were labeled with complementary cytosolic GFP and mCherry tags. Interestingly, most hemocytes were swept aside as the first FBC approached the wound. To test whether FBC recruitment might be dependent on the presence of hemocytes at the wound, we genetically ablated hemocytes through lineage-specific expression of apoptosis-inducing Reaper for 16 hr before wounding. This loss of hemocytes did not significantly alter the frequency of FBC recruitment to wounds, suggesting that FBCs are not drawn to wounds by attractant signals released by hemocytes. Given our finding that FBCs are motile and rapidly migrate to wounds, we next wanted to investigate what local functions they might play during wound healing. Efficient wound repair requires the clearance of wound debris from the wound site, which is known to be orchestrated, in part, by hemocytes through phagocytosis. Interestingly, we noticed that, when we ablated hemocytes, the majority of cellular debris at the wound was swept aside by the incoming FBCs. In the presence of hemocytes, this clearance of cell debris away from the wound site by FBCs also occurred, albeit to a lesser extent, and was accompanied by engulfment of the debris by hemocytes. We also observed phagocytic cup formation and subsequent engulfment of debris at the wound site by FBCs in 35% of small and 75% of large wounds that contained wound-recruited FBCs. Thus FBCs, in concert with hemocytes, appear to play an important local function in clearing cell debris during wound repair: FBCs physically clear the wound site of cell debris by displacing it to the wound periphery, where hemocytes, and to a lesser extent FBCs, take up the debris by phagocytosis. Next, we wanted to investigate whether, in addition to wound repair, FBCs might play local functions in fighting wound infection. Given the large size of FBCs and their apparent tight association with the wound throughout closure, we wondered whether they might play a role in plugging the wound to prevent entrance of pathogens and leakage of tissue fluids, much as a clot in a
Here we use live imaging of fat body cells, the equivalent of vertebrate adipocytes in Drosophila, to investigate their potential behaviors and functions following skin wounding. We find that fat body cells are motile cells, enabling them to migrate to wounds and undertake several local functions needed to drive wound repair and prevent infection. Franz et al. now show that these cells can actively migrate to wounds using a peristaltic-like "swimming" motility.
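For readers interested in how the track-based measures mentioned above (speed, meandering index, and angle of migration) are typically derived from time-lapse data, the following is a minimal sketch assuming hypothetical (time, x, y) track points and a known wound position; it illustrates the standard definitions only and is not the authors' actual tracking pipeline.

```python
# Illustrative sketch (not the authors' analysis pipeline): deriving mean speed,
# meandering index (net displacement / total path length), and the angle between
# a cell's overall movement and the direction to the wound from hypothetical
# (t, x, y) track points. Names such as `track` and `wound_xy` are assumptions.
import math

def track_metrics(track, wound_xy):
    """track: list of (t_min, x_um, y_um) points in time order;
    wound_xy: (x, y) position of the wound center."""
    path_length = 0.0
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        path_length += math.hypot(x1 - x0, y1 - y0)

    t_start, x_start, y_start = track[0]
    t_end, x_end, y_end = track[-1]
    net_displacement = math.hypot(x_end - x_start, y_end - y_start)

    mean_speed = path_length / (t_end - t_start)        # um per min
    meandering_index = net_displacement / path_length   # 1.0 = perfectly straight path

    # Angle between the overall migration vector and the cell-to-wound vector;
    # smaller angles indicate movement directed toward the wound.
    mig = (x_end - x_start, y_end - y_start)
    to_wound = (wound_xy[0] - x_start, wound_xy[1] - y_start)
    cos_a = (mig[0] * to_wound[0] + mig[1] * to_wound[1]) / (
        math.hypot(*mig) * math.hypot(*to_wound) + 1e-12)
    angle_to_wound_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

    return mean_speed, meandering_index, angle_to_wound_deg

# Example with a made-up track of a cell drifting toward a wound at (50, 0).
example_track = [(0, 0.0, 0.0), (10, 8.0, 3.0), (20, 18.0, 1.0), (30, 30.0, -2.0)]
print(track_metrics(example_track, wound_xy=(50.0, 0.0)))
```

In these terms, the reported increase in meandering index and decrease in the angle of migration after wounding correspond to straighter tracks whose overall direction points more closely at the wound.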
labs were an expected finding. Variation in the manufacturing process is mainly driven by the resulting TCR transfection levels, as well as by the precision of cell counts and the spike-in process. This variation across labs is acceptable, as the variation across aliquots of each batch is not affected. Even the batch-to-batch variation within one lab is acceptable, because each batch is newly validated for its signal intensity and assay-dependent signal variation. The envisioned RNA-based kit can contain quality-controlled TCR RNA specific for selected viral or tumor-associated antigens. A detailed manual would describe the manufacturing process in general and provide step-by-step instructions for electroporation (EP) and cell processing. The same kit could also contain a protocol for the initial set-up of each user's EP device, as well as the required GFP RNA for the establishment, testing, and potential optimization of EP settings. With TCR RNA as the critical starting material, all investigators can control the number of spiked TCR+ T cells over a range of frequencies and define the intensity of the antigen-specific signal, the number of frozen cells per aliquot, and the number of aliquots per batch. Each investigator is free to adapt parts of the production process to local preferences and conditions and is able to use their own protocols for the isolation of PBMCs, freezing and thawing of cells, MHC-multimer staining, or functional read-outs. Initially, the batch-specific signal size and the acceptable signal range need to be defined for the lab-specific assay protocols and the experimental setup. In the application phase, the pre-tested TERS aliquots can subsequently be used to control assay performance in a variety of ways, such as prior to assessing a series of test samples, after the last test sample has been run, in each experiment, on each day, or, if required, for each test item, depending on the user's preferences. TERS can be used to control and to detect technical sources of assay variation, as demonstrated in a range of studies and proficiency panels for most of the commonly used T-cell assays, such as HLA-peptide multimer staining, intracellular cytokine flow cytometry, and the ELISpot assay. Furthermore, TERS can be used for the validation and day-to-day quality control of a given T-cell assay. Controlled assay performance also makes it possible to compare results across centers, leading to the desired harmonization of T-cell assays. The development of an easy-to-use and easy-to-scale kit-based approach will allow broad use of the TERS technology, enabling labs to continuously produce TERS in-house; it will help to generate documented evidence of the validity of assay results generated throughout their biomarker programs and may thus become a valuable tool to enhance the development of innovative immunotherapies.
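As a rough illustration of the statement above that investigators can control the number of spiked TCR+ T cells over a range of frequencies, the sketch below converts a target antigen-specific frequency into a spike-in cell number, given a batch-specific transfection efficiency; the function name and all numbers are hypothetical and are not taken from the kit manual or the published protocol.

```python
# Minimal sketch (hypothetical, not the published TERS manufacturing protocol):
# estimate how many TCR-electroporated cells to spike into a PBMC background so
# that the expected frequency of TCR+ (antigen-specific) T cells hits a target.
# The transfection efficiency would come from the batch-specific validation
# described in the text; the values below are illustrative only.

def cells_to_spike(total_pbmc: int,
                   target_tcr_pos_frequency: float,
                   transfection_efficiency: float) -> int:
    """Number of electroporated cells to add so that
    (spiked cells * transfection efficiency) / total PBMC ~= target frequency.
    For the low frequencies typical of such controls, the spiked cells'
    contribution to the denominator is neglected."""
    if not 0.0 < transfection_efficiency <= 1.0:
        raise ValueError("transfection efficiency must be in (0, 1]")
    return round(total_pbmc * target_tcr_pos_frequency / transfection_efficiency)

# Example: 1e6 PBMCs per aliquot, target of 0.1% TCR+ cells, 50% transfection efficiency.
print(cells_to_spike(total_pbmc=1_000_000,
                     target_tcr_pos_frequency=0.001,
                     transfection_efficiency=0.5))  # -> 2000
```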
Cell-based assays to monitor antigen-specific T-cell responses are characterized by their high complexity and should be conducted under controlled conditions to limit the multiple possible sources of assay variation. However, the lack of standard reagents makes it difficult to directly compare results generated in one lab over time and across institutions. Therefore, TCR-engineered reference samples (TERS) that contain a defined number of antigen-specific T cells and continuously deliver stable results are urgently needed. We successfully established a simple and robust TERS technology that constitutes a useful tool to overcome this issue for commonly used T-cell immunoassays. To enable users to generate large-scale TERS on-site using the most commonly used electroporation (EP) devices, an RNA-based kit approach providing stable TCR mRNA and an optimized manufacturing protocol were established. In preparation for the release of this immuno-control kit, we established optimal EP conditions on six devices and initiated an extended RNA stability study. Furthermore, we coordinated on-site production of TERS with 4 participants. Finally, a proficiency panel was organized to test the unsupervised production of TERS at different laboratories using the kit approach. The results obtained show the feasibility and robustness of the kit approach for versatile in-house production of cellular control samples.