clusters than on the support. This is possibly because alloying-induced cluster strain effects are generally reduced when the clusters are pinned on the support. For example, the average Au-Au distance increases by 0.03–0.05 Å for TO38 clusters on the support when they are bound to it through Au atoms. Mechanical effects are also reduced with increasing cluster size, so that the d-band model is more relevant for the larger TO79 clusters. When we compare the free and supported clusters, there is an average downward shift of 0.10 eV in the d-band centre values. Thus, the adsorption strength of CO and O2 molecules on supported clusters is generally slightly reduced compared to their free counterparts. The effect of the TiO2 support on both the structures of AuRh nanoalloys and their adsorption properties towards CO and O2 has been investigated using first-principles DFT calculations. In agreement with experiments, Janus-type phase-segregated structures on the TiO2 support are found to compete with the Rh(core)Au(shell) structures that are more stable for free clusters. Bader charge analysis shows that there is metal-to-support electron transfer, which is higher when the more electropositive Rh atoms are located at the interface with the titania support. As for the free clusters, the adsorption strengths of reactant molecules such as O2 and CO are greater on the Rh part than on the Au part of the supported nanoalloy clusters. The adsorption properties of smaller clusters are more diverse for free clusters because mechanical effects diminish with increasing size. In the presence of the support, however, mechanical effects also decrease for small clusters, which reduces the scatter in the d-band model. A downward shift of the d-band, accompanied by a slight reduction in CO and O2 adsorption strengths, is observed for the supported AuRh nanoalloys compared to their free cluster counterparts. Concerning the experimentally more relevant Janus-type structures, with Rh atoms at the interface with the TiO2 surface, adsorption strengths on the Rh side of the cluster were found to be slightly lower than for pure Rh clusters. This may modify the catalytic properties, e.g. reduce poisoning, while still preserving the high reactivity of pure Rh as compared to both pure Au and Rh(core)Au(shell) clusters. Furthermore, in contrast to the Janus particles, adsorption on the less stable Au(core)Rh(shell) clusters is stronger than on pure Rh. Although the Au(core)Rh(shell) structure is highly unstable for free clusters, its relative instability is reduced in the presence of the support, and it may be stabilised further by tuning the molecular environment.
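For readers unfamiliar with the descriptor used above, the d-band centre is the first moment of the d-projected density of states (PDOS). The minimal sketch below shows its calculation from tabulated PDOS data; the arrays are assumed inputs rather than data from this study, and integrating only up to the Fermi level is one common convention (others integrate the full band).

```python
import numpy as np

# Minimal sketch: d-band centre as the first moment of the d-projected
# density of states (PDOS). Inputs are hypothetical arrays; integration
# up to the Fermi level is one common convention, not necessarily the
# one used in the study discussed above.
def d_band_centre(energy_ev, d_pdos, e_fermi=0.0):
    e = np.asarray(energy_ev) - e_fermi     # energies relative to E_F
    dos = np.asarray(d_pdos)
    occ = e <= 0.0                          # keep occupied states only
    num = np.trapz(e[occ] * dos[occ], e[occ])
    den = np.trapz(dos[occ], e[occ])
    return num / den                        # in eV; negative below E_F

# A supported-minus-free difference of about -0.10 eV would reproduce
# the average downward shift reported above.
```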
AuRh/TiO2 nanocatalysts have proved their efficiency in several catalytic reactions. In this work, density functional theory calculations are performed to investigate the effect of the TiO2 support on the structures of fcc 38-atom and 79-atom AuRh nanoalloys and their adsorption properties towards the reactant molecules CO and O2. d-band centre analysis shows that the d-band model captures the trends better for both larger and supported alloy clusters due to reduced mechanical effects. Calculations reveal metal-to-support electron transfer, depending mainly on which metal atoms lie at the interface with the support. The adsorption strengths of CO and O2 molecules on experimentally relevant Janus segregated structures are slightly lower than on pure Rh clusters, which may reduce poisoning effects while maintaining the high reactivity of Rh. In addition, higher adsorption energies are predicted for the less stable Au(core)Rh(shell) structure, which may lead to adsorption-induced restructuring under reaction conditions.
As a result of the CD28 superagonist TGN 1412 monoclonal antibody cytokine storm incident in 2006, cytokine release assays (CRAs) have become more commonly used as hazard identification and risk assessment tools for therapeutic candidates, particularly mAbs with the potential to elicit adverse pro-inflammatory cytokine responses in patients. Although cytokine release syndrome (CRS) is a relatively rare event in the clinic, evaluating the potential of certain novel therapeutic mAbs to cause CRS is now part of preclinical safety testing. Severe CRS is reported to have occurred in approximately 50% of recipients administered muromonab-CD3 before the introduction of high-dose corticosteroid pre-treatment, although in subsequent protocols using a lower dose, pre-treatment with anti-inflammatory agents and a slower infusion rate also reduced the risk. Moderate-to-severe CRS is reported in a small number of multiple sclerosis patients given alemtuzumab, an anti-CD52 mAb. Other therapeutic mAbs currently in use, such as the tumor necrosis factor α antagonists infliximab, adalimumab and certolizumab pegol, and many others such as bevacizumab and natalizumab, are not associated with CRS. Thus, in terms of predicting the safety of novel therapeutic mAbs in man, the CRA should ideally differentiate between mAbs with no known clinical risk, those with moderate-to-severe clinical risk, and those, such as TGN 1412, with severe risk. Significant progress has been made in designing and developing improved methods for CRAs as a result of the CD28 superagonist TGN 1412 incident. In 2007, a solid-phase CRA, which involves the co-incubation of human peripheral blood mononuclear cells (PBMCs) with mAbs that have been dry-coated onto a tissue culture plate, was shown to be predictive for the cytokine release potential of TGN 1412. In 2009, the European Medicines Agency held a workshop to discuss in vitro CRAs, with the conclusion that while a specific assay could not be endorsed at that time, CRAs have a place in predicting the effect of a product in humans. Currently, a number of in vitro assay formats can be considered when evaluating the potential for cytokine release for hazard identification by a novel therapeutic. Various CRA platforms have been designed to identify mAbs that can be associated with CRS; however, not all CRA platforms can discriminate between mAbs inducing mild or moderate cytokine release, nor can they be used to determine a threshold where the levels of cytokines released may be associated with serious adverse events in humans. The diversity in the modes of action of specific drugs in the induction of cytokine release may require the availability of adapted or flexible CRA platforms to identify potential hazard in the clinic for a particular therapeutic candidate. As pharmaceutical companies become more familiar with the mechanisms related to mAb-induced cytokine release, new assays, platforms and data interpretation approaches are being adopted. Considerable progress has been made in understanding mechanistic aspects of CRS as well as in developing CRA formats suitable for hazard identification. Thus, the International Life Sciences Institute - Health and Environmental Sciences Institute (ILSI-HESI) Immunotoxicology Technical Committee (ITC) Cytokine Release Assay Working Group set out to address the scientific issues pertaining to CRA conduct and CRS risk assessment in a multi-pronged approach. First, in 2013, ILSI-HESI ITC sponsored a survey of pharmaceutical companies, contract research organizations, and academic laboratories that demonstrated that a variety of in vitro assay approaches were used,
including testing strategies, assay formats, and reporting and interpretation of CRA data, which was subsequently published. The survey indicated that variations in assay design include solution- and/or solid-phase-based assays; the use of various dilutions of whole blood, PBMCs, or peripheral blood leukocytes as responder cells; and, in some cases, the capture of mAbs on plates or beads via Fc using protein A or antibodies to Fc. The survey also indicated that positive CRA controls vary across laboratories, with many using anti-CD3 reagents, while others use anti-CD28 superagonist mAbs or lipopolysaccharide (LPS). Some laboratories also include other marketed mAbs as positive controls. Negative controls include phosphate-buffered saline, tissue culture medium, isotype mAb controls or marketed mAbs not known to cause clinical cytokine release. Data readouts vary across laboratories, from concentrations of cytokines to ratios relative to negative controls and/or rank-order comparison to other mAbs tested. Overall, the results from the survey highlighted that there are no standard approaches, and the alignment of technical procedures for frequently used formats may pave the way for a more harmonized assay system. Next, on October 22, 2013, in Silver Spring, Maryland, the ILSI-HESI ITC Cytokine Release Assay Working Group sponsored a one-day workshop entitled “Workshop on Cytokine Release: State-of-the-Science, Current Challenges and Future Directions”. This workshop brought together 93 experts in the field from pharmaceutical, academic, health authority, and contract research organizations to discuss novel technologies, experimental designs, practices and scientific challenges. The workshop included both oral and poster presentations of the latest science concerning CRA design, use, and interpretation, and concluded with an open panel discussion featuring the speakers. Topics presented encompassed a regulatory perspective on cytokine release and assessment, two case studies regarding the translatability of preclinical cytokine data to the clinic, and the latest state of the science of CRAs, including comparisons between mAb therapeutics within one platform and across several assay platforms, a novel physiological assay platform, and assay optimization approaches such as determination of FcR expression profiles and use of statistical tests. This manuscript summarizes the scientific presentations and provides a current view on the approaches being adopted to identify the risk of CRS for novel therapeutic candidates. Following the TGN 1412 incident, testing for cytokine release-inducing activity has been increasingly included in the nonclinical studies conducted to support clinical testing of mAbs. Results of in
In October 2013, the International Life Sciences Institute - Health and Environmental Sciences Institute Immunotoxicology Technical Committee (ILSI-HESI ITC) held a one-day workshop entitled "Workshop on Cytokine Release: State-of-the-Science, Current Challenges and Future Directions". Topics presented encompassed a regulatory perspective on cytokine release and assessment, case studies regarding the translatability of preclinical cytokine data to the clinic, and the latest state of the science of CRAs, including comparisons between mAb therapeutics within one platform and across several assay platforms, a novel physiological assay platform, and assay optimization approaches such as determination of FcR expression profiles and use of statistical tests.
workshop, the Nonclinical Biologics Subcommittee at the Center for Drug Evaluation and Research of the FDA has initiated a Cytokine Release Data Mining Project to use both nonclinical and clinical cytokine data to determine if CRA formats are good predictors of clinical outcomes; the results from this project will be very informative for the field. Another option to increase confidence in CRA formats may be to perform CRAs with toxicology species samples and measure cytokines in vivo during toxicology studies in that species. Correlations between toxicology species in vitro CRAs and in vivo data may further support the predictive results obtained with human cells and their translatability to patients. While there is active research in this area by some groups, the predictive value of the CRA platform for human immune cells may still need to be confirmed with clinical cytokine data. Clearly, close cooperation and collaboration among clinical investigators and research laboratories is critical for confidence in the predictive value of any CRA platform. It is generally agreed that there is a need for a clearer understanding of the underlying immune networks that can allow for the escalation of immune responses to the detriment of the organism itself. Pathologic immune response cascades have been observed with a variety of agents, including bacteria, viruses, and medical interventions such as graft-versus-host disease and certain mAb therapeutics. A systems biology approach to understanding the initiating events of immune cell activation that can lead to development of CRS may be necessary to truly understand all the possible “triggers”. This knowledge could then be applied to the in vitro CRA platforms, permitting the design of a CRA platform that will capture the most relevant potential immune signaling cascades for a particular mAb therapeutic target. Finally, many fields of medicine and biotechnology are increasingly developing personalized approaches whereby drugs or doses are defined based on an individual patient's genes and characteristics. Examples include warfarin dose selection based on an individual patient's variants of the genes CYP2C9 and VKORC1, and abacavir hypersensitivity based on HLA-B*5701 expression. As most CRA platforms utilize peripheral blood, it is possible that a screening process could be designed for individual patients. This may be most relevant when the health status and/or genetic make-up of a particular patient may affect the CRS potential of novel therapeutic mAbs. For example, if the presence of autoimmune disease results in enhanced expression of a therapeutic target on immune cells, then use of healthy donor whole blood or PBMCs may not be appropriate for understanding CRS potential in patients. In such a scenario, testing the therapeutic in a CRA platform using immune cells from either a group of donors with autoimmune disease or from prospective patients enrolled in a clinical trial would be optimal. This approach, although an intriguing idea, could be very challenging logistically, especially in clinical trials with large numbers of patients. Growth in our understanding of the mechanisms underlying CRS in vivo should result in the development of in vitro CRA platforms that correlate with in vivo clinical cytokine measurements, and greater confidence in translation of in vitro outcomes to the clinic. This HESI scientific initiative is primarily supported by in-kind contributions of time, expertise, and experimental effort. These contributions are supplemented by direct funding that was provided by
HESI’s corporate sponsors.A list of supporting organizations is available at http://hesiglobal.org/immunotoxicology/.
The workshop brought together scientists from pharmaceutical, academic, health authority, and contract research organizations to discuss novel approaches and current challenges for the use of in vitro cytokine release assays (CRAs) for the identification of cytokine release syndrome (CRS) potential of novel monoclonal antibody (mAb) therapeutics. The data and approaches presented confirmed that multiple CRA platforms are in use for identification of CRS potential and that the choice of a particular CRA platform is highly dependent on the availability of resources for individual laboratories (e.g. positive and negative controls, number of human blood donors), the assay throughput required, and the mechanism of action of the therapeutic candidate to be tested. Workshop participants agreed that more data on the predictive performance of CRA platforms are needed, and current efforts to compare in vitro assay results with clinical cytokine assessments were discussed. In summary, many laboratories continue to focus research efforts on improving the translatability of current CRA platforms as well as exploring novel approaches which may lead to more accurate, and potentially patient-specific, CRS prediction in the future.
and child demographic information. On a periodic basis, after a sufficient follow-up period had elapsed to ensure the outcomes of the sepsis evaluations were known, an analyst ran additional SQL scripts that extracted further information from the EHR database to assign each sepsis evaluation to one of six sepsis groups. This determination was made based on the final results of cultures that were collected on the day of sepsis evaluation and the number of days of antibiotic treatment. Infants who died while still receiving antibiotics were categorized as if they had continued antibiotic treatment for at least 120 hours. To support more complex analyses, such as the time to antibiotic administration (TTA) analyses supported by the dataset described in this article, detailed data from the entire hospitalization for infants who experienced at least one sepsis evaluation were transferred to a PostgreSQL database and transformed into a set of comma-separated value (CSV) files in a format based on the Patient-Centered Outcomes Research Network's (PCORnet) common data model. This format was then extended to better accommodate inpatient data of interest to a broader variety of research questions, in the manner described by the Pediatric Trials Network (PTN). This format contains all lab results, vital signs, medication administrations, clinical bedside assessments, diagnoses, and details about the presence of lines, airways, drains and other devices. The detailed information in the CSV files in the PTN/PCORnet format was then filtered and transformed to create the analytic file for the time to antibiotic administration project using scripts in the R programming language. To illustrate this process, the supporting R program file that transformed data from the PTN/PCORnet format into the key variables that were necessary for the TTA project has been included with this article. Also, the R program used to generate the figures for the primary TTA manuscript has been included, in the hope that it will provide interested readers with additional insight into how the TTA dataset can be used.
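Although the authors' grouping step is implemented in SQL and R, the decision rule described above (final culture results plus duration of antibiotic treatment, with infants who died on antibiotics treated as having received at least 120 hours) can be sketched in a few lines. The column names and group labels below are illustrative assumptions; the study's actual six group definitions are given in the source article.

```python
import pandas as pd

# Illustrative sketch of the sepsis-grouping rule described above.
# Column names and group labels are hypothetical; only the 120-hour
# convention for infants who died on antibiotics comes from the text.
def sepsis_group(row: pd.Series) -> str:
    hours = row["antibiotic_hours"]
    if row["died_on_antibiotics"]:
        hours = max(hours, 120)  # treat as >=120 h of antibiotics
    if row["culture_positive"]:
        return "culture-positive"
    # Culture-negative evaluations split by treatment duration.
    return ("culture-negative, treated >=120 h" if hours >= 120
            else "culture-negative, treated <120 h")

evals = pd.DataFrame({
    "culture_positive": [True, False, False],
    "antibiotic_hours": [200, 60, 48],
    "died_on_antibiotics": [False, False, True],
})
print(evals.apply(sepsis_group, axis=1))
```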
This article describes the process of extracting electronic health record (EHR) data into a format that supports analyses related to the timeliness of antibiotic administration. The de-identified data that accompany this article were collected from a cohort of infants who were evaluated for possible sepsis in the Neonatal Intensive Care Unit (NICU) at the Children's Hospital of Philadelphia (CHOP). The interpretation of findings from these data is reported in a separate manuscript [1]. For purposes of illustration for interested readers, scripts written in the R programming language related to the creation and use of the dataset have also been provided. Interested researchers are encouraged to contact the research team to discuss opportunities for collaboration.
especially for households more tolerant of thermal discomfort. The level of open-cycle gas turbine capacity operated in those cases decreases significantly during the peak-load period relative to the Reference scenario. Most of the scenarios cause the power producers to ramp the thermal power plants up and down more to meet the reduced load during the afternoon. This extra ramping imposes additional costs on the sector, which are included in the average cost calculation. Even with the additional ramping costs to the utilities, the electricity sector may enjoy a reduced overall cost of operation. Crude oil is used for power generation because the natural gas supply to the power plants is constrained. Crude oil makes up more than 35% of the fuels used for power generation in the Reference scenario. In the alternative scenarios, the reduced loads and power operation in the summer afternoon coincide with more oil being burned in more efficient steam turbine plants instead of gas turbines. Fig. 6 shows the amounts of fuels used throughout the summer day as found by KEM. Under a high discomfort tolerance case, we observe higher shares of crude oil use in the middle of the afternoon; this coincides with a lower share of natural gas. The sector is, however, optimizing over the whole year, so any natural gas not used then is used in other seasons. Overall, we do observe reductions in crude oil use year-round. The reduced amount of oil burned would have a positive impact on the upstream sector, which can either export it or save it for future use. The costs for the upstream sector would also decline, as that oil is not transported in pipelines. There has been willingness to change the current electricity pricing structure for residences in Saudi Arabia. One pricing scheme that has been suggested is time-of-use (TOU) pricing. This paper explores the effects of applying a TOU price on both households and the power utilities in Saudi Arabia, while factoring in consumer behavior. To do this, a residential electricity model, informed by the households' reactions to a price change, is linked with an economic equilibrium model. In a case where we impose a TOU electricity price of 9 US cents per kWh from noon to 5 p.m. in the summer, we analyze: potential shifts in the load curve, additional electricity costs to the households, revenue for the power utilities, and changes in deployed power technologies as a result of varying electricity demand. Households are able to respond to price changes in two ways in the residential model. First, they may adjust their thermostat set-point to lower the air-conditioning load in the hot summer months. Following Avci et al., households make the adjustment by weighing the trade-off between their thermal discomfort and the increase in electricity price. Second, they may shift the discretionary use of appliances to different times of the day. The inconvenience of rescheduling use is incorporated, as Setlhaolo et al.
suggested. Archetypes that combine physical and behavioral attributes are designed with these features in mind. Overall, the housing stock in Saudi Arabia is categorized by structural attributes, households' tolerance for discomfort, appliance time-use schedules, income levels and willingness to shift discretionary loads. Since households are not homogeneous, we study four different scenarios of how they may react on average, taking varying combinations of their inconvenience to shift appliances and their tolerance for thermal discomfort. The results offer a range for residential loads, and therefore a range of how the power utilities may operate in response. Looking at the potential shifts in hourly demand, electricity loads would decrease during the afternoon peaks whether the households have high or low tolerance for discomfort. If households were able to tolerate the discomfort, they would shift their load more. Households would still pay more as the price is raised; this feature of a TOU price may be of interest to policymakers. The average summer price rises to between 4.19 and 4.62 US cents per kWh from 2.95 US cents per kWh in the reference case. The average yearly price does not increase as much because the price, and therefore demand, in the other seasons is unchanged. The electricity sector enjoys an overall profit gain as a result of the higher residential prices in the summer. The majority of the gain stems from the higher revenues from residential customers. The power utilities may additionally benefit from a lower cost of operation, as three of the four household behavioral cases show. The lower level of operation, because of reduced electricity demand, also results in less crude oil being used for electricity generation. This analysis was performed in a short-run setting. In the long run, the higher electricity prices may push households to invest in energy efficiency measures, which will manage their electricity demand growth. There may also be reduced investment in power capacity by utilities to meet electricity demand and the planning reserve margin requirement.
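The thermostat set-point trade-off described above can be made concrete with a toy calculation. In the sketch below, everything except the two prices taken from the text (2.95 and 9 US cents per kWh) is an invented assumption, not the paper's residential model: the quadratic discomfort penalty and the linear load response are purely illustrative.

```python
import numpy as np

# Toy version of the thermostat trade-off: a household balances a
# (hypothetical) quadratic discomfort penalty against its cooling bill.
# Load and discomfort coefficients are invented; the two prices come
# from the text (reference vs TOU peak).
def household_cost(setpoint_c, price_usd_per_kwh,
                   comfort_c=22.0, discomfort_weight=0.05):
    # Daily cooling energy falls as the set-point rises (assumed linear).
    kwh = max(0.0, 40.0 - 2.0 * (setpoint_c - comfort_c))
    discomfort = discomfort_weight * (setpoint_c - comfort_c) ** 2
    return price_usd_per_kwh * kwh + discomfort

setpoints = np.arange(22.0, 28.5, 0.5)
for price in (0.0295, 0.09):
    best = min(setpoints, key=lambda s: household_cost(s, price))
    print(f"price {price * 100:.2f} c/kWh -> preferred set-point {best:.1f} C")
```

Under these assumptions the higher peak price pushes the preferred set-point up (from about 22.5 to 24.0 degrees C), mirroring the qualitative behaviour the model captures: households trade some comfort for a smaller bill.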
This paper assesses the potential effects that TOU pricing would have on households and the wider economy. Based on an assumed TOU price that the power utility may charge during peak summer hours, the main findings of our analysis for the year 2011 are:
grown under planktonic or biofilm conditions. To our knowledge, this is the first report on the use of confocal scanning laser microscopy to evaluate changes in the density of Fusarial biofilms following exposure to thyme and clove oils. As depicted in Fig. 2, the untreated biofilm of F. oxysporum S-1187 exhibited a dense network of cells with extracellular polymeric matrix after 48 h. Although the concentration of the essential oils (EOs) used in this trial was much lower than the minimum concentration of the oil found to be effective in vitro, both EOs significantly inhibited biofilm formation by isolate S-1187 at this concentration. In addition, no metabolic activity was observed and none of the biofilm matrices were found to be viable, indicating excellent fungicidal activities of the oils tested. Antimicrobial potencies of EOs have been widely investigated through a large number of studies aimed at confirming their potential use to overcome the microbial drug resistance problem. Regarding disinfectant products, numerous plant-derived EOs, such as those from citrus, clove, cypress, Lippia spp., oregano, rosemary and thyme, have been introduced, as they have shown potential as antibiofilm agents. Similarly, good antibiofilm activity against both fungal and bacterial biofilms by terpenes such as thymol and eugenol has been reported. This study revealed and reinforced the significance of plant extracts in the development of new, affordable, safe and effective dual-action antimicrobial agents. The success of clove and thyme EOs in inhibiting cell attachment by several F. oxysporum isolates, as shown in this study, makes them a potential tool for reducing microbial colonization on contact lenses. This potency is most probably a result of the high eugenol and thymol contents, respectively, of the two oils. However, as these tests have all been done in vitro, the next logical step is further clinical in vivo investigation to confirm whether infection can be inhibited by the EOs and to demonstrate the safe use of these oils for the prevention of Fusarial keratitis.
Fusarium infections, such as keratitis, are becoming increasingly difficult to control due to the build-up of resistance of fungi towards conventional antibiotics. Resistance is further enhanced by their ability to form fungal biofilms. Poor storage conditions and inadequate cleaning of contact lenses often lead to corneal infections. The solution to problems associated with contamination and resistance to antimicrobials could lie in the discovery of new, affordable, efficacious antimicrobial compounds. The purpose of this study was to evaluate the antifungal and antibiofilm properties of selected essential oils. The plant-based essential oils were evaluated for in vitro antifungal activity against Fusarium spp. using the Toxic Medium Assay (TMA) for preliminary screening. Clove and thyme oils, as well as pure citral, eugenol and thymol at 500 μL/L, exhibited the highest antimicrobial activity against Fusarium isolates. Antibiofilm capacity was investigated using soft contact lenses as substrate, while confocal laser scanning microscopy (CLSM) was used for visual observation of biofilm architecture. Although severe damage to the lens was observed when the essential oils were applied at 500 μL/L, no visible alteration of the contact lenses was detected at 50 μL/L, and the antibiofilm property of thyme and clove EOs was therefore evaluated at this concentration. Clove and thyme oils prevented cell attachment and biofilm development, causing total inhibition of biofilm formation on soft contact lenses. The use of diluted clove and thyme oils could therefore be an option to prevent biofilm formation on soft contact lenses.
on oxidatively damaged Chang liver cells, the antioxidant activity may be due to the amino acid residue Tyr. The Tyr residue, containing a phenolic hydroxyl group, can serve as a strong hydrogen donor and is the driving force for scavenging free radicals and ROS. In addition, the amino acid residue Phe, with an aromatic ring structure, can provide protons to electron-deficient radicals. Studies have found that Phe-Cys could inhibit ROS formation and membrane lipid peroxidation in oxidatively stressed cells. This might explain how SF scavenges intracellular ROS in Chang liver cells and increases the activities of SOD and CAT. Nevertheless, the antioxidant activity of a peptide with more hydrophobic amino acid residues and a smaller molecular size is not always stronger than that of other peptides. IY, which contains two hydrophobic amino acid residues, shows lower radical scavenging activity than QY, which contains only one, even though IY has the smaller molecular size; this indicates that the amino acid composition and molecular size of peptides cannot by themselves accurately predict peptide antioxidant activities. Therefore, more studies are needed to clarify the relationship between the antioxidant activities and structural characteristics of peptides. In this study, eight novel antioxidant peptides (GY, PFE, YTR, FG, QY, IN, SF and SP) and three known antioxidant peptides (YFE, IY and LY) were isolated and identified from protein hydrolysate of M. oleifera seeds. Of these, SF and QY showed significant protective effects on H2O2-treated Chang liver cells through increasing the activities of the endogenous antioxidant enzymes SOD and CAT and scavenging intracellular ROS. The results indicate that SF and QY can potentially serve as natural antioxidants in pharmaceutical or functional foods.
Eight novel and three known antioxidant peptides were isolated from protein hydrolysate of Moringa oleifera seeds using ultrafiltration, anion-exchange chromatography, gel filtration chromatography and reversed-phase high performance liquid chromatography (RP-HPLC). The novel peptides were identified as Gly-Tyr (GY), Pro-Phe-Glu (PFE), Tyr-Thr-Arg (YTR), Phe-Gly (FG), Gln-Tyr (QY), Ile-Asn (IN), Ser-Phe (SF) and Ser-Pro (SP), and the known peptides were identified as Tyr-Phe-Glu (YFE), Ile-Tyr (IY) and Leu-Tyr (LY), respectively, using a protein amino acid sequence analyzer and electrospray ionization mass spectrometry. All eleven peptides exhibited strong scavenging activities against the 2,2-diphenyl-1-picrylhydrazyl radical (DPPH•) (EC50 2.28, 1.60, 1.77, 2.15, 0.97, 1.30, 0.75, 0.91, 1.21, 0.79, and 1.37 mg/L, respectively) and the 2,2'-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt radical (ABTS•+) (EC50 1.03, 0.84, 0.95, 0.65, 0.37, 0.54, 0.33, 0.36, 0.67, 0.32 and 0.38 mg/mL, respectively). In addition, SF and QY showed protective effects on H2O2-induced oxidatively damaged Chang liver cells, and the contents of alanine aminotransferase (ALT), aspartate aminotransferase (AST) and malondialdehyde (MDA) decreased with increasing concentrations of SF and QY. Furthermore, SF and QY could significantly increase the levels of superoxide dismutase (SOD) and catalase (CAT), thereby effectively scavenging reactive oxygen species (ROS) in H2O2-induced oxidatively damaged Chang liver cells (P < 0.05). These results suggest that SF and QY have the ability to protect Chang liver cells from oxidative stress damage and might serve as potential antioxidants or oxidative damage cytoprotective agents in the pharmaceutical and health food industries.
Voltage-gated calcium channels consist of three subgroups, the CaV1, CaV2 and CaV3 classes. Most of these channels, apart from CaV1.1, which is a skeletal muscle channel, are involved in neuronal function, with their most prevalent functions being in excitation-transcription coupling, synaptic transmission, and regulation of neuronal excitability and pacemaker activity. Because of their key roles in neuronal function, it is not surprising that a number of different calcium channels have been implicated in the pathogenesis of various forms of epilepsy, in both humans and animal models. These channels include T-type channels, P/Q-type channels, and the auxiliary subunits β4 and α2δ-2. Furthermore, several calcium channels are either actual or potential targets for therapeutic intervention. The CaV auxiliary α2δ and β subunits, both of which have four isoforms, are associated with the high-voltage-activated CaV1 and CaV2 classes of calcium channel, but are not thought to be associated with CaV3 calcium channels. Both auxiliary subunits increase plasma membrane expression of the CaV1 and CaV2 channels and influence their biophysical properties. Of relevance to the potential pathological roles of α2δ subunits, the α2δ-1 isoform is up-regulated following peripheral somatosensory nerve damage, whereas mutations in α2δ-2 have been linked to absence epilepsy. The α2δ proteins have also been reported to fulfill other functions independent of calcium channels, and are likely to interact with other binding partners, including thrombospondins. Both α2δ-1 and α2δ-2 represent binding sites for the anti-epileptic α2δ ligand drugs gabapentin and pregabalin. These drugs are used as adjunct therapy in several forms of epilepsy, particularly drug-resistant partial seizures. They are also widely used in the treatment of neuropathic pain resulting from peripheral nerve damage of various origins, such as trauma, trigeminal neuralgia, diabetes-induced nerve damage, and chronic pain following viral infection, including post-herpetic neuralgia. They have also been used for alleviation of chronic pain resulting from human immunodeficiency virus (HIV) infection itself, and of pain occurring as a side effect of some of the anti-HIV drugs. Chronic neuropathic pain resulting from cancer chemotherapeutic drugs, including paclitaxel and cisplatin, is also treated with gabapentinoid drugs. The mechanism of action of the gabapentinoid drugs in the treatment of epilepsies remains unclear. In this study we wished to examine whether the level or distribution of α2δ-1 was altered following experimental induction of epileptic seizures in rats, since a change in α2δ-1 level or distribution might contribute to the anti-epileptic mechanism of action of gabapentinoid drugs, in a similar way to their therapeutic action in neuropathic pain. It was not possible to examine changes in the distribution of α2δ-2 protein in parallel in this study, as no antibodies suitable for immunohistochemistry are currently available. We chose to use the rat kainic acid model of human temporal lobe epilepsy, in which spontaneous seizures have been found to occur following a latent period after the initial induction by kainic acid of persistent seizures, known as status epilepticus. In this model, rats first develop status epilepticus, and then consistently develop spontaneous seizures which exhibit a gradual increase in frequency in the subsequent weeks. This model is relevant because gabapentin is known to be effective against seizures induced by this means. For comparison, we also used the
tetanus toxin model of temporal lobe epilepsy, in which status epilepticus is not induced. Ten adult male Sprague-Dawley rats weighing approximately 250 g were injected with kainic acid to induce status epilepticus. Injections were repeated once per hour until 5–9 Racine stage III/IV/V seizures per hour had occurred. From 40–60 min after the onset of status epilepticus, diazepam was injected repeatedly until continuous motor activity disappeared. Following status epilepticus, animals were housed separately. When seizures stopped, diazepam was continued every 30 min. Subcutaneous administration of warmed sterile saline was given if the animals appeared lethargic and/or a significant drop in weight occurred. Rats were housed in single cages under standard conditions in a room with controlled temperature and a 12/12-h light/dark cycle. The animals had ad libitum access to food and water. Immediately following the status epilepticus, rats were manually fed, if necessary, until adequate recovery, and provided with standard food as well as mashed food and apple slices. Control animals were treated with an equivalent volume and number of injections of sterile saline. Four rats were injected with tetanus toxin, and four rats with saline as controls. Surgical preparation was performed as previously described, under ketamine/xylazine anesthesia. A small trephine opening was drilled over the right hippocampus at coordinates 4.1 mm caudal to bregma and 3.9 mm lateral. Using a Hamilton microsyringe and infusion pump, 1 μl of tetanus toxin solution was injected into the stratum radiatum of the right hippocampal CA3 area. The tetanus toxin solution contained 25 ng of tetanus toxin in 1 μl of 0.05 M phosphate-buffered saline with 2% bovine serum albumin, and was injected at 200 nl/min. The microsyringe was left in the hippocampus for 5 min after the injection ended to avoid the solution leaking back through the injection track. Control animals were injected with 1 μl of 0.05 M PBS with 2% bovine serum albumin. Following surgery, the rats were housed in single cages and allowed to recover for 2 days. Subsequently they were monitored in video monitoring units to verify the development of spontaneous and recurrent seizures. Videos were recorded using digital infra-red cameras, and animals were video-monitored for 4 weeks. All animal procedures were licensed and performed in strict accordance with the Animals (Scientific Procedures) Act of the United Kingdom and with Birmingham University Ethical Review. Rats were deeply anesthetized with an intraperitoneal injection of pentobarbitone and perfused transcardially with saline containing heparin, followed by perfusion with 4% paraformaldehyde in 0.1 M phosphate buffer. Brains were dissected and the tissue was
In this study we therefore examined whether the level or distribution of α2δ-1 was altered in the hippocampus following experimental induction of epileptic seizures in rats, using both the kainic acid model of human temporal lobe epilepsy, in which status epilepticus is induced, and the tetanus toxin model in which status epilepticus is not involved.
do not act by mechanisms involving inhibition of GABA breakdown, or activation of GABA-A or GABA-B receptors. Furthermore, GABA itself does not bind to the α2δ subunits that are now known to be the target for gabapentinoid drugs. Although it has been found that α2δ-1 is the target for the gabapentinoid drugs in the alleviation of experimental neuropathic pain in rodents, whether the same is true for the efficacy of the gabapentinoids in animal models of epilepsy is not known. An in situ hybridization study showed that α2δ-1 expression was often more associated with excitatory neurons, and α2δ-2 with inhibitory neurons. The α2δ-1 protein is strongly expressed in the hippocampus; therefore it is possible that a change in expression of α2δ-1 in epileptic foci might influence the effectiveness of the gabapentinoid drugs. In contrast, α2δ-2 is expressed in a more restricted pattern; for example, it is strongly expressed in cerebellar Purkinje neurons. The loss of expression of α2δ-2 in Cacna2d2 mutant mouse strains, including "Ducky", results in cerebellar ataxia and spike-wave epilepsy, and is associated with severe Purkinje cell dysfunction. Furthermore, CACNA2D2 is disrupted in rare recessive human cases of epileptic encephalopathy. Therefore, interference with α2δ-2 function might intuitively be less likely to be the therapeutic target of the gabapentinoids in epilepsy, compared to disruption of α2δ-1 function. However, this would not exclude the possibility that there is localized alteration of α2δ-2 expression, which could be a therapeutic target in focal epilepsy. Thus it is possible that α2δ-2 levels or distribution might be affected in animal models of epilepsy. Unfortunately, there are currently no available α2δ-2 antibodies that are effective in immunohistochemistry, so at present this cannot be easily tested. However, it would be extremely useful to examine whether gabapentin is effective in epilepsy models using knockin mice in which either α2δ-1 or α2δ-2 is mutated so that it is gabapentin-insensitive. Recently, α2δ-1 has been found to interact with thrombospondins, and this interaction has been shown to be involved in synaptogenesis, a process which has been described as being independent of its function as a calcium channel subunit. Thrombospondins are a ubiquitous family of extracellular matrix proteins, which are secreted by many cell types, including microglia. In a number of experimental and human epilepsies there is reactive gliosis, microglial activation, and axonal sprouting. It has also been proposed that gabapentin inhibits the interaction between α2δ-1 and thrombospondins, and therefore inhibits synaptogenesis. Although this might be considered a plausible mechanism of action of gabapentinoid drugs in the treatment of epilepsies, the synaptic remodeling that occurs at epileptic foci is likely to have already occurred before the onset of treatment with these drugs. Nevertheless, it is possible that the gabapentinoid drugs may also modify epileptogenesis and decrease the consequences of status epilepticus by reducing cellular damage and seizure frequency. A.C.D. and J.G.R.J. conceived the study. P.J. performed in vivo procedures and monitoring. M.N.-R. performed and analyzed all hippocampal histology with the help of G.S. C.S.B. and M.N.-R. performed all DRG and spinal cord histology. A.C.D. and M.N.-R. wrote the paper, with input from all authors.
The auxiliary α2δ-1 subunit of voltage-gated calcium channels is up-regulated in dorsal root ganglion neurons following peripheral somatosensory nerve damage, in several animal models of neuropathic pain. The α2δ-1 protein has a mainly presynaptic localization, where it is associated with the calcium channels involved in neurotransmitter release. Relevant to the present study, α2δ-1 has been shown to be the therapeutic target of the gabapentinoid drugs in their alleviation of neuropathic pain. These drugs are also used in the treatment of certain epilepsies. The main finding of this study is that we did not identify somatic overexpression of α2δ-1 in hippocampal neurons in either of the epilepsy models, unlike the upregulation of α2δ-1 that occurs following peripheral nerve damage to both somatosensory and motor neurons. However, we did observe local reorganization of α2δ-1 immunostaining in the hippocampus, only in the kainic acid model, where it was associated with areas of neuronal cell loss, as indicated by absence of NeuN immunostaining; dendritic loss, as identified by areas where microtubule-associated protein-2 immunostaining was missing; and reactive gliosis, determined by regions of strong OX42 staining.
the current study is conceptually similar to the visuomotor adaptation task reported by Galea et al. Participants in the current study learnt to adapt to a counter-clockwise force perturbation delivered by a robotic manipulandum during reaching movements and, after learning had taken place, to de-adapt to the removal of the force perturbation. The key finding is that tDCS-induced plasticity, indexed by alterations in MRS-GABA, is a significant predictor of motor learning and memory, as measured by the magnitude of errors made during adaptation and de-adaptation on the robotic force perturbation task. Specifically, individuals with larger tDCS-induced decreases in MRS-GABA exhibited reduced reaching errors during adaptation and increased reaching errors during de-adaptation. Importantly, this finding extends the work reported by Galea et al. by demonstrating: that offline tDCS (i.e., the after-effects of tDCS stimulation) to M1 may influence both learning and memory performance; that the offline effect of anodal tDCS is to decrease MRS-GABA concentrations; and that individual differences in tDCS-induced alterations in MRS-GABA predict the efficacy of motor learning and motor memory. It is of interest to note that in the current study individual differences in baseline GABA concentration did not predict individual motor learning performance, measured either during the adaptation or de-adaptation phases of the force perturbation study. Instead, it was the tDCS-induced change in MRS-GABA that predicted motor learning. This lack of any significant correlation between baseline MRS-GABA level and motor learning performance is consistent with the findings reported by Stagg et al., who found that baseline MRS-GABA levels were positively correlated with measures of performance but did not predict learning performance in a serial reaction time paradigm. By contrast, Stagg et al. reported a positive correlation between the degree of GABA responsiveness to anodal tDCS and individual differences in learning on a sequence learning task which involved learning a repeating sequence of manual button-press responses. It has been argued previously that the force adaptation task used in the present study may involve a different form of learning to that observed in the serial reaction time task. Specifically, while the force perturbation task is thought to involve adaptation to a predictable force through the formation of an internal dynamic 'forward' model, the sequence learning task is thought to be a form of skill learning that is more dependent on success-based state-space exploration. Importantly, the results of the current study, together with those reported by Stagg et al., indicate that tDCS-induced plasticity in primary motor cortex, as indexed by alterations in MRS-GABA following anodal stimulation, predicted motor learning performance for both tasks. This indicates that GABAergic activity within the primary motor cortex may play an important role in modulating cortical excitability, thereby influencing motor learning and memory across a wide range of tasks.
Transcranial direct current stimulation (tDCS) is a non-invasive brain stimulation technique that alters cortical excitability in a polarity-specific manner and has been shown to influence learning and memory. tDCS may have both on-line and after-effects on learning and memory, and the latter are thought to be based upon tDCS-induced alterations in neurochemistry and synaptic function. We used magnetic resonance spectroscopy (MRS), together with a robotic force adaptation and de-adaptation task, to investigate whether tDCS-induced alterations in GABA and glutamate within motor cortex predict motor learning and memory. Note that adaptation to a robot-induced force field has long been considered to be a form of model-based learning that is closely associated with the computation and 'supervised' learning of internal 'forward' models within the cerebellum. Importantly, previous studies have shown that on-line tDCS to the cerebellum, but not to motor cortex, enhances model-based motor learning. Here we demonstrate that anodal tDCS delivered to the hand area of the left primary motor cortex induces a significant reduction in GABA concentration. This effect was specific to GABA, localised to the left motor cortex, and polarity-specific insofar as it was not observed following either cathodal or sham stimulation. Importantly, we show that the magnitude of tDCS-induced alterations in GABA concentration within motor cortex predicts individual differences in both motor learning and motor memory on the robotic force adaptation and de-adaptation task.
typically associated with low potential evapotranspiration, low atmospheric water vapor content and reduced specific humidity, possibly reinforcing drought conditions through increasing surface latent-heat flux related to decreasing soil moisture and through diminished net primary productivity. Summer dust event days display weak anomalies in lapse rates, relative humidity and surface geopotential heights. This suggests that land surface conditions become more significant in explaining the occurrence of summer dust events, related to diurnal surface heating from dry, bare soil surfaces. These dust events often occur with southerly and southwesterly winds and generate extensive soil erosion, associated with peak surface windspeeds and very high dust concentrations. Weak vertical wind shear during JJA dust events will isolate updrafts from the low-level moisture supply, i.e. the Great Plains low-level jet (GPLLJ), which was broadly weakened during the 1930s but exerts its strongest presence in this season. This allows more frequent dry, hot southwesterly winds across the Great Plains from Mexico and the southwest U.S. during these summer months. The fourth dust event mode discriminates cold-frontal haboobs, forming irrespective of season and dominantly occurring over the Texas panhandle with a uniform thermal gradient from the surface to 850 hPa, which is consistent with the passage of a cyclone. When these events occur, there are low levels of atmospheric water vapor, but soil moisture content and direct evaporation from bare soil surfaces are 25 to 70% higher than during other modes. The low geopotential height (GPH) at 1000 hPa and the deep convection indicated by the increased rates of direct evaporation further suggest the presence of a steep, low-pressure front, and correlate with events that began earlier in the day, persisted for ≥24 h, and reduced visibility to <10 km. The frequency of haboobs over panhandle counties suggests greatly diminished primary productivity, enabling such increases in evaporation from soil surfaces on any given day of the year. However, areas associated with this mode were neither heavily cultivated nor appreciably covered by eolian deposits, yielding uncertainty as to the land use or landforms contributing to dust event generation. Dust events propagated by thunderstorms, passing fronts, and/or the movement of low-pressure systems often transport dense concentrations of suspended particles and can be associated with an order-of-magnitude increase in electrostatic charge from saltating sand and Brownian motion of suspended particles, which is an underappreciated natural hazard. Many dust events during the Dust Bowl and earlier droughts were associated with substantial free electrical discharges that charred telephone poles, stranded cars, and disrupted power service. Particle collisions under particularly low humidity conditions induce the formation of a static electric charge, usually restricted to ~1 km above the surface. Thus, the tribocharging of sand grains was a potential factor in the intensification of dust events leading to exceptionally low visibility during the Dust Bowl, like conditions on the infamous Black Sunday, April 14, 1935. The ubiquity of 1930s dust events is often attributed to the use of unsuitable lands for cultivation. The principal components of dust events indicate that only 2.6% of the variance of observed dust events is attributable to the extent of land under cultivation by 1935. This would suggest that the persistence of dust events was not exclusively related to poor agricultural stewardship but was also influenced by the
desiccation of sandy lands left as range, which accounts for >5% of the variance of dust-event days. Indeed, sand dunes have developed on many areas throughout the Great Plains, particularly between the Arkansas and Canadian rivers within the specified area of severe wind erosion. However, this influence changes seasonally, with the timing of cultivation significant during spring and winter dust events, in keeping with Lee and Tchakerian's study of dust events on the Southern High Plains since 1947. In contrast, the influence of exposed eolian deposits in rangelands is apparently more significant than meteorological anomalies for summer dust events, likely related to elevated temperatures exacerbating soil moisture deficits and inhibiting the development of soil crusts strong enough to dampen eolian erosion on surfaces with high sand content. The PM10 dust fluxes measured by PI-SWERL in this study are similar to those measured from eolian sands in the Mojave Desert and China, and from crusted and disturbed loess in Nebraska. A broader assessment of the still and aerial photographic record and of primary documentation from Soil Conservation Service experiment stations reveals a range of land surface conditions, from fully vegetated to eolian-eroded, denuded surfaces during the DBD. This complex landscape mosaic is consistent with concepts of heterogeneous ecosystem response to extreme drying or precipitation variability. Twenty-first century dust events in the southern U.S. exhibit similar characteristics and are often point-sourced to cropland or rangeland on the Southern High Plains. PI-SWERL tests reveal that disturbed soils in this region begin to emit at a magnitude-higher rate than undisturbed surfaces when the threshold wind velocity is met, and this rate increases linearly with windspeed. Conversely, crusted, undisturbed surfaces do not begin to reach the same flux rate until much higher windspeeds, at which point the crusts are broken and emissivity rates increase rapidly, similar to disturbed surfaces. These higher threshold velocities are also common to variably crusted and clodded cultivated soils. Loamy sand and sandy loam soils are particularly potent emitters when compared to other soil textures, which is supported by numerous studies. On the Southern High Plains, Lee et al. identified sandy loam soils as a hot spot for dust point sources. In general, sandy soils are some of the most erodible by wind and are key sources of global dust. Significantly, the particle emissivity of undisturbed, loose sandy soils mirrors that of disturbed surfaces in relation to windspeed and the potential magnitude of dust emitted. This suggests that some bare,
sandier, uncultivated soils of the Southern High Plains could be dust sources equal to or greater than cultivated fields during periods of intense aridity causing native vegetation mortality. Cultivated surfaces are seasonally unavailable as dust sources, owing either to crop cover or to soil crust formation between agricultural treatments. In contrast, dunes and sandsheets persist as available dust sources year-round, with the development of biological surface crusts inhibited by extensive vegetation loss. Furthermore, the higher relief of eolian landforms, combined with the decreased threshold shear velocity from reduced plant cover, increases sand mobility under higher windspeeds. These sand grains, if blown into fields from surrounding denuded areas, could pulverize surface crusts, similar to the process of particles pitting and frosting automobile windshields during dust events. Associated Press journalist Robert Geiger, credited with coining the term "Dust Bowl," wrote of the dust as he travelled through Guymon, OK: "It gets into your clothes, literally in your hair, and sometimes it seems in your very soul. Certainly it gets under the skin". The anomalously elevated temperatures during the 1930s account for one-third of the variability in Dust Bowl dust event activity, which carries significant implications for a warming world. Dust sources on the Southern High Plains abound, with predominantly sandy soils often associated with antecedent dunes and cover sands. Climate models forecast significant aridity and decade-long droughts on the Great Plains for later in the 21st century, coincident with extreme, elevated summer temperatures: ideal conditions for vegetation mortality and the formation of haboobs, the quintessential characteristics of the DBD. Such continental-scale dust events would increase PM loads by >20 μg m−3 d−1 and would be detrimental to public health in nearby urban centers, and potentially across North America, dependent on synoptic conditions. Shortgrass prairie in the driest areas of the USGP may shift in ecosystem function with an increase in surficial heterogeneity, like the desert grasslands of the Southwest, and similar to the landscape response in the 1930s. As during the Medieval Climate Anomaly megadroughts, this could potentially precipitate an order-of-magnitude increase in mineral dust aerosol emissions from the Southern High Plains. The Dust Bowl of the 1930s was an iconic event of environmental degradation across the U.S.
Great Plains, with crop failure, denudation of uncultivated and cultivated lands, and numerous loci for the generation of fugitive dust. This study accessed primary historical archives, the Global Historical Climate Network, the 20th Century Reanalysis Project, and field surveys and measurements with a Portable In-Situ Wind Erosion Laboratory to assess the controls and character of dust event variability and soil surface emissivity across the Southern High Plains resulting from persistent and intense periods of aridity. For the first time, a continuous, quantitative and spatially explicit record of dust events was compiled across the core Dust Bowl region from April 1938 to May 1940, which enabled investigation of three main objectives related to meteorological and land surface conditions. In summary: Lower-level atmospheric and surface air temperatures are the strongest principal components driving the variance in Dust Bowl dust events, followed by low-level relative humidity. Anomalies in this thermal gradient and in the moisture carried by the Great Plains Low-Level Jet occurred on dust event days but were not present on days without dust events within the same season. The extent of antecedent eolian deposits in the Dust Bowl region explains more variance in dust events than the extent of land under cultivation. Four modes of dust events were identified, related to the season of occurrence and dominant meteorological controls, with land surface conditions a secondary factor. Two modes characterize "blowing season" events, with spring dust events related to an inversion of surface and atmospheric air temperatures, and summer dust events associated with intensified surface heating. The third mode of dust event occurs during the winter after an extended dry period, and the fourth dust event mode reflects the passage of vigorous cold-frontal haboobs occurring irrespective of season. The seasonal timing of agriculture is correlated with the occurrence of spring and winter dust events, whereas the presence of sandier soils and eolian landforms correlates strongly with the occurrence of summer dust events. Assessment of the potential PM10 emissivity of common dust sources across the Southern High Plains indicates that anthropogenic disturbance of surface crusts can increase the magnitude of particle emissions from siltier soils. However, loose, uncultivated sandy soils can emit levels of dust similar to those from disturbed cultivated surfaces, suggesting a more complex narrative than previously recognized for landscape degradation in the 1930s Dust Bowl.
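The threshold behaviour measured with PI-SWERL and summarised above lends itself to a compact illustration. In the sketch below, all numerical values (threshold windspeeds, flux slope) are invented for illustration; only the qualitative shape (no emission below a threshold, roughly linear growth for disturbed soils, and crusted soils joining that curve once a much higher crust-breaking windspeed is reached) follows the text.

```python
# Toy model of the PI-SWERL emission behaviour described above.
# Threshold windspeeds and the flux slope are hypothetical values.
def pm10_flux(u_ms: float, disturbed: bool) -> float:
    """Hypothetical PM10 flux (mg m^-2 s^-1) at windspeed u_ms (m s^-1)."""
    u_threshold = 6.0   # assumed emission threshold for disturbed soil
    u_crust = 11.0      # assumed windspeed at which crusts break
    slope = 0.4         # assumed linear flux increase per m/s

    if disturbed:
        return max(0.0, slope * (u_ms - u_threshold))
    if u_ms < u_crust:
        return 0.0      # intact crust suppresses emission
    # Broken crust behaves like a disturbed surface.
    return slope * (u_ms - u_threshold)

for u in range(4, 15):
    print(f"{u:2d} m/s  disturbed={pm10_flux(u, True):5.2f}  "
          f"crusted={pm10_flux(u, False):5.2f}")
```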
Mineral dust aerosols are a key component of the Earth system and a growing public health concern under climate change, as levels of dustiness increase. Combined with experiment station reports from the Soil Conservation Service, reanalysis data products, and contemporary field surveys using a Portable In-Situ Wind Erosion Laboratory (PI-SWERL), the study examined meteorological catalysts for dust events and the surficial dynamics of particle emission on the Southern High Plains (SHP). Results identified four dominant modes of dust events related to the season of occurrence and principal meteorological controls. This finding suggests that the prevalent sandier, rangeland soils of the SHP could be equal to or greater dust sources than cultivated fields during periods of sustained, severe aridity.
analysis, it is possible to analyse the variation of the IC according to the variation of each input parameter. Fig. 7 shows that the heating base temperature and heating schedule are the parameters with the largest influence on the heating demand assessment of this case study. For example, the results show that the IC for the ‘heating base temperature’ in an office building is 2.5. The results also show that the IC for the WWR, roof area, NHA, internal gains, and solar gains is below 0.5. In fact, it is only 0.16 for the ‘WWR’ input. If the analysed building is a residential building, the impact of this input parameter is 0.04. The WWR, cooling BT, and GS are the parameters with the largest influence on the cooling demand assessment. For example, the results show that the IC for the ‘cooling base temperature’ of a residential building is 4.3. The results also highlight the influence of the GS, with an impact of 2.5. The results show that the IC for inputs such as the RA and NHA is less than 0.5. In fact, the impact of the roof area is only 0.19. The analysis of the influence of all input parameters shows that some of the parameters with the largest influence are the most difficult to control or determine in buildings with unmonitored energy systems, because parameters such as the base temperature or schedule for heating and cooling can vary from one building to another or even from one day to another. However, when the energy system is monitored and the municipality has access to this information, the adaptation of these two parameters can be used to calibrate the model and considerably improve the accuracy of the DMM output. Furthermore, the results show that the influence of parameters such as the WWR and GS is considerable when the cooling energy demand is assessed by the DMM methodology. Based on the architectural typology, each municipality should calibrate its WWR values because it is difficult to standardise this parameter. Finally, due to the simplification of the DMM methodology, the solar gain parameter does not consider factors such as the orientation of each building or the shadows of surrounding buildings. Due to this simplification, the accuracy of the assessment of this parameter using the current version of this methodology is limited. Therefore, future work should focus on the improvement of this parameter. Once the district model is correctly calibrated, this methodology can be extrapolated to other districts or the entire city to deliver information relevant for the design of the energy plan of the city. This paper proposes a methodology that allows municipalities to carry out energy studies of their existing building stock. The motivation is based on the lack of tools or applications that allow municipalities to evaluate available data and obtain a general energy vision of the city. This methodology is divided into two main blocks: 1) the input data block, in which the necessary data sources are obtained and pre-processed, and 2) the district energy assessment block, in which the data are processed and the internal energy assessment algorithm is run to obtain results at the building level for the whole district. The methodology has been validated based on its application to the historical district of Antwerp. The results of the first validation show that the average difference between the results provided by this methodology and those obtained by Design Builder® is less than 11% for the heating demand, less than 11% for the cooling demand, and less than 10% for the DHW 
demand. In fact, the validation process shows that the main difference between the two methodologies lies in the assessment of two parameters: the thermal conductivity and the ventilation losses. The second validation process, which compares the results obtained by this methodology with the real natural gas consumption of 339 buildings of this case study, shows that the variation between the results and the real data for residential buildings is 11%. However, the variation increases to 32% for tertiary buildings because this study does not consider whether buildings have been refurbished, that is, whether their energy efficiency was improved or their initial use was modified, changing their thermal needs. In addition, a sensitivity assessment was proposed in this work, including the analysis of the relevance of each input parameter considered by the DMM methodology. Based on the ‘influence coefficient’, the heating base temperature and heating schedule are the parameters with the largest influence on the heating demand assessment. The WWR, cooling base temperature, and solar gains are the parameters with the largest influence on the cooling demand assessment. One of the drawbacks of the methodology is its dependency on the input data quality. This methodology intends to simplify the energy assessment with available data, integrating this type of assessment into municipal energy planning processes. Hopefully this will contribute to the improvement of the quality and availability of energy data and encourage the better and wider use of cadastral and energy information. However, it is very difficult to obtain detailed information to define, for example, the specific use of each building, because this methodology is based on cadastral information. In the future, refurbishment strategies should be included in this methodology to consider the age and level of architectural protection of buildings. Other future work might include the calibration of the results through new case studies, the search for new alternatives in case the municipality does not have cadastral data in an accessible format, and improvement of the visualisation of the results to generate a new 3D model for each evaluated district.
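Since the ‘influence coefficient’ drives the sensitivity discussion above, a minimal sketch of how such a coefficient can be computed is given below. The excerpt does not state the paper's exact formula, so the sketch assumes the common normalized-sensitivity form, and the toy heating model is purely illustrative:

```python
# Minimal sketch of an 'influence coefficient' (IC) computation, assuming the
# common normalized form IC = (relative change in output) / (relative change
# in input). The toy model below is a hypothetical stand-in, not the DMM.
from typing import Callable

def influence_coefficient(model: Callable[[float], float],
                          base_input: float,
                          perturbation: float = 0.05) -> float:
    """IC for a one-parameter perturbation of the model input."""
    base_output = model(base_input)
    perturbed_output = model(base_input * (1.0 + perturbation))
    relative_output_change = (perturbed_output - base_output) / base_output
    return relative_output_change / perturbation

# Hypothetical toy model: heating demand proportional to the gap between the
# heating base temperature and a fixed mean outdoor temperature of 10 degC.
heating_demand = lambda base_temp_c: 120.0 * max(base_temp_c - 10.0, 0.0)
print(influence_coefficient(heating_demand, base_input=20.0))  # IC = 2.0
```

An IC of 2.0 here means a 1% change in the base temperature shifts the estimated demand by about 2%, the same order as the 2.5 reported for the heating base temperature in the office case.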
Municipalities play a key role in supporting Europe's energy transition towards a low-carbon economy. However, there is a lack of tools allowing municipalities to easily formulate a detailed energy vision for their city. Nevertheless, most municipalities have access to georeferenced cartographic and cadastre information, including that on basic building characteristics. This article describes an innovative method to calculate and display the current hourly thermal energy demand for each building in a district based on basic cartography, cadastre, and degree-day values. The method is divided into two main blocks: (1) input data processing to obtain geometric information (e.g. geolocation, building and facades’ dimensions) and semantic data (e.g. use, year of construction), and (2) district energy assessment to calculate the thermal energy demand using the data obtained in block 1. The proposed method has been applied and tested in the historical district of Antwerp. The reliability and thoroughness of the results obtained using the method are demonstrated based on two different validations: (1) comparison of the results with those calculated using an existing dynamic energy simulation tool, and (2) comparison of the results with the real gas consumption of a partial sector of the selected district. The first validation shows that the average difference between the two methodologies is less than 11% for the heating demand, less than 11% for the cooling demand, and less than 15% for the domestic hot water demand. The second validation shows a 24% difference between the real natural gas consumption and that obtained by the new methodology. Finally, the results have been presented to the municipality of Antwerp, which plans to use the method to design the district heating expansion within the city centre. Furthermore, a sensitivity assessment was used to determine the relevance of the main input parameters considered in this method, such as the base temperature, energy system schedules, window-to-wall ratio, and solar gains.
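As a rough illustration of the degree-day core on which such a method can rest (the actual DMM algorithm is hourly, per building, and includes gains), the sketch below estimates heating demand from degree-days and an assumed envelope U-value; all names and numbers are illustrative, not the paper's data:

```python
# Minimal degree-day heating demand sketch. Assumes a steady-state
# transmission-loss model Q = U * A * HDD * 24 h / 1000 (kWh); the base
# temperature, U-value and area below are illustrative placeholders.

def heating_degree_days(daily_mean_temps_c, base_temp_c=18.0):
    """Sum of positive gaps between the base temperature and daily means."""
    return sum(max(base_temp_c - t, 0.0) for t in daily_mean_temps_c)

def heating_demand_kwh(u_w_per_m2k, envelope_area_m2, hdd_kdays):
    """Transmission losses over the degree-day total, in kWh."""
    return u_w_per_m2k * envelope_area_m2 * hdd_kdays * 24.0 / 1000.0

temps = [5.0, 8.0, 12.0, 19.0]       # toy daily mean outdoor temperatures
hdd = heating_degree_days(temps)     # 13 + 10 + 6 + 0 = 29 K*day
print(heating_demand_kwh(u_w_per_m2k=1.2, envelope_area_m2=400.0,
                         hdd_kdays=hdd))  # ~334 kWh over these four days
```

This also makes the sensitivity result above intuitive: raising the base temperature raises every daily gap at once, so the demand responds more strongly to it than to geometry parameters such as the roof area.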
used to investigate collagen expression after 8 weeks. It demonstrated that organized collagen was more abundant in MBG/SF scaffolds than in MBG/PCL composite scaffolds. OCN staining was used to test the stem cell osteogenic differentiation abilities of the composite scaffolds. As shown in Fig. 12, the results illustrated that MBG/SF scaffolds possessed significantly better osteogenic differentiation ability than MBG/PCL scaffolds. Taken together, these results show that MBG/SF scaffolds have favorable osteogenic ability in vivo. To further evaluate the capabilities of the scaffolds in ectopic osteogenesis in vivo, we isolated total RNA from the new tissue after the scaffolds had been implanted for 8 weeks and analysed it by qRT-PCR. Similar to the in vitro gene expression results, both MBG/SF and MBG/PCL scaffolds promoted osteogenesis-related gene expression, as shown in Fig. 13. For the expression of BMP-2, the result for MBG/SF scaffolds was 0.141 ± 0.018, significantly higher than that of MBG/PCL scaffolds. For the expression of BSP, the result for the MBG/SF scaffold was 0.052 ± 0.001, while that for MBG/PCL was 0.034 ± 0.005. The results indicated that, compared with MBG/PCL scaffolds, MBG/SF scaffolds can promote BMP-2 and BSP expression more effectively in vivo. MBG/SF composite scaffolds were 3D printed and post-processed, yielding three-dimensional, porous constructs with good mechanical strength. Moreover, the MBG/SF scaffolds possessed good bioactivity both in vitro and in vivo. Significantly, after implantation of the MBG/SF scaffolds into the backs of nude mice, the MBG/SF composite scaffolds exhibited better new bone formation ability than the control group. In conclusion, the 3D printed MBG/SF scaffolds could be a very promising candidate for bone tissue engineering.
The fabrication of bone tissue engineering scaffolds with high osteogenic ability and favorable mechanical properties is of great interest. In this study, a silk fibroin (SF) solution of 30 wt% was extracted from cocoons and combined with mesoporous bioactive glass (MBG) to fabricate MBG/SF composite scaffolds by 3D printing. The porosity, compressive strength, degradation and apatite-forming ability were evaluated. The results illustrated that MBG/SF scaffolds had superior compressive strength (ca. 20 MPa), good biocompatibility, and enhanced bone formation ability compared to mesoporous bioactive glass/polycaprolactone (MBG/PCL) scaffolds. We subcutaneously transplanted hBMSCs-loaded MBG/SF and MBG/PCL scaffolds into the backs of nude mice for a heterotopic bone formation assay in vivo, and the results revealed that the gene expression levels of common osteogenic biomarkers on MBG/SF scaffolds were significantly higher than on MBG/PCL scaffolds. These results show that 3D-printed MBG/SF composite scaffolds hold great promise for bone tissue engineering.
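The qRT-PCR comparisons above rest on normalized expression values. One common way to obtain such values is the 2^-ΔΔCt (Livak) method; whether the authors used exactly this scheme is not stated in the excerpt, so the sketch below, with made-up Ct values, is only illustrative:

```python
# Minimal sketch of the 2^-DeltaDeltaCt (Livak) relative-expression method.
# This is an assumed normalization scheme for illustration; the Ct values and
# the housekeeping gene choice below are hypothetical, not the study's data.
def relative_expression(ct_target, ct_reference, ct_target_cal, ct_ref_cal):
    """Target-gene expression normalized to a reference gene and calibrator."""
    delta_ct_sample = ct_target - ct_reference          # sample of interest
    delta_ct_calibrator = ct_target_cal - ct_ref_cal    # calibrator/control
    return 2.0 ** -(delta_ct_sample - delta_ct_calibrator)

# Hypothetical Ct values for an osteogenic marker such as BMP-2 against a
# housekeeping gene (e.g., GAPDH):
print(relative_expression(ct_target=24.1, ct_reference=17.3,
                          ct_target_cal=26.4, ct_ref_cal=17.2))  # ~5.3-fold
```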
further carbonation. The DAC experiments were performed for five cycles of calcination/ambient-carbonation in two sets of tests designated as DAC-CIN and DAC-COUT. The conversions over time for the cyclic DAC-CIN and DAC-COUT experiments are shown in Fig. 5 and Fig. 6, respectively, where higher carbonation conversions are achieved during the cycles. The data for the temperature and relative humidity of DAC-COUT are presented in Figs. S4 and S5 of the SI. The results on conversions are consistent with those of DAC-OUT, highlighting again the importance of moisture in air for the ambient carbonation of CaO-based sorbents. DAC performance seems unaffected by the number of cycles, as stable carbonation conversion profiles are obtained over the course of five cycles. Therefore, it can be concluded that cycling has a negligible effect on CaO-based materials under the DAC conditions. The reason behind this superior performance is the simultaneous reactivation of the cycled material when in contact with humid air. Namely, it has been suggested that humid ambient air reactivates CaO-based materials exposed to calcination/carbonation cycles and enables the carrying capacity to be restored. This finding is important since the technology can be deployed at large scale if the CaO-based sorbent is used in a cyclic mode. Also, sorbent reactivation and its use in longer series of capture cycles aid the economics of the proposed DAC system, since the economic performance of the process is strongly affected by the sorbent cost and activity. The SEM images of the samples at different stages are presented in Figs. 7 and 8. It can be seen that there is no substantial difference between DAC-IN and DAC-OUT at the end of their exposure to air. Their morphology appears to be similar and most of the pores are closed due to CaCO3 formation. This is consistent with the results on conversion, as both of these samples had almost the same carbonation conversions at the end of the exposure to air. Fig. 7 also presents the images of DAC-HYD after hydration and after exposure to air. These two morphologies differ greatly, the first being very compact and dense after the lime reacted with water. The sample presented in Fig. 7d appears to have a similar morphology to those of DAC-IN and DAC-OUT. This can be expected since the carbonation conversions of these samples are very similar at the end of their exposure to air. In Fig. 
8, the SEM images of DAC-CIN and DAC-COUT before and after the third cycle are presented. In this case, subtle differences can be found in the morphology, with DAC-COUT having lower porosity than DAC-CIN due to CaCO3 formation. Indeed, carbonation conversion after the cycles was substantially lower for DAC-CIN than for DAC-COUT, i.e., 29% compared to 53% at the end of the third cycle. DAC technologies are expected to be essential in the future in order to capture CO2 that cannot be removed via large point-source CCS. One of the main challenges of DAC processes is the associated costs, which are often regarded as the main obstacle to scaling up DAC processes. This work presents a novel DAC process utilising a cheap and environmentally friendly sorbent without the need for costly equipment or reactors. CaO-based materials are a viable solution for DAC by means of exposing them to ambient air in a thin layer. High levels of carbonation by ambient air can be achieved on a time scale of weeks to months, which is acceptable from an engineering point of view if these materials are used in the proposed DAC process. Importantly, there was no loss of activity during the cycles, which implies that the DAC process requires neither additional reactivation nor disposal of cycled material; this is explained by simultaneous reactivation by humid air and can significantly enhance the economics of the process and its technical feasibility. Also, it should be highlighted that limestone is cheap, environmentally friendly and widely available, making it a suitable candidate for the DAC technologies considered in our recent studies. However, the availability of land should be further investigated, as it could be one of the constraints of the DAC process explored in this study. In order to aid the scale-up of this technology, DAC processes should be developed with a view to the future practical applications of the technology. Investigations of the suitability of these sorbents under variable, realistic atmospheric conditions and geographic locations are scarce and thus necessary, as are techno-economic analyses and the investigation of other suitable sorbents. The authors declare no competing financial interest.
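For readers following the conversion figures quoted above, a minimal sketch of how carbonation conversion is typically derived from the mass gain of an initially calcined sample is shown below; it assumes a pure CaO starting mass and that all mass gain is CO2 uptake, and the example masses are illustrative:

```python
# Minimal sketch of gravimetric carbonation conversion for a CaO sorbent:
# CaO + CO2 -> CaCO3, so every mole of mass gain (as CO2) converts one mole
# of CaO. Assumes an initially pure, fully calcined sample.

M_CAO = 56.08   # g/mol
M_CO2 = 44.01   # g/mol

def carbonation_conversion(m_initial_g: float, m_current_g: float) -> float:
    """Fraction of CaO converted to CaCO3, from the sample's mass gain."""
    moles_cao = m_initial_g / M_CAO
    moles_co2 = (m_current_g - m_initial_g) / M_CO2
    return moles_co2 / moles_cao

# Illustrative masses: a 10.00 g CaO sample weighing 14.16 g after exposure
# corresponds to ~53% conversion, the level reported for DAC-COUT at the end
# of the third cycle.
print(round(carbonation_conversion(10.00, 14.16), 2))  # 0.53
```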
Carbonation of lime-based materials at high temperatures has been extensively explored in processes for the decarbonisation of the power and industrial sectors. Furthermore, faster weathering and higher conversions are demonstrated by hydrated lime, which shows a carbonation conversion of 70% after 300 h. Importantly, it was found that there was a negligible difference in the carbonation conversions over five calcination/ambient-carbonation cycles, which can be explained by the simultaneous reactivation of cycled material by moist air. These findings indicate that lime-based materials are suitable for carbon dioxide capture from ambient air employing cyclic processes, on a practical time scale, and that the humidity of the air plays a key role.
effects on LC16m8-vaccinated animals, it caused disseminated vaccinia in macaques immunized with Dryvax. Furthermore, as both the Dryvax and LC16m8 vaccines protect healthy macaques from a lethal intravenous monkeypox challenge, their data identify LC16m8 as a safer yet effective alternative to the ACAM2000 and Dryvax vaccines for both healthy and immunocompromised individuals. This study strongly supports the idea that CD4+ and CD8+ T-cell immunity plays a crucial role in protection, and that the lack of normal B5 protein does not affect this ability. Our study also strongly suggests that LC16m8 induces innate immunity that partially contributes to protection at the early stages after vaccination, because vaccination with LC16m8 protected wild-type mice challenged 2 to 4 days after vaccination: humoral, as well as CD4+ and CD8+ T-cell-mediated, immune responses may not yet be induced, or are very weak, at these stages. Vaccination with LC16m8 or Lister induced comparable partial protection in MHC class I- and II-double-deficient mice lacking both T-cell- and B-cell-mediated immunity, which strongly supports the above speculation on the contribution of innate immunity to immediate protection. Mice infected with various viruses, including VACV, develop natural killer cell activity from day 1 to day 10 after infection, which is consistent with the inference from our results that the protective activity of LC16m8 observed at the early stages after vaccination in wild-type mice and in various immunodeficient mice is conferred by innate immunity. Microorganisms, such as viruses, that invade a vertebrate host are initially recognized by the innate immune system through pattern-recognition receptors. Several classes of these receptors, including Toll-like receptors and cytoplasmic receptors, recognize distinct microbial components and induce the release of interferon-α and interferon-β. Induction of innate immunity by live LC16m8 virus is considered to activate cell-mediated and humoral immunity, as shown by the good protection of mice against lethal challenge for at least 1 year after vaccination. VACV antigens present on the cell surface induce acquired immunity, including humoral and T-cell-mediated immunity, whereas VACV antigens inside cells contribute to the production of interferons, pro-inflammatory cytokines and chemokines, and the NK cells responsible for innate immunity. This study strongly suggests that LC16m8, with its truncated B5 protein, induces innate immunity and subsequent cell-mediated and humoral immunity almost completely comparable to that of its parental strain Lister. HY, YS, TK, SM, MK and HM are employees of Kaketsuken. SH declares that he has no conflicts of interest.
Background: The attenuated vaccinia virus strain LC16m8, defective in the B5R envelope protein gene, is used as a stockpiled smallpox vaccine strain in Japan against bioterrorism; the defect in the B5R gene mainly contributes to its highly attenuated properties. Methods: The protective activity of the LC16m8 vaccine against challenge with a lethal dose of the vaccinia Western Reserve strain was assessed in wild-type and immunodeficient mice lacking CD4, MHC class I, MHC class II, or MHC class I and II antigens. Results: Immunization with LC16m8 induced strong protective activity, comparable to that of its parent Lister (Elstree) strain, in wild-type mice from 2 days to 1 year after vaccination, as well as in immunodeficient mice at 2 or 3 weeks after vaccination. These results imply that the defect in the B5R gene hardly affects the ability of LC16m8 to induce innate, cell-mediated and humoral immunity, and that LC16m8 could be effective in immunodeficient patients. Conclusion: LC16m8, with its truncated B5 protein, induces immunity, including innate immunity and subsequent cell-mediated and humoral immunity, almost completely comparable to that of its parental strain Lister.
Silica-based optical fibers are used for a large variety of applications, ranging from high-speed, high-bandwidth data communications and diagnostics to point or distributed temperature or strain sensing. Most of these applications exploit their low attenuation, which typically ranges below one dB km−1 at infrared telecom wavelengths. All these intrinsic advantages explain why fibers are today widely used in telecommunications, structural health monitoring, the oil and gas industries and in medicine. A very particular application case concerns their integration in harsh environments associated with ionizing or non-ionizing radiations, such as those encountered in space, high energy physics facilities, and fusion- and fission-related facilities. Radiation usually strongly alters the functionality of commercial microelectronic technologies, preventing, as a general rule, their use for dose levels exceeding a few Gy. Optical fibers are generally used as part of data links for signal transport from the irradiation zones to an instrumentation zone free of radiation, later as key components of diagnostics, and today they serve as the sensitive element of numerous sensor architectures, either punctual or distributed technologies based on Rayleigh, Brillouin or Raman scattering. It has been shown over more than 50 years that radiation degrades the fiber optical properties in a very complex way, the main change being called radiation induced attenuation (RIA). RIA corresponds to a decrease of the fiber transmission capability. The RIA levels and kinetics strongly differ from one fiber to another. Numerous studies have been devoted to the analysis of the underlying parameters driving this phenomenon, especially to develop more radiation tolerant devices. To understand the RIA origins, the basic mechanisms of the radiation effects at the molecular scale have to be studied. Nowadays, it is well established that point defects are created by either ionization or displacement damage processes, leading to structural modifications in the pure or doped amorphous host silica matrix of both fiber core and cladding. These radiation-induced point defects are associated with optical absorption bands causing the observed excess loss under irradiation. Numerous experimental spectroscopic studies have been devoted to the characterization of the structure and the optical or electronic properties of these point defects in silica, as well as to the understanding of their thermal or photo-stabilities. The identification and attribution of these active centers to a given molecular organization/configuration is based on their specific signature responses, which can be highlighted by using several complementary techniques, such as spectroscopic ones. A number of review papers, volumes, book chapters and PhD theses have been devoted to the analysis of these optically-active point defects, mainly focusing on pure silica and Ge-doped silica. Even if a large part of these studies is of fundamental interest to understand the optical fiber responses, the fiber case notably differs from that of bulk glass. Indeed, a fiber is manufactured by the successive deposition of numerous differently doped silica layers. Each of these layers will present a different radiation response, leading to a non-homogeneous generation of defects over the optical fiber transverse cross-section. In addition to this composition inhomogeneity, the fibers are characterized by different internal stress levels in the various layers and at their interfaces, this stress being 
related to their manufacturing and drawing process. This stress will also affect the generation efficiencies of some of the defects; this is the case, for example, of the non-bridging oxygen hole centers, which are more easily created from strained Si-O-Si bonds than from regular ones. Furthermore, in optical fibers, the light can only propagate through guided modes, which are limited in number; the relative light power associated with the mode propagating in each layer depends on the waveguide properties, and so does the contribution of the defects to the global RIA. For single-mode fibers at telecom wavelengths such as 1550 nm, between 15% and 40% of the light can be guided in the claddings of some specialty optical fibers. From the modeling point of view, guiding properties, as a function of the fiber geometry, can be obtained by numerically solving Maxwell's equations using experimentally determined macroscopic dielectric constants. However, the main theoretical efforts have been, and still are, mainly focused on the assignment between experimental spectroscopic signatures and point defect atomic structures, including attempts to understand generation and conversion mechanisms. In this review, we focus the analysis on the fiber radiation response in the 300 nm–2000 nm spectral domain, discussing the main defects responsible for the fiber degradation when exposed to transient or steady state irradiations. Even if active optical fibers such as the Erbium- and Erbium-Ytterbium-doped ones are not directly discussed, it is today well established that their radiation responses are explained by the host glass matrix selected for their incorporation, which usually contains phosphorus and/or aluminum dopants. In this section, we briefly describe the radiation effects on optical fibers and their impact on the related applications; more details can be found in the recent review. Three major effects are observed that can impact the functionality of fiber-based technologies: the radiation induced attenuation (RIA), the radiation induced emission (RIE), and the radiation induced refractive index change (RIRIC). The relative importance of these three phenomena depends on the considered fiber, the harsh environment and the fiber profile of use. RIA affects almost all targeted applications, whereas RIE can generally be mitigated for most applications/environments. RIE can also be used for dosimetry purposes, exploiting the Cerenkov emission or the radioluminescence. RIRIC, or compaction, is mainly observed under neutron exposure and will especially affect the performances of fiber-based sensors exploiting the glass structure to monitor environmental parameters and in-core instrumentation, exposed
Under irradiation, the macroscopic properties of optical fibers are modified through three main basic mechanisms: the radiation induced attenuation, the radiation induced emission and the radiation induced refractive index change. Considering the strong impact of radiation on key applications such as data transfer or sensing in space, fusion- and fission-related facilities or high energy physics facilities, numerous experimental and theoretical studies have been conducted since the 1970s to identify the microscopic origins of these changes. The observed degradation can be explained through the generation, by ionization or displacement damage, of point defects in the differently doped amorphous glass (SiO2) of the fiber's core and cladding layers.
to very high fluences of fast neutrons, above 10^19 n cm−2, and to high doses of associated gamma radiation. RIA corresponds to an increase of the fiber attenuation when exposed to radiation. RIA levels and kinetics depend on many parameters, which are reviewed in Fig. 1 and explain why this research domain is still strongly active. Indeed, the response of a given silica-based fiber has been shown to depend on the irradiation characteristics: the dose (or fluence), the dose rate (or flux) and the temperature of irradiation; and on its profile of use: the injected light power and the operating wavelength. Furthermore, the RIA levels strongly depend on the fiber intrinsic characteristics, such as the composition of its core and cladding, its manufacturing process, its opto-geometric parameters and its light guiding properties. During irradiation of the fiber at room temperature, RIA growth is usually observed and, when the irradiation stops, for most fibers the RIA partially decreases, reaching a permanent value that depends strongly on the temperature. This behavior can be explained by competitive defect generation and bleaching mechanisms occurring during irradiation, while bleaching mechanisms clearly dominate the post-irradiation processes. The main parameter controlling the fiber radiation response in terms of RIA is its core and cladding composition. Usually, the fiber doping profiles are optimized to achieve two main objectives: the first is the design of the fiber refractive-index profile that defines the guided modes, confinement factor or sensing properties, such as its Brillouin signature. The second is to ensure that the glass presents very low attenuation, reducing the absorption and Rayleigh scattering levels close to their theoretical limits. For these reasons, the common doping elements are quite limited in number for passive optical fibers: germanium, fluorine, boron, phosphorus, aluminum and nitrogen. Most telecom-grade optical fibers possess a Ge-doped core and either a pure silica cladding or a cladding doped with a combination of the Ge, P and F dopants. Phosphorus and aluminum dopants are widely used in the core of active optical fibers to reduce the clustering of the rare-earth ions and thus improve the performances of fiber-based amplifiers or lasers. As fibers containing one of these two dopants were shown to be very radiation sensitive, the response of passive Al- and P-doped optical fibers is today more and more studied, pushed by the increasing need for distributed dosimetry techniques. Boron is also widely used today in polarization-maintaining optical fibers or to increase the photosensitivity of germanosilicate optical fibers for easier FBG writing. No recent study has really been devoted to the impact of this dopant on the fiber radiation vulnerability. There exists another class of optical fibers, less studied in the literature, whose cores are doped with nitrogen through the reduced-pressure plasma chemical vapor deposition process. These N-doped fibers present several very interesting properties, including a good radiation tolerance at low doses for both transient and steady state irradiations. These fibers also present strong radioluminescence and optically stimulated luminescence that can serve for online monitoring of photon or proton beams. Another class of fibers is not covered by this review, the microstructured and photonic band gap (PBG) fibers, as only a few papers deal with their radiation response. The PBG fibers present very low RIA 
levels under steady state γ-ray irradiation thanks to their air-core and pure silica structure; for high dose rate transient exposures their potential appears promising in comparison with usual PSC fibers, but needs more investigation, as an unusual dose dependence of the RIA was observed. Fig. 2 illustrates the RIA dose dependence at 1550 nm observed for four different optical fibers during a steady state X-ray irradiation: a pure-silica core (PSC), F-doped cladding optical fiber and Ge-doped, P-doped and Al-doped fibers, all with pure silica claddings. From this figure, it is evident that at 1550 nm, the third telecommunication window, the four fibers present very different vulnerabilities. The PSC fiber presents the lowest level of induced losses at this wavelength, with an RIA of about 40 dB km−1 after a 1 MGy dose. Indeed, PSC fibers, together with F-doped fibers, have been shown to be the most radiation-hardened waveguides for applications having to operate at such high dose levels: ITER, nuclear waste repositories, the nuclear industry or high energy physics facilities. In this case, their radiation response is even more complex, as the RIA levels and kinetics are strongly influenced by the amounts of impurities and by the glass properties: fictive temperature, stoichiometry. By affecting the glass structure and disorder, variations of the fiber manufacturing or drawing processes change its radiation sensitivity by modifying the nature and concentration of the sites acting as precursor reservoirs for the generation of the optically active radiation-induced defects: strained bonds, oxygen-deficient centers, oxygen-excess centers. Hydrogen also plays a key role in the defect creation and bleaching mechanisms and thus in defining the fiber vulnerability. Indeed, hydrogen, coming either from the harsh environment or generated by radiation through interaction with the fiber coating or cable material, is able to easily diffuse into optical fibers, with kinetics depending mainly on the temperature and the fiber geometric parameters. After reaching the fiber core, hydrogen interacts with the radiation-induced defects, passivating some of them. As a consequence, hydrogen loading was investigated for the radiation hardening of optical fibers, showing promising results mainly for pure-silica core fibers in the visible-near infrared and for active Er- and ErYb-doped optical fibers at the pump and signal wavelengths. This hardening technique is of no practical
Silica-based optical fibers, fiber-based devices and optical fiber sensors are today integrated in a variety of harsh environments associated with radiation constraints. Depending on the fiber profile of use, these phenomena contribute differently to the degradation of the fiber performances and thus have to be either mitigated for radiation tolerant systems or exploited to design radiation detectors and dosimeters. Indeed, the fiber chemical composition (dopants/concentrations) and elaboration processes play an important role. In this review paper, the responses of the main classes of silica-based optical fibers are presented: radiation tolerant pure-silica core or fluorine-doped optical fibers, germanosilicate optical fibers, and radiation sensitive phosphosilicate and aluminosilicate optical fibers.
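As a minimal illustration of how induced-loss figures such as the ~40 dB km−1 quoted above are obtained, the sketch below converts transmitted powers measured before and during irradiation into a length-normalised RIA; the function name and example numbers are assumptions for illustration, not the authors' code:

```python
import math

def ria_db_per_km(p0_watts: float, p_watts: float, fiber_length_km: float) -> float:
    """Radiation induced attenuation (RIA) from transmitted powers.

    p0_watts: power transmitted before irradiation
    p_watts:  power transmitted during/after irradiation
    Returns the excess loss in dB/km, normalised by the fiber length.
    """
    return -10.0 * math.log10(p_watts / p0_watts) / fiber_length_km

# Illustrative numbers only: a 100 m sample whose transmission drops to 40%
# corresponds to ~40 dB/km of induced loss, the order of magnitude quoted
# for the PSC fiber at 1550 nm after a 1 MGy dose.
print(ria_db_per_km(p0_watts=1.0, p_watts=0.40, fiber_length_km=0.1))
```

Normalising by length is what makes the dB/km values of different fiber samples, and of different studies, directly comparable.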
phosphosilicate glasses are well adapted to dosimetry applications, as well as to serving as a host matrix for active optical fibers. Hardening studies on Er and Er-Yb fiber amplifiers show that the P-related defects, especially P1 centers, can be efficiently bleached by hydrogen-loading of the glass or by Ce-codoping. Another particularity of this defect is that its concentration can increase after the end of the irradiation, as it was shown that POHCs can recombine into P1 at room temperature. The literature devoted to Al-related point defects remains significantly scarcer than for the other silica-associated centers. First results were obtained for natural silica that contains Al as an impurity; only in a few cases do the investigated samples have Al contents comparable to those used for optical fiber manufacturing. As a consequence, whereas for natural silica it is accepted that Al can be inserted in the glass matrix replacing Si, with an alkaline charge compensator as neighbor, for doped silica this aspect still needs further investigation to be confirmed. Table 4 details the optical absorption bands associated with the Al-defects. As shown in Fig. 7, this defect set is sufficient to describe the RIA measured in the UV–visible spectral range in aluminosilicate fibers, both during X-ray steady state irradiation and a few seconds after a pulsed X-ray. Regarding these attributions, we notice that the relation between the Al-OHC and the 2.3 eV OA band is well supported by various investigations, whereas the others still have to be confirmed. As an example, there are some investigations in which the 2.3 eV OA band is clearly present in the RIA spectrum whereas the 3.2 eV band seems absent or has a smaller relative amplitude with respect to those observed elsewhere. In any case, the bands reported in Table 4 are not sufficient to fit the data from the UV to the NIR. Recent investigations have highlighted that Al-doped fibers are good candidates for radiation detection; a better understanding of the Al insertion in the silica matrix, as well as of the properties of its related defects, certainly represents one of the future challenges. In this review paper, we presented the main macroscopic radiation effects on optical fibers: the radiation induced attenuation, the radiation induced emission and the radiation-induced refractive index change. The amplitudes and kinetics of these changes depend on a large number of parameters, some of them related to the fibers themselves, such as composition or manufacturing processes; others are extrinsic, such as those related to the irradiation characteristics and the fiber profile of use. These macroscopic effects can mainly be explained by the point defects generated by radiation in the pure or doped silica layers constituting the fiber core and cladding. Understanding their generation and bleaching mechanisms and identifying their optical properties and thermal stabilities allow devising ways to control the fiber radiation response. Such optimization is done to enhance the fiber radiation resistance (hardening studies), as successfully proved by the H2 loading treatments of the fibers or by the codoping of P- or Al-doped fibers with Ce3+ ions. There is also today an increasing interest in using radiation sensitive optical fibers for radiation detection or dosimetry. In this paper, we presented typical responses of the four main types of fibers to both pulsed X-ray irradiation and steady state X-ray or γ-ray irradiation. We reviewed today's 
knowledge about the point defects related to silica, Ge, P and Al and discussed how the known defects can explain the measured RIA. Though the identified defects are usually able to reproduce the excess loss in the UV and visible for most fiber types, it is clearly shown that we are still missing part of the infrared RIA origins, despite the widespread use of fibers at telecom wavelengths. Historically, most of the acquired knowledge was obtained by combining various experimental spectroscopic techniques such as absorption, luminescence, or electron paramagnetic resonance. Today, with the establishment of very accurate and parameter-free first-principles approaches such as GW-BSE and GIPAW, it becomes possible to somewhat overcome the experimental limitations and to support the correlation between defect structure and experimentally observed signatures. However, the application of accurate parameter-free approaches, which overcome the known drawbacks of density-functional-based frameworks, to complex models larger than a few hundred atoms requires an order-of-magnitude leap in computational power to be achieved successfully. In addition, the accurate modeling of self-trapping through electronic excitations, as well as of luminescence, is not yet straightforward. Only one group has been able to perform the extremely computationally demanding calculation of the structure and luminescence of self-trapped excitons in quartz within the GW-BSE framework. On the other hand, while point defects in pure silica have attracted most of the theoretical efforts, primarily because of their interest for microelectronics, the number of available studies decreases significantly for Ge-doped silica and even further for other dopants relevant for optical fibers. Indeed, entire classes of dopants, impurities and related defects even lack an atomic-scale structural model that reproduces some experimental spectroscopic signatures and/or explains some measured behavior. The drawing of an exhaustive map of defects and dopants, together with their spectroscopic signatures and their generation and conversion mechanisms, that links experimental results with atomic-scale models is a priority.
Consequently, identifying the nature, the properties and the generation and bleaching mechanisms of these point defects is mandatory in order to devise ways to control the fiber radiation behavior. Our current knowledge about the nature and optical properties of the point defects related to silica and its main dopants is presented. The efficiency of the known defects in reproducing the transient and steady state radiation induced attenuation in the 300 nm to 2 µm wavelength range is discussed. The main parameters, related to the fibers themselves or extrinsic (harsh environment, profile of use), affecting the concentration, growth and decay kinetics of those defects are also reviewed. Finally, the main remaining challenges are discussed, including the increasing need for accurate and multi-physics modeling tools.
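The review describes decomposing measured RIA spectra into sums of known optical absorption bands (cf. the Al-defect bands of Table 4). A minimal sketch of such a decomposition, with fixed band centres and widths and only non-negative amplitudes left free, is given below; the band parameters are placeholders, not values from the review:

```python
# Minimal sketch of fitting an RIA spectrum as a sum of Gaussian optical
# absorption bands with fixed centres/widths (non-negative amplitudes only).
# The band list is a placeholder standing in for tabulated defect bands,
# e.g. an Al-OHC-like band near 2.3 eV; it is not the review's parameter set.
import numpy as np
from scipy.optimize import nnls

def gaussian(e_ev, center_ev, fwhm_ev):
    """Unit-amplitude Gaussian band on a photon-energy axis (eV)."""
    sigma = fwhm_ev / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((e_ev - center_ev) / sigma) ** 2)

bands = [(2.3, 0.7), (3.2, 0.8), (4.1, 1.0)]  # (center, FWHM) placeholders

def fit_band_amplitudes(e_ev, ria_db_per_km):
    """Non-negative least-squares amplitudes for the fixed Gaussian bands."""
    design = np.column_stack([gaussian(e_ev, c, w) for c, w in bands])
    amplitudes, _residual = nnls(design, ria_db_per_km)
    return amplitudes, design @ amplitudes  # per-band weights, fitted RIA

# Illustrative use with synthetic data:
e = np.linspace(1.5, 5.0, 200)
measured = 30 * gaussian(e, 2.3, 0.7) + 12 * gaussian(e, 3.2, 0.8)
amps, fitted = fit_band_amplitudes(e, measured)
print(amps)  # recovers approximately [30, 12, 0]
```

A residual left over after such a fit, particularly toward the NIR, is exactly the kind of evidence the review cites for defects that are still missing from the known band catalogue.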
Osteochondromas or exostoses are the most common benign tumours of bone. They account for 35%–46% of all benign neoplasms of bone. This bone protuberance is generally found in the immature skeleton of children and adolescents, and its growth usually ceases when skeletal maturity is reached. According to the World Health Organization, osteochondromas are bone projections enveloped by a cartilage cover that arise on the external surface of the bone. Despite their predominant composition of bone, they grow via progressive endochondral ossification of the cartilaginous cap. They present in two distinct clinical forms, developing in the metaphyseal region of long bones either alone or in connection with the hereditary multiple exostoses syndrome, an autosomal dominant disorder characterized by the formation of multiple cartilaginous osteochondromata in the immature skeleton. About 90% are solitary exostoses and may occur on any bone but are usually found on the metaphysis of long bones. Osteochondroma comprises about 35%–46% of all benign bone tumours. About 90% occur in the metaphysis of the tibia, humerus and distal femur. The scapula is involved in 3.0–6.4% of all cases. These tumours are usually asymptomatic and are discovered incidentally. Some patients may present with pain due to mechanical pressure on surrounding structures, fracture of the bony stalk of the tumour, neurovascular impingement, bursa formation and, rarely, malignant transformation of the cartilaginous cap; only then is surgery considered the best treatment. Literature on the surgical technique for excision of symptomatic exostosis is limited. We therefore present the case of a 17-year-old patient with a symptomatic ventro-medial right scapular solitary exostosis. This case was reported in line with the SCARE criteria. We report the case of a 17-year-old right-handed male who presented to our outpatient department with progressive right shoulder pain of 2 years' duration. During the last year he developed gradual right scapular winging with limitation of overhead activities. There was no history of trauma or fever. The patient was otherwise healthy with no pertinent family history. He had had several consultations, including an attempted excision by inexperienced health personnel during one of them. Physical examination showed an asymmetry of the scapulae, with a wing-like prominence of the right scapula giving a right medial scapular elevation from the thoracic cage of about 4 cm, and a mass that was difficult to palpate, with crepitus of the shoulder on mobilization. On inspection there was also a longitudinal scar of about 7 cm along the medial border of the right scapula from the attempted excision. A full range of motion was found in both shoulders. Radiographic evaluation showed an irregular bony structure extruding from the scapula. Computed tomography revealed a bony exostosis along the medial border on the ventral surface of the right scapula. There were no signs of malignant transformation. The patient was placed in a prone position under general anesthesia. A parascapular incision was made along the medial border of the right scapula. Sharp dissection was carried out down to the level of the fascia of the trapezius muscle. After opening the fascia, the trapezius muscle was retracted cranially following its fibers. The rhomboid was split bluntly in line with its fibers, and subperiosteal dissection allowed full exposure of the exostosis. The stalk of the exostosis was excised at its base with an osteotome from the ventral 
surface of the scapula. The specimen measured 9 cm × 5 cm. The different muscle fiber layers fell back against each other after removal of the exostosis. A Redon vacuum drain was placed, followed by closure of the trapezius fascia; finally, the wound was closed in layers. Histologic examination confirmed that the specimen was an osteochondroma with no signs of malignant transformation. The patient was placed in a sling for pain relief for one week. Immediate range of motion was started as tolerated by the patient. Pain relief was excellent; there was no crepitus or scapular deformity. By three weeks, a full range of motion was possible without pain or discomfort. The patient had no pain and a full range of right shoulder motion without discomfort at one-year follow-up. The clinical manifestation of osteochondroma of the scapula is strictly correlated with its size and location. Symptoms result from mechanical irritation of muscle, tendon or soft tissue, formation of a pseudoaneurysm or bursa, fracture, or malignant transformation. Osteochondroma of the scapula usually arises on its anterior surface. Surgical excision is an excellent treatment approach for symptomatic patients with scapular exostosis. There are 3 main surgical approaches to the removal of scapular exostosis described in the literature: muscle-sparing, muscle-detaching and endoscopically assisted techniques. In our case we used a muscle-sparing technique. Avoiding muscle detachment ensures less blood loss and a faster, better postoperative recovery. This recovery time may be even shorter with endoscopic techniques, alongside a better cosmetic outcome. Given the limited access to endoscopic techniques in our resource-limited setting, the muscle-sparing technique is a good alternative with good results. Surgical removal is useful in eliminating painful symptoms and discomfort and avoids possible malignant transformation. The diagnosis of osteochondroma of the scapula should be considered in any patient between 10 and 30 years of age with scapular pseudo-winging, crepitus and pain in the shoulder region. A good clinical outcome can be expected with surgical excision of symptomatic ventral osteochondromas of the scapula. The muscle-sparing technique offers quick functional rehabilitation of patients with symptomatic osteochondromas in resource-limited settings. This research did not receive any funding. This study was performed in accordance with the guidelines of the Helsinki Declaration and was approved by the Ethical board of the faculty of medicine and biomedical science of the University of Yaoundé 1. Written informed consent was obtained from the patient’s parents for publication of this
Introduction: Osteochondroma, also known as exostosis, is one of the most common benign bone tumours and is characterized by bone protuberances surrounded by a cartilage layer. Scapular localization of solitary exostosis is relatively rare. Surgical excision is an excellent treatment option for symptomatic patients with scapular osteochondroma. Surgical removal is useful in eliminating painful symptoms and avoids possible malignant transformation. Conclusion: A good clinical outcome is expected with surgical excision of symptomatic scapular osteochondromas, especially using the muscle-sparing technique, which offers quick functional rehabilitation of patients.
The Caspian Sea, located to the north of I.R. Iran, is the largest lake in the world and is connected to the distant Baltic through canals and the River Volga. It is a unique closed water basin and plays an important role in shaping the regional climate. The Anzali Wetland, located on the southern coast of the Caspian Sea, is internationally known as an important wetland for migratory birds and was registered as a Ramsar site in June 1975 in accordance with the Ramsar Convention. The Hamun wetland, the largest freshwater expanse of the Iranian plateau, is listed in the Ramsar Convention on Wetlands. Fish of the species Rutilus rutilus, Hemiculter Leucisculus, Alosa Caspia Caspia, Ctenopharyngodon idella, Cyprinus carpio, Hypophthalmichthys molitrix, Hypophthalmichthys nobilis, Schizocypris altidorsalis, and Schizothorax zardunyi were randomly collected. Twenty fish samples from each species were transferred to the laboratory and stored in a refrigerator. Afterwards, the tissues were separated and dried. The dried samples were ground into a homogeneous powder, and the mercury concentration was then determined with an Advanced Mercury Analyzer, LECO AMA 254, according to ASTM standard No. D-6722. Each sample was analyzed 3 times. The LECO AMA 254 is a unique atomic absorption spectrometer that is specifically designed to determine the total mercury content in various solids and certain liquids without sample pre-treatment or pre-concentration. Designed with a front-end combustion tube that is ideal for the decomposition of matrices, the instrument’s operation may be separated into three phases during any given analysis: decomposition, collection, and detection. An AAS equipped with a graphite furnace was used for lead analysis. A volume of 20 microliters of the sample was injected into the device. Fig. 
2 shows the steps of the procedures used in this study. In order to assess the analytical capability of the AMA methodology, the accuracy of total mercury analysis was checked by running three Standard Reference Materials, NIST SRM 1633b, SRM 2709, and SRM 2711, in six replicates. Recovery varied between 95% and 100%. In order to check the reproducibility of the analysis, the samples were analyzed in triplicate. The coefficient of variation was between 0.05% and 2.5%. The accuracy of the AAS method was verified by analyzing the standard reference material 1515-Apple Leaves. The certified value, observed value, and recovery were 0.470 ± 0.024, 0.450 ± 0.042, and 95.7%, respectively. As can be seen, there is good agreement between the observed mean and the certified value. There are four steps in this method. Hazard identification involves gathering and evaluating toxicity data on the types of health injury or disease that may be produced by a chemical and the conditions of exposure under which injury or disease is produced. The subset of chemicals selected for the study is termed “chemicals of potential concern”. Data from acute, subchronic, and chronic dose-response studies are used. The dose-response parameters are defined as follows: NOAEL, No Observed Adverse Effect Level; LOAEL, Low Observed Adverse Effect Level; C, concentration of the toxic material; and AF, the fraction of the dose that is absorbed in the organ or tissue after a while. AF for this study was assumed to be 0.4. The concentrations of Hg in tissues of Rutilus rutilus, Hemiculter Leucisculus, and Alosa Caspia Caspia were measured. The results of laboratory analysis showed significant differences in mercury concentration between species. There was no significant relationship between the independent variables of gender, age and weight and the dependent variable, the amount of mercury in the tissues of Rutilus rutilus. However, there was a significant relationship, at the 95% level, between length and the amount of mercury in the kidney of Rutilus rutilus. Mean concentrations of Hg in muscle of Ctenopharyngodon idella, Cyprinus carpio, Hypophthalmichthys molitrix, Hypophthalmichthys nobilis, Schizocypris altidorsalis, and Schizothorax zardunyi were 0.14, 0.28, 0.15, 0.15, 0.34 and 0.36 mg/kg, respectively. The results of laboratory analysis showed significant differences in muscle mercury concentration between species. Mean concentrations of Pb in muscle of Ctenopharyngodon idella, Cyprinus carpio, Hypophthalmichthys molitrix, Schizocypris altidorsalis, and Schizothorax zardunyi were 0.32, 0.39, 0.35, 0.72 and 0.81 mg/kg, respectively. There was no significant difference in lead concentration between these species. Table 3 shows the ADDpot and HQ of heavy metals in muscles of fish samples from the wetlands. Among the fish species examined in this study, Hemiculter Leucisculus, with an HQ value of 0.009, has the lowest potential health risk for mercury, and Schizothorax zardunyi, with an HQ value of 1.2, has the highest. The HQ through the consumption of Schizocypris altidorsalis and Schizothorax zardunyi was higher than 1, indicating that there is a potential health risk associated with the consumption of these fish from the Hamun wetland. The results for lead indicate no HQ value >1, meaning that humans would not experience any significant health risk if they consume these metals only from these species of fish from the Hamun wetland. The concentrations of mercury in all species were below the limits for fish proposed by the United 
Nations Food and Agriculture Organization, the World Health Organization, the US Food and Drug Administration, the US Environmental Protection Agency, and the European Union. Lead concentrations in Ctenopharyngodon idella, Cyprinus carpio and Hypophthalmichthys molitrix were below the limits proposed by FAO, WHO, FDA, the Turkish Acceptable Limits, the United Kingdom Ministry of Agriculture Fisheries and Food, and the National Health and Medical Research Council, but lead concentrations in Schizocypris altidorsalis and Schizothorax zardunyi were higher than the WHO and TAL limits. CRlim is the Maximum Allowable Consumption Rate per day. The highest allowable consumption regarding mercury is for Hemiculter Leucisculus. In contrast, Schizothorax zardunyi has the lowest allowable fish intake. It should be noted that maximum consumption of
The concentrations of mercury were below the limits for fish proposed by the United Nations Food and Agriculture Organization (FAO), World Health Organization (WHO), US Food and Drug Administration (FDA), US Environmental Protection Agency (EPA), and European Union (EU). Lead concentrations in Ctenopharyngodon idella, Cyprinus carpio and Hypophthalmichthys molitrix were below the limits proposed by FAO, WHO, FDA, the Turkish Acceptable Limits (TAL), the United Kingdom Ministry of Agriculture Fisheries and Food (UK MAFF) and the National Health and Medical Research Council (NHMRC), but lead concentrations in Schizocypris altidorsalis and Schizothorax zardunyi were higher than the WHO and TAL limits. The results for lead concentration indicate that there is no HQ value > 1, indicating that humans would not experience any significant health risk if they consume these metals only from these species of fish from the Hamun wetland.
0.020 kg/day of Schizocypris altidorsalis and 0.019 kg/day of Schizothorax zardunyi poses no potential health risk. Here, Tap is the time averaging period and MS the meal size. The results show that, for mercury, the highest Maximum Allowable Fish Consumption Rate is that of Hemiculter Leucisculus. The results of the present study aimed to provide data from the Caspian Sea, Anzali wetland, and Hamoon wetland as indicators of natural and anthropogenic impacts on the aquatic ecosystem, as well as to evaluate the human hazard index associated with fish consumption. The results indicated that the highest Average Daily Dose for mercury was for Schizocypris altidorsalis and Schizothorax zardunyi. The Maximum Allowable Fish Consumption Rate per month for these fishes was 2.68 and 2.54 meals, respectively. The result regarding lead for Schizocypris altidorsalis was interesting. The human health Hazard Quotient showed that the cumulative risk greatly increases with increasing fish consumption rate, thus yielding an alarming concern for consumer health. Annual monitoring and measurement of heavy metals and other pollutants in the fishes of wetlands, and production of a database, are necessary. Among aquatic ecosystems, wetlands and rivers have great ecological importance. Heavy metals from geological and anthropogenic sources are increasingly being released into natural waters. Contamination of aquatic ecosystems with heavy metals has attracted serious worldwide attention, and many studies have been published on heavy metals in the aquatic environment. Under certain environmental conditions, heavy metals may accumulate to toxic concentrations and cause ecological damage. Mercury is of special concern in marine ecosystems, where methylation occurs during the process of biotransformation and it accumulates in biota. Mercury is toxic to the central nervous system and can readily cross the placental barrier. Lead is attracting the wide attention of environmentalists as one of the most toxic heavy metals. The sources of lead released into the environment via waste streams are battery manufacturing, acid metal plating and finishing, ammunition, tetraethyl lead manufacturing, the ceramic and glass industries, printing, painting, dyeing, and other industries. Lead is well recognized for its negative effect on the environment, where it accumulates readily in living systems. Lead poisoning in humans causes severe damage to the kidney, nervous system, reproductive system, liver and brain. The results of a study in the Khur-e-Khuran international wetland in the Persian Gulf, Iran, show that the measured values of most heavy metals in some examined fishes were higher than the maximum permissible limits according to international standards.
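The quantities named above (ADD, HQ, CRlim, Tap, MS) follow the standard US EPA fish-consumption risk formulas. A minimal sketch is given below; the body weight, meal size and reference dose are illustrative defaults rather than the study's exact parameter set, though with a 70 kg body weight and the methylmercury reference dose of 0.0001 mg/kg/day, the CRlim for Schizothorax zardunyi lands at the 0.019 kg/day quoted in the text:

```python
# Minimal sketch of the EPA-style fish-consumption risk calculations named in
# the text (ADD, HQ, CRlim, meals/month). The numeric defaults (body weight,
# meal size, reference dose) are illustrative assumptions, not the study's
# exact parameter set.

def average_daily_dose(c_mg_per_kg, intake_kg_per_day, bw_kg=70.0):
    """Potential average daily dose, ADD (mg per kg body weight per day)."""
    return c_mg_per_kg * intake_kg_per_day / bw_kg

def hazard_quotient(add, rfd):
    """HQ = ADD / reference dose; HQ > 1 flags a potential health risk."""
    return add / rfd

def cr_lim(rfd, c_mg_per_kg, bw_kg=70.0):
    """CRlim: maximum allowable consumption rate (kg fish per day)."""
    return rfd * bw_kg / c_mg_per_kg

def meals_per_month(crlim_kg_per_day, tap_days=30.44, meal_size_kg=0.227):
    """CRmm = CRlim * Tap / MS, with Tap the time averaging period in days
    and MS the meal size in kg (EPA default ~0.227 kg)."""
    return crlim_kg_per_day * tap_days / meal_size_kg

# Example with the muscle Hg level reported for Schizothorax zardunyi
# (0.36 mg/kg) and the methylmercury RfD of 0.0001 mg/kg/day:
crlim = cr_lim(rfd=0.0001, c_mg_per_kg=0.36)
print(round(crlim, 3))                 # ~0.019 kg/day, as quoted in the text
print(round(meals_per_month(crlim), 2))  # ~2.6 meals/month with these defaults
```

With these illustrative defaults the meals-per-month figure comes out close to, though not exactly at, the 2.54 reported, consistent with the study having used slightly different averaging-period or meal-size values.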
The aim of this study was to determine mercury concentrations in the muscle, intestine, gonad and kidney of Rutilus rutilus and Hemiculter leucisculus (Anzali wetland) and Alosa caspia caspia (Caspian Sea), and mercury and lead concentrations in the muscle of Ctenopharyngodon idella, Cyprinus carpio, Hypophthalmichthys molitrix, Hypophthalmichthys nobilis, Schizocypris altidorsalis, and Schizothorax zardunyi (Hamun wetlands). The results were compared with global standards. In addition, within this multispecies monitoring, a health risk assessment of consumers was carried out following EPA/WHO instructions. The health risk to consumers from the intake of metal-contaminated fish (mercury and lead) was evaluated using Hazard Quotient (HQ) calculations; a worked sketch of these calculations is given below. The human health Hazard Quotient (index) showed that the cumulative risk greatly increases with increasing fish consumption rate, an alarming concern for consumer health. The present study aimed to provide data from the Caspian Sea, Anzali wetland, and Hamun wetland as indicators of natural and anthropogenic impacts on aquatic ecosystems, and to evaluate the human hazard index associated with fish consumption. The results show that for mercury, the Maximum Allowable Fish Consumption Rate (meals/month) is associated with Hemiculter leucisculus.
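To make the risk calculations above concrete, the following minimal Python sketch implements the standard EPA-style formulas for the Average Daily Dose, the Hazard Quotient, and the Maximum Allowable Fish Consumption Rate. All numeric inputs (metal concentration, body weight, meal size, reference dose) are illustrative assumptions, not measurements from this study.

# Minimal sketch of the EPA-style risk formulas referred to above.
# All parameter values below are illustrative assumptions, not study data.

AVG_DAYS_PER_MONTH = 30.44  # time averaging period (TAP)

def average_daily_dose(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """Average Daily Dose (ADD) in mg of metal per kg body weight per day."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

def hazard_quotient(add, reference_dose):
    """HQ = ADD / RfD; HQ > 1 flags a potential non-carcinogenic health risk."""
    return add / reference_dose

def max_meals_per_month(reference_dose, body_weight_kg, conc_mg_per_kg, meal_size_kg):
    """Maximum Allowable Fish Consumption Rate, in meals per month."""
    cr_lim_kg_per_day = reference_dose * body_weight_kg / conc_mg_per_kg
    return cr_lim_kg_per_day * AVG_DAYS_PER_MONTH / meal_size_kg

# Hypothetical example: mercury at 0.35 mg/kg wet weight in muscle, a 70 kg
# adult, a 0.227 kg meal size, and the EPA methylmercury RfD of 1e-4 mg/kg/day.
add = average_daily_dose(0.35, 0.020, 70.0)
print(hazard_quotient(add, 1e-4))                    # HQ for a 20 g/day intake
print(max_meals_per_month(1e-4, 70.0, 0.35, 0.227))  # about 2.7 meals/month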
ventricular dilatation; MRI: periventricular hemorrhagic infarction, cPVL, PHVD, large cerebellar hemorrhage). The Kidokoro global score was calculated from the MRI at term-equivalent age as a marker of injury severity and maturation. The Bayley Scales of Infant and Toddler Development, third edition (BSITD-III) was used to assess cognitive outcome at 30 months corrected age, by calculating the cognitive composite score. The Wechsler Preschool and Primary Scale of Intelligence, third edition, Dutch version (WPPSI-III-NL) was used to assess the intelligence quotient (IQ) at 5 years of age. The mean in a normative population is 100 for both the cognitive composite score and the IQ. SPSS version 21 was used to perform statistical analyses. Chi-squared or Fisher’s exact test was used to investigate the relation between the location of the EEG patterns and the infant’s head position, and to compare the proportion of infants with each pattern between those with and without injury on cUS during the first 72 h after birth or on MRI at term-equivalent age, and between deceased and surviving infants. These tests were also used to compare the proportion of infants with each pattern between those with and without morphine during the first 72 h. The Kruskal-Wallis test was used to compare the median PED burden between injury severity groups. Binary logistic regression was performed to investigate the relation between the burden of each pattern and injury on cUS and MRI. Linear regression was performed to investigate the relations between the burden of each pattern, the BSITD-III cognitive composite score, the WPPSI-III-NL total IQ, the PED burden, and the Kidokoro global score. A total of 77 preterm infants were included in this study. During the study period 110 infants were born <28 weeks of gestation. Thirty-three infants were excluded: one was admitted for a very short period, one had a congenital anomaly, 14 had a single-channel EEG, 10 had a faulty electrode on one side, and in seven infants the EEG file was corrupted. Patient characteristics are shown in Table 1. In 29 infants, no distinctive rhythmic EEG patterns were observed. The distribution and characteristics of the EEG patterns in the rest of the population are shown in Table 2. Multiple patterns in one infant were observed in 36.4% of the population. Only one infant had ictal discharges during the first 72 h after birth. This infant was diagnosed with a Listeria monocytogenes meningitis. The cUS on admission to the NICU showed fibrin strands in the lateral ventricles and echogenicity in the white matter. Later on, the infant developed intraventricular and parenchymal hemorrhages. The ictal discharges did not respond to antiepileptic therapy and the infant died at day of life 4. Interestingly, the ictal discharges were recognized on aEEG as a clear rise of the lower border, which was never seen in any of the other rhythmic patterns described in this study. Since ictal discharges were only observed in one infant, they were not included in further analyses. The EEG location of the rhythmic pattern was significantly associated with head position for the sinusoidal, zeta, and PED-like waves, but not for PEDs. The proportion of events in which head position coincided with the presence of sinusoidal, zeta and PED-like waves was 73.7%; for PEDs this was 57.7%. Sinusoidal, zeta and PED-like waves: Of the patterns influenced by head position, the proportion of infants with sinusoidal waves was higher in infants with injury on cUS in the first 72 h after birth, but not in infants with injury on MRI at term-equivalent
age. Mean suppression intervals during the first 72 h were significantly longer in infants with injury on cUS. The total duration of sinusoidal waves was not related to injury on cUS or MRI. No difference in the incidence of zeta or PED-like waves was observed between infants with or without injury on cUS or MRI. The total duration of these patterns was also not related to injury on cUS or MRI. PEDs: The incidence and total duration of PEDs were not different between infants with and without injury on cUS or MRI. The median PED duration was not different between injury severity groups. No positive relation, but a trend towards a negative relation, between total PED duration and the MRI injury severity and maturation score was observed. Sinusoidal, zeta and PED-like waves: Of the patterns influenced by head position, the proportion of infants with sinusoidal and PED-like waves was significantly higher in infants who died, but no relation was found between the total duration of these patterns and the BSITD-III cognitive composite score at 2 years or IQ at 5 years. No associations were found between the incidence or total duration of zeta waves and any of the outcome parameters. PEDs: The incidence and total duration of PEDs were not associated with death, the BSITD-III cognitive composite score at 2 years, or IQ at 5 years. Morphine was administered for sedation during mechanical ventilation at the discretion of the attending neonatologist. The incidence of sinusoidal, zeta, and PED-like waves was higher, while the incidence of PEDs was lower, in infants who received morphine during the first 72 h compared to infants who did not receive morphine. AEDs were given at the discretion of the attending neonatologist when they suspected an infant to have seizures, either clinically or on EEG. In 11 patients a total of 19 AEDs were administered during the EEG recording. Seven infants received only one AED, two infants received two different AEDs, one infant received three, and one received five different AEDs. Four AED administrations were started during a rhythmic pattern, which had a temporary effect on ictal discharges on one occasion, but no effect was seen on sinusoidal, zeta, PED-like waves and
Brain injury was assessed with sequential cranial ultrasound (cUS) and MRI at term-equivalent age.Neurodevelopmental outcome was assessed with the BSITD-III (2 years) and WPPSI-III-NL (5 years).No relation was found between the median total duration of each pattern and injury on cUS and MRI or cognition at 2 and 5 years.
an effect on PEDs in our cohort. It should be noted that this study has some limitations. First, we only assessed the EEG at the markers placed by the seizure detection algorithm of the BrainZ monitor. We chose to do so because in everyday clinical practice, assessment of long-term EEG recordings is mostly limited to the parts where markers are placed by the seizure detection algorithm and parts with changes in the aEEG trace. However, the BrainZ algorithm has a sensitivity of 83–95% for detecting seizures in full-term infants, and it is likely that the algorithm did not detect all rhythmic EEG patterns, so the true burden of these patterns may be higher than reported in this study. The sensitivity in extremely preterm infants and the sensitivity for non-seizure rhythmic patterns are unknown. Second, the EEG assessments were made with continuous 2-channel EEG without extra polygraphic variables such as electrocardiogram and respiration. Multichannel video-EEG remains the gold standard for detection and quantification of rhythmic patterns and seizures. It can localize the different patterns to certain areas, and when polygraphy is used, common artefacts such as respiration can easily be ruled out. Furthermore, experienced technicians will detect movement artefacts and artefacts due to inadequate electrode impedance. Our results show that automatic seizure detection algorithms should be used with caution, especially in preterm infants. The seizure detection algorithm on the BrainZ monitor has not been validated for these extremely preterm infants. If detections are not reviewed carefully, an infant might be incorrectly identified as having seizures, which may result in unnecessary AED administration. This will cause unnecessary concern for the parents, and AEDs may be harmful to the developing brain as well. It should also be noted that electrodes other than needle electrodes may give rise to even more artefacts. Clinicians faced with rhythmic EEG patterns in 2-channel EEG of extremely preterm infants should first place the infant’s head in the midline and make sure the electrodes are free from contact with surrounding materials and are placed correctly. This will eliminate several artefacts. If rhythmicity is still observed, treatment with AEDs should be considered only when clear evolution in amplitude, frequency or morphology is visible. The aEEG can aid in distinguishing PEDs from ictal discharges, since ictal discharges show a simultaneous rise in the lower border of the aEEG. Patterns resembling sinusoidal, zeta or PED-like waves do not need treatment. When PEDs are observed, a multichannel video-EEG should be performed to confirm or rule out ictal discharges. Further research using multichannel video-EEG is needed to investigate the significance of PEDs in preterm brain development. Rhythmic EEG patterns are common in extremely preterm infants, but ictal discharges were only observed in 1.3%. Furthermore, three patterns were significantly related to head position and are likely artefacts. PEDs were common and not related to head position, but were also not associated with brain injury or poor cognitive outcome. This study shows that EEG patterns described in older infants and adults do not have the same prognostic or diagnostic value in extremely preterm infants. Clinicians using the BrainZ seizure detection algorithm in extremely preterm infants should review rhythmic activity marked by the algorithm carefully before starting AED treatment. We confirm that we have read the Journal’s position on issues involved in ethical
publication and affirm that this report is consistent with those guidelines.None of the authors has any conflict of interest to disclose.
Objective: To classify rhythmic EEG patterns in extremely preterm infants and relate these to brain injury and outcome. Methods: Patterns detected by the BrainZ seizure detection algorithm were categorized: ictal discharges, periodic epileptiform discharges (PEDs) and other waveforms. Results: Rhythmic patterns were observed in 62.3% (ictal discharges 1.3%, PEDs 44%, other waveforms 86.3%), with multiple patterns in 36.4%. Ictal discharges were only observed in one infant and were excluded from further analyses. The EEG location of the other waveforms (p < 0.05), but not of PEDs (p = 0.238), was significantly associated with head position. Conclusions: Clear ictal discharges are rare in extremely preterm infants. PEDs are common but their significance is unclear. Rhythmic waveforms related to head position are likely artefacts. Significance: Rhythmic EEG patterns may have a different significance in extremely preterm infants.
From all the events which occur daily, only a few are deemed newsworthy enough to be reported as news in traditional print newspapers or online. Those stories which are picked up by the press usually have special attributes, such as their unexpectedness, their major negative consequences, their effect on the social elite, violent attacks as their topic, eye-catching pictures or their impact on many people. By analysing the content published by a particular set of mass media outlets (those providing written, broadcast or spoken communications that reach a large audience), the media coverage given to a specific event or topic can be assessed. For example, the influence that news has on the stock market has been investigated. Published content can also be used to detect any bias that exists between what is published and reality. For instance, it is known that there is more coverage of crimes involving violence or indecency than of other crimes, with tabloids tending to publish more sensationalist news items, meaning that what is published in the news differs, in some sense, from reality. Also, it has been shown that fake news tends to spread faster than real news since it is more novel, perhaps inspiring shock or fear. The audience is highly selective in its media choices, its attention span is very limited, and collectively, it decides what it wants to consume. Therefore, although editors of traditional media set a general agenda and coordinate with journalists to decide what is newsworthy, they also consider feedback from their audience. Although this is an area of debate, it might thus be considered that the audience itself ‘manages the news’ by maintaining or losing interest in a given subject, so that any significant discrepancies between reality and what is portrayed by the media reveal the interests of the audience. The physical distance between individuals and a specific event, or, more generally, any cultural or social ‘distance’, reduces how connected they feel to it, and so it also reduces their interest and determines the focus of the media on the event. Media data can thus help to uncover the strength of the relations between different regions. It is therefore expected that most of the coverage given by the media is focused on activities and events which are closer to where their audience happens to be. But what is the balance of media coverage with respect to events which are at a similar distance? Are certain locations more newsworthy? It has been found that people in larger cities tend to suffer more crime, tend to have a higher income and apply for more patents per capita, and it has also been speculated that they produce less CO2 emissions per person. Some socio-economic indicators increase with population size: for instance, people in larger cities tend to have more social contacts, they usually walk faster, and they are less likely to migrate. The shared infrastructure of large cities allows them to have fewer petrol stations, and big cities have less road surface per person. Thus infrastructure, as well as social and economic aspects, ‘scales’ with city size, and so, with respect to media coverage, this begs the question: are large cities more newsworthy than small cities? The analysis required to compare spatial aspects of the media coverage given to various events is challenging, since it often reveals differences between the events themselves rather than the social behaviour viewed through the lens of the media. For instance,
although they were roughly similar events, it is not straightforward to compare the Bataclan massacre in Paris in November 2015 and the Nice attack in July 2016, for a variety of reasons: the first occurred on a Friday night and the second on a Thursday, which also happened to be a national holiday; the first had more fatalities but fewer injured people; and on the day of the attack in Nice, Donald Trump announced that he would campaign with Mike Pence for the 2016 US Presidential election, and two days later the Republican National Convention took place, potentially overshadowing the Nice attack in all but local media. It is thus hard to make a fair comparison of the media coverage given to the two events. Rather than considering two or more events, the idea here is to investigate the behaviour of the major news providers following a single event that affected a large area and several major conurbations. The attention given by a subset of the most relevant mass media outlets with national coverage, selected by the size of their audiences, is analysed following an emergency which affected many cities simultaneously, thus allowing the impact of population size on media coverage to be detected. In terms of the temporal aspects, the public attention given to various cultural products follows a consistent pattern over time. In particular, the attention that the audience places on any specific event covered by the media can be categorised into four stages: the pre-problem stage, a discovery stage, a stage of gradual decline in attention, and a post-problem stage. This attention cycle is closely related to the coverage that the media give to news related to climate change; the rise and fall of the anti-nuclear movement; terrorism and travel safety; and organisational changes in the structures of government, among other examples. Previous studies have attempted to model the evolution of the interest in different published stories on online platforms, or on social media, focusing only on the attention given by the readers
Items featured in the news usually have a particular novelty or describe events which result in severe impact.Here, the length of time that a story remains in the media spotlight is investigated as well as the scaling with population size of the amount of attention that the media gives to stories from different cities.
≈ 0.62, showing that coverage is given to smaller cities, but this behaviour changes by the end of the third week, when β ≈ 1. Then β goes back to nearly the values observed before the earthquake, so larger cities are again the ones that create more news. It has been found that the producers of the most popular online newspapers with national coverage in Mexico, and hence also their consumers, follow an emergent collective behaviour by which the amount of coverage that they give to an event that affected a range of Mexican cities decays over time. Additionally, the amount of attention given to particular cities depends on the size of the city, with this dependency being fundamentally different just following the event. To quantitatively analyse the news spotlight given to a specific subject (in this case, an earthquake in Mexico) and to cities of different size, a media coverage index has been defined by taking into account the amount of coverage given by the 19 most relevant national online newspapers in Mexico, in terms of the size of their audience, since these are assumed to provide a good representation of the overall written news activity from the national media outlets. The amount of coverage given by each outlet is measured by counting the number of tweets related to the subject that are posted via the Twitter accounts corresponding to the newspapers. The assumption is that what appears in the Twitter accounts is a reflection of what also appears in their online, printed or broadcast material, but this goodness of fit has not been formally assessed. The number of tweets corresponding to each Twitter account is weighted by the number of followers of the account, which is assumed to be a proxy for the size of the outlet's audience. Results show that the media coverage as measured here had an initial burst after the event, but it decayed exponentially over time. Roughly, every day the event receives 8% less attention than the previous day. Similar to cultural products, such as music or academic papers, media coverage has a distinctive decay. Although there might be other specific aspects which determine how much coverage is given to other subjects, a similar behaviour is expected for the interest that the media show in situations other than this earthquake. The audience has an initial interest denoted by A0, and then, as t days go by, the attention placed on the same event, A, decays according to A = A0ψ^t, where ψ depends on factors such as locality or whether sports, stars or celebrities are involved. The earthquake was a particularly significant event and therefore there was major initial post-earthquake interest, but it is likely that a similar decay will exist for other events, albeit with a lower initial peak. The earthquake in Mexico provided a special opportunity to quantitatively measure the national media coverage given by the most popular online newspapers over time, but also to consider a spatial factor. The earthquake affected a large area, including a number of cities, and yet the media spotlight on different cities varied. Results show that large cities are more newsworthy, but a major event might change this. In the absence of the earthquake, there is usually a superlinear behaviour in the dependence of the media coverage index on the size of the population of the city. Using a similar approach it has been previously shown that people from large cities produce more new patents, tend to suffer more serious crime, have a higher wage and consume more electricity. Before the earthquake, the superlinear
scaling coefficient of β = 1.3768 is surprisingly high, meaning that events from large cities are seemingly more newsworthy than events in smaller cities. Immediately after the earthquake, the scaling is no longer superlinear, and so small cities such as Jojutla, which previously were rarely mentioned, received considerably more attention. However, this change in the spatial distribution of interest is a short-term effect since, as the general interest in the earthquake decays, the spatial distribution of the media coverage also comes back to its usual form, i.e. four weeks after the earthquake large cities again receive more attention per capita from the media. A more refined version could be achieved by incorporating a measure of the impact of the earthquake within the definition of the media coverage index, but this is left for future work. It is interesting to note that in some cases world capitals can be considered as outliers and might affect the whole urban system when it comes to the application of scaling analysis. However, in this case, the results after the earthquake were compared against the baseline week before the earthquake, so it is possible to see how the attention of the media was distributed across cities before the event. The results show that the scaling of attention changed rapidly. The trends detected here, based on the analysis of the Twitter accounts belonging to the newspapers which provide national coverage and with the highest popularity on their online platforms, are likely to be reflected also in traditional printed media and other news outlets, including those with a regional or global outlook. The methodology suggested here can be applied to other regions of the world and to other equally suitable events; a sketch of how the decay rate ψ and the scaling exponent β can be estimated is given below.
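Both quantities reduce to ordinary least squares on log-transformed data. The following Python sketch, using placeholder arrays rather than the study's data, shows how ψ can be estimated from a daily coverage series (log A is linear in t with slope log ψ) and how β can be estimated from city populations and their coverage index (log coverage is linear in log population with slope β).

# Sketch: estimating the decay rate psi in A = A0 * psi**t and the scaling
# exponent beta in coverage ~ population**beta. Arrays are placeholders.
import numpy as np

# Temporal decay: log A = log A0 + t * log psi
t = np.arange(10)                                    # days since the event
A = np.array([100, 92, 85, 78, 72, 66, 61, 56, 52, 48], dtype=float)
slope, _ = np.polyfit(t, np.log(A), 1)
psi = np.exp(slope)                                  # ~0.92 means ~8% lost per day

# Spatial scaling: log coverage = log c + beta * log population
population = np.array([1e5, 5e5, 1e6, 5e6, 9e6])
coverage = np.array([12.0, 95.0, 230.0, 2100.0, 4400.0])
beta, _ = np.polyfit(np.log(population), np.log(coverage), 1)
print(psi, beta)                                     # beta > 1 means superlinear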
The amount of coverage given to earthquake-related stories had an initial peak and then exhibited an exponential decay, dropping by half every eight days.Furthermore, the coverage per person usually exhibits a superlinear scaling with population size, so that stories about larger cities are more likely to appear in the news.However, during the immediate post-earthquake weeks, the scaling was no longer superlinear.The observed trends can be interpreted as a fundamental switch in the emergent collective behaviour of media producers and consumers.
Long-term memories (LTMs) are thought to be stored in the brain in specific neural networks that have been modified during learning or consolidation to alter their outputs relative to the naive state. These networks are known as memory engrams. Because formation and consolidation of LTMs require de novo transcription and translation, researchers have identified potential LTM engrams by identifying cells in which memory-associated transcription factors are activated. In particular, the immediate early gene product, c-Fos, is rapidly expressed during LTM formation, and c-fos reporters and c-fos promoter-regulated activity modulators have been used to identify and perturb engrams. The cyclic AMP-response element binding protein (CREB) and other transcription factors have also been used to study engrams. Although it has been unclear whether CREB expression is highly altered upon learning, various studies have demonstrated that neurons with high CREB activity are preferentially recruited into LTM engrams. Although c-fos activity has been useful in identifying engrams, it has been less clear how learned experiences activate c-fos to induce LTM. Drosophila can form LTM of an aversive olfactory association when flies are repeatedly and simultaneously exposed to an odor and electrical shocks. LTM formation requires repetitive training sessions with rest intervals between each session, a training paradigm known as spaced training. These rest intervals are critical for c-fos-dependent engram formation, because repetitive training without rest intervals, known as massed training, produces a different type of long-lasting memory, anesthesia-resistant memory (ARM), which is formed even in the presence of transcriptional and translational inhibitors. LTM increases as a function of the number of training sessions and rest interval length, reaching a maximum in flies at ten training sessions and 10-min rest intervals. A previous study has shown that the appropriate length of rest intervals is determined by the activity of protein tyrosine phosphatase 2, which is encoded by the corkscrew gene in Drosophila and is associated with increased activity of ERK, a member of the mitogen-activated protein kinase (MAPK) family. Phosphorylation of CREB by ERK is critical for CREB-dependent gene expression, suggesting that ERK may activate CREB to induce c-fos. In this study, we found that the relationship among ERK, CREB, and c-Fos in LTM formation is more complex than previously proposed. Repeated and prolonged activation of ERK produces transcriptional cycling between c-fos and CREB. CREB is phosphorylated by phosphorylated ERK to induce c-fos expression, while c-Fos is phosphorylated by ERK to form the transcription factor AP-1, which promotes expression of CREB. The first c-fos induction step is required to form LTM, while the second CREB induction step is required to prolong LTM to last at least 7 days in flies. Thus, our data indicate that memory engram cells in flies are established by cell-autonomous ERK-dependent c-Fos/CREB cycling. In mammals, LTM formation requires activity of c-Fos, a CREB-regulated immediate early transcription factor. To determine whether c-Fos is also required for LTM in Drosophila, we conditionally expressed a dominant-negative version of c-Fos (FBZ), either pan-neuronally or specifically in mushroom body (MB) neurons, and measured memory after spaced and massed training. Memory 1 day after spaced training was significantly inhibited, while memory after massed training was unaffected, indicating that c-Fos is required for LTM, but not ARM. In Drosophila, c-Fos is
encoded by the kayak gene, and we showed that conditional knockdown of kayak expression in the MBs inhibited 24-hr memory after spaced training, but not massed training.We next examined kayak expression during spaced training and observed a significant increase in expression after the second spaced training trial that was maintained throughout training.In contrast, kayak expression did not change during massed training.Expression of a second immediate early gene, arc, whose expression is reported to be CREB independent, did not increase during spaced training but instead increased after the end of training.To determine whether increased kayak expression requires CREB, we expressed a repressor isoform of Drosophila CREB and examined kayak expression.Heat shock induction of dCREB2b before training significantly reduced increases in kayak, suggesting that training activates CREB, which then induces kayak.c-Fos and another leucine zipper protein, Jun, form the transcription factor AP-1.AP-1 positively regulates synaptic strength and numbers and has been reported to regulate CREB expression.Thus, we next measured expression of the dCREB2 activator and repressor isoforms during spaced training.Unlike kayak expression, expression of dCREB2 activators did not increase significantly after the second training session.Instead, expression increased more gradually and became significant after the fifth training session, before rapidly decaying back to baseline within an hour after training.dCREB2 repressor expression also increased during spaced training but was delayed compared to activator expression, resulting in an increased activator-to-repressor ratio during the latter half of training.In contrast to spaced training, massed training did not significantly affect expression of any dCREB2 isoform.To test whether increased dCREB2 expression during spaced training requires c-Fos, we next induced expression of FBZ in neural cells.While FBZ expression did not affect baseline expression of dCREB2, it significantly suppressed the increase in dCREB2 expression during spaced training.Altogether, our results indicate that c-Fos/CREB cycling occurs during spaced training, dCREB2 activity induces an increase in kayak expression early during spaced training, and the subsequent increase in c-Fos activity induces a later increase in dCREB2 expression.If this type of mutually dependent expression occurs, kayak expression early during spaced training should depend on activation of baseline, preexisting dCREB2, while expression late during spaced training may depend on dCREB2 produced from earlier c-Fos activity.If this is the case, FBZ should not inhibit increased kayak expression at early time points during spaced training but should inhibit expression at later time points.Consistent with this idea, the initial increase in kayak expression after the second training session occurred normally
Long-term memory (LTM) requires transcription factors, including CREB and c-Fos. However, the interaction between these transcription factors and activating kinases such as phosphorylated ERK (pERK) in the establishment of memory engrams has been unclear. Formation of LTM of an aversive olfactory association in flies requires repeated training trials with rest intervals between trainings. Preexisting CREB is required for initial c-fos induction, while c-Fos is required later to increase CREB expression.
coincidentally activated neurons differ from other neurons because they activate c-Fos/CREB cycling, which then likely induces expression of downstream factors required for memory maintenance. Thus, memory engram cells can be identified by the colocalization of c-Fos, CREB, and pERK activities. Inhibiting synaptic outputs from these neurons suppresses memory-associated behaviors, while artificial activation of these neurons induces memory-based behaviors in the absence of the conditioned stimulus. The importance of rest intervals during training for formation of LTM is well known. 10× spaced training produces LTM in flies, while 48× massed training, which replaces rest intervals with further training, does not. Pagani et al. observed that pERK is induced in brief waves after each spaced training trial and proposed that the number of waves of pERK activity gates LTM formation. While our results are generally consistent with theirs, we find that LTM is formed after 48× massed training in CaNB2/+ and PP1/+ flies, which show sustained pERK activity instead of wave-like activity. Thus, we suggest that either sustained pERK activity or several bursts of pERK activity are required, first to activate endogenous CREB, then to activate induced c-Fos, and later to activate induced CREB. In our study, 10× massed training of CaNB2/+ flies produces an intermediate form of protein synthesis-dependent LTM that declines to baseline within 7 days. This result is consistent with results from a previous study, which identified two components of LTM: an early form that decays within 7 days and a late form that lasts more than 7 days. 10× massed training takes the same amount of time as 3× spaced training, which is insufficient to produce 7-day LTM and instead produces only the early form of LTM from preexisting dCREB2. We propose that long-lasting LTM requires increased dCREB2 expression generated from c-Fos/CREB cycling. This increased dCREB2 expression allows engram cells to sustain expression of LTM genes for more than 7 days. Although we propose that c-Fos/CREB cycling forms a positive feedback loop, this cycling does not result in uncontrolled increases in c-Fos and dCREB2. Instead, spaced training induces an early dCREB2-dependent increase in c-fos and other LTM-related genes, and subsequent c-Fos/CREB cycling maintains this increase and sustains LTM. We believe that c-Fos/CREB cycling does not cause uncontrolled activation because dCREB2 activity depends on an increase in the ratio of activator to repressor isoforms. Our data indicate that splicing to dCREB2 repressor isoforms is delayed relative to expression of activator isoforms, leading to a transient increase in the activator-to-repressor ratio during the latter half of spaced training. However, the ratio returns to basal levels by the 10th training cycle, suggesting that the splicing machinery catches up with the increase in transcription. The transience of this increase prevents uncontrolled activation during c-Fos/CREB cycling and may explain the observed ceiling effect, in which training in excess of 10 trials does not further increase LTM scores or duration. Why does ERK activity increase during rest intervals, but not during training? ERK is phosphorylated by MEK, which is activated by Raf. Amino acid homology with mammalian B-Raf suggests that Drosophila Raf is activated by cAMP-dependent protein kinase (PKA) and deactivated by CaN. Our current results indicate that ERK activation requires D1-type dopamine receptors and rut-AC, while a previous study demonstrates that ERK
activation also requires Ca2+ influx through glutamate NMDA receptors. Thus, training-dependent increases in glutamate and dopamine signaling may activate rut-AC, which produces cAMP and activates PKA. PKA activates the MAPK pathway, resulting in ERK phosphorylation. At the same time, training-dependent increases in Ca2+/CaM activate CaN and PP1 to deactivate MEK signaling. The phosphatase pathway may predominate during training, inhibiting ERK phosphorylation. However, phosphatase activity may decay faster at the end of training than the Rut/PKA activity, resulting in increased ERK activation during the rest interval after training. In this study, we examined the role of ERK phosphorylation and activation in LTM and did not observe significant effects of ERK inhibition on short forms of memory. However, Zhang et al. (2008) previously reported that ERK suppresses forgetting of 1-hr memory, suggesting that ERK may have separate functions in regulating STM and LTM. c-Fos/CREB cycling distinguishes engram cells from non-engram cells, and we suggest that this cycling functions to establish and maintain engrams. However, studies in mammals indicate that transcription and translation after fear conditioning are required for establishing effective memory retrieval pathways instead of memory storage. Thus, c-Fos/CREB cycling may be required for establishment and maintenance of engrams or for retrieval of information from engrams. The engram cells we identify in this study consist of α/β Kenyon cells (KCs), a result consistent with previous studies demonstrating the importance of these cells in LTM. Although we see some α/β neurons expressing high amounts of dCREB2 in naive and massed-trained animals, we see few c-fos-positive cells and no overlap between c-fos expression and dCREB2 in these animals. After spaced training, the percentage of cells that express both c-fos and dCREB2 jumps to 18.9% ± 1.2%, and these cells fulfill the criteria for engram cells, because they are reactivated upon recall and influence memory-associated behaviors. While some mammalian studies suggest that neurons that express high amounts of CREB are preferentially recruited to memory engrams, we find that the percentage of neurons that express high dCREB2 and low c-fos remains relatively unchanged between massed-trained and spaced-trained flies. Furthermore, we find that the increase in neurons expressing high amounts of dCREB2 after spaced training corresponds to the increase in c-Fos/CREB cycling engram cells. Thus, in flies, LTM-encoding engram cells might not be recruited from cells that previously expressed high amounts of dCREB2 but instead may correspond to cells in which c-Fos/CREB cycling is activated by
Training-dependent increases in c-fos have been used to identify engram cells encoding long-term memories (LTMs). Here, we find that prolonged rest interval-dependent increases in pERK induce transcriptional cycling between c-Fos and CREB in a subset of KCs in the mushroom bodies, where olfactory associations are made and stored. Blocking or activating c-fos-positive engram neurons inhibits memory recall or induces memory-associated behaviors. Our results suggest that c-Fos/CREB cycling defines LTM engram cells required for LTM. Miyashita et al. show that spaced training, which induces LTM, activates c-Fos/CREB cycling, where increases in c-Fos require CREB and increases in CREB require c-Fos. c-Fos/CREB cycling defines LTM engram cells, and modulating the activity of these cells alters memory-associated behaviors.
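The claim above that c-Fos/CREB cycling is self-limiting, because the delayed rise of the repressor isoform caps the activator-to-repressor ratio, can be illustrated with a toy dynamical model. The following Python sketch is our illustration only, not the authors' model; every rate constant and the splicing delay are arbitrary choices made to show that a mutual-induction loop with a delayed repressor settles at a bounded level rather than running away.

# Toy model (not the authors'): the CREB activator induces c-fos, c-Fos
# induces CREB, and a delayed repressor caps the loop so activity stays bounded.
import numpy as np

dt, steps, delay = 0.01, 4000, 800        # delay = splicing lag, in steps
fos = np.zeros(steps)                     # c-Fos level
act = np.zeros(steps)                     # CREB activator level
rep = np.zeros(steps)                     # CREB repressor level
act[0] = 1.0                              # preexisting CREB activator
for i in range(steps - 1):
    drive = act[i] / (1.0 + rep[i])       # activator-to-repressor balance
    fos[i + 1] = fos[i] + dt * (drive - 0.5 * fos[i])            # CREB -> c-fos
    act[i + 1] = act[i] + dt * (fos[i] - 0.5 * (act[i] - 1.0))   # c-Fos -> CREB
    lagged = act[i - delay] if i >= delay else 0.0
    rep[i + 1] = rep[i] + dt * (0.4 * lagged - 0.5 * rep[i])     # delayed repressor
print(fos[-1], act[-1], rep[-1])          # settles at finite values, no blow-up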
Small ruminants play an essential role in improving the livelihoods of smallholder farmers in developing countries, providing meat, fibre, milk, skin/leather, manure and short-term cash income. The global population of small ruminants is concentrated in South Asia and Sub-Saharan Africa, which are the focus of this paper. We use examples from India and Ethiopia, and focus on goats as a representative model of small ruminants, notwithstanding that sheep are also important in smallholder farming systems. There are around 200 million small ruminants in India and 56 million in Ethiopia, the vast majority of which are indigenous breeds. In both countries, goats are predominantly kept for meat production and managed in low-input, extensive grazing systems based on communal lands and native pastures. However, as grazing resources become increasingly scarce, it is becoming more common for farmers to tether or pen their animals. Compared to larger livestock such as cattle and buffalo, small ruminants have many advantages. They require a smaller up-front investment, and their short breeding cycle and fattening times provide a quicker return on investment, can assist with short-term cash flows, and help flocks to recover quickly following drought. Goats are also more suitable for marginal lands because they require less feed than larger animals, can browse trees and shrubs, and are better able to digest roughages. While small ruminants have traditionally been considered a stepping stone to owning higher-value animals such as cattle or buffalo, there is evidence that some farmers prefer small ruminants to cattle, especially in densely populated areas with declining feed resources. In these areas, keeping a larger number of sheep or goats may be considered less risky for smallholders than owning a small number of valuable cattle. Despite these advantages, goat producers face many challenges that affect the productivity of their livestock enterprise. The main problems are low productivity and high mortality. Annual meat production is low, often less than 10 kg per animal. This is primarily caused by inadequate nutrition, which results in low growth rates and small mature size, and is compounded by slaughtering of animals at immature body weights. Poor nutrition also contributes to high mortality rates, which are also driven by disease outbreaks. Average annual mortality rates are high, at around 10–20%, but can increase to over 50% during poor seasons and disease epidemics. Consequently, shortage of feed and health issues are often ranked as the most significant constraints to production (Assen and Aklilu; Vijay et al.; Suresh and Chaudhary). Production may also be limited by the genetic potential of unimproved local breeds. Improvement strategies to lift the productivity of goat systems have been developed and include improved animal feeding based on higher-quality forages and more efficient utilisation of existing feed resources, control of diseases that affect animal production and survival, and introducing improved meat breeds to cross with low-producing indigenous breeds. However, there is little information available in the literature about the scale of potential increases in goat production, and the impacts on household income. The aim of this study was to investigate the potential for different intervention packages to increase yields and profitability of goat meat production in Ethiopia and India. This information will contribute to making informed investment decisions and target technologies in
the livestock sectors of developing countries. Household modelling was used to evaluate strategies to increase goat production within the constraints of the current production systems, and to indicate likely economic outcomes. Interventions evaluated in this study included 1) improving goat nutrition, 2) reducing flock mortality through improved control of health and diseases, and 3) replacing indigenous livestock with improved goat breeds. Baseline scenarios and interventions to increase production were simulated using a smallholder household model run over a 20-year period. The Integrated Analysis Tool (IAT), version 1.3.7, is a spreadsheet model that integrates crop production, forages, livestock production, flock dynamics, household economics and labour supply. It has previously been used to model both intensive and extensive livestock production systems in East Asia, South and West Asia, and Africa. The IAT was parameterised to create baseline scenarios for goat meat production across Ethiopia and India. Household-level data on goat production were obtained from a number of sources. The IMPACT Lite, Living Standards Measurement Study and Village Dynamics in South Asia datasets and government census data provided information on the number of animals per household, reproduction, mortality, feeding, and production of crops used for livestock feed. Additional data on animal management, production and pricing were gained from the literature. Together these reports were able to provide a picture of reproductive rates, mortality rates, weights and ages of sale animals, etc. that permitted model parameterisation. Consequently, baseline scenarios were developed to reflect characteristics of a typical goat meat production enterprise in each region. Specific details of each baseline scenario are described in Table 1. In Ethiopia, goat meat production was simulated for three agro-ecological zones as defined by the Ethiopia Livestock Master Plan. These were: the Lowland Grazing zone, pastoral areas based largely on grazing of natural pastures; the highland Mixed crop-livestock Rainfall Deficient (MRD) zone, where rainfall is 900–1400 mm and households are rural with some crop land and access to grazing; and the highland Mixed crop-livestock Rainfall Sufficient (MRS) zone, where rainfall is greater than 1400 mm. For this study the household type in the MRS zone was based on peri-urban and urban livestock producers who do not have access to land. In India, goat production was simulated in the arid zone, which is characterised by erratic rainfall and long dry spells, with a large, but increasingly degraded, grazing resource that supports a large number of small ruminants. The livestock simulation model
Small ruminants such as goats are an important source of income for smallholder farmers in South Asia and Sub-Saharan Africa: they may be kept as a stepping stone to owning larger and higher-value animals such as cattle or buffalo, or provide a more profitable and less risky alternative in marginal or densely populated areas where access to feed resources is limited. The aim of this study was to investigate the potential for different intervention packages to increase yields and profitability of goat meat production in Ethiopia and India. Household modelling was used to simulate the effects of interventions on goat production and household income in the extensive lowland grazing zone and highland mixed crop-livestock zones of Ethiopia, and the extensive arid zone of India.
establishment of oversown legumes, although there is evidence of successful establishment in African communal grazing lands. Further increases in production were achievable in more intensive systems through supplementation with cereal straw and crop by-products. Targeted supplementation of specific classes of livestock had a large impact, and would be most practical for farmers with stall-fed livestock. This is highlighted in the Indian example, where supplements could be provided to weaned male goats to increase growth and sale rates, or to does to increase kidding and survival rates. If resources were available, an effective strategy might be to feed poor-quality crop residues to mature does, which have relatively low energy requirements, whilst directing higher-quality but more expensive supplements towards male goats, which can be sold for cash. Whilst the opportunities to lift ruminant productivity through improved forages and/or feeding appear compelling from the analysis in this study, the challenges associated with adoption and implementation should not be under-estimated. Owen et al., in reviewing the limited success of animal nutrition interventions in developing countries, identified several causes, including poor or inappropriately targeted extension efforts, lack of participatory research and development approaches, and inadequate demonstration of benefit:cost ratios. Even with improved animal nutrition, the low genetic potential of local goat breeds means that large increases in production at the farm scale are not possible without improved genetics. Improving livestock genetics is a popular strategy with donors, and in the right circumstances can lead to substantial increases in production and profit. Leroy et al. provide a review of some of the challenges affecting the success of genetic improvement programs in developing countries. These include appropriate animal management and nutrition, adaptation of exotic breeds to challenging environmental conditions and diseases, the logistics of developing and maintaining systems for distributing improved genetics, and costs associated with investing in new genetics. Reducing animal mortality rates through better healthcare and disease management provided a relatively low-risk option to increase production rates and household income. Lower mortality rates resulted in larger flock sizes, and thus a higher number of births per year and more animals available for sale. The biggest increases in production and profit were achieved when low mortality rates were combined with improved nutrition and better genetics, so that increased animal numbers were accompanied by increased sale weights. It is also worth considering that increasing the flock size through reduced disease and mortality rates will increase the resource requirements of smallholder farmers. Whilst mortality rates in goats are high, a limitation of this analysis was the paucity of data available to confidently parameterise the model for the mortality impacts of specific diseases. Overall mortality rates from disease and management complexes were used instead. Further, there is little information available on how diseases affect growth and production in those animals that remain alive. Productivity improvements beyond mortality reduction were therefore not considered in the reduced-mortality scenarios, which may underestimate the benefits of disease reduction. More effort needs to be directed to better quantifying the benefits of disease management. While government services and development programs are often biased
against goats in favour of large ruminants, our results show that there is value to smallholders in investments in small ruminants.Household modelling showed that reproduction, growth and survival rates can be increased through better nutrition, genetics and healthcare, but that the biggest increase in production and profits will occur when multiple interventions are combined.
However, smallholder goat production in these areas is often low due to low growth and reproduction rates and high animal mortality.Packages were based on improved nutrition, reduced flock mortality from improved control of health and diseases, and replacing indigenous livestock with improved goat breeds.Our analysis showed that there are opportunities to increase goat meat production in both countries.Reproduction, liveweight gain and survival rates can be increased through better nutrition, genetics and healthcare, but the biggest increase in production and profits occurred when multiple interventions were combined.Importantly, interventions resulting in the biggest increases in goat meat production or number of animals sold did not always give the highest profits.
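A household model such as the IAT is far richer than can be reproduced here, but the compounding logic of the combined interventions can be illustrated with a toy flock budget. In the Python sketch below, every rate, weight, and price is a placeholder assumption rather than a calibrated value; the point is only that lowering mortality raises the number of animals sold, while better nutrition and genetics raise the value of each sale, so a combined package compounds.

# Toy yearly doe-flock budget (placeholder rates, not calibrated values).
def simulate_flock(years=20, does=10.0, kidding_rate=1.2, mortality=0.20,
                   sale_weight_kg=10.0, price_per_kg=3.0, max_does=15.0):
    """Return total sale income over the simulation horizon."""
    income = 0.0
    for _ in range(years):
        kids = does * kidding_rate * (1.0 - mortality)   # surviving offspring
        survivors = does * (1.0 - mortality)             # surviving does
        replacements = min(kids, max(0.0, max_does - survivors))
        does = survivors + replacements                  # rebuild breeding flock
        sold = kids - replacements                       # surplus animals sold
        income += sold * sale_weight_kg * price_per_kg
    return income

baseline = simulate_flock()
combined = simulate_flock(mortality=0.10, kidding_rate=1.4, sale_weight_kg=14.0)
print(baseline, combined)   # under these placeholders the package roughly doubles income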
the temperature dependence of the creep rate is an important aspect to consider if ice temperatures range from −9 °C to temperatures close to the melting point. The approach of Cox worked well in this study even though it was based on a small set of data. There is potential for further improvement based on better knowledge of temperature-dependent ice properties. One parameter used in the Bergdahl ice load model was found to vary systematically with location, with the lowest value observed at the center of the dam. This implies that actual ice rheology can be difficult to determine from measurements in small reservoirs. Since the stress field should be homogeneous in a completely confined ice sheet, the spatial dependence of stresses registered in the ice and the systematic rotation of the principal stresses both point toward the absence of confinement at some section of the boundary. However, the systematic dependence of model parameters on location suggests that it should be possible to model the spatial and temporal variability of stresses in small reservoirs numerically. In particular, earlier work found encouraging agreement between numerical models and field and laboratory measurements. Numerical models using intrinsic material properties and appropriate boundary conditions may be used to estimate effective parameters of models such as Eq. In this context one may also wish to consider the structure's ability to deform. In order to gain experience in defining numerical boundary conditions, biaxial measurements should be performed in reservoirs. A better understanding of the stress distribution will help in experiment design. With the computational resources available today, a better understanding of thermal ice loads is within reach. However, more data on small reservoirs without water-level fluctuations may be needed.
Static ice loads (ice actions) are a key design parameter for dams in cold climates.However, their theoretical description is still elusive, introducing uncertainty in design and hindering development of remediation measures.We present and analyze measurements of stresses due to thermal loads in a small reservoir in northern Norway.Several weeks of observations, including both cold and warm spells, were well-described by a simple equation that accounts for thermal expansion and temperature-dependent creep.One model parameter was found to depend systematically on the location of measurements within the reservoir.Biaxial stress measurements showed that the stress field was not homogeneous.Results suggest that the stress field in reservoirs should be predictable from first principles with numerical methods and point toward a promising, simple parameterization.
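The class of equation discussed above (thermal expansion building elastic stress while temperature-dependent creep relaxes it) can be written as dσ/dt = E(α dT/dt − ε̇_creep(σ, T)), with a Glen-type power-law creep rate and Arrhenius temperature dependence. The Python sketch below integrates this with order-of-magnitude constants that are our illustrative assumptions, not values fitted to the Norwegian measurements; it reproduces the qualitative behaviour described in the text, with creep capping the stress that a purely elastic model would predict.

# Toy thermal ice-load model: stress builds with thermal expansion and is
# relaxed by Glen-type, temperature-dependent power-law creep. Every
# constant is an order-of-magnitude assumption, not a fitted value.
import math

E = 9.0e9       # Young's modulus of ice, Pa (assumed)
ALPHA = 5.0e-5  # linear thermal expansion coefficient, 1/K (assumed)
B = 7.0e-13     # creep prefactor, Pa^-3 s^-1 (assumed)
Q = 6.0e4       # creep activation energy, J/mol (assumed)
R = 8.314       # gas constant, J/(mol K)
N = 3           # Glen creep exponent

def step_stress(sigma, temp_k, dT_dt, dt):
    """Advance the compressive stress (Pa) by one time step dt (s)."""
    creep_rate = B * math.exp(-Q / (R * temp_k)) * sigma**N   # strain rate, 1/s
    return max(0.0, sigma + E * (ALPHA * dT_dt - creep_rate) * dt)

# Hypothetical warm spell: ice warming from -9 degC at 0.3 K/h for 24 h.
sigma, temp_k, dT_dt = 0.0, 264.15, 0.3 / 3600.0
for _ in range(24 * 60):                 # one-minute steps
    sigma = step_stress(sigma, temp_k, dT_dt, 60.0)
    temp_k += dT_dt * 60.0
print(sigma / 1e3, "kPa")                # creep caps the stress near ~10^2 kPa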
This is a cross-sectional analysis of baseline data from the TreatFOOD study among 1609 children with moderate acute malnutrition (MAM). The activity measures were registered as a secondary outcome. The study was conducted in the Province du Passoré, Burkina Faso, at 5 local health centers and a nongovernmental organization. Children were screened by community health workers using mid-upper arm circumference (MUAC) tapes or by designated screening teams with the use of both MUAC and weight-for-height z score (WHZ). Furthermore, children could be referred to a study site from a health center or present at a site on a caretaker's initiative. The final assessment of study inclusion eligibility was performed at the site. Children were enrolled if a diagnosis of MAM was confirmed, defined as WHZ between −3 and −2 [14] and/or MUAC between 115 and 125 mm [15]. In the study site, WHZ was determined using WHO field tables, but anthropometry was later recalculated before analysis. Children were not included if they had been treated for severe acute malnutrition (SAM) or hospitalized within the past 2 months, were participating in a nutritional program, required hospitalization, or had severe disability. The study protocol was approved by the Ethics Committee for Health Research in Burkina Faso, and consultative approval was obtained from the Danish National Committee on Biomedical Research Ethics. Consent was obtained verbally and in writing from caretakers of the children before inclusion. The study was carried out in accordance with the Declaration of Helsinki and the international ethical guidelines for biomedical research involving human subjects published by the Council for International Organizations of Medical Sciences. Medical treatment was provided according to an adapted version of the Integrated Management of Childhood Illness guidelines [16]. At enrollment, a nurse conducted a clinical examination and collected data using structured questionnaires for sociodemographic variables and breastfeeding status. Fever was defined as axillary temperature ≥37.5°C. Upper and lower respiratory tract infections were diagnosed by experienced pediatric nurses based on an adapted version of the Integrated Management of Childhood Illness [16]. The morbidity data presented were collected at enrollment when initiating the activity measurement, and not repeated during the measurement period. Venous blood was collected to carry out a rapid antigen test for Plasmodium falciparum malaria and to determine hemoglobin level; anemia was defined as hemoglobin <11 g/dL. Serum was separated and stored at −20°C. C-reactive protein (CRP) and α1-acid glycoprotein (AGP) were determined using a simple sandwich enzyme-linked immunosorbent assay [17]. We defined CRP ≥10 mg/L and AGP ≥1 g/L as abnormal, indicating systemic inflammation. Weight and length were measured to the nearest 100 g and 1 mm, respectively. MUAC was measured to the nearest 1 mm at the midpoint between the olecranon and the acromion process using a standard measuring tape. All measurements were done in duplicate. The anthropometry measurements were done by trained staff and equipment was checked daily. Standardization sessions were carried out prior to the start of the trial to ensure precision and accuracy of measurements. During the trial, anthropometry staff were closely supervised by the anthropometry team leader and the site supervisor. Movement ability of the children was classified as not able to crawl/walk, able to crawl, or able to walk, as assessed by measurement staff based on observation using an adapted version of the Malawi Developmental Assessment Tool [18]. Physical activity was measured objectively using a triaxial accelerometer. The
accelerometer was attached to an elastic belt placed on the skin at the right side of the hip and worn for 6 consecutive days. Caretakers were instructed to let only enrolled children wear the device and to make sure that the accelerometer was placed on the right hip during the monitoring period. Monitors could be removed during bathing. We used data recorded by the device starting 7 hours after leaving the clinic and ending 7 hours before returning to the clinic, to avoid recording unusual activity caused by the need to attend the clinical appointments. After monitor removal, the caregiver was interviewed using a structured physical activity questionnaire covering perception and acceptability of the device, episodes of device removal, and whether children were carried and, if so, how many times per day. The recorded activity data were uploaded from the monitors using the ActiLife 6 software. Raw accelerometer data were collected at a rate of 100 Hz. Data were integrated into 10-second epochs to permit detection of short bouts of activity [6,19]. Each axis was converted to counts per minute, following which vector magnitude was calculated as the square root of the sum of the three squared count values. We included data from all times of the measurement period, including night time, in the analysis, except the 7 hours at the beginning and end of the file and periods marked as nonwear. We defined nonwear time as continuous runs of zero activity ≥90 minutes. Days with <8 hours of valid wear data and participants with <1 valid day of recording were excluded from the present analyses. We calculated total physical activity as the mean vector magnitude over valid days (a code sketch of this data reduction is given below). All statistical analyses were performed using Stata v12. Anthropometric WHZ and height-for-age z scores (HAZ) were calculated using the package “zscore06” in Stata. Variables were tested for normality by histograms and Shapiro-Wilk tests. Means ± SD were calculated for normally distributed variables and medians for non-normally distributed variables. To determine associations between activity and covariates, we first built unadjusted models comparing the volume of physical activity between groups of morbidity, biochemistry, and anthropometry. Second, we adjusted for age and sex, and finally for all covariates, including age, sex, paternal and maternal profession, season of measurement, breastfeeding, number of children under 5 years of age in the household, carrying status, and movement ability. The Figure is based on random-effect mixed models with age and sex as fixed effects and child as a random effect. A total of 1609 eligible children, predominantly Mossi, were enrolled
Study design In a cross-sectional study, 1609 children aged 6-23 months wore a triaxial accelerometer (ActiGraph GT3x+; ActiGraph, Pensacola, Florida) for 6 consecutive days, from which total physical activity was determined. Data on morbidity were collected based on history and physical examination, and serum C-reactive protein and α1-acid glycoprotein were measured. The mean (±SD) total physical activity was 707 (±180) vector magnitude counts per minute (cpm). Trial registration Controlled-Trials.com: ISRCTN42569496.
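The epoch, vector-magnitude, and wear-time rules described in the methods above translate directly into code. The following is a minimal sketch under the stated rules (10-second epochs, zero-runs ≥90 minutes treated as nonwear, ≥8 hours of valid wear per day); it is not the study's actual ActiLife/Stata pipeline, and all function and variable names are our own.

```python
import numpy as np

EPOCH_S = 10                            # 10-second epochs, as in the study
NONWEAR_RUN = 90 * 60 // EPOCH_S        # zero-activity runs >= 90 min -> nonwear
MIN_WEAR_EPOCHS = 8 * 3600 // EPOCH_S   # >= 8 h of valid wear per day

def vector_magnitude(counts_xyz):
    """counts_xyz: (n_epochs, 3) array of per-axis counts/min.
    Returns the vector magnitude per epoch: sqrt(x^2 + y^2 + z^2)."""
    return np.sqrt((counts_xyz.astype(float) ** 2).sum(axis=1))

def nonwear_mask(vm):
    """Flag epochs belonging to continuous runs of zero activity >= 90 min."""
    mask = np.zeros(vm.size, dtype=bool)
    run_start = None
    for i, v in enumerate(np.append(vm, 1.0)):  # sentinel closes a trailing run
        if v == 0 and run_start is None:
            run_start = i
        elif v != 0 and run_start is not None:
            if i - run_start >= NONWEAR_RUN:
                mask[run_start:i] = True
            run_start = None
    return mask

def total_physical_activity(daily_vm):
    """daily_vm: list of per-day vector-magnitude arrays.
    Returns total physical activity: mean VM (cpm) over valid days only."""
    valid_means = []
    for vm in daily_vm:
        worn = vm[~nonwear_mask(vm)]
        if worn.size >= MIN_WEAR_EPOCHS:
            valid_means.append(worn.mean())
    return np.mean(valid_means) if valid_means else np.nan
```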
in the study from September 2013 to August 2014, after consent from caretakers. Of the 1609 enrolled, 29% of children were enrolled based on MUAC only, 50% based on both WHZ and MUAC, and 21% based on WHZ only, as previously reported [20]. Among these, 1544 children had baseline physical activity data and were included in the analysis. The median age was 11.3 months. More than one-half of the children were girls, and almost all children were breastfed. The majority of children were from families using "coal/wood/straw" as cooking fuel and from families who owned their own house. The mean MUAC, WHZ, and HAZ were 123 mm, -2.2, and -1.7, respectively. As previously reported [21], comorbidities were common. The 65 children who were excluded from the analyses did not differ from those included with respect to age, proportion of girls, proportion breastfed, or prevalence of fever, positive malaria test, diarrhea, cough, or raised levels of CRP or AGP. Of the 1544 children with physical activity data, 1498 completed 6 consecutive days of recording with a daily median wear time of 24 hours. At the first day of enrollment, the 25th, 10th, 5th, and 1st percentiles of wear time were 16.7, 11.4, 11.4, and 10.6 hours, respectively. The mean total physical activity was 707 cpm, and age was inversely associated with activity. Compared with children below 12 months of age, those aged 12-17 months and 18-23 months had 34 and 121 cpm lower activity, respectively. Judging from the diurnal pattern of activity, waking hours began on average between 6 a.m. and 7 a.m., from which time activity increased up to 9 a.m., declined to a local nadir at around 2 p.m., increased again until reaching a peak at 7 p.m., and decreased thereafter. The highest accumulation of activity occurred between 6 p.m. and 7 p.m. During daytime hours, younger children were more active than older children. In unadjusted models, children who were not able to crawl/walk had 51 cpm lower and 38 cpm higher activity compared with those classified as able to crawl or able to walk, respectively. There was no difference between boys and girls, but children of farming parents had higher activity levels. Ethnic group and socioeconomic status, based on cooking fuel and house ownership, were not associated with activity. Breastfed children were 239 cpm more active than those not breastfed. Neither anthropometric indicators nor admission criteria were associated with physical activity. Multivariable analysis, including all variables from Table I as covariates, did not change the difference between age groups; children <12 months of age had higher adjusted activity than children aged 12-17 months and 18-23 months. The adjustment only marginally affected the role of breastfeeding but increased the impact of the ability to crawl and walk. Also, father's profession and season of measurement were unaffected by adjustment.
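The adjustment strategy described in the methods (unadjusted, age- and sex-adjusted, and fully adjusted models, plus a mixed model with child as a random effect for the Figure) was run in Stata v 12. Purely as an illustration, a Python/statsmodels analogue might look as follows; the file path and all column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format table: one row per child-day of recording.
df = pd.read_csv("treatfood_baseline.csv")  # placeholder file name

# Unadjusted: activity (cpm) vs a single covariate, e.g. fever.
m0 = smf.ols("activity_cpm ~ fever", data=df).fit()

# Adjusted for age and sex only.
m1 = smf.ols("activity_cpm ~ fever + age_months + sex", data=df).fit()

# Random-effect mixed model: age and sex as fixed effects,
# child as a random effect (analogous to the model behind the Figure).
m2 = smf.mixedlm("activity_cpm ~ age_months + sex",
                 data=df, groups=df["child_id"]).fit()
print(m2.summary())
```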
"On the contrary, the impact on activity by mother's profession or carrying status seemed to be confounded and no longer significant after adjustment.In unadjusted analyses, most measures of clinically assessed morbidity, anemia, and inflammation were negative correlates of physical activity.In the adjusted models, signs of respiratory tract infection were no longer associated with activity.However, fever, malaria, anemia, and inflammation remained strongly associated with lower volume of activity, with estimates only marginally changed compared with unadjusted models.Finally, none of the anthropometric measures, WHZ, HAZ, and MUAC, were associated with physical activity; HAZ, WHZ, and MUAC.None of the selection criteria was associated with physical activity; WHZ only vs WHZ and MUAC or MUAC only vs WHZ and MUAC.Among this large group of young children from Burkina Faso with MAM, we found activity to be correlated with measures of sociodemographic status and morbidity.Few studies from low-income countries are available using similar equipment, but most of these used a different study design in that they did not collect data for the full 24 hours of the day, and there were also differences with respect to accelerometer data reduction methods.This makes direct comparison somewhat difficult but do allow relative comparisons of the within-population associations with covariates.A single-center study from Ethiopia in a small group of children with SAM used an identical approach to the one used in the present study.11,Compared with the Ethiopian study, we found a 5-fold higher level of physical activity in children with MAM from Burkina Faso, suggesting that the degree of malnutrition is a likely determinant of movement in this age group.We did not find any difference in activity between boys and girls among children with MAM in our study, possibly reflecting that activity of young healthy children may not differ by sex.The lack of difference in physical activity between boys and girls is consistent with studies from Belgium among 20-month-old children,6 from Australia among 19-month-old children,22 from Sweden among 2-year-old children,8 and from The Netherlands.9,With respect to the association with motor development milestones, ability to crawl or walk were associated with higher activity levels compared with children who are less developed; remarkably, this effect was observed independently of age and how much the child was being carried.Diurnal patterns in activity showed peaks of activity in the morning and afternoon.This could represent times where children were more engaged in unstructured play.The decrease of activity observed during midday is likely due to feeding and subsequent napping, although we have no observational data to confirm this.These diurnal patterns are, however, consistent with studies among 36-month-old children from New Zealand23 and Australia.24,Although poor nutritional status is considered to have a negative effect on activity,25,26 we did not see any association between the anthropometric measures and physical
Results A total of 1544 (96%) children had physical activity measured, of whom 1498 (97%) completed 6 consecutive days of physical activity recording with a daily median wear time of 24 hours. Age was negatively correlated with physical activity; compared with children below 12 months of age, those 12-17 months of age and 18-23 months of age had 51 (95% CI, 26; 75) and 106 (95% CI, 71; 141) cpm lower physical activity, respectively.
activity, possibly because all children included in this cohort had anthropometric measures within the narrow MAM range. Breastfed children, who were younger, seemed to be more active. Although this could have been influenced by the children being carried, both the age and the breastfeeding effects remained significant after adjustment for how much the child was carried, suggesting that breastfeeding may play an important role in the nutritional support for activity of children with MAM. The higher activity seen among children from farming families may have been because they spent more time in the field, either playing or taking part in field activities. They could also have been carried while the mother worked in the field, which could have influenced the registered movement; indeed, this effect was no longer significant when other covariates, including carrying status, were considered. Children enrolled during the rainy season were more active, as was also reported in a study from Zanzibar [25]. Because farming activity is linked to the rainy season, this could account for the greater physical activity among children measured during this season. We found a negative association between infection at enrollment and physical activity. Our results were consistent with the study in Zanzibar, where malaria was found to be negatively associated with children's activity, likely because of inflammation, lethargy, and poor appetite [25]. Children compensate for a lack of dietary energy by decreasing energy expenditure through reduced physical activity [27]. Infection and inflammation can lead to a reduction in body mass, which may reduce the capacity to perform work or movement [28]. Higher hemoglobin was associated with greater physical activity, as has also been seen for children in Mexico and Indonesia [29,30]. Anemia may be related to iron deficiency or to inflammation. Irrespective of the underlying cause, anemia leads to lower oxygen-carrying capacity or reduced cellular oxidative capacity, resulting in low energy production associated with low activity levels. It is notable that anemia remained significantly associated with activity in the multivariable analysis, which included infection indicators, suggesting that these other mechanisms may also be important. Anemia may reduce children's endurance, as has been found in adolescents and school children [31]. The strengths of this study include its large sample size, the use of an objective measure of activity with high time resolution, and high compliance covering all 24 hours of the day. Also, this is the first study to investigate physical activity and its correlates among young children with MAM using accelerometers. The limitations of the study include a lack of age-matched control data from well-nourished young children from Burkina Faso. Also, most of the children were breastfed and may to some extent have been carried by caregivers, and the lack of synchronous activity registration of the caregiver and/or activity logs meant that passive movement of the child caused by carrying could not be distinguished in detail from the child's own physical activity. Another potential limitation is the lack of a sleep log, which would have enabled better comparison with other studies that include only daytime activity. Physical activity declines with age and is associated with infection and inflammation status in children with MAM. However, because younger children are more likely to be carried, future studies should use both accelerometers and activity logs to improve assessment and aid the distinction between passive
and active movement.
Objective To assess the levels of physical activity among young children with moderate acute malnutrition and to identify clinical, biochemical, anthropometric, and sociodemographic correlates of physical activity. Fever and malaria were associated with 49 (95% CI, 27; 70) and 44 (95% CI, 27; 61) cpm lower activity, respectively. Elevated serum C-reactive protein and α1-acid glycoprotein were both negative correlates of physical activity, and hemoglobin was a positive correlate. Conclusions Physical activity declines with age in children with moderate acute malnutrition and is also inversely related to infection and inflammatory status. Future studies are needed to ascertain cause and effect of these associations.
the hardness of the tablets. Studies by Sugimoto et al. demonstrate that mannitol absorbed little moisture upon an increase in relative humidity compared to excipients such as glucose or sorbitol. The results in Fig. 4 show that the tablets incorporating freeze-dried vesicles can be formatted as rapidly disintegrating tablets. However, the tablets produced still expose the vesicles to the external milieu of the stomach, which can result in vesicle breakdown and/or loss of entrapped antigen if designed for the delivery of vaccines. Hence, as an alternative to tabletting, freeze-dried vaccine formulations within capsules were also examined. Vesicles were therefore prepared containing the H3N2 antigen and loaded within capsule shells, and their rehydration characteristics were compared to a control tablet to ensure the capsule did not interfere with the vesicles. The capsule formulations did not require the use of additives such as dextran or mannitol, as the capsule shell provides structural integrity, thus improving handling and protecting the vesicular vaccines enclosed. The capsule shells are presented in Fig. 5A with a fill volume of 0.5 mL. Antigen loading, vesicle size distribution and surface charge were analysed prior to and after freeze-drying. The initial vesicle size was comparable to previous formulations, with an average vesicle size of 6–6.5 μm and a highly negative zeta potential. After freeze-drying within the capsules, the contents were rehydrated with 0.5 mL ultrapure water, vortexed and then analysed for vesicle size and charge. The vesicles from the capsules reduced in size to an average of 4 to 5 μm, with the zeta potential remaining highly negative; this is within the size range suitable for uptake by the Peyer's patches. Similarly, antigen loading within the vesicles was compared across all 3 dosage form platforms. Antigen loading was 30 to 40% of the initial amount used, in line with the antigen loading of vesicles prior to freeze-drying. Fig.
5D represents the disintegration time of the capsule and the control tablet formulation within gastric media. Results show that the tablets disintegrate within 5 s, compared to the capsules, which have a disintegration/release time of 5–6 min. In both cases, complete dispersion of the freeze-dried product and release of the niosomes was achieved. Capsules prepared by Jones et al. have shown that in humans in the fasted state, gelatin-based capsules disintegrate within 7 ± 3 min, compared to 12 ± 3 min in the fed state. The capsules prepared in this study offer this delayed release profile of up to 6 min, reducing the contact time between the stomach contents and the vesicles, and potentially any drugs/antigens prone to acid or enzymatic degradation. In this study, we have produced alternative solid dosage forms for the oral delivery of bilayer vesicles. Such systems can provide a more convenient and cost-effective delivery system for oral vaccines by helping distribution and access to patients. Should improved targeting to the intestine be required, delayed-release capsules could also be adopted to ensure the bilayer vesicle systems are not released until the small intestine. Furthermore, depending on the lipid composition and antigen selected, additional studies to assure the validity of the antigen may be required. However, with the current bilayer vesicles, previous studies have shown these vesicles are stable within the GI tract, are taken up by the M cells and stimulate an immune response; therefore, the primary outcome of this study was to format the formulation as an easy-to-use solid dosage form.
In this work, we develop vesicles incorporating sub-unit antigens as solid dosage forms suitable for the oral delivery of vaccines. Using a combination of trehalose, dextran and mannitol, freeze-dried oral disintegrating tablets were formed which upon rehydration release bilayer vesicles incorporating antigen. Initial studies focused on the optimisation of the freeze-drying cycle, and subsequently the excipient content was optimised by testing tablet hardness, disintegration time and moisture content. The use of 10% mannitol and 10% dextran produced durable tablets which offered strong resistance to mechanical damage yet appropriate disintegration times, and dispersed to release niosomes entrapping antigen. From these studies, we have formulated a bilayer vesicle vaccine delivery system as rapidly disintegrating tablets and capsules.
efficiently escape the neutralizing antibodies? Additionally, there is a need for characterization of the exo-AAV preparations. The extent to which membrane components could be contaminating the exo-AAV preparation is unclear. It is possible that the inclusion of membrane components in addition to DMEM could alter the tropism or the immune response after injection. While we have not presented evidence one way or the other on this issue, we would like to highlight this possibility for the sake of transparency. Future studies are essential to better define these points in large and small animal models. Another key observation of the current study is the increase of transgene expression in neurons, without enhanced expression in astrocytes, using exo-AAV vectors under the PGK promoter. Application of viral vectors in gene therapy requires the choice of an optimal promoter able to direct gene expression in specific target cells. Many promoters have been tested in the CNS, including cytomegalovirus (CMV), chicken beta-actin (CBA), and the ubiquitous PGK1; although CMV and CBA are commonly used in gene transfer vectors, they have demonstrated potential drawbacks. In vivo, CMV is prone to silencing over time after transduction into the genome of cells in some tissues, such as the hippocampus, striatum, and substantia nigra [35-37], whereas CBA shows significantly low expression in motor neurons [38]. Previous investigations by Mellor et al. [39] showed that when the PGK1 gene was cloned into a multicopy plasmid and expressed in yeast, Pgk1p accumulated to as much as approximately 50% of total cell protein. Furthermore, different powerful expression vectors were constructed based on the promoter region of the PGK1 gene, and these vectors have been used to study the expression of a number of heterologous genes [40-42]. Although the activity of promoters can vary depending on host cells and experimental settings, based on this evidence we chose the PGK1 promoter as the basis for construction of a new vector containing microvesicles. The brain is an organ where many of the most devastating human neurological disorders arise, for which the causes remain elusive and treatments are still lacking. These diseases encompass a broad spectrum of pathological states and can have specific effects on the metabolism, development, and function of neurons. Several studies report transduction of neurons and non-neuronal cells such as astrocytes using std-AAV6 and std-AAV9 [43-46]; contrary to our expectations, we observed a brain region-specific tropism for neurons and oligodendrocytes using std-AAV6 and std-AAV9 conjugated with exosomes, while low transduction of astrocytes was observed using both exo-AAV and std-AAV constructs. We wondered what could be the causes of these dissimilarities, considering two key factors: the brain injections were performed using the same viral preparations and the same injection volume, and all viral vectors used in this study were under the control of the same PGK promoter. A possible reason might be the different cytoarchitectures among brain regions such as the cortex, hippocampus, and striatum, which may influence the efficiency of exo-AAV- and std-AAV-mediated gene delivery [47]. Additionally, differences in viral vector manufacturing, purification, and route of administration have previously been documented to drive different cell tropism of AAV vectors in the brain [48]. However, these results established that a modified AAV vector carrying an envelope exhibited a
preferential in vivo tropism for neurons and oligodendrocytes in specific brain regions, representing a major departure from the dominant neuronal tropism reported for other AAV serotypes after direct injection into the brains of rodents [43,49,50]. In particular, several studies have described the ability of specific std-AAV serotypes to transduce oligodendrocytes at low efficiency, requiring the use of oligodendrocyte-specific promoters to prevent expression in neurons, the preferred cell type for these vectors [51,52]. In addition, we hypothesize that, because AAVs enter target cells via receptor-mediated endocytosis [53,54], the exosome association may confer a new tropism on the viral vector via binding to different cell receptors. Our study suggests that exo-AAVs might be applied to specifically transfect neurons and oligodendrocytes, constituting a tool of choice in gene therapy for neurological diseases, such as multiple sclerosis, spinal muscular atrophy, or amyotrophic lateral sclerosis, for which no effective curative therapy currently exists. Applying an in vivo imaging technology with image resolution equivalent to confocal microscopy in living animals, it has been possible to track and detect exo-AAV-mediated GFP expression throughout the brain. Conventional optics and scanners are large and bulky; in addition, they require wide surgical exposure that may lead to significant trauma. Confocal endomicroscopy is a minimally invasive method that has sufficient resolution and allows model animals to be studied longitudinally, reducing the numbers needed per experiment and improving statistical power. This may allow testing of whether an exo-AAV construct is capable of delivering its gene into a specific area of the CNS as a pre-translational screen. Current drug formulations often fail to deliver therapeutics into the correct brain regions; an efficient delivery system is sorely needed. In conclusion, the exo-AAV vector is a powerful tool to enhance vector spreading in the brain, and it might offer the potential to address significant unmet clinical needs. Eight-week-old male C57BL/6J mice were purchased from Janvier. Mice were housed in a temperature-controlled room and maintained on a 12-h light-dark cycle. Mice were maintained under specific pathogen-free conditions. Food and water were available ad libitum. All procedures were approved by the Regional Ethics Committee in Animal Experiment No. 44 of the Ile-de-France region. All animal experiments were carried out in full accordance with the European Community Council directive for the care and use of laboratory animals. Standard AAV6-PGK-GFP and AAV9-PGK-GFP and their exo-AAV counterparts used in this study
However, in vivo spreading of exosomes and AAVs after intracerebral administration is poorly understood. This study provides an assessment and comparison of the spreading into the brain of exosome-enveloped AAVs (exo-AAVs) and unassociated AAVs (std-AAVs) through in vivo optical imaging techniques such as probe-based confocal laser endomicroscopy (pCLE) and ex vivo fluorescence microscopy. Although sparse GFP-positive astrocytes were observed using exo-AAVs, our results show that the enhancement of transgene expression resulting from exo-AAVs was largely restricted to neurons and oligodendrocytes. Our results suggest (1) the possibility of combining gene therapy with an endoscopic approach to enable tracking of exo-AAV spread, and (2) that exo-AAVs allow widespread, long-term gene expression in the CNS, supporting the use of exo-AAVs as an efficient gene delivery tool.
in the non-cardiomyocyte population. A semi-quantitative estimation of Hsp70, Hsp90, CaN, Calp and SarcAct levels in cells was obtained by Western blotting. The results showed insignificant changes in the expression levels of the proteins studied. Other than FACS, there are currently no methodologies or technologies to determine the expression level of proteins simultaneously in live and dead cells, which hinders data validation; repetition of experiments is the only available corroboration in this circumstance. The drawback of the potential loss of floating dead cells, which are often discarded during the washing steps, was negated, as in our previous studies, by adding back the pellets of the pooled media or buffer solution discarded after cell treatments. This is the first report to compare the expression of CaN, Calp and Hsps in cardiomyocytes during ischemia and subsequent reperfusion. Previous studies were only able to gauge the global expression of these proteins in cardiomyocytes without any distinction between live and dead cells. The current study uses a FACS-based assay to differentiate live and dead cells and also quantifies protein expression in these cells separately. Further studies using animal knockdown models and rescue assays using over-expressed proteins could support these novel findings. In brief, the present study describes the use of triple staining for comparative protein expression analysis of Hsp70, Hsp90, CaN, Calp and SarcAct in normal, ischemia-induced and reperfused cardiomyocytes by FACS. Ischemia induces an increase in the expression of the molecular chaperones Hsp70 and Hsp90 in cardiomyocytes, along with an increase in cell death. The expression of Hsp70 and Hsp90 decreases slightly during reperfusion. The absence of enhanced cell death suggests the cardioprotective nature of these proteins. CaN expression peaks during ischemia and is reduced during subsequent reperfusion, similar to our previous studies. An increase in cell death was observed in cells expressing CaN following ischemia, but the absence of a further increase in cell death during reperfusion implies that CaN expression promotes cell survival and is therefore cardioprotective. Calp expression increased during ischemia and subsequent reperfusion, similar to previous reports. Decreased cell death was observed in cells co-expressing Calp with Hsp70 and Hsp90 during reperfusion compared with cells expressing only Hsp70, Hsp90 or Calp. This suggests a cardioprotective role for Calp through inhibition of Calpn. Expression of SarcAct, being ubiquitous, remained consistent in cardiomyocytes and was used as a control. Thus, this study validates the cardioprotective nature of Hsp70, Hsp90, CaN and Calp previously reported by many groups.
Objective: Calcineurin (CaN) interacts with calpains (Calpn) and causes cellular damage, eventually leading to cell death. Calpastatin (Calp) is a specific Calpn inhibitor that, along with CaN stimulation, has been implicated in reduced cell death and self-repair. The molecular chaperones, heat shock proteins Hsp70 and Hsp90, act as regulators of Calpn signaling. This study aims to elucidate the role of CaN, Calp and Hsps during induced ischemia and reperfusion in primary murine cardiomyocyte cultures. Methods and results: Protein expression was analyzed concurrently with viability using flow cytometry (FACS) in ischemia- and reperfusion-induced murine cardiomyocyte cultures. The expression of Hsp70 and Hsp90, both molecular chaperones, increased during ischemia with a concurrent increase in death of cells expressing these proteins. The relative expression of Hsp70 and Hsp90 during ischemia with respect to CaN was enhanced in comparison to Calp. Reperfusion slightly decreased the number of cells expressing these chaperones. There was no increase in death of cells co-expressing Hsp70 and Hsp90 along with CaN and Calp. CaN expression peaked during ischemia, and subsequent reperfusion reduced its expression and cell death. Calp expression increased both during ischemia and subsequent reperfusion, but cell death decreased during reperfusion. Conclusion: The present study adds to the existing knowledge that Hsp70, Hsp90, CaN and Calp interact with each other and play a significant role in cardioprotection.
Generative modeling of high-dimensional data like images is a notoriously difficult and ill-defined problem. In particular, how to evaluate a learned generative model is unclear. In this paper, we argue that *adversarial learning*, pioneered by generative adversarial networks (GANs), provides an interesting framework to implicitly define more meaningful task losses for unsupervised tasks, such as generating "visually realistic" images. By relating GANs and structured prediction under the framework of statistical decision theory, we highlight links between recent advances in structured prediction theory and the choice of the divergence in GANs. We argue that the insights about "hard" and "easy" to learn losses can be analogously extended to adversarial divergences. We also discuss the attractive properties of parametric adversarial divergences for generative modeling, and perform experiments to show the importance of choosing a divergence that reflects the final task.
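To make the notion concrete, one common formalization (a sketch in standard notation, not necessarily the paper's exact objective) writes an adversarial divergence between the data distribution $p$ and the model $q_\theta$ as the best achievable discrimination score over a parametric critic family $\{f_\phi\}$:

$$d_{\mathcal{F}}(p, q_\theta) \;=\; \sup_{\phi}\; \mathbb{E}_{x\sim p}\big[f_\phi(x)\big] \;-\; \mathbb{E}_{\tilde{x}\sim q_\theta}\big[f_\phi(\tilde{x})\big].$$

Restricting $f_\phi$ to a neural family is what makes the divergence "parametric": the critic can only penalize differences it can represent, so the divergence implicitly encodes a task-dependent notion of similarity, in contrast to classical divergences such as KL, which compare distributions pointwise regardless of the final task.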
Parametric adversarial divergences implicitly define more meaningful task losses for generative modeling; we draw parallels with structured prediction to study the properties of these divergences and their ability to encode the task of interest.
by the density difference between the leaked gas and the ambient air. This study also demonstrated the effect of the leakage rate on the gas spread. The nondimensional concentration profiles normalized by Cin had almost the same shape for gas leakage rates of approximately 1.4 × 10⁻³ kg s⁻¹ ≤ Wm ≤ 4.0 × 10⁻³ kg s⁻¹. While the present numerical modeling is useful for obtaining the concentration profiles of the leaked flammable gas, additional information is needed to assess the hazard posed by gas leakage. Extensive efforts must be made to link the concentration profiles predicted by the numerical modeling to hazard assessment by evaluating the possibility of ignition of the leaked gas. The present numerical modeling does not introduce a turbulence model to compute the concentration profiles of the leaked gas. While the numerical results therefore do not involve uncertainties produced by turbulence-model equations, the computations require massive meshes to discretize the governing equations, resulting in large CPU time and hardware requirements. The implementation of a suitable turbulence model is necessary to reduce hardware requirements and CPU time, as shown in Table 2, and to expand the applicability of the developed numerical tool to a wide variety of gas leakage conditions. In addition, the numerical predictions have not yet been validated against a series of laboratory experiments. Validation of the present numerical results by laboratory and in situ measurements should be provided in the near future.
This study proposes a new numerical formulation of the spread of a flammable gas leakage. The new numerical approach has been applied to establish fundamental data for a hazard assessment of flammable gas spread in an enclosed residential space. The approach employs an extended version of a two-compartment concept and determines the leakage concentration of gas using a mass-balance-based formulation. The study also introduces a computational fluid dynamics (CFD) technique for calculating three-dimensional details of the gas spread by resolving all the essential scales of fluid motion without a turbulence model. The present numerical technique promises numerical solutions with fewer uncertainties produced by model equations while maintaining high accuracy. The study examines the effect of gas density on the concentration profiles of flammable gas spread. It also discusses the effect of gas leakage rate on gas concentration profiles.
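As an illustration of the compartment idea, the following is a generic well-mixed two-compartment mass-balance sketch; it is not the paper's extended formulation, all volumes, flows, and names are hypothetical, and the buoyancy-driven stratification that the CFD resolves is not represented here.

```python
# Generic two-compartment mass balance for a leaked gas (illustrative only).
V1, V2 = 15.0, 15.0   # compartment volumes (m^3), hypothetical
Wm = 2.0e-3           # leakage mass rate (kg/s), within the studied range
Qx = 5.0e-3           # inter-compartment exchange flow (m^3/s), hypothetical
Qv = 1.0e-2           # ventilation outflow from compartment 2 (m^3/s)

def step(c1, c2, dt):
    """One explicit-Euler step for the compartment concentrations (kg/m^3).
    The leak feeds compartment 1; ventilation removes gas from compartment 2."""
    dc1 = (Wm + Qx * (c2 - c1)) / V1
    dc2 = (Qx * (c1 - c2) - Qv * c2) / V2
    return c1 + dt * dc1, c2 + dt * dc2

c1 = c2 = 0.0
dt = 0.5                          # time step (s)
for _ in range(int(3600 / dt)):   # simulate one hour
    c1, c2 = step(c1, c2, dt)
print(f"after 1 h: c1 = {c1:.3e} kg/m^3, c2 = {c2:.3e} kg/m^3")
```

At steady state this sketch gives c2 = Wm/Qv and c1 = c2 + Wm/Qx, so concentrations scale linearly with Wm and the normalized profiles collapse onto one curve, consistent with the observation above that profiles normalized by Cin were nearly identical across the studied leakage rates.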
rates that have not changed in the past 60 years, a result largely due to the lack of available therapies that can both navigate anatomical barriers and avoid adversely impacting delicate neuronal tissue. The rapid advances in cancer immunotherapies represent a promising frontier in the potential treatment of these otherwise inoperable tumors. However, while immunotherapy approaches have shown promising results in other cancers, little progress has been made for brain tumors. The brain is a critical organ, and it imposes unique anatomical, physiological, and immunological barriers to successful treatment. These unique anatomical and physiological constraints lead to intrinsic engineering challenges for developing therapies that must act within the brain and the brain tumor immune microenvironment. To date, engineering approaches to applying immunotherapies in the brain, beyond simply repurposing existing extraneural immunotherapies, are lacking; the opportunities, however, are ample. By considering therapeutic approaches from an engineering perspective, we may be able to leverage a wider variety of technologies to advance tailored treatment strategies that address the particular biological constraints of treating within the brain, thus enabling better brain cancer immunotherapies. The following is the supplementary data related to this article. A combination of tumor type, type of immunotherapy, and intervention guided the search parameters applied to the National Institutes of Health clinical trials database between February 22nd and March 1st, 2017. Additional search parameters were applied based on literature review to encompass all trials for a specific intervention, and all duplicates were removed. Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.addr.2017.06.006.
Malignant brain tumors represent one of the most devastating forms of cancer, with abject survival rates that have not changed in the past 60 years. This is partly because the brain is a critical organ that poses unique anatomical, physiological, and immunological barriers. The unique interplay of these barriers also provides an opportunity for creative engineering solutions. Cancer immunotherapy, a means of harnessing the host immune system for anti-tumor efficacy, is becoming a standard approach for treating many cancers. However, its use in brain tumors is not widespread. This review discusses the current approaches, and hurdles to these approaches, in treating brain tumors, with a focus on immunotherapies. We identify critical barriers to immunoengineering brain tumor therapies and discuss possible solutions to these challenges.
Intramural hematomas of the gastrointestinal tract are uncommon, mostly located in the esophagus or duodenum, and are of idiopathic origin or secondary to a condition or intervention. A gastric intramural hematoma (GIH) is very rare, with only a few previous case reports. A very unusual case of GIH is presented, with a literature review. The current work has been reported in line with the SCARE criteria. An 87-year-old woman presented to the Emergency Room (ER) complaining of acute and intense thoracic pain radiating to the dorsum and upper abdomen, associated with nausea, dyspnea, sudoresis and a hypertensive peak. She had a medical history of hypertension. Eight years earlier, she had undergone a hip arthroplasty complicated by pulmonary thromboembolism (PTE), after which she was started on anticoagulants for the first six months and then on daily aspirin associated with cilostazol. On the ER's initial evaluation, a chest radiograph revealed an enlarged mediastinum. The patient had persisting pain despite the administration of morphine and nitroglycerin, without hemodynamic instability. She had normal cardiac enzymes and electrocardiogram, with an elevated D-dimer test. Emergency thoracic computed tomography angiography (CTA) showed an ectatic ascending aorta, a descending aorta aneurysm with mural thrombus, and laminar mediastinal fluid, excluding PTE. The echocardiogram identified, in addition to the aortic ectasia, a mild aortic insufficiency. Subsequently, the patient was admitted to the hospital and transferred to the Intensive Care Unit. As the pain subsided, she was managed conservatively. After three days, she had another burst of severe epigastric pain, associated with hematemesis, a hemoglobin drop to 7.9 g/dL, hypotension and tachycardia. The standard protocol of care for patients with acute upper gastrointestinal bleeding was executed: NPO diet, intravenous fluids, blood transfusion and proton pump inhibitors, followed by esophagogastroduodenoscopy (EGD). The EGD identified an enormous bright-red subepithelial mass occupying the fundus and corpus along the lesser curvature, with luminal bulging but no mucosal bleeding, ulcerations or erosions, as shown in Figs. 1 and 2. No therapy was given and no biopsies were performed. The remainder of the stomach, esophagus and duodenum were normal. The patient denied any recent trauma, surgery or endoscopic intervention. Hence, she underwent further investigation in order to determine the nature of the hematoma, with the working differential diagnoses of intramural neoplasia and dissecting visceral aneurysm. After the EGD, she underwent a new CTA, with evidence of slightly hyperdense wall thickening of the corpus and fundus, a small amount of blood-compatible material in the lumen, and hyperdensity of the perigastric adipose tissue, as shown in Figs. 3 and 4. It also showed a descending aorta aneurysm with an extensive crescent-like mural thrombus, which was hyperdense in the non-enhanced scan, with an intimal flap and false lumen, suggesting a dissecting acute intramural aortic hematoma, as demonstrated in Fig.
5. There were signs of rupture, characterized by the presence of a mediastinal hematoma measuring 6.4 × 5.4 × 3.3 cm at the level of the pulmonary trunk, and a left pleural effusion. The combined findings led to the final diagnosis of aortic dissection with contained rupture. Hematologic examination found no other coagulopathies, and the only predisposing factor was the previous use of two antiplatelet drugs, which were suspended upon admission. Thus, the aortic dissection with contained rupture was considered the cause of the GIH. Endovascular repair was performed, with implantation of an aortic graft. The angiography showed no contrast extravasation from the gastroparietal branch and no visceral aneurysm. On the next day, a control EGD revealed the GIH was stable, with slight regression. The oral diet was resumed on the second post-operative day. The GIH was managed conservatively and the patient had a favorable outcome. Approximately three weeks after the diagnosis, another EGD showed GIH involution. She was discharged after a few days and the prescription of aspirin was restarted. On routine follow-up, on the 30th postoperative day, she underwent a control endoscopic ultrasound (EUS), which was normal; the GIH had disappeared. The patient remains on follow-up, with regular appointments with her vascular surgeon and geriatrician. The patient gave consent for publication of her case in the medical literature. GIH is a very rare condition. It can arise in the submucosal or muscular layer of the organ. The postulated mechanism for the bleeding is shredding of terminal arteries at the point of penetration into the muscular layer, with subsequent dissection of the muscularis propria from the submucosa; the most common contributing factor is hemorrhagic diathesis/anticoagulant use. An extensive PubMed search was conducted, looking for all reported data in adult patients in the English literature and utilizing a predefined set of search entries, with the findings summarized in Table 1. In the review, with a total of 26 articles and 57 patients, the main etiologies were trauma, post-interventional endoscopy, coagulopathy/anticoagulants and spontaneous causes. Gastric blunt trauma is infrequent, with a reported incidence of up to 1.7%. It may occur in high-velocity impact involving the epigastrium in the post-prandial period, due to tangential tearing along fixed points, increased intraluminal pressure and crushing against the vertebral bodies. The most frequently injured part of the stomach is the fundus, followed by the corpus. The endoscopy-related cases include post-percutaneous endoscopic gastrostomy, argon plasma coagulation (APC), endoscopic mucosal resection (EMR), endoscopic submucosal dissection, EUS-guided fine needle aspiration (EUS-FNA) and post-injection therapy. Furthermore, there are a few reports of anecdotal causes of GIH, such as foreign body, peptic ulcer disease, amyloidosis and pancreatitis. The authors did not find any previous reports like the present one, with aortic dissection-associated GIH. In the GIH work-up, CT is the current diagnostic tool of choice. Solazzo et al., in a very elegant study on gastric blunt trauma, proposed a radiologically based grading system with diagnostic and prognostic applications. The grades range from 1 to 4, representing gastric contusion to full-thickness rupture. The authors believe such a classification can
Introduction: Intramural hematomas of the gastrointestinal tract are uncommon, usually located in the esophagus or duodenum, with idiopathic or secondary causes. Discussion: The postulated mechanism for the bleeding in gastric intramural hematoma is shredding of terminal arteries at the point of penetration into the muscular layer, with subsequent dissection of the muscularis propria from the submucosa. The most frequently cited risk factor is hemorrhagic diathesis/anticoagulant use, and the main etiologies are trauma and post-interventional endoscopy.
be extrapolated to the present case, despite the different cause. Thus, our patient would be classified as having a grade 1 lesion, with the suggestion of conservative care, which was performed, with the additional correction of the aortic dissection/rupture. Endoscopy plays a key role in the investigation of GIH, especially when it follows interventional endoscopy. Usually, there is gastric luminal narrowing and a submucosal bright-red or dark-red mass, depending on the timing of the examination, sometimes with the presence of active mucosal bleeding. EUS is also a useful method to assess lymph node status and the depth of the mass, providing cytological and histological material to differentiate it from subepithelial tumors/GISTs. In this case, EUS was undertaken during outpatient follow-up to assess regression of the hematoma and to perform EUS-FNA if needed. However, there is the remote possibility of not making a final diagnosis in time. The only GIH-attributed fatality in the literature happened in an elderly man. It was consequent to the rupture of a left gastric dissecting aneurysm, in the so-called "double-rupture phenomenon", i.e., delayed fatal rupture of a visceral splanchnic aneurysm, in which it first hemorrhages into the lesser sac with subsequent overflow into the peritoneal cavity, followed by sudden circulatory collapse, with an estimated mortality rate of 70%. There is no standard of care for such a rare condition. Treatment spans a progressive spectrum: it can be conservative, or resort to minimally invasive therapies and/or surgery. The treatment may be cause-dependent. Hence, GIHs secondary to coagulopathy are generally managed conservatively with blood transfusion and anticoagulation reversal. Oral intake is usually interrupted in patients with abdominal pain, acute bleeding and symptoms of gastric outlet obstruction. If there is active bleeding or a trend toward enlargement, transcatheter arterial embolization (TAE) may be indicated. However, it is only technically possible if contrast extravasation from the gastroparietal branch of the left gastric artery is identified. There are two documented cases of post-acute pancreatitis GIHs: one was treated conservatively, and the other was successfully managed with CT-guided percutaneous drainage. In the interventional endoscopy-related GIHs, an endoscopic approach may be attempted, with two successful reports. In the first, in an APC-caused hematoma, the authors performed endoscopic incision and drainage. The second GIH, which was consequent to an EUS-FNA, was treated with endoscopic band ligation at the bleeding spot. Furthermore, Park et al. described a post-EMR GIH treated with combined therapeutics: endoscopic hemoclipping of the resection wound and subsequent TAE. In traumatic GIHs, therapy is tailored, considering the patient's clinical status and prognostic stratification, to decide between close follow-up and surgery.
"In Solazzo et al.'s case series, the biggest one to date so far, most of the patients were treated surgically .Surgery is still the most frequently employed treatment.As exposed in the current review, 50.8% of the sample underwent surgical exploration.It is especially recommended in cases with unclear diagnosis, suspected complications, ongoing bleeding and failure of minimally invasive therapy .The decision to operate emergently is largely driven by the clinical scenario .Gastric Intramural hematoma is rare and has many different causes.This article described a new etiology for it – aortic dissection with contained rupture favoured by the combined use of two antiplatelet drugs.The computed tomography is the diagnostic modality of choice, with the aid of other examinations, as performed in this case.The treatment comprises conservative measures, minimally invasive approach or most commonly surgery.Learning from this case, physicians should be aware of this rare finding and prepare to make a faster diagnosis and consequential treatment.This article is exempt from ethical approval.The patient gave consent for publication of her case in the medical literature.This article received no funding.Borges AC: management of the patient, article concept, data collection and manuscript writing;,Cury MS: interpretation of data and images, analysis and discussion;,Carvalho GF: technical review of the article and imagiological validation of the findings;,Furlani SMT: final review of the article.All the authors declare no conflicts of interest.
We present a very rare case of gastric intramural hematoma with a previously unreported etiology, with a literature review. Case presentation: An elderly woman suffered an acute thoracic aortic dissection followed by a gastric intramural hematoma, diagnosed through endoscopy and computed tomography angiography. The treatment included endovascular aortic repair and conservative management. In the diagnostic work-up, computed tomography is the method of choice, usually associated with endoscopy. There is no standard of care for such a rare condition; thus, treatment may be cause-dependent, ranging from conservative to minimally invasive and/or surgery. Conclusions: Gastric intramural hematoma is a rare disorder with many causes, and we describe a new etiology for it. Computed tomography is the diagnostic modality of choice, with the aid of other examinations. The treatment comprises conservative measures, a minimally invasive approach or, most commonly, surgery.
Self-training is one of the earliest and simplest semi-supervised methods. The key idea is to augment the original labeled dataset with unlabeled data paired with the model's predictions. Self-training has mostly been studied for classification problems. However, in complex sequence generation tasks such as machine translation, it is still not clear how self-training works, due to the compositionality of the target space. In this work, we first show that it is not only possible but recommended to apply self-training in sequence generation. Through careful examination of the performance gains, we find that the noise added to the hidden states is critical to the success of self-training, as it acts like a regularizer which forces the model to yield similar predictions for similar inputs from unlabeled data. To further encourage this mechanism, we propose to inject noise into the input space, resulting in a "noisy" version of self-training. Empirical studies on standard benchmarks across machine translation and text summarization tasks under different resource settings show that noisy self-training is able to effectively utilize unlabeled data and improve the baseline performance by a large margin.
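A minimal sketch of the noisy self-training loop just described follows. All names are stand-ins (train and decode abstract over a real seq2seq training stack, not an actual library API), and the word-drop-plus-local-shuffle perturbation is one common choice of input noise, not necessarily the paper's exact scheme.

```python
import random

def add_noise(tokens, drop_p=0.1, shuffle_k=3):
    """Input perturbation: random word dropout plus local shuffling."""
    kept = [t for t in tokens if random.random() > drop_p]
    # Local shuffle: sort positions perturbed by a small random offset.
    return [t for _, t in sorted(
        (i + random.uniform(0, shuffle_k), t) for i, t in enumerate(kept))]

def noisy_self_training(labeled, unlabeled, train, decode, rounds=3):
    """labeled: list of (src, tgt) token-list pairs; unlabeled: list of src.
    train(pairs) -> model; decode(model, src) -> predicted target tokens."""
    model = train(labeled)
    for _ in range(rounds):
        # 1) Pseudo-label unlabeled sources with the current model.
        pseudo = [(src, decode(model, src)) for src in unlabeled]
        # 2) Inject noise into the *inputs* only; keep pseudo-targets clean,
        #    so the model must map perturbed inputs to the same prediction.
        noisy_pseudo = [(add_noise(src), tgt) for src, tgt in pseudo]
        # 3) Retrain on real data plus noisy pseudo-parallel data.
        model = train(labeled + noisy_pseudo)
    return model
```

The key design point is step 2: because only the source side is perturbed, the model is encouraged to produce similar outputs for similar inputs, which is the smoothing/regularization mechanism the text identifies as critical.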
We revisit self-training as a semi-supervised learning method for neural sequence generation problems, and show that self-training can be quite successful with injected noise.
The world's largest Cu–Ni–PGE resources are hosted by large layered intrusions such as the Bushveld Complex, Noril'sk Camp, Sudbury Igneous Complex or the Duluth Complex. The volumetrically important base and precious metal ores in these intrusions were produced by segregation, fractionation and settling of immiscible magmatic sulfide, but other models involving the significance of fluids and halogens also exist. Devolatilization, desulfurization and/or partial assimilation of the country rocks is also a principal process of ore formation in mafic intrusions. It has recently been shown that high-grade, though small-volume, Cu–Ni and PGE mineralization may also occur in the footwall lithologies of layered intrusions. Fluid inclusion and geochemical studies on hydrothermal alteration zones around mafic–ultramafic complexes reveal that fluids were produced by contact metamorphism and partial melting of the footwall, or entered the system from external sources. These fluids played a significant role in the remobilization of primary magmatic sulfide and in the transport of metals in the footwall of these magmatic complexes. Locally, high-grade mineralization was also reported in the metagranitoid footwall of the South Kawishiwi intrusion by Patelke. No metal concentration data are available from the studied WM-002 drill core, but both the intrusion and the footwall sections of the nearby WM-001 drill core were systematically analyzed. The Cu content in the granitoid footwall is locally up to 1.91 wt.%, and the TPM content is markedly high in the 338–383 m and 391–392 m depth intervals, with up to 827 ppb. Locally in the footwall, the Pd concentration is anomalously high, up to 860 ppb. Metal concentration data from the footwall, comparable with metal concentrations in the SKI, underline the economic significance of mineralization in the footwall. Hydrothermal processes and their roles in PGE remobilization in the troctolitic intrusions of the DC have been demonstrated by Mogessie et al., Ripley et al., Severson and Gál et al. Studies by Mogessie et al. conclude that Cu and PGEs were remobilized from the primary magmatic mineralization along fracture zones by C–O–H–S and Cl-rich fluids. Gál et al.
described vein-type hydrothermal Cu mineralization associated with an actinolite–chlorite–prehnite–pumpellyite–calcite alteration assemblage in the hanging wall of the SKI at the Filson Creek deposit. In this work, remobilization of Pd from the primary magmatic Cu–Ni–PGE mineralization to an unknown location was also demonstrated, and it was shown that the late serpentinization of the intrusion resulted in only very local-scale remobilization of base metals and PGEs. Hydrothermal remobilization of primary ores has also been demonstrated in the Babbitt Cu–Ni deposit by Ripley et al. Based on stable isotope studies, they showed that fluids from both magmatic and metasedimentary footwall sources were involved in the Pt–Pd redistribution. These studies have the following economic significance: (1) fluid circulation in the primary mineralized zones of the intrusion may have resulted in depletion of the ores in PGEs, and (2) there is a high probability of the formation of secondary hydrothermal Cu mineralization even far from the basal mineralized zones, in the hanging-wall or footwall units of the intrusion. The aim of this paper is to present the textural, mineralogical and geochemical characteristics of the hydrothermal alteration zones in the charnockitic footwall of the SKI and to characterize the hydrothermal fluids based on fluid inclusion studies. Fluid inclusion studies have been carried out not only on footwall samples but also on non-metamorphosed and unaltered granitoid samples far from the contact aureole, in order to distinguish regional and local fluid flow events. The DC and associated intrusions in northeastern Minnesota are part of the Mesoproterozoic Midcontinent Rift (MCR), which is exposed in the Lake Superior region. Intrusive and effusive rocks of the MCR cover a ca.
5700 km² arcuate area in northeastern Minnesota. The DC is defined as a continuous mass of mafic to felsic plutonic rocks that intruded between the Neoarchean to Early Proterozoic footwall and the co-magmatic, rift-related volcanic rocks of the hanging wall. Miller and Severson distinguished four general rock series within the nearly continuous mass of intrusive rocks forming the DC, namely the Felsic Series, the Early Gabbro Series, the Anorthositic Series and the Layered Series. The Cu–Ni–PGE sulfide mineralization is hosted by three of the Layered Series intrusions that were emplaced during the main stage of the rift-related magmatism. The Cu–Ni–PGE-laden mineralized intrusions are, from NE to SW, the South Kawishiwi intrusion (SKI), the Bathtub intrusion and the Partridge River intrusion. In its current position, the DC is tilted by 15–20° to the SE, and therefore the immediate footwall units of the SKI, Bathtub intrusion and Partridge River intrusion crop out along the north-eastern perimeter of the DC. Footwall rocks adjacent to and beneath the DC include Neoarchean intrusive, sedimentary and magmatic rocks and Paleoproterozoic sedimentary rocks of the Animikie Basin. Neoarchean granitoid rocks form the direct footwall of the mineralized layered intrusions only in the northeastern segment of the SKI. Granitic, granodioritic, monzonitic and tonalitic rocks comprise most of the 2.7 Ga Giants Range batholith (GRB), which is intrusive into the supracrustal rocks of the Wawa–Abitibi subprovince, the southernmost granite–greenstone belt of the Superior Province. Except for a short segment along the SKI, the footwall of the DC is the Early Proterozoic Biwabik Iron Formation and the graphite- and pyrite-bearing Virginia Formation. Multiple intrusions of cumulate-textured anorthositic and troctolitic rocks produced high-temperature amphibole–pyroxene hornfels facies contact metamorphism, devolatilization and, locally, partial melting of the footwall lithologies. The dominantly disseminated Cu–Ni–PGE sulfide mineralization of the SKI, Partridge River intrusion and Bathtub intrusion is confined to the lower 100–300 m of the intrusions. The sulfide ore mineralization collectively constitutes over 4.4 billion tons of material averaging 0.66% Cu and 0.20% Ni at a
In the Neoarchean (~2.7 Ga) contact-metamorphosed charnockitic footwall of the Mesoproterozoic (1.1 Ga) South Kawishiwi intrusion of the Duluth Complex, the primary metamorphic mineral assemblage and Cu-Ni-PGE sulfide mineralization are overprinted by an actinolite + chlorite + cummingtonite + prehnite + pumpellyite + quartz + calcite hydrothermal mineral assemblage along 2-3 cm thick veins.
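For a rough sense of scale, the resource figures quoted above imply contained-metal tonnages by simple multiplication of tonnage and grade (a back-of-envelope calculation on the stated averages, not a reported reserve estimate):

$$4.4\times10^{9}\,\text{t} \times 0.0066 \approx 2.9\times10^{7}\,\text{t Cu},\qquad 4.4\times10^{9}\,\text{t} \times 0.0020 \approx 8.8\times10^{6}\,\text{t Ni}.$$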
calcite II can be considered the last phase to precipitate during the hydrothermal fluid flow. Pure or nearly pure CO2-bearing inclusions are common in charnockites. Pressure and temperature conditions of trapping, calculated for the carbonic inclusions, are generally consistent with the P and T conditions of the host rock; therefore, these inclusions are believed to have formed close to the peak of metamorphism. Since almost pure CO2 inclusions are very rare in nature, the nearly pure CO2 composition, the origin and the transport of the carbonic phase require explanation. Carbonic phases in migmatites generally originate from mantle degassing, decarbonation of carbonate-bearing lithologies, or reaction of graphite, aqueous fluids and hydrous minerals. Only the Early Proterozoic Virginia Formation contains graphite and carbonate beds. Decarbonation of these metasedimentary rocks, upward migration of carbonic fluids, as well as a strong textural relationship between graphite and Cu–Ni–PGE sulfide mineralization in the basal mineralized zone of the layered intrusions, have been widely reported in the past decades. However, these rock units are not found in the Spruce Road deposit, since they were probably digested by the intruding troctolitic magma. Evidence for the presence of the metasedimentary rocks prior to the emplacement of the SKI magma is provided by the hornfels inclusions in the basal zone of the SKI. The chalcopyrite–pyrrhotite-rich sulfide melt in the pyroxene veins is derived from the basal mineralized zone; it is therefore proposed that the carbonic-rich fluids may also have migrated along these partially molten shear zones from the graphite-bearing Virginia Formation-rich basal mineralized zones into the charnockitic footwall. Recently, Banerjee et al. modeled CO2 migration in charnockitic rocks. They found that percolation of CO2 along grain boundaries is inhibited by the large wetting angles but is possible and fast along microfractures. CO2 diffusion in melts, however, is only possible above ~840 °C. Considering that the temperature in the proximal 10 m of the footwall of the DC was between 830 and 910 °C and the granite was in a partially molten state, both processes could have played a role in CO2 diffusion into the granite, but the presence of highly permeable zones probably promoted the CO2 diffusion. Calcium-bearing, high-salinity fluids have been found as primary inclusions in calcite II in the hydrothermal vein assemblage and in the rock-forming quartz of the GRB 10 km away from the intrusion–footwall contact. Ca-rich, high-salinity fluids are commonly found in sedimentary basins and on shields, where the high salinity is the result of significant fluid–rock interaction under low fluid/rock ratio conditions. Therefore, these fluids are considered regional formational fluids. Groundwater in the Soudan Mine has recently been investigated by Holland et al.
and Alexander (pers. comm.). They reported that the composition of regional formational fluids has not shown any significant variation over the past 2 Ga. The reported compositions are comparable with the compositions of the fluid inclusions discussed in this work. Sulfur isotope compositions of sulfide assemblages related to the partial melting have average values of approximately +8‰ in the footwall and are comparable with the sulfur isotope ratios in the basal zones of the intrusion. The similar values indicate that during the infiltration of the sulfide liquid into the footwall, along permeable zones created by the partial melting, there was essentially no isotopic fractionation between the sulfide liquid and the high-temperature sulfide minerals. However, the comparable sulfur isotope values in the footwall and in the basal mineralized zone are direct evidence that the sulfide was most likely derived from the intrusion. At high temperatures, isotopic fractionation is minor and hence can be neglected. However, with decreasing temperature, the significance of isotopic fractionation increases, although the magnitude of the fractionation depends upon the fluid species and minerals involved. In addition, isotopic fractionation depends not only on temperature but also on the pH, fO2, fS2 and the total S content of the fluid. According to Sakai and Ohmoto, in a hydrothermal fluid SO42− strongly sequesters 34S relative to sulfide. Therefore, sulfide minerals precipitating from a relatively oxidizing hydrothermal fluid will have lighter δ34S values relative to the total dissolved sulfur in the fluid. We propose, based on the petrographic evidence, that the hydrothermal fluids dissolved sulfide minerals with sulfur isotope values around +8‰. During re-precipitation from a low-temperature hydrothermal fluid under high oxygen fugacity conditions, due to the isotopic fractionation indicated above, the precipitated sulfide minerals will have lighter δ34S values than the dissolved sulfur in the hydrothermal fluid. δ34S values of around +5.4 to +5.7‰ validate this relationship between sulfides of magmatic and hydrothermal origin. Under certain conditions, variation in pH may also cause variation in the sulfur isotope ratio. Chalcopyrite in association with calcite II indicates increasing pH. Increasing pH will shift the δ34S of HS− and H2S to lighter values relative to total dissolved sulfur, and hence the δ34S values of sulfide minerals in the hydrothermal veins will decrease relative to the sulfur in the hydrothermal fluid. This process may also have contributed to the decrease in sulfur isotope values in the hydrothermal veins. To sum up, increasing or high oxygen fugacity and/or increasing pH led to the precipitation of hydrothermal chalcopyrite having lighter sulfur isotope values relative to the primary sulfide assemblage. Chalcopyrite inclusions in the Type II and III fluid inclusion assemblages indicate that these fluids played a role in metal redistribution. However, these solid inclusions are found associated with CO2- and CH4-bearing fluids only in the recrystallized quartz porphyroblast, which is mantled by chalcopyrite and pyrrhotite. Therefore, estimation of the extent of this remobilization is difficult. Hydrothermal veins with comparable mineral assemblages
The presence of chalcopyrite inclusions in these fluid inclusions and in the trails of these fluid inclusion assemblages confirms that, at least on a local scale, these fluids played a role in base metal remobilization.The composition of the formational fluids in the Canadian Shield is comparable with the composition of the studied fluid inclusions.
have been reported from the Layered and Anorthositic Series of the Partridge River intrusion and SKI.These studies suggested that one type of hydrothermal fluid was probably a Ca-bearing, high salinity fluid, based on the calcic alteration around the chlorite veins in plagioclase feldspars.Based on fluid inclusion studies, our work confirms this model.Temperature estimates for chlorites in the hydrothermal veins of Gál et al. overlap perfectly with our calculations.Gál et al. also showed that the hydrothermal fluid flow was controlled by major fault zones related to the rifting.A clear, fault-related occurrence of hydrothermal mineralization was not confirmed in the footwall, but the mineralization is characteristically associated with partial melt veins, which are probably related to the tectonic activity during the rift formation.Based on the mineralogical and geochemical similarities and on the overlap in temperature estimates, we conclude that the hydrothermal formations in the footwall and within the DC may belong to the same, large-scale hydrothermal fluid flow system.Mogessie et al. and Mogessie and Saini-Eidukat reported PGE mineralization associated with the chlorite-bearing hydrothermal veins, whereas a similar association was not confirmed by Gál et al.PGM in association with hydrothermal veins have not been found in the footwall during this study.Notwithstanding, it has to be noted that this might account for the PGE-poor character of the remobilized ore.Hydrothermal alteration assemblages and Ca-bearing fluids have been found in the SKI, Partridge River intrusion and Bathtub intrusion of the Duluth Complex, as well as in the granitic footwall at the Spruce Road deposit.Schmidt reported regional, low-grade burial metamorphism of the Keweenawan basalts characterized by albite, epidote, prehnite, actinolite and chlorite.Jolly and Smith and Jolly presented evidence that fluids associated with this low-grade burial metamorphism were able to remobilize Cu and Zn from the volcanic rocks.This evidence raises the question of whether the alteration in the footwall of the SKI is necessarily associated with the formation of the MCR.Considering that the composition of shield fluids did not change considerably in the past 2 Ga, the alteration may have happened any time after the formation of the Duluth Complex.Four different fluid inclusion assemblages have been distinguished in the charnockitic footwall of the SKI at the Spruce Road deposit.Apparently pure CO2-bearing fluid inclusions were found as primary inclusions in a recrystallized quartz porphyroblast with near-critical homogenization temperatures.Assuming that these inclusions were trapped during the peak metamorphism, when the temperatures were in the range from 830 to 920 °C, the calculated pressure of trapping varies between 1.6 and 2.0 kbar.This pressure range corresponds to earlier pressure estimates carried out in the felsic dykes at the bottom of the SKI.The CO2 in the inclusions can be derived from the Paleoproterozoic graphite-bearing Virginia Formation, found as inclusions in the basal zones of the SKI, which was decarbonated during the contact metamorphism.A possible explanation for the apparently pure CO2 composition of the inclusions is that the H2O that originally percolated with the CO2 in the system reacted with the silicate minerals, forming secondary hydrous silicates.Fluid inclusion assemblages with CO2 + H2O and CH4 + N2 + H2O compositions migrated after the peak of the contact metamorphism along some permeable 
zones in the charnockite.Based on petrographic evidence for both assemblages, the mixing of a low-salinity aqueous and a low-density carbonic fluid can be modeled.Trapping temperatures and pressures were calculated using the isochore interception method for the pure end-members.The obtained pressures range from 240 to 650 bar and from 315 to 360 bar, respectively.These pressures are far lower than those calculated for the primary inclusions.Therefore, these fluids were probably trapped at some time after the formation of the SKI, during the exhumation of the DC.The carbonic phase can be derived from the decarbonation of the graphite-bearing metasedimentary Virginia Formation inclusions.Both fluid phases played a significant role, at least at a local scale, in the remobilization and redistribution of copper along some fracture zones in the charnockite.In order to estimate the economic significance of hydrothermal remobilization, more extensive work involving several drill core samples is needed, together with detailed geochemical work to evaluate the role of fluids in PGE redistribution.Ca-bearing, high salinity fluids resulted in the formation of pervasive or vein-type lower greenschist to prehnite–pumpellyite facies alteration.In these zones the sequence actinolite + cummingtonite → chlorite + calcite I + quartz + albite + magnetite → prehnite + pumpellyite + calcite II + chalcopyrite indicates cooling from high to low temperatures.The compositions of fluid inclusions found in the second generation of calcite are comparable with current groundwater compositions, indicating the involvement of shield brines in the late-stage fluid–rock interaction related to the cooling of the contact aureole of the Duluth Complex.Since Ca-bearing high-salinity fluids resulted in similar alteration assemblages in several units of the Duluth Complex, the possibility of a post-Keweenawan fluid flow event cannot be excluded.Sulfur isotope studies have revealed that the different sulfide assemblages, characterized by their sulfur isotope values, infiltrated from the intrusion into the footwall primarily during the contact metamorphic process.During the hydrothermal remobilization, due to the preferred fractionation of 32S into the sulfide phase, the δ34S values decrease compared to the source sulfur isotope values.Accordingly, the measured low δ34S values range around + 5.5‰.
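The isochore interception calculation described above lends itself to a short numerical illustration. Over a limited temperature interval each end-member isochore can be approximated as a straight line in P-T space, and the trapping conditions are read off at the intersection. The Python sketch below does exactly this; the slope and intercept coefficients are purely hypothetical placeholders, not data from this study.

```python
# Minimal sketch of the isochore interception method: trapping
# conditions are estimated from the intersection of the isochores of
# two coevally trapped end-member fluids in P-T space.
# Isochores are approximated as linear, P = a + b*T; all coefficients
# below are hypothetical placeholders, not values from this study.

def intersect_isochores(a1, b1, a2, b2):
    """Intersection of P = a1 + b1*T and P = a2 + b2*T (T in deg C, P in bar)."""
    T = (a2 - a1) / (b1 - b2)
    return T, a1 + b1 * T

# Hypothetical steep aqueous and flat low-density carbonic isochores.
T_trap, P_trap = intersect_isochores(a1=50.0, b1=3.0, a2=200.0, b2=1.0)
print(f"Trapping conditions: T = {T_trap:.0f} degC, P = {P_trap:.0f} bar")
```

With these placeholder coefficients the intersection falls at about 75 °C and 275 bar, i.e. within the low-pressure, low-temperature range obtained here for the secondary assemblages.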
In calcite hosted by the hydrothermal alteration zones and in a single recrystallized quartz porphyroblast, four different fluid inclusion assemblages are documented; the compositions of these fluid inclusions provide the P-T conditions of the fluid flow, and help to define the origin of the fluids and to evaluate their role in the remobilization and reprecipitation of the primary metamorphic sulfide assemblage.Pure CO2 fluid inclusions were found as early inclusions in a recrystallized quartz porphyroblast.These inclusions may have been trapped during the recrystallization of the quartz during the contact metamorphism of the charnockite in the footwall of the SKI.The estimated trapping pressure (1.6-2.0 kbar) and temperature (810-920 °C) conditions correspond to estimates based on felsic veins in the basal zones of the South Kawishiwi intrusion.The estimated trapping pressure and temperature conditions (240-650 bar and 120-150 °C for CO2-H2O-NaCl inclusions and 315-360 bar and 145-165 °C for CH4-N2-H2O-NaCl inclusions) are significantly lower than the P-T conditions (>700 °C and 1.6-2 kbar) during the contact metamorphism, indicating that this fluid flow might not be related to the cooling of the Duluth Complex and its contact aureole.No evidence has been observed for PGE remobilization and transport in the samples.The source of the carbonic phase in the carbonic assemblages (CO2; CH4) could be the graphite present in the metasedimentary hornfelsed inclusions in the basal zones of the South Kawishiwi intrusion.The hydrothermal veins in the charnockite can be characterized by an actinolite + cummingtonite + chlorite + prehnite + pumpellyite + calcite (I-II) + quartz mineral assemblage.Chlorite thermometry yields temperatures around 276-308 °C during the earliest phase of the fluid flow.This suggests that the composition of the fluids did not change in the past 2 Ga and that base metal remobilization by formational fluids could have taken place any time after the formation of the South Kawishiwi intrusion.
amorphous Al-rich (N,K)-A-S-H type gel, while the zeolitic phases faujasite-Na and partially Sr-substituted zeolite Na-A were observed as additional phases for geopolymer gels cured at 80 °C.The (N,K)-A-S-H type gel comprises Al and Si in tetrahedral coordination, with Si in Q4(4Al) and Q4(3Al) sites, and Na+ and K+ located within extra-framework sites charge balancing the net negative charge resulting from Al3+ in tetrahedral coordination.Upon incorporation of Sr and Ca, the alkaline earth cations displace some of the alkali cations Na+ and K+ from the extra-framework sites and fulfil the charge balancing role.The remaining alkali cations are unaffected.Incorporation of alkaline earth cations results in a slight decrease in the Si/Al ratio of the (N,K)-A-S-H gel due to the increased charge balancing capacity resulting from substitution of divalent alkaline earth cations for monovalent alkali cations.Both Ca and Sr induce the same structural changes and cause a slight decrease in the Si/Al ratio in the geopolymer gels.No other changes were observed for geopolymer gels cured at 20 °C; however, in those cured at 80 °C incorporation of Sr appeared to promote the formation of LTA over FAU zeolite phases.The findings presented here have significant implications for the long term stability and durability of these materials and suggest that metakaolin-based geopolymer gels are excellent candidates for production of wasteforms for immobilisation of radioactive waste containing 90Sr.
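The charge-balance argument above can be made concrete with a toy counting model. In the sketch below (an illustration only, with hypothetical cation inventories), every tetrahedral Al3+ contributes one unit of framework negative charge that must be balanced by an extra-framework cation; because Sr2+ or Ca2+ balances two such units while Na+ or K+ balances one, substituting divalent for monovalent cations raises the charge-balancing capacity, admits more Al into the framework, and therefore lowers the Si/Al ratio.

```python
# Toy charge-balance model for an aluminosilicate gel: each
# tetrahedral Al3+ carries a net framework charge of -1, balanced by
# extra-framework cations (+1 for Na+/K+, +2 for Sr2+/Ca2+).
# Cation counts are hypothetical, chosen only to show the trend.

def si_al_ratio(n_tetrahedra, n_monovalent, n_divalent):
    """Si/Al of a gel whose Al content is limited by the total
    charge-balancing capacity of the extra-framework cations."""
    al = n_monovalent * 1 + n_divalent * 2  # Al sites that can be balanced
    al = min(al, n_tetrahedra // 2)         # no more than half the sites
    return (n_tetrahedra - al) / al

print(si_al_ratio(100, n_monovalent=30, n_divalent=0))   # alkali only: ~2.33
print(si_al_ratio(100, n_monovalent=20, n_divalent=10))  # with Sr/Ca: 1.50
```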
Radioactive waste streams containing 90Sr, from nuclear power generation and environmental cleanup operations, are often immobilised in cements to limit radionuclide leaching.Due to poor compatibility of certain wastes with Portland cement, alternatives such as alkali aluminosilicate ‘geopolymers’ are being investigated.Here, we show that the disordered geopolymers ((N,K)-A-S-H gels) formed by alkali-activation of metakaolin can readily accommodate the alkaline earth cations Sr2+ and Ca2+ into their aluminosilicate framework structure.The main reaction product identified in gels cured at both 20 °C and 80 °C is a fully polymerised Al-rich (N,K)-A-S-H gel comprising Al and Si in tetrahedral coordination, with Si in Q4(4Al) and Q4(3Al) sites, and Na+ and K+ balancing the negative charge resulting from Al3+ in tetrahedral coordination.Faujasite-Na and partially Sr-substituted zeolite Na-A form within the gels cured at 80 °C.Incorporation of Sr2+ or Ca2+ displaces some Na+ and K+ from the charge-balancing sites, with a slight decrease in the Si/Al ratio of the (N,K)-A-S-H gel.Ca2+ and Sr2+ induce essentially the same structural changes in the gels.This is important for understanding the mechanism of incorporation of Sr2+ and Ca2+ in geopolymer cements, and suggests that geopolymer gels are excellent candidates for immobilisation of radioactive waste containing 90Sr.
There are more than 4 million neonatal deaths annually and a third are caused by severe bacterial disease, which mainly presents as sepsis .In sub-Saharan Africa, the limited available data show that neonatal sepsis is caused by both Gram-positive and Gram-negative bacteria , although more than half of the cases are attributable to the former, particularly Staphylococcus aureus , Streptococcus pneumoniae and group B streptococcus (GBS) .Early-neonatal sepsis is mainly due to intrapartum bacterial vertical transmission during delivery or during the first weeks of life as a result of the close physical contact with the mother if she carries pathogenic bacteria .If newborns are mainly infected by their mother, an intervention that is able to reduce maternal bacterial carriage should prevent vertical transmission and consequently neonatal sepsis.Azithromycin is a macrolide with a wide antimicrobial spectrum currently licensed for use in children >6 months of age for a wide range of infections .As part of the WHO-recommended trachoma control strategy, mass azithromycin treatment campaigns in countries where trachoma is endemic decreased both the nasopharyngeal pneumococcal carriage and the overall childhood mortality .Azithromycin has also been used in pregnant women in sub-Saharan Africa in several trials designed to reduce the incidence of maternal malaria, preterm deliveries and low birthweight, but a meta-analysis found no effect on these outcomes .However, a recent study conducted in Papua New Guinea showed a 25% reduction in low-birthweight infants after adding monthly azithromycin for 3 months to the standard sulphadoxine-pyrimethamine during the last months of pregnancy .Blocking vertical transmission has already been used successfully in the context of GBS, the main bacterium causing neonatal sepsis in developed countries, in Europe and the USA .But whereas in Europe and the USA treatment is targeted at women with GBS vaginal carriage, in sub-Saharan Africa systematic treatment may be more feasible, since half of pregnant women in the region are carriers of bacteria associated with neonatal sepsis .In a first proof-of-concept assessing the potential of a new intervention to prevent neonatal sepsis, we evaluated the efficacy of one oral dose of azithromycin administered to women in labour in decreasing bacterial carriage both in the mother and her newborn.The study protocol has been published elsewhere .Briefly, this was a phase III, double-blind, placebo-controlled, randomized trial in which women in labour were randomized to receive a single dose of oral azithromycin or placebo.The packaging and labelling of the interventional medical product were conducted by IDIFARMA.Azithromycin and placebo were provided as tablets packed in blisters.IDIFARMA created the randomization list and numbered the blisters according to the list.One blister pack of interventional medical product contained four tablets, each of 0.5 g of azithromycin or placebo.The active drug and the placebo looked identical.The statistician of the Data Safety Monitoring Board (DSMB) kept the list until the final database was locked.The investigators were blinded to the patient's allocation until the database was locked, when the code was broken.The study was based at the Jammeh Foundation for Peace (JFP), a government-run health centre located in western Gambia that manages 4500 deliveries/year.The population in the catchment area is representative of The Gambia and it covers the main ethnic groups.Approximately 70% of deliveries in the country 
occur in health facilities.The climate of the area is typical of the sub-Sahel region.Illiteracy is high .Between April 2013 and April 2014, women in labour aged 18–45 years were recruited when attending the JFP labour ward.They had given signed consent to participate in the study during their antenatal visits.Eligibility was re-assessed in the JFP labour ward based on the exclusion criteria: known human immunodeficiency virus infection; any acute or chronic condition that could interfere with the study as judged by the research clinicians; planned travel out of the catchment area during the follow up; known risk of caesarean section; likely referral during labour; known multiple pregnancy; known severe congenital malformation or intrauterine death confirmed before randomization; known allergy to macrolides; consumption of antibiotics within the previous week.Pre-intervention samples (a nasopharyngeal swab (NPS) and a vaginal swab (VS)) were collected during labour.An NPS was collected from the baby within 6 h after birth.After discharge, mothers and babies were visited at home for 2 months, daily during the first week and weekly thereafter.NPS and breast milk samples were collected at days 3, 6, 14 and 28.In addition, a VS was collected between days 8 and 10 after delivery at the postnatal check visit at JFP.The primary end point was prevalence of carriage of S. aureus, GBS or S. pneumoniae in the NPS sample of the newborn at day 6.Secondary end points included: bacterial carriage in the NPS of the baby and the mother; carriage in the VS and breast milk during the first 4 weeks after delivery; and prevalence of carriage of any of the study bacteria non-susceptible to azithromycin.To evaluate the safety of the intervention on mothers and newborns, adverse events were monitored and assessed throughout the follow up.Diagnoses were based on the clinical judgement of the study clinicians.A local safety monitor and the DSMB reviewed serious adverse events during the trial, and the trial was monitored by an independent clinical trials monitor.The study was approved by the joint Gambia Government/Medical Research Council Ethics Committee.The NPS, low VS and breast milk samples were collected as part of the trial; see details elsewhere .Samples were processed following standard microbiological procedures .The sample size was chosen to provide 88% power to detect a 20% reduction in the primary end point (an illustrative calculation is sketched below).We assumed that NPS would not be available for 15% of
We conducted a phase III, double-blind, placebo-controlled randomized trial to determine the impact of giving one oral dose of azithromycin to Gambian women in labour on the nasopharyngeal carriage of S. aureus, GBS or S. pneumoniae in the newborn at day 6 postpartum.Mothers and babies were followed for 8 weeks and samples were collected during the first 4 weeks.Between April 2013 and April 2014 we recruited 829 women who delivered 843 babies, including 13 stillbirths.
newborns at day 6.Case report forms and laboratory forms were reviewed before being double entered into OpenClinica.The analysis was carried out using Stata.We compared the prevalence of bacterial carriage in mothers and newborns allocated to the azithromycin and placebo groups.Additional analyses included carriage acquisition rates for the periods 0–6 days and 7–28 days.Prevalence ratios (PRs) and 95% CIs were calculated for each comparison, and Fisher's exact test was used to compute p values (an illustrative reproduction of the day-6 comparison is sketched below).We included twins, but did not adjust the 95% CIs and significance tests for the effect of clustering because the design effect was negligible.A Poisson regression model was used to assess the effect of the intervention on prevalence of bacterial carriage across all time-points.The model included the baseline carriage status of the mother and time as covariates, and robust standard errors were used to account for the dependence between observations from the same individual as well as the model misspecification.For the primary end point, three additional analyses were performed: a sub-group analysis in women who delivered 2 hours or more after taking the drug; a sensitivity analysis using multiple imputation in which bacterial carriage was imputed from baseline demographic data and from carriage data at other time-points; and a per protocol analysis.In all, 1061 women in labour were assessed for eligibility and 829 were recruited, randomized and treated.These 829 recruited women delivered 843 babies.Seven neonatal deaths had a clinical diagnosis of sepsis, meningitis or pneumonia according to the adverse events form; four in the placebo group, and three in the azithromycin group.There were no maternal deaths reported.Of the 25 maternal hospitalizations, 18 were due to labour complications and three had a diagnosis of puerperal sepsis.No adverse events/serious adverse events related to the intervention were reported for the newborns.One mother developed a moderate urticarial rash that lasted for 3 days.At day 6, samples were collected from 93% of the newborns that were alive, and at all other time-points at least 90% of mothers and newborns provided samples.Pre-intervention samples.Prevalence of carriage in VS and NPS was similar between study arms for all bacteria.Post-intervention NPS.All study bacteria were less common in newborns in the azithromycin group than in the placebo group.At day 6, the prevalence of nasopharyngeal carriage of study bacteria in newborns was 28.3% in the azithromycin group versus 65.1% in the placebo group (PR 0.43; p <0.001).Similarly, in mothers bacterial carriage was lower in the azithromycin group, with PRs ranging from 0.08 to 0.57.Post-intervention breast milk samples.The prevalence of carriage of study bacteria in breast milk was significantly lower among mothers from the azithromycin group at day 6.Post-intervention VS. The prevalence of carriage of study bacteria in the VS collected post-intervention was lower in the azithromycin group than in the placebo group.S. pneumoniae was not isolated from VS.Antibiotic resistance.Isolates resistant to azithromycin, particularly S. 
aureus, were more common in the intervention group for all sample types.Sensitivity analyses.Differences in bacterial carriage between trial arms in the analysis stratified by time of treatment, the per protocol analysis and the analysis of carriage acquisition were similar to those observed in the primary analysis.For mothers recruited <2 h before delivery, the prevalence of nasopharyngeal carriage of study bacteria at day 6 in their newborns was 35.5% if they received azithromycin and 67.5% if they received placebo.The intervention reduced antibiotic prescription in the study women by 40%, but not in the newborns.One oral dose of azithromycin given to women in labour substantially reduced the prevalence of S. aureus, GBS and S. pneumoniae carriage both in the newborn and the mother.The difference between arms was already evident at birth, probably as a result of the clearance of bacterial pathogens from the birth canal, and was maintained during the entire neonatal period.The prolonged effect of azithromycin on neonatal bacterial carriage is probably attributable to its substantial effects on carriage in the mother's nasopharynx and breast milk and the presence of azithromycin in the breast milk .Several studies, including one in The Gambia , have shown that azithromycin can reduce nasopharyngeal carriage of S. pneumoniae when given as prophylaxis, but no studies have investigated whether azithromycin has an effect on GBS, and few have investigated whether it has an effect on S. aureus, both leading causes of neonatal sepsis in sub-Saharan Africa and the latter the bacterium most commonly isolated from the nasopharynx in newborns .A recent study comparing the impact of sulfadoxine-pyrimethamine in combination with three courses of 4 g of azithromycin versus sulfadoxine-pyrimethamine with chloroquine given monthly during pregnancy found no difference between groups in the prevalence of S. aureus in the maternal nasopharynx at delivery .Group B streptococci were almost eliminated in mothers treated with azithromycin and the prevalence was very low in their offspring.This may have implications beyond sub-Saharan Africa, as GBS is a common cause of early neonatal sepsis in Europe and the USA, where pregnant women are screened for GBS and, if positive, are treated during delivery with intravenous antibiotics .Intravenous prophylactic treatment with penicillin is only recommended if the women attend the maternity ward at an early stage of labour .Our data show that azithromycin can prevent vertical transmission of GBS and the other study bacteria even when taken <2 h before delivery.The prevalence of azithromycin resistance among bacterial isolates in the intervention arm was high, particularly for S. aureus.Although this may be worrying, such resistance is unlikely to be sustained, as resistance falls in the absence of antibiotic pressure because of the associated fitness cost .In
No maternal deaths were observed.No serious adverse events related to the intervention were reported.According to the intent-to-treat analysis, prevalence of nasopharyngeal carriage of the bacteria of interest in the newborns at day 6 was lower in the intervention arm (28.3% versus 65.1%; prevalence ratio 0.43; 95% CI 0.36–0.52, p <0.001).At the same time-point, prevalence of any bacteria in the mother was also lower in the azithromycin group (nasopharynx, 9.3% versus 40.0%, p <0.001; breast milk, 7.9% versus 21.6%, p <0.001; and the vaginal tract, 13.2% versus 24.2%, p <0.001).
The Gambia, after mass campaigns of azithromycin treatment, S. pneumoniae macrolide resistance returned to baseline levels within 6 months .Still, the selection of resistance after azithromycin treatment should be closely monitored in future studies, because if larger studies show an effect on severe clinical outcomes, the intervention proposed would lead to continuous and sustained antimicrobial pressure.The importance of such resistance in clinical care is limited, considering that azithromycin is currently not used for clinical care in The Gambia.Furthermore, as we found that the intervention reduced antibiotic use in mothers by >40% during the follow-up period, future larger studies should also monitor any effect of the intervention in reducing resistance to common antibiotics.The trial was designed as a proof-of-concept, to assess whether azithromycin can reduce bacterial carriage in the mother and in her offspring.It was not powered to assess the intervention's effect on neonatal morbidity or mortality, nor did it attempt to standardize clinical end points.The diagnosis of neonatal sepsis was made on a clinical basis and supported by chest X-rays or laboratory results at admission.Compared with the general population, study participants benefited from better care, which probably decreased morbidity and mortality in both arms.If a study nurse suspected an infection during a home visit, the participant was immediately referred to the study clinicians, who managed them accordingly.Such close follow up may have increased the probability of hospitalization in children with mild signs/symptoms of disease who, in a real-life situation, would not have been taken to a health facility, and this may have contributed to decreasing the possible difference between arms.Although both arms had a comparable number of deaths due to severe infections, deaths in the intervention arm, unlike the placebo arm, occurred in newborns with an underlying severe condition.Azithromycin treatment during labour can significantly reduce bacterial carriage among newborns and may therefore lower their risk of neonatal sepsis, pneumonia and meningitis.This simple intervention could therefore have a dramatic impact on neonatal mortality, and might also prevent puerperal sepsis and maternal deaths.A larger randomized controlled clinical trial is urgently needed to determine whether the intervention is effective against these clinical outcomes.Such a trial should also monitor the effect of the intervention on the spread of azithromycin resistance in the community.All authors declare no competing interests.AR and UDA conceived the study.AR designed the study, drafted the protocol and wrote the initial manuscript.UDA contributed significantly to the final version of the design, protocol and manuscript.CO, BC and AB developed and adapted the field and laboratory work and made contributions to the development of the manuscript.CB led the statistical analysis plan document, conducted the statistical analysis of the trial and contributed to the manuscript.AD, RB and BK contributed to the study protocol and the manuscript.All authors read and approved the final manuscript.The MRC Unit in The Gambia receives core funding from the MRC UK.This trial was jointly funded by the UK MRC and the UK Department for International Development under the MRC/DFID Concordat agreement and is also part of the EDCTP2 programme supported by the European Union.
Bacterial sepsis remains a leading cause of death among neonates, with Staphylococcus aureus, group B streptococcus (GBS) and Streptococcus pneumoniae identified as the most common causative pathogens in Africa.Asymptomatic bacterial colonization is an intermediate step towards sepsis.Study participants were recruited in a health facility in western Gambia.Sixteen babies died during the follow-up period.Differences between arms lasted for at least 4 weeks.Oral azithromycin given to women in labour decreased the carriage of the bacteria of interest in mothers and newborns and may lower the risk of neonatal sepsis.Trial registration: ClinicalTrials.gov Identifier NCT01800942.
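The two statistical steps described in the methods above, the sample-size calculation and the comparison of carriage prevalences, can both be illustrated with short, self-contained scripts. The first sketch uses statsmodels for a standard two-proportion power calculation; the assumed 65% control-arm prevalence and the reading of "20% reduction" as a relative reduction are illustrative assumptions, not the protocol's actual design parameters.

```python
# Illustrative power calculation for a two-proportion comparison,
# loosely modelled on the trial's stated design (88% power to detect
# a 20% reduction, 15% of day-6 swabs assumed missing). The baseline
# prevalence and the relative reading of "20%" are assumptions.

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

p_control = 0.65                 # assumed day-6 carriage in the placebo arm
p_treated = p_control * 0.80     # 20% relative reduction

effect = proportion_effectsize(p_control, p_treated)  # Cohen's h
n_per_arm = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.88, alternative="two-sided"
)
n_recruit = n_per_arm / (1 - 0.15)   # inflate for 15% missing swabs
print(f"~{n_per_arm:.0f} evaluable newborns per arm, ~{n_recruit:.0f} to recruit")
```

The second sketch reproduces the headline day-6 comparison. The counts are hypothetical, back-calculated from the reported prevalences by assuming roughly 370 evaluable newborns per arm (the true denominators are not given in this excerpt); under that assumption the script recovers the published prevalence ratio of 0.43 and a 95% CI close to 0.36–0.52.

```python
# Hypothetical 2x2 reconstruction of the day-6 primary end point.
import math
from scipy.stats import fisher_exact

carriers_azi, n_azi = 105, 371   # 105/371 ~ 28.3% (assumed denominator)
carriers_pla, n_pla = 241, 370   # 241/370 ~ 65.1% (assumed denominator)

pr = (carriers_azi / n_azi) / (carriers_pla / n_pla)

# 95% CI for the prevalence ratio via the log-transform (Katz) method.
se = math.sqrt(1/carriers_azi - 1/n_azi + 1/carriers_pla - 1/n_pla)
lo, hi = pr * math.exp(-1.96 * se), pr * math.exp(1.96 * se)

# Fisher's exact test on the 2x2 table, as in the trial analysis.
table = [[carriers_azi, n_azi - carriers_azi],
         [carriers_pla, n_pla - carriers_pla]]
_, p_value = fisher_exact(table)
print(f"PR = {pr:.2f}, 95% CI {lo:.2f}-{hi:.2f}, p = {p_value:.1e}")
```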
participants behaved ; our interpretation of this behaviour ; and our degree of positivity in communicating our results .We took specific steps to address these potential problems, and in doing so believe we present a balanced and detailed critique of PINGR.On reflection, author BB found his position may have afforded him insider status , gaining more honest insights from participants than a non-medically qualified researcher could elicit.The study’s small sample size may be perceived as a further weakness, though it was guided by the achievement of thematic saturation of usability issues .This implies the cost of including further participants would have been unnecessary .Interview transcripts were coded by one researcher, which may be viewed as a threat to credibility .However, we took explicit steps to mitigate this by triangulating findings from multiple data sources , in addition to holding critical analytic discussions between authors to challenge any potential biases or assumptions .Our interface design recommendations for e-A&F systems have been derived from empirical studies of one system.Although PINGR’s design has been informed by relevant existing usability and theoretical evidence, and its evaluations contextualised in the wider usability literature, effective alternative e-A&F designs may exist.Given the paucity of evidence on e-A&F usability, our recommendations are a reasonable starting point, though they should continue to be tested and refined in the future.Finally, given that this study focused on interface usability in a controlled laboratory setting, it does not address issues that may only be revealed when e-A&F systems are studied in more naturalistic settings .Examples include problems with system implementation, such as how work arising from e-A&F is distributed between health professionals , in addition to wider cultural issues, such as whether clinicians are comfortable with scrutiny of their performance by a machine .We used a combination of qualitative and quantitative methods to evaluate the usability of a novel e-A&F system with target end-users.In doing so, we gained important insights into how to design user-friendly e-A&F systems according to their four key interface components, and how they can be integrated.This enabled us to refine key design recommendations , and determine their implications for patient safety.Although our study focused on primary care and long-term conditions, our findings may generalise to other clinical areas and settings.Each of our data sources uncovered novel usability issues as well as providing further understanding of those reported by others .Methodologically, we showed how to maximise the discovery of usability issues in a complex health information system , and increase the validity of our findings .As far as we are aware, this is the first published study of an e-A&F system to use eye tracking, and it therefore presents unique insights into visual search behaviour, integrated within a wider data set.We report answers to previously identified research questions for e-A&F system design , on how to display information in patient lists and how to summarise patient-level data across multiple clinical areas, and on whether to incorporate the clinical performance of other users.This study raises further questions; notably how best to prioritise and communicate clinical performance summaries, patient lists, and suggested actions – a grand challenge of CDS – and how these findings translate to more naturalistic settings.Future work will seek to address these 
questions in further phases of our iterative development framework .Finally, our findings provide further support for, and an example of, multi-method approaches in usability studies .All authors contributed to the conception and design of the study.BB conceived and designed PINGR, which was built by both RW and BB.BB collected the data.All authors analysed the data.All authors contributed to and approved the final version of the manuscript.This work was supported by a Wellcome Trust Research Training Fellowship for BB ; the MRC Health e-Research Centre, Farr Institute of Health Informatics Research ; and the National Institute for Health Research Greater Manchester Primary Care Patient Safety Translational Research Centre.The views expressed are those of the author and not necessarily those of the NHS, the NIHR or the Department of Health.
Introduction Electronic audit and feedback (e-A&F) systems are used worldwide for care quality improvement.They measure health professionals’ performance against clinical guidelines, and some systems suggest improvement actions.However, little is known about optimal interface designs for e-A&F, in particular how to present suggested actions for improvement.We developed a novel theory-informed system for primary care (the Performance Improvement plaN GeneratoR; PINGR) that covers the four principal interface components: clinical performance summaries; patient lists; detailed patient-level information; and suggested actions.As far as we are aware, this is the first report of an e-A&F system with all four interface components.Objectives (1) Use a combination of quantitative and qualitative methods to evaluate the usability of PINGR with target end-users; (2) refine existing design recommendations for e-A&F systems; (3) determine the implications of these recommendations for patient safety.Methods We recruited seven primary care physicians to perform seven tasks with PINGR, during which we measured on-screen behaviour and eye movements.Participants subsequently completed usability questionnaires, and were interviewed in-depth.Data were integrated to: gain a more complete understanding of usability issues; enhance and explain each other's findings; and triangulate results to increase validity.Results Participants committed a median of 10 errors (range 8–21) when using PINGR's interface, and completed a median of five out of seven tasks (range 4–7).Errors violated six usability heuristics: clear response options; perceptual grouping and data relationships; representational formats; unambiguous description; visually distinct screens for confusable items; and workflow integration.Eye movement analysis revealed the integration of components largely supported effective user workflow, although the modular design of clinical performance summaries unnecessarily increased cognitive load.Interviews and questionnaires revealed PINGR is user-friendly, and that improved information prioritisation could further promote useful user action.Conclusions Comparing our results with the wider usability literature we refine a previously published set of interface design recommendations for e-A&F.The implications for patient safety are significant regarding: user engagement; actionability; and information prioritisation.Our results also support adopting multi-method approaches in usability studies to maximise issue discovery and the credibility of findings.
Insulinomas are tumors originating from pancreatic beta cells, which are located in the pancreas in 99% of cases.They are characterized by the excessive and rapid production of insulin with a long half-life that is not regulated by glycemia .This type of neuroendocrine tumor represents an unusual but usually curable cause of hypoglycemia and accounts for 2% of all gastrointestinal tract neoplasms, with an annual incidence of 1–2 per 100,000 inhabitants .It is more commonly found in females, between the fourth and sixth decades of life, with a mean age at diagnosis of 45 years .Clinical manifestations of pancreatic insulinomas were described for the first time by Whipple and Frantz, and consist of a triad composed of fasting hypoglycemic symptoms, plasma glucose <50 mg/dL, and relief of symptoms following intravenous glucose administration .During the insulin peak produced by these tumors, patients may present adrenergic symptoms, especially sweating, tremors, hyperphagia and palpitations .In addition, neuroglycopenic symptoms may also occur, including mental confusion, visual changes, convulsions, and changes in consciousness level .Insulinomas are the most frequent functioning endocrine pancreatic tumors; they may present malignant behavior in 10% of the cases and be associated with Multiple Endocrine Neoplasia type 1 (MEN1) in another 4–6% .The recurrence rate in the latter is significantly higher after treatment, reaching about 21% in 10–20 years, whereas this rate in insulinomas not associated with MEN1 varies from 5 to 7% in 20 years .Herein we present a case of pancreatic insulinoma operated on during the gestational period, together with a review of the literature on the diagnosis and treatment of these tumors.This work has been reported in line with the SCARE criteria .A 30-year-old female patient with no comorbidities and no family history of diabetes sought medical attention due to hypoglycemic symptoms associated with palpitations, perioral paresthesia, nausea, vomiting, and severe sweating during prolonged fasting periods for 3 months.She also reported an episode of syncope.A hemoglucotest was performed, showing capillary glycemia of 35 mg/dL.Faced with this hypoglycemia, the team chose to perform an intravenous glucose infusion and keep the patient under observation.Upon improvement in the general condition of the patient, the frequency of meals was increased and prednisone was prescribed.At the same time, laboratory tests were requested in order to investigate the case better, but these were not performed by the patient.There was partial relief of symptoms with prednisone, but the patient abruptly discontinued the medication due to pregnancy about 12 months after initiating treatment.The symptoms returned with great intensity two weeks after prednisone cessation.Thus, the patient chose to return to the outpatient Endocrinology clinic due to the recurrence of the symptoms.At the consultation the patient was 31 years old and in her 10th week of gestation, and performed laboratory tests which confirmed hypoglycemia, with plasma glucose of 20 mg/dL.In addition, she underwent a prolonged fasting test, interrupted after four hours, with increased C-peptide and insulin levels, confirming endogenous hyperinsulinemia and suggesting insulinoma.The team chose to hospitalize the patient due to the marked hypoglycemia and the intensity of the symptoms, in order to attempt clinical control of the condition.However, conservative measures to control hypoglycemia were not sufficient, and continuous 
glucose infusion was necessary to keep the patient euglycemic.Due to the difficult management of the patient, an opinion was requested from General Surgery.The surgical team requested a diagnostic imaging evaluation through total abdominal ultrasonography (USG) and nuclear magnetic resonance imaging, which did not show pancreatic changes.An endoscopic USG was then performed, which was able to identify a single, well-defined, superficial and hypoechoic lesion measuring 11 mm, located at the head-neck transition of the pancreas.Faced with the failure of clinical treatment and intense discomfort on the part of the patient, the team opted for the surgical approach, even during gestation.The surgery was scheduled and performed in the 18th week of gestation, in March 2018.During surgery, an intraoperative USG located a single superficial lesion of the pancreatic parenchyma measuring 0.5 cm × 0.5 cm, unrelated to vascular structures, allowing enucleation of the tumor.The patient developed hyperglycemia in the immediate postoperative period.A postoperative obstetric control USG was performed, which showed good fetal vitality.Thus, the patient was discharged on the 6th postoperative day in good clinical condition, with cavitary drainage, in the presence of a grade A pancreatic fistula, and without presenting new hypoglycemia episodes.The surgical specimen was an irregular, brownish nodular tissue measuring 2.5 × 1.2 × 1.1 cm and weighing 2.0 g.The histopathological examination confirmed it to be a pancreatic neuroendocrine tumor, presenting 3 mitoses per 10 high-power fields, and absence of angiolymphatic or perineural invasion, with free margins.The specimen’s immunohistochemistry was compatible with a histological grade 2 neuroendocrine tumor, stained with hematoxylin-eosin and showing positive results for synaptophysin and chromogranin, thus confirming insulinoma.Childbirth occurred at term at 38 weeks, and the child was born healthy.At the moment, about 12 months after surgery, the patient remains euglycemic, with no signs of relapse and no other complaints.Biochemical diagnosis is performed by demonstrating an inappropriate elevation of insulin during a spontaneous or induced hypoglycemic episode .Serum insulin levels higher than 5 IU/mL and serum glucose below 40 mg/dL with an insulin/glucose ratio equal to or greater than 0.3 reflect inappropriate insulin secretion in 98% of the cases .In addition, serum C-peptide elevation is also useful in excluding causes related to the exogenous use of insulin because it evaluates the pancreatic reserve, so when it is increased it suggests endogenous insulin hypersecretion .After the clinical diagnosis, the tumor
Introduction: Insulinomas are neuroendocrine tumors characterized by an excessive secretion of insulin, and their most common primary site is the pancreas.Presentation of the case: Female, 31 years old, who underwent surgical resection of a pancreatic insulinoma measuring 0.5 × 0.5 cm during the second trimester of pregnancy.The patient was discharged from the hospital on the 6th postoperative day.The neonate was born at term, at 38 weeks, with appropriate weight for gestational age.Today, around 12 months after surgery, the patient and the infant are in good health and have no complaints.
should be located through imaging to determine its staging and to plan the most appropriate therapeutic approach.In general, abdominal ultrasonography is performed as the initial examination.It is a simple, operator-dependent exam whose sensitivity varies between 23 and 63%, depending on the size and location of the tumor .Computed tomography (CT) and nuclear magnetic resonance imaging have higher sensitivity, since the lesions are hyperdense in the contrast-enhanced phases.This is due to increased blood supply compared to other pancreatic tumors .Endoscopic USG is an invasive and operator-dependent method.Its sensitivity for detecting pancreatic tumors which are not visualized by other methods such as abdominal USG and CT ranges from 82 to 93%.It has the advantage of not involving radiation, but is limited in penetration depth.The procedure also allows a concomitant biopsy with fine needle aspiration for histopathological study .Somatostatin receptor scintigraphy, or OctreoScan, may aid in radiologic diagnosis.However, it may present many false negatives, since 50% of the insulinomas do not sufficiently express type 2 somatostatin receptors for their detection .Positron emission computed tomography (PET-CT) shows promising results.PET-CT with gallium-radiolabeled peptides has presented better results than somatostatin receptor scintigraphy in some recent studies .Surgery is still the only potentially curative treatment for pancreatic neuroendocrine tumors.It is indicated to attenuate compressive symptoms of the tumor mass and systemic symptoms caused by the hormonal overproduction, as well as to avoid malignant spread .The surgical technique of choice is enucleation, which enables resecting the insulinoma while preserving as much pancreatic tissue as possible .Enucleation is indicated in benign, single, superficial, well-defined lesions which are smaller than 2 cm and are not related to the pancreatic duct.It is a safe technique with a low mortality rate, however with morbidity similar to other pancreatic resections .When lesions are larger than 2 cm, poorly defined, locally advanced, suspected of malignancy or involving anatomical structures, it is prudent to opt for pancreatic resection techniques.Among these techniques are distal pancreatectomy, central pancreatectomy, and cephalic and total gastroduodenopancreatectomy .Intraoperative USG can be used to increase surgical accuracy for lesions which have not been previously found or to confirm tumor location in case of doubt.In addition, it assists in choosing the best surgical technique, reducing the number of iatrogenic lesions .This type of ultrasonography study has approximately 85% sensitivity and can also be used for intraoperative staging .An alternative for patients with high surgical risk is ethanol ablation.The procedure is guided by ultrasound and presents good resolution of hypoglycemia symptoms, but for an indefinite period .In some cases, clinical treatment may be used both for preoperative symptom control and in patients with surgical contraindication or patients with unresectable metastatic disease.Diazoxide, verapamil and phenytoin are used to minimize hypoglycemic symptoms.Corticosteroids also help to stabilize glycemia at acceptable levels, since they stimulate gluconeogenesis and insulin resistance .Surgery should be avoided during pregnancy whenever possible due to the increased risk for mother and fetus.Clinical treatment including dietary intake, diazoxide, calcium channel blockers and octreotide should be initiated to control the 
hypoglycemia symptoms.The potential risks and benefits of treatment should be carefully considered due to the limited number of patients treated with octreotide during pregnancy.Laparotomy with intraoperative USG should be performed in cases of failure of clinical treatment, especially in those cases in which the preoperative localization failed .There are 8 reports in the literature of patients with pancreatic insulinoma surgically treated during the gestational period.Surgery was performed in the first trimester in three reported cases, in the second trimester in another three, and at the time of cesarean section or soon after labor induction in two patients.All women who underwent laparotomy during pregnancy, as well as their newborns, had good outcomes.All these children were healthy and had normal development.The timing of the surgical approach in the case described herein reflects the importance of this report, since it reinforces the viability of tumor excision even during the gestational period, without repercussions for the pregnancy or the fetus during the intra- and postoperative periods.The delivery in our case, and in all cases previously reported in the literature, occurred at term, with a healthy fetus and without sequelae.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.Ethics approval was not necessary for this study and manuscript due to the type of study design.All patient data and photographs are de-identified.Written informed consent was obtained from the patient for publication of this case report and accompanying images.A copy of the written consent is available for review by the Editor-in-Chief of this journal on request.Study concept and design: AABF, ACC.Data Collection: CCAN, LFMV, MARCA.Data Analysis and interpretation: CCAN, LFMV, NSL.Writing the paper: FSC, MARCA, NSL.Revision: FSC, AABF, ACC.Adriano C Costa MD is the Guarantor of the study.Not commissioned, externally peer-reviewed.
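The biochemical criteria quoted in the discussion above (serum insulin above 5 IU/mL with glucose below 40 mg/dL, an insulin/glucose ratio of at least 0.3, and elevated C-peptide to exclude exogenous insulin) translate directly into a small screening helper. The sketch below is purely illustrative and not a clinical tool; the example insulin value is hypothetical, chosen to lie in the range consistent with this patient's presentation.

```python
# Illustrative encoding of the biochemical criteria for endogenous
# hyperinsulinism discussed above: insulin > 5 IU/mL, glucose
# < 40 mg/dL, insulin/glucose ratio >= 0.3, elevated C-peptide.
# A sketch only, not a clinical tool; example values are hypothetical.

def suggests_insulinoma(insulin_iu_ml: float, glucose_mg_dl: float,
                        c_peptide_elevated: bool) -> bool:
    """Apply the fasting-test criteria from the case discussion."""
    hypoglycemia = glucose_mg_dl < 40
    inappropriate_insulin = insulin_iu_ml > 5
    ratio_positive = insulin_iu_ml / glucose_mg_dl >= 0.3
    # Elevated C-peptide excludes exogenous insulin as the cause.
    return (hypoglycemia and inappropriate_insulin
            and ratio_positive and c_peptide_elevated)

# Glucose matches the 20 mg/dL reported for this patient;
# the insulin value is assumed for illustration.
print(suggests_insulinoma(insulin_iu_ml=12.0, glucose_mg_dl=20.0,
                          c_peptide_elevated=True))  # True
```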
The patient started to present characteristic symptoms (hypoglycemia with adrenergic and neuroglycopenic symptoms) 18 months before being referred to our center.It was decided to resect the lesion due to the intensity of the symptoms and the failure of clinical treatment.The postoperative course was uneventful, and the fetus presented good vitality, as evidenced by the obstetric ultrasound.Conclusion: The timing of the surgical intervention in our case reflects the importance of this report, since it reinforces the feasibility of this procedure during pregnancy.
diagrams.The values for local diffusion distances above would suggest that while the diffusion of hydrogen controls the process of precipitation at temperatures below 0 °C, the presence of rapid diffusion at temperatures relevant to fuel assemblies would mean that diffusion is never rate limiting.This explains the extremely fast precipitation observed below 150 °C, where 98% of precipitation is complete almost immediately.As temperatures increase above 150 °C, the drop in precipitation rate observed in Fig. 4 then occurs through the activation energy becoming the rate limiting process.It should be noted, however, that while hydrogen diffusion is considered not to be rate limiting above 0 °C, the process of nucleation is still dependent on short range diffusion, where the local jumping of hydrogen across the interface regulates the speed of nucleation.Where hydrogen concentrations are significantly lower, such as in the case of CANDU pressure tubes, where the design life allowance is 100 ppmwt of hydrogen, the inter-hydride spacing would in turn be larger .Such increases in the required distance for hydrogen to travel could then lead to diffusion becoming a rate-limiting factor.From the diffusion data above, this would seemingly not be until the inter-hydride spacing was of the order of tens of microns, at least.For the purpose of this work, however, the above assessment of diffusion distance seems sufficient to explain the lack of the lower half of the C curve expected in a conventional TTT diagram.Synchrotron X-ray diffraction was able to successfully investigate the isothermal precipitation of hydrides within Zircaloy-4.From completion percentage data, a peak in precipitation rate was measured close to, or below, 100 °C.The measured concentration of hydrogen in solution while precipitating, recorded during a 1 °C s−1 cooling transient, was shown to be in poor agreement with the literature.During quench and hold cycles, the initial solubility showed a similar agreement with the continuous cooling values, but then continued to drift towards the position of the literature solubility.This was explained as being the product of significant residual supersaturation, where the process of precipitation was unable to keep pace with the rate of temperature change.This effect was primarily seen at temperatures above 200 °C, while those below this threshold showed little or no recordable residual supersaturation after the quench, within the bounds of experimental error.A simple model was used to predict the peak temperature for hydride nucleation using the Eshelby method for estimating misfit strain energy in two geometries of precipitate, as well as performing a strain energy-free calculation.These represent a lower and an upper bound to nucleation rate behaviour, respectively.The temperature dependence of the misfit was included in this model and shown to be significant in the calculation of strain energy.While only a simple approximation, this model is reasonably consistent with the experimental observations, predicting a peak in nucleation rate at between 113 °C and 149 °C.Strain energy has a modest effect, reducing the temperature of peak nucleation rate by up to 36 °C, depending on the precipitate geometry considered.The reason a large undercooling is required to reach the peak nucleation rate is mainly attributed to the relatively high interfacial energy reported in the literature for the hydrides.Diffusion distance calculations showed that hydrogen diffusion was not significantly rate limiting given 
the diffusion distances required at temperatures relevant to the life of zirconium-clad fuel assemblies.Ultimately, these simulations act to justify the experimental observations made through synchrotron X-ray diffraction.However, it should be noted that while precipitation at these higher temperatures is considered relatively slow when compared with the lower thermal region, the overall process of hydride precipitation in zirconium alloys is very rapid, owing to the high mobility of hydrogen and the large driving force that develops with increased undercooling.
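The diffusion-distance argument can be checked with an order-of-magnitude calculation, using the characteristic distance x ~ sqrt(D*t) together with an Arrhenius expression D = D0*exp(-Q/RT) for hydrogen diffusivity in alpha-zirconium. In the sketch below the pre-exponential factor and activation energy are representative literature-style values for hydrogen in alpha-Zr, not parameters fitted in this work.

```python
# Order-of-magnitude check of the diffusion-distance argument:
# how far can hydrogen diffuse in alpha-zirconium in a given time?
# D0 and Q are representative literature-style values (assumptions),
# not quantities fitted in this study.

import math

D0 = 7e-7   # pre-exponential factor, m^2/s
Q = 45e3    # activation energy, J/mol
R = 8.314   # gas constant, J/(mol K)

def diffusion_distance_um(T_celsius, t_seconds):
    """Characteristic diffusion distance x ~ sqrt(D t), in microns."""
    T = T_celsius + 273.15
    D = D0 * math.exp(-Q / (R * T))        # diffusivity, m^2/s
    return math.sqrt(D * t_seconds) * 1e6  # metres -> microns

for T_C in (100, 200, 300, 400):
    print(f"{T_C} degC: ~{diffusion_distance_um(T_C, 1.0):.1f} um in 1 s")
```

With these values hydrogen travels of the order of a micron or more per second even at 100 °C, comfortably exceeding typical inter-hydride spacings at high hydrogen contents, which is consistent with diffusion not being rate limiting under reactor-relevant conditions.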
High-energy synchrotron X-ray diffraction was used to investigate the isothermal precipitation of δ-hydride platelets in Zircaloy-4 at a range of temperatures relevant to reactor conditions, during both normal operation and thermal transients.From an examination of the rate kinetics of the precipitation process, precipitation slows with increasing temperature above 200 °C, due to a reduction in the thermodynamic driving force.A model for nucleation rate as a function of temperature was developed, to interpret the precipitation rates seen experimentally.While the strain energy associated with the misfit between hydrides and the matrix makes a significant contribution to the energy barrier for nucleation, a larger contribution arises from the interfacial energy.Diffusion distance calculations show that hydrogen is highly mobile in the considered thermal range and on the scale of inter-hydride spacing and it is not expected to be significantly rate limiting on the precipitation process that takes place under reactor operating conditions.
study .The P. aeruginosa specimens isolated from a burn unit in Kurdistan indicated that 22% of burn-wound samples were siderophore carriers .However, the rate at which our study isolated siderophore-producing P. aeruginosa from burn-wound cultures was much higher, at 56.4%.This may show that using imipenem and vancomycin for prophylaxis in burn wounds at different burn units could affect the incidence of siderophores in hospitals.Further, a study that sought to detect and classify siderophore-producing isolates among 70 P. aeruginosa samples isolated from an intensive care unit found that 78% were siderophore producers .The incidence of siderophore producers was lower in our study than in theirs.Norozi et al. reported that 8% of isolates in a burn unit were positive for siderophores; however, in our study, the rate was higher.Among siderophore-positive isolates from various clinical samples from hospital patients in our study, maximum resistance was observed for ciprofloxacin, followed by gentamicin, ceftazidime, nalidixic acid, amikacin and cefotaxime.For siderophore-negative isolates, maximum resistance was observed for cefotaxime, followed by amikacin, nalidixic acid, ceftazidime, gentamicin and ciprofloxacin.This is in contrast to the study of Kalantar et al. , in which ceftazidime, nalidixic acid and cefotaxime had the most resistance.Performing an antibiogram before prescribing antibiotics is crucial to reducing multidrug resistance.Further, multidrug-resistant isolates in burn units and intensive care units must be confronted, because they have the greatest adverse effect on health.
Siderophores secreted by nonfermentative Gram-negative bacilli such as Pseudomonas aeruginosa are capable of increasing rates of resistance to carbapenem antibiotics.Furthermore, the resistance of these isolates to antibiotics has been enhanced by producing siderophores, and their frequencies have erratic patterns.We studied the outbreak of P. aeruginosa strains and their antibiotic patterns in different clinical samples.In this descriptive cross-sectional study, 100 P. aeruginosa samples were isolated from different clinical specimens at the 5th Azar Hospital, Gorgan, Iran, in 2017.These strains were identified by biochemical tests, and their antibiotic resistance patterns were measured via the disc diffusion method.Next, imipenem and EDTA-imipenem (10–30 μg) discs were employed for the detection of siderophores.Amongst 100 P. aeruginosa samples, 31 isolates (31%) were siderophore carriers.The frequency of siderophore production among specimens was as follows: 56.2% in burn wounds, 36.4% in urine, 22.2% in respiratory secretions, 19.4% in blood and 16.7% in wounds (p > 0.05).Moreover, P. aeruginosa isolates producing siderophores had the highest range of resistance to ciprofloxacin (47.6%), gentamicin (46.7%), ceftazidime (34.9%), nalidixic acid (34.3%), amikacin (34.1%) and cefotaxime (31.6%).The prevalence of siderophore producers, and especially their antibiotic patterns, have no specific algorithms; in addition, an antibiogram is recommended to identify the most effective antibiotics against those isolates.
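The combined-disc readout used above (comparing inhibition zones around imipenem and EDTA-imipenem discs) reduces to a simple difference test. In the sketch below, the >= 7 mm zone-enlargement cut-off is an assumption borrowed from common combined-disc screening practice, since the excerpt does not state the threshold actually applied, and the zone diameters are hypothetical.

```python
# Sketch of a combined-disc screening readout: compare inhibition-zone
# diameters for imipenem alone vs. imipenem-EDTA. The >= 7 mm
# enlargement cut-off is an assumed convention; the threshold used in
# the study is not stated in this excerpt.

def combined_disc_positive(zone_imipenem_mm, zone_imipenem_edta_mm,
                           cutoff_mm=7.0):
    """Positive when EDTA enlarges the inhibition zone by >= cutoff."""
    return (zone_imipenem_edta_mm - zone_imipenem_mm) >= cutoff_mm

print(combined_disc_positive(14.0, 23.0))  # True  (hypothetical zones)
print(combined_disc_positive(18.0, 20.0))  # False (hypothetical zones)
```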
formation and maintenance mechanisms of the N-LLJ, which was found over the state of Indiana on 1st October 2014.The ensemble approach is evaluated by comparing the ensemble simulations to the single-initial-condition simulation.The results suggest that the ensemble simulations are better at reproducing the N-LLJ than a single simulation.This is probably because the uncertainties in the intensity and moving speed of the cold surge behind the quasi-stationary front, which is an important factor for the N-LLJ, are estimated more accurately by ensemble simulations than by a single model run.Based on the ensemble study, it is found that the synoptic features associated with the N-LLJ in this study are consistent with those suggested by Walter et al.Specifically, the N-LLJ was located behind the surface quasi-stationary front and to the west of an upper level low pressure system.It shows typical meso-scale structural characteristics, with a width of about 200 km and a length of 500 km, which is much smaller than the S-LLJs in previous studies.For example, the Mid-Atlantic LLJ, which is weaker and less extensive than the LLJs over the Great Plains, still has a width of 300–400 km and a length of more than 1500 km.The temporal variation of the N-LLJ presents a non-typical diurnal cycle with a single peak, rather than the bimodal pattern often observed in S-LLJs.The reason is probably that the large-scale environment in which the N-LLJ is formed is a shallow weather system, which has a short lifetime and is easily replaced or disrupted by deep systems.These characteristics imply different mechanisms for N-LLJs in comparison to S-LLJs.The analysis suggests that negative buoyancy and thermal wind imbalance induced by frontogenesis make important contributions to the formation and intensity variation of the N-LLJ.Its strength, moreover, is linked to the intensity of negative buoyancy and frontogenesis.Besides, the N-LLJ in this study is accompanied by partly cloudy or clear skies owing to its advection of dry air.This condition may have impacts on convective mixing within the boundary layer during daytime, which has important implications for the atmospheric environment.It is interesting to find that during the active period of the N-LLJ on 1st October 2014, moderate air pollution was observed in Indiana.So in the future, the linkage between the N-LLJ and air pollution over Indiana will be discussed in detail.
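The thermal wind imbalance invoked above can be illustrated with the standard thermal wind relation, du_g/dz = -(g/(f*T))*dT/dy, which links the vertical shear of the geostrophic wind to the horizontal temperature gradient across the front. The sketch below evaluates it for illustrative mid-latitude numbers; none of the values are diagnostics from the WRF-LETKF ensemble.

```python
# Illustrative thermal wind calculation:
#   du_g/dz = -(g / (f * T)) * dT/dy
# estimates the geostrophic shear sustained by a frontal-zone
# temperature gradient. All inputs are illustrative values, not
# diagnostics from the ensemble simulations.

import math

g = 9.81                                      # gravity, m/s^2
omega = 7.292e-5                              # Earth's rotation rate, rad/s
f = 2 * omega * math.sin(math.radians(40.0))  # Coriolis parameter near Indiana

T_mean = 285.0        # layer-mean temperature, K
dT_dy = -5.0 / 100e3  # 5 K per 100 km, colder toward the north, K/m

du_dz = -(g / (f * T_mean)) * dT_dy  # geostrophic shear, 1/s
print(f"Geostrophic shear: {du_dz * 1e3:.1f} m/s per km")
print(f"Wind change across a 500 m deep jet layer: {du_dz * 500:.1f} m/s")
```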
Northerly low-level jets often occur in the Great Lakes region of the United States during the cold season. Usually, these jets are closely related to winter snowstorms, wildfire spreading and air pollution dispersion. However, our knowledge of northerly low-level jets is quite limited. In this study, an ensemble simulation approach, based on the WRF-LETKF ensemble Kalman filter numerical prediction/data assimilation system, is used to investigate a northerly low-level jet (N-LLJ) that occurred on 1st October 2014 over the state of Indiana during the INFLUX experiment. The simulations are carefully verified using Halo Streamline Doppler lidar profiler data and flight observational datasets. Results show that the ensemble simulations reproduce the N-LLJ more effectively than a single simulation. Based on the ensemble simulation results, the characteristics and mechanisms of the formation and evolution of the N-LLJ are further studied. It is shown that the N-LLJ exhibits typical meso-scale characteristics. Both the increased negative buoyancy and the thermal wind imbalance caused by synoptic frontogenesis are responsible for the formation of this N-LLJ. However, controlled by a shallow large-scale system, this N-LLJ does not show a significant diurnal cycle. In addition, comparisons among the ensemble members imply that the strength of the N-LLJ is closely related to the intensity of negative buoyancy and frontogenesis.
was used at 1:10,000, diluted in 1% normal goat serum. After washing, sections were incubated with secondary antibodies, goat anti-mouse Alexa Fluor 594 or goat anti-rabbit Alexa Fluor 488, at 1:500 dilution. Sections were mounted on Superfrost Plus coated microscope slides, coverslipped using EverBrite Hardset Mounting Medium containing DAPI, and cured overnight prior to fluorescence microscopy using a Leica DM-5500 at 63× magnification for assessment of ubiquitin binding, or a Keyence BIOREVO BZ-9000 fluorescence microscope at 10× magnification for all other analyses. Fluorescent images of substantia nigra were analyzed using ImageJ. Images were converted to 8 bit, and the substantia nigra was automatically selected using a region of interest based on positive tyrosine hydroxylase green fluorescence. Within this region, ataxin-3 nuclear staining was determined using the “Analyze Particles” function based on red fluorescent staining. Background fluorescence was subtracted, and the average fluorescence intensity per cell was subsequently used to represent the intensity of nuclear ataxin-3. Identical analysis settings were used for all images. Between 400 and 1,300 individual cells were assessed for 10.4- and control AON-treated mice. To detect aggregated ataxin-3 protein, mouse brain lysates in RIPA buffer were diluted in PBS to a final concentration of 0.15 μg/μL with 0.025% SDS. A total volume of 200 μL lysate was then passed through a 0.2 μm cellulose acetate membrane mounted in a 96-well vacuum manifold. Three additional washes were performed using PBS to remove unbound protein. The membrane was blocked using 5% non-fat milk and subsequently stained for 3 hr with mouse anti-ataxin-3 1H9 (1:5,000) and 1 hr with goat anti-mouse IRDye 800CW (1:10,000). Dots were quantified with the Odyssey software version 3.0 using the integrated intensity method. Research conception and experiment design: W.M.C.v.R.-M., L.J.A.T., F.R., and H.v.A. Experiments and manuscript writing: L.J.A.T. Revision of manuscript: W.M.C.v.R.-M., H.v.A., and F.R.
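The nuclear-intensity quantification above was performed in ImageJ; as a rough Python/scikit-image sketch of the same logic (ROI from TH-positive green fluorescence, particle analysis on the red channel, background subtraction), the function below may be useful, with the caveat that the thresholding choices and function name are assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from skimage import filters, measure

def nuclear_intensity_per_cell(red, green):
    """Mean nuclear ataxin-3 (red) intensity inside the TH-positive (green)
    substantia nigra region, loosely mirroring the described ImageJ workflow."""
    # Region of interest: TH-positive green fluorescence (Otsu threshold).
    roi = green > filters.threshold_otsu(green)
    # Subtract background estimated from pixels outside the ROI.
    red_corr = np.clip(red.astype(float) - red[~roi].mean(), 0, None)
    # Segment nuclear particles within the ROI ("Analyze Particles" analogue).
    mask = (red_corr > filters.threshold_otsu(red_corr)) & roi
    props = measure.regionprops(measure.label(mask), intensity_image=red_corr)
    return float(np.mean([p.mean_intensity for p in props]))
```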
The resultant expanded polyglutamine stretch in the mutant ataxin-3 protein causes a gain of toxic function, which eventually leads to neurodegeneration. One important function of ataxin-3 is its involvement in the proteasomal protein degradation pathway, and long-term downregulation of the protein may therefore not be desirable. In the current study, we made use of antisense oligonucleotides to mask predicted exonic splicing signals, resulting in skipping of exon 10 from ATXN3 pre-mRNA. This led to the formation of a truncated ataxin-3 protein lacking the toxic polyglutamine expansion but retaining its ubiquitin binding and cleavage function. Repeated intracerebroventricular injections of the antisense oligonucleotides in a SCA3 mouse model led to exon skipping and formation of the modified ataxin-3 protein throughout the mouse brain. Exon skipping was long-lasting, with the modified protein being detectable for at least 2.5 months after antisense oligonucleotide injection. A reduction in insoluble ataxin-3 and nuclear accumulation was observed following antisense oligonucleotide treatment, indicating a beneficial effect on pathogenicity.
by SEM equipped with an EDXS and was reported in Fig. 16. As shown in Fig. 16, the surface suffers from intensive localised corrosion attack of the oxide layer grown on the eutectic region. The corrosion propagation path was revealed by applying FIB-SEM in a cross-sectional view, as shown in Fig. 17. The corrosion attack initiates at the surface of the eutectic region and propagates through it, where the Si particles may have some defects with the surrounding Al2O3 matrix. These results are in accordance with the results of the work by Zhu et al. In this paper, the anodising behaviour of Al-Si components produced by liquid casting and rheocasting was studied and compared. Second-phase particles in the eutectic region, such as the Si particles and the Fe-rich intermetallics, play different roles during anodising. Si particles remain embedded in the oxide layer, while the Fe-rich intermetallics are dissolved, which makes the oxide layer defective. The anodising parameters, such as anodising time and applied voltage, influence the hardness and corrosion resistance of the oxide layer. For Al-Si alloys, an increase of the layer thickness can result in more stress-induced defects in the oxide layer, as well as larger nanoporosity in the oxide microstructure due to a local heating effect. Thus, an increase of the oxide layer thickness by increasing anodising time or applied voltage decreases the hardness and corrosion resistance of the oxide layer. The corrosion resistance of the oxide layer on Al-Si components is strongly influenced by the microstructure of the substrate as well as by the oxide thickness, in which the eutectic region plays an important role. The eutectic region acts as a propagation path for the corrosion attack. More eutectic regions, as well as cracks and defects derived from the dissolution of Fe-rich intermetallics, facilitate the propagation of corrosion attack. In this study, the longitudinal macrosegregation due to the rheocasting process was evaluated in terms of corrosion resistance and hardness. The longitudinal macrosegregation influences the corrosion protection provided by the oxide layer, with the part near the vent showing lower corrosion protection due to a higher eutectic fraction on the surface. However, the longitudinal macrosegregation does not have a significant impact on the hardness of the rheocast Al-Si components before or after anodising. The surface liquid segregation (SLS) layer caused by transverse macrosegregation in the rheocasting process does not have a significant impact on the corrosion behaviour of the oxide layer on as-cast surfaces compared to liquid casting, as liquid casting produces a skin effect on the as-cast surface.
The anodised layer on Al-Si alloys produced by rheocasting was studied and compared to that of anodised traditional liquid castings in this paper. The anodising was performed in 1.0 M H2SO4 solution at room temperature on the as-cast substrates, and anodising voltage and time were optimised as process parameters. This study focuses on understanding the effect of the surface liquid segregation (SLS) layer produced by rheocasting on the hardness and corrosion protection of the oxide layer. The hardness depends on the anodising parameters and varies along the oxide thickness. The corrosion protection given by the oxide layer was evaluated by electrochemical impedance spectroscopy (EIS) in 3 wt-% NaCl solution, and the results revealed that the longitudinal macrosegregation influences the corrosion protection, with the near-to-vent region showing lower corrosion protection due to a higher eutectic fraction. A comparison between liquid-cast and rheocast samples indicated that the SLS layer caused by transverse macrosegregation does not have a significant impact on the corrosion resistance of the oxide layer. Moreover, it was found that an increase of the oxide layer thickness by longer anodising time or higher applied voltage decreases both the hardness and corrosion resistance of the oxide layer.
respective assignments in Al, demonstrating large-area method repeatability. In addition, clear boundaries are observed; the intergranular corrosion zone, where topography is not representative of the underlying orientation, is <2 μm wide, less than the sampling distance of this technique. For Ni, prior EBSD data was down-sampled to three colours to provide a benchmark for the novel approach. The resulting output orientation map using this method is shown in Fig. 7k. Validation was undertaken by down-sampling the EBSD data to the same pixel size as the output map. The two datasets were subsequently overlaid in Matlab, wherein they were subtracted; any nonzero value was classed as mis-assigned, whereas zero values indicate direct correlation. The map is in broad agreement with prior EBSD data on a pixel-by-pixel basis, and crystallographic contrast is observed in the output map. Correlated pixel assignment was greatest for textured grains, followed by the other principal orientations. In order to trace the origins of uncorrelated pixels in the validation study, the Ni surface was inspected with SE microscopy. Different causes of correlation discrepancy have been identified: 1) material removal, where the revealed microstructure is not representative of the near-surface microstructure measured by EBSD, leading to shifting boundaries during etching as well as the complete corrosion of some grains; 2) hybrid topographies of unpolarised grains not vicinal to any principal orientation; bisecting grains are assigned as non-measured points in the output maps, although in some instances surfaces appear representative of the more dominant removal directions, which could explain the bias of mis-assigned pixels between the faster-dissolving orientations, ahead of the slow-dissolving one; and 3) grain-boundary effects, such as where stepped edges are generated, exhibiting topographies characteristic of a different orientation. The difficulty in validating the novel method with conventional materials characterisation techniques such as EBSD is further evidenced in Fig. 8. The prior EBSD data was down-sampled to the same pixel size accordingly. Upon EJP, the shifting of the grain boundary zone and the removal of a grain are clearly shown in the subtracted overlay. This contributes to the uncertainty in correlation with prior EBSD data. In addition, it should be noted that a small proportion of the subtractive mis-assignment arises from non-measured points in the prior EBSD data. In this study, a scalable procedure has been defined allowing rapid assignment of crystalline texture under ambient conditions with minimal intervention, making the technique particularly amenable to large-area industrial application. Automation of data acquisition has been demonstrated, resulting in the ability to differentiate grains according to the three principal cubic crystallographic directions. In addition, a route through which orientation-dependent topographies can be rapidly and selectively generated using non-toxic solutions is presented, without altering the crystallography of the material. In this regard, it is suggested that the EJP surface preparation technique could be assistive to other developing optical characterisation methods such as directional reflectance microscopy. Although a large degree of uncertainty is acknowledged and remains the subject of ongoing research within the group, it is hoped that the materials community will assist with the development of this technique into a more comprehensive method. Enhancements in output data quality and correlation with EBSD can be achieved using simple filtering operations, although care was taken to present only raw datasets in this study. Further enhancement of the polar resolution and crystal class identification is possible by considering absolute gradient directions, as well as hybridisation characteristics of unpolarised grain topographies.
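A minimal numpy sketch of the validation step described above (down-sampling the EBSD map to the output-map pixel size, subtracting the two maps, and counting zero-difference pixels as correlated) is given below; the maps, block size and mis-assignment rate are synthetic stand-ins.

```python
import numpy as np

def correlated_fraction(output_map, ebsd_map, block=4):
    """Down-sample an EBSD label map (dominant label per block), subtract it
    from the output map, and return the fraction of zero-difference pixels."""
    h, w = output_map.shape
    down = np.empty((h, w), dtype=int)
    for i in range(h):
        for j in range(w):
            tile = ebsd_map[i*block:(i+1)*block, j*block:(j+1)*block]
            down[i, j] = np.bincount(tile.ravel()).argmax()
    return float(np.mean(output_map == down))

# Synthetic three-colour maps (labels 0-2 for the three principal directions).
rng = np.random.default_rng(1)
ebsd = np.kron(rng.integers(0, 3, (20, 20)), np.ones((4, 4), dtype=int))
output = ebsd[::4, ::4].copy()                  # perfectly correlated baseline
flip = rng.random(output.shape) < 0.32          # inject ~32% mis-assignments
output[flip] = (output[flip] + 1) % 3
print(f"correlated pixels: {correlated_fraction(output, ebsd):.0%}")  # ~68%
```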
Orientation affects application-defining properties of crystalline materials; information in this regard is therefore highly prized. Implementation of this technique allows localised dissolution to be anisotropic and dependent on etch-rate selectivity defined by the crystallography. EJP therefore generates complex but characteristic topographies. Through rapid surface processing and analysis, textural information can be elucidated. In this study, samples of polycrystalline Al and Ni have been subjected to EJP, and the resulting surfaces analysed to generate three-colour orientation contrast maps. Comparison of raw data acquired through our method with prior electron backscatter diffraction data shows broad correlation and assignment (68% on a pixel-by-pixel basis), showcasing rapid large-area analysis at high efficiency.
niche. In the case of composites, policies might include the continued exertion of pressure for the adoption of lighter-weight materials; investing in maintenance technologies; formulating standards for maintenance; and providing financial backing for companies that take on risky production jobs. However, such policies are likely to be controversial. One reason that Japanese companies were able to compete favorably for 787 contracts was that the Japanese government provided backing for the companies, and many observers object to such support as a violation of international competition rules. Boeing and Airbus are currently in a WTO dispute about whether their respective governments have given the companies unfair economic assistance. These recommendations stem from an approach to analyzing transitions that acknowledges the heuristic value of sociotechnical transitions theory but seeks to go further by unpicking the particularities of specific technological developments. The same factors that have made theories such as the MLP so successful (their clear conceptualization of the overarching structural interactions involved in transitions) can also be a weakness if the framework is adopted unthinkingly as a template for understanding all types of technological transitions. As the composites case shows, transitions can come about in different ways. If sociotechnical transitions theory is to be a truly useful tool for analysis and policy advice, then further investigation is needed into how transition pathways vary across different types of innovation.
Radical technological innovations are needed to achieve sustainability, but such innovations confront unusually high barriers, as they often require sociotechnical transitions. Here we use the theoretical perspectives and methods of Science and Technology Studies (STS) to demonstrate ways that existing theories of innovation and sociotechnical transitions, such as the Multi-Level Perspective (MLP), can be expanded. We test the MLP by applying STS methods and concepts to analyze the history of aircraft composites (lightweight materials that can reduce fuel consumption and greenhouse gas emissions), and use this case to develop a better understanding of barriers to radical innovation. In the MLP, "radical innovation" occurs in local niches (protected spaces for experimentation) and is then selected by a sociotechnical regime. The history of composite materials demonstrates that radical innovation could not be confined to "niches": the process of scaling up to a wholly new product itself required radical innovation in composites. These findings suggest a need to refine sociotechnical transitions theories to account for technologies that require radical innovation in the process of scaling up from the level of sociotechnical niche to regime.
of the three different rice models. The model proposed here by Sirisomboon and Posom was the most accurate, displaying the highest coefficient of determination, 0.9991–0.9992, and the lowest residual standard error, 0.00030–0.00040. The k1 parameter is a constant that influences the beginning of the creep process. The 1/k1 value describes the initial rate of strain increase when t is small. A high k1 value indicates that the initial rate of increase is low: the slope of the strain-time curve is small, and the strain slowly approaches a constant value, which indicates that the material has highly elastic behavior. The k1 values of cooked white rice, brown rice and germinated brown rice were not found to be significantly different, suggesting they display the same elastic behavior. The 1/k2 value indicates the hypothetical asymptotic strain, which corresponds to the equilibrium strain level. When the k2 value increases, the asymptotic strain decreases. When the k2 value is infinite, the material displays ideal elasticity and there is no creep; when the k2 value is zero, the material is an ideal liquid. The observation that the k1 and k2 values of all three rice samples were not significantly different indicates that they have the same elastic behavior. The n value is a constant that controls the rate of approach to the equilibrium strain, and it is always greater than zero. The higher the n value, the quicker the initial rate of strain increase is reached, and therefore the steeper the slope of the strain-time curve. The n value has no effect on the value of the equilibrium strain. From Table 1, the n values associated with the cooked germinated brown rice samples were found to be significantly higher than those of cooked white rice and brown rice, while the n values of the cooked white rice and brown rice were not significantly different. This indicates that the germinated brown rice creeps faster at the beginning of the test, after it has been subjected to a constant stress. This was supported by SEM analysis of the germinated brown rice microstructure, which showed pores distributed in layers when compared to brown rice and white rice. Generally, the creep behavior of food is described as linear using the classical models. For example, Xu et al. found that the creep process of rice gel consisted mainly of retarded elastic deformation and viscous flow deformation; the retarded elastic modulus, relaxation time and viscosity coefficient of the rice gel were estimated according to the Burgers model. Similarly, creep tests of wheat kernels were studied with the generalized Kelvin–Voigt model. The creep model proposed by Sirisomboon and Posom in this work showed good performance in predicting the non-linear viscoelastic parameters from creep tests for three different types of Thai Jasmine rice. The three rice variants displayed the same elastic behavior; however, the germinated brown rice was observed to creep faster at the beginning of the test, after it was subjected to constant stress. There are several advantages of evaluating non-linear viscoelastic properties on cooked rice of different processes: 1) cooked rice is the basic final form in which customers consume rice; 2) the viscoelastic properties obtained from modeling provide a fundamental rheological characterization that can readily be correlated with the sensory properties of the cooked rice; and 3) the non-linear viscoelastic tests on cooked rice are easy to perform with the developed protocol.
The proposed non-linear viscoelastic model can also be applied to foods other than rice. Jetsada Posom, Panmanas Sirisomboon: Conceived and designed the experiments; performed the experiments; analyzed and interpreted the data; contributed reagents, materials, analysis tools or data; wrote the paper. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors. The authors declare no conflict of interest. No additional information is available for this paper.
A new creep model with three parameters for non-linear viscoelastic behavior is proposed as ε_t = ε_0(1 + t^n/(k_1 + k_2·t^n)), where the applied stress is constant, ε_t is the strain at retardation time t, ε_0 is the initial strain, and k_1, k_2 and n are constants. The relationship has been verified using data derived from cooked Thai Jasmine rice, including white, brown and germinated brown rice samples. The creep test at high strain was conducted on scoops of cooked rice using a compression test rig. The model developed showed very accurate prediction performance, with coefficients of determination (R2) between 0.9991 and 0.9992 and residual standard errors (RSE) between 0.00030 and 0.00040.
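The bracketed term in the abstract's equation was lost in extraction; the form ε_t = ε_0(1 + t^n/(k_1 + k_2·t^n)) used here is a reconstruction consistent with the stated parameter roles (initial rate set by 1/k_1, asymptotic added strain 1/k_2, approach rate set by n). Under that assumption, a minimal scipy sketch of fitting the model to strain-time data follows; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def creep(t, e0, k1, k2, n):
    """Reconstructed three-parameter model: e(t) = e0*(1 + t**n/(k1 + k2*t**n)).
    Small t: strain rises at a rate set by 1/k1; large t: strain approaches
    e0*(1 + 1/k2); n controls how quickly the asymptote is approached."""
    return e0 * (1.0 + t**n / (k1 + k2 * t**n))

# Synthetic constant-stress creep data standing in for a cooked-rice test.
t = np.linspace(0.1, 60.0, 120)
rng = np.random.default_rng(2)
strain = creep(t, 0.30, 5.0, 8.0, 0.9) + rng.normal(0, 2e-4, t.size)

popt, _ = curve_fit(creep, t, strain, p0=[0.3, 1.0, 1.0, 1.0])
resid = strain - creep(t, *popt)
rse = np.sqrt(np.sum(resid**2) / (t.size - 4))  # residual standard error
print("e0, k1, k2, n =", np.round(popt, 3), "; RSE =", f"{rse:.2e}")
```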
Comprising >700 proteins, G-protein-coupled receptors (GPCRs) are the largest family of cell-surface receptors encoded by the human genome and are of great pharmacological importance. The essential features of GPCR signaling have been well characterized over the last >40 years; however, a number of key questions remain unanswered. The issue of GPCR stoichiometry, particularly for the largest GPCR subfamily, the rhodopsin receptors, is still a matter of some debate, with arguments both for and against the formation of receptor homo-oligomers. Of particular pharmacological interest is the extent to which GPCRs also heterodimerize, as different combinations of paired receptors could generate druggable complexes with unique signaling properties. Whether such complexes exist is also contentious, however, despite the large number of studies claiming receptor heteromerization for a variety of receptors. Leaving aside the problem that apparent cooperative behavior between GPCRs could be due to receptor cross talk rather than heteromerization, the key technical issue is that the great majority of the studies proposing heterodimerization were performed using coimmunoprecipitation (co-IP), single-point resonance energy transfer (RET), or bioluminescence RET (BRET) saturation assays, the interpretation of which can be problematic. Briefly, co-IP is inherently biased toward the detection of oligomers, because monomers cannot be detected and because the extent of receptor association that might be induced during solubilization is unknown. RET assays, which have the advantage of reporting stoichiometry in situ, exploit the nonradiative transfer of energy from a donor molecule to an acceptor fluorophore, which typically only occurs if the two molecules are within ∼10 nm of one another. However, single-point RET assays do not usually account for nonspecific RET between membrane proteins. The difficulty of discriminating between nonspecific and specific RET has been a considerable challenge, and several approaches have been proposed to overcome it. The widely used BRET saturation assay involves the measurement of BRET efficiency at a range of acceptor concentrations and constant donor density, but the interpretation of this assay is complicated by the varying levels of nonspecific RET resulting from changes in acceptor density alone. This is particularly problematic in the case of heterodimerization, because differences in expression rates or subcellular distribution of receptor subunits increase the likelihood of data deviating from pseudolinearity and therefore the risk of reporting false dimers. Arguably, the most powerful tool for examining receptor stoichiometry, that of single-molecule imaging, has yet to be used to demonstrate GPCR heterodimerization, even though it is largely responsible for the present consensus that homodimerization is generally transient and the dominant GPCR stoichiometry is that of monomers. Single-molecule spectroscopic techniques, such as fluorescence cross-correlation spectroscopy, have reported heterodimerization in a number of cases, but these approaches have yet to become routine. GPCR heteromer-identification technology (GPCR-HIT) is a BRET-based approach designed to identify GPCR heterodimers. In GPCR-HIT, receptor A is fused to the donor luciferase, whereas receptor B is untagged. Acceptor fluorophores are expressed as fusions with appropriate signaling proteins, typically arrestins. The acceptor is recruited to receptor B upon agonist binding, giving increased BRETeff if the receptors
are interacting, as reported for several GPCR pairs. No increase is expected for noninteracting proteins, but it has been argued elsewhere that the recruitment of cytosolic arrestin to the plasma membrane will increase nonspecific BRET through an increase in effective acceptor concentration. Moreover, some GPCRs and their signaling partners do not necessarily experience free diffusion or uniform membrane distribution; this further complicates interpretation, because the recruitment of acceptor to a genuine donor-containing heteromer cannot be distinguished from recruitment to a region of the cell surface simply enriched for donors. Three previously proposed BRET assays, which reliably report receptor homodimerization, are not well suited to the investigation of heterodimerization. Type-1 and type-2 BRET assays rely on equivalent rates of protein expression from acceptor- and donor-tagged constructs for a range of acceptor/donor ratios or total densities, which cannot be guaranteed when expressing heterodimers. The type-3 competition assay could be used to identify heterodimers, but only with caveats. First, this assay would only identify heterodimers formed by proteins that also homodimerize; otherwise, competitors have no effect on BRETeff. Second, heterodimers that do not disrupt homodimerization would also be undetectable, because homodimeric complexes, and hence BRETeff, would be unaffected by the presence of a competitor. Finally, changes in BRETeff could also be attributable to factors independent of heterodimerization, such as signaling cross talk or depletion of a common interaction partner. Presently, there is no BRET-based approach that unambiguously identifies GPCR heteromers. Here, we describe a BRET-based assay for membrane protein heterodimerization based on the induced multimerization of one of the interaction partners at constant expression. We benchmark the assay using the known GPCR homodimers, CXC chemokine receptor 4 (CXCR4) and the sphingosine-1-phosphate (S1P) receptors, and observe that heterodimerization occurs in a manner that disrupts homodimer formation. Details of the strategy used to generate green fluorescent protein (GFP)-tagged proteins have been published previously. The vector used for this, pGFP2, was used as the template for the generation of all expression vectors used in this study to ensure equivalent expression across constructs. A vector containing FK506-binding protein (FKBP) in place of GFP2 was generated from the pGFP2 vector by PCR amplification of FKBP from the pC4-FV1E vector using oligonucleotide primers 1 and 6. This produced an FKBP fragment with 5′ BamHI and 3′ NotI restriction sites, as well as a TAG stop codon after the final codon of FKBP and a Gly-Ser-Gly-Ser-Gly-encoding linker between the BamHI site and the start of FKBP. This fragment was used to replace GFP2 in pGFP2 by ligation after digestion with BamHI and NotI to remove GFP2. To generate constructs tagged with three FKBP proteins, FKBP1 was removed from the above vector
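To make the saturation-assay caveat discussed above concrete, the sketch below fits a BRET titration with an explicitly mixed model: a saturating hyperbola for specific dimerization plus a quasi-linear bystander term that grows with acceptor density alone. The functional form, parameter names and data are illustrative assumptions, not the assay described in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def bret_eff(ad_ratio, b_max, bret50, k_ns):
    """BRET efficiency vs acceptor/donor ratio: specific (hyperbolic)
    plus nonspecific bystander (linear in acceptor density) components."""
    return b_max * ad_ratio / (bret50 + ad_ratio) + k_ns * ad_ratio

ratios = np.linspace(0.1, 8.0, 25)
rng = np.random.default_rng(3)
measured = bret_eff(ratios, 0.25, 1.2, 0.01) + rng.normal(0, 0.005, ratios.size)

(b_max, bret50, k_ns), _ = curve_fit(bret_eff, ratios, measured, p0=[0.2, 1.0, 0.0])
print(f"Bmax={b_max:.3f}, BRET50={bret50:.2f}, nonspecific slope={k_ns:.4f}")
# When k_ns dominates, the curve looks pseudo-linear and specific saturation
# cannot be resolved -- the false-dimer risk noted for heterodimer assays.
```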
G-protein-coupled receptors (GPCRs) comprise the largest and most pharmacologically important family of cell-surface receptors encoded by the human genome. Here, we describe a targeted bioluminescence resonance energy transfer (BRET) assay, called type-4 BRET, which detects both homo- and heteromeric interactions using induced multimerization of protomers within such complexes, at constant expression.
unlikely, inducing oligomerization of the FKBP3-tagged receptors could change their capacity to heterodimerize with the GFP-/Rluc-tagged partners. The flexible linker between the C-terminus of the tagged protein and the FKBP tag should generally allow interaction interfaces to remain accessible to heteromeric partners. It is also possible that induced recruitment stabilizes homodimers because of the increased local concentration of similar receptors. In cases in which heterodimerization is competitive with homodimerization, this would reduce the potential for heterodimerization. In such cases, BRETeff would likely still increase on oligomerization of the FKBP-tagged protein, because GFP- and Rluc-tagged proteins would be released from heterodimeric complexes and freed to form additional BRET-productive homodimers. The type-4 assay cannot distinguish between this effect and increased BRETeff caused by the local concentration of GFP-/Rluc-tagged receptors upon induced oligomerization, but both would be a consequence of heterodimerization. The type-4 BRET data obtained for CCR5 and CXCR4 are consistent with other results suggesting such chemokine receptor heteromerization. T cells express CXCR4 constitutively but only express substantial amounts of CCR5 after activation by dendritic cells. The disruption of CXCR4 homodimerization by CCR5 may allow T cells to make migratory responses appropriate to their activation status. The observation that homodimerization is not required for heterodimerization in the case of CCR5 is intriguing, because it suggests that the small fraction of homodimeric rhodopsin-family GPCRs may be influenced by the more numerous monomers. However, CXCR4 did not interact with other receptors generally, indicating that heterodimers may form mostly or entirely among closely related receptors at similar shared interfaces. Whether other monomers disrupt GPCR dimers requires more investigation. The existence of S1P receptor heterodimers was suggested previously on the basis of β-galactosidase complementation and co-IP experiments, but the same investigations reached contradictory conclusions with regard to their heterodimerization with LPA receptors: S1P1-LPA1 interactions were reported using β-galactosidase complementation but not by co-IP. We have previously shown that S1P2, S1P3, and S1P5 behave as homodimers, whereas the closely related LPA1, LPA2, and LPA3 receptors do not. Heterodimerization within the S1P subfamily but not the LPA subfamily reflects a gain of dimerization after the divergence of the S1P and LPA subfamilies. This contrasts with the chemokine receptors, in which CCR5 blocked homodimerization of CXCR4. The S1P3 homodimer is in part stabilized by interactions involving transmembrane helix 4, and it seems likely that S1P heterodimers use the same interface. S1P5 shares less transmembrane helix 4 sequence identity with S1P2 and S1P3 than they do with one another, perhaps explaining why S1P5 appears to form the weakest homo- and heterodimeric interactions. But how widespread is heterodimerization among GPCRs? Our results suggest that, for the most part, heterodimerization may rely on an existing capacity for homodimerization; that is, homodimers emerged first during receptor evolution and subsequently continued to interact after gene duplication and divergence. This is supported by the finding that we only detected heterodimers between closely related receptors and not between subfamilies. This suggests that GPCR heterodimerization, when it occurs, may be restricted to close
relatives. This conflicts with the many reports of rhodopsin-family GPCR heterodimerization, often involving distantly related partners. Moreover, if homodimerization is generally a requirement for heterodimerization, then it seems likely that most rhodopsin-family GPCRs will not heterodimerize, given the dominance of monomeric receptors within the family. Nonetheless, the example of CCR5 suggests that some capacity to heterodimerize might exist independently of homodimerization ability. J.H.F. and S.J.D. conceived the problem and wrote the manuscript. J.H.F. designed the experimental strategy, performed the experiments, and analyzed the data. A.M. assisted with cloning, data collection, and optimization for the FKBP1 system.
In many instances, the distinct signaling behavior of certain GPCRs has been explained in terms of the formation of heteromers with, for example, distinct signaling properties and allosteric cross-regulation. Confirmation of this has, however, been limited by the paucity of reliable methods for probing heteromeric GPCR interactions in situ. The most widely used assays for GPCR stoichiometry, based on resonance energy transfer, are unsuited to reporting heteromeric interactions. Using type-4 BRET assays, we investigate heterodimerization among known GPCR homodimers: the CXC chemokine receptor 4 and sphingosine-1-phosphate receptors. We observe that CXC chemokine receptor 4 and sphingosine-1-phosphate receptors can form heterodimers with GPCRs from their immediate subfamilies but not with more distantly related receptors. We also show that heterodimerization appears to disrupt homodimeric interactions, suggesting the sharing of interfaces. Broadly, these observations indicate that heterodimerization results from the divergence of homodimeric receptors and will therefore likely be restricted to closely related homodimeric GPCRs.
40Cr steel is widely used in the transmission components of automobiles, such as half shafts, gears and steering knuckles, which are usually produced via the hot forging process, during which the material experiences work hardening, dynamic recovery and dynamic recrystallization (DRX). Due to the complex deformation mechanism, the material flow behavior is usually quite difficult to predict. In previous studies, many constitutive models have been proposed to describe the flow stress of metals and alloys, such as the Arrhenius model, the Johnson-Cook model, and the Zerilli-Armstrong model. Moreover, the Arrhenius model with strain compensation shows higher accuracy than the other models, since it considers the combined effects of more deformation parameters, such as deformation temperature, strain rate and strain. DRX is the dominant softening mechanism for 40Cr steel during the hot deformation process. There are two main DRX mechanisms in metallic materials. The first is discontinuous dynamic recrystallization, which is associated with the bulging mechanism in materials with low stacking fault energy. The other is continuous dynamic recrystallization, occurring in materials with high stacking fault energy, which includes the formation of low-angle boundaries (LABs) and the transformation of LABs into high-angle boundaries (HABs). The process parameters of temperature, strain and strain rate significantly affect the volume fraction and the grain size of DRX. Models of DRX kinetics and DRXed grain size have been developed to describe the DRX behavior during the hot deformation process. In the recent decade, numerical simulation has been widely used in thermo-mechanical processes, and information on material flow behavior, DRX fraction and DRXed grain size can be obtained visually. Based on simulated results, the process parameters and forming tools can be designed and optimized. In order to guarantee the reliability of simulated results, accurate equations for the constitutive relationship, DRX kinetics and DRXed grain size are required. However, most previous studies only focused on one of the above-mentioned equations, while comprehensive consideration of material flow behavior, DRX fraction and DRXed grain size is still lacking. Hence, it is difficult to provide sufficient basic data to obtain reliable macro- and micro-scale results by means of numerical simulation. In this study, isothermal hot compression tests of 40Cr steel were carried out at various temperatures and strain rates. The measured flow stress was corrected to eliminate the influence of friction. The constitutive relationship was developed based on the strain-compensated Arrhenius model. The DRX fraction was described by means of the Avrami model. Moreover, the grain structure was investigated to find the relationship between grain size, deformation temperature and strain rate. The established equations were embedded into Deform 3D, and numerical simulation of the hot compression tests was performed to verify their accuracy. Finally, the effects of deformation conditions on the martensite structure formed after hot compression and quenching were examined. The material investigated in this research was hot-rolled 40Cr steel with a chemical composition of Fe-0.42C-0.94Cr-0.60Mn-0.24Si. The hot compression tests were carried out on a Gleeble-3500 using cylindrical specimens with dimensions of ϕ8 × 12 mm. The specimens were first heated to 1373 K at a heating rate of 100 K/min and soaked for 5 min. Then, the specimens were cooled to
the experimental temperatures at a cooling rate of 100 K/min and held for 1 min. After that, the specimens were subjected to a 60% height reduction at strain rates of 0.01, 0.1, 1 and 10 s−1, respectively. Finally, the specimens were immediately quenched in cold water. The compressed specimens were sectioned parallel to the compression direction, and the microstructure was examined by optical microscopy (OM) and electron backscatter diffraction (EBSD). The OM specimens were mechanically ground and polished, and then etched in a solution of saturated picric acid at a temperature of 333 K for 1–2 min. The EBSD specimens were first mechanically polished, and then electro-polished in a solution of 940 ml CH3COOH and 60 ml HClO4 at 288–298 K using a voltage of 20–25 V for 10 s. The EBSD examination was conducted using a scanning electron microscope at a step size of 0.15 μm, and the EBSD data were analyzed with the Channel 5 software. In order to verify the established equations for the constitutive relationship, DRX kinetics and DRXed grain size, finite element simulation was performed. The equations were embedded into the material library of Deform 3D. One-eighth of the specimen was modelled to reduce the calculation time. The billet and dies were defined as a rigid-plastic body and rigid bodies, respectively. The absolute meshing method with tetrahedral elements was adopted for the billet and dies, where the minimum element size is 0.08 mm and the size ratio is 3.0. The initial grain size of the billet was set to 104 μm according to the microstructure measurement. The temperatures of the billet and dies were set according to the hot compression tests. The heat transfer coefficient between dies and billet is 11 N/s/mm/K, while that between air and billet is 0.02 N/s/mm/K. The velocity of the top die was set according to the strain rate of the hot compression tests. The friction between dies and billet was described by a shear friction model. The simulation was stopped when the specimen had been compressed to a height of 7.2 mm. As mentioned above, graphite flakes were used to minimize the friction between the specimen and the tooling head during the hot compression tests. However, slight bulging of the specimens still appeared, which could adversely affect the accuracy of the measured flow stress. Therefore, the flow stress was first corrected to eliminate the influence of friction; the correction method described in Ref. was utilized. Fig. 1 shows the flow stress of 40Cr steel before and after friction correction at different
The hot deformation and dynamic recrystallization (DRX) behaviors of 40Cr steel were studied by hot compression tests over a wide range of deformation temperatures (1023 K–1323 K) and strain rates (0.01 s−1–10 s−1). The constitutive relationship, DRX kinetics and DRXed grain size were modelled, and numerical simulation was performed to verify their prediction accuracy.
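As a rough sketch of the strain-compensated sinh-type Arrhenius approach named above, the snippet below inverts the model via the Zener-Hollomon parameter to predict flow stress; the polynomial coefficients for the material constants are placeholders, not the fitted 40Cr values.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def flow_stress(strain_rate, T, alpha, n, Q, lnA):
    """Sinh-type Arrhenius model, strain_rate = A*sinh(alpha*sigma)**n * exp(-Q/(R*T)),
    inverted through the Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T))."""
    Z = strain_rate * np.exp(Q / (R * T))
    x = (Z / np.exp(lnA)) ** (1.0 / n)
    return np.log(x + np.sqrt(x**2 + 1.0)) / alpha  # arcsinh form, sigma in MPa

def constants_at(strain):
    """Strain compensation: material constants as (placeholder) polynomials in strain."""
    alpha = 0.012 - 0.002 * strain     # 1/MPa
    n = 5.0 - 0.8 * strain
    Q = (320.0 - 30.0 * strain) * 1e3  # J/mol
    lnA = 28.0 - 2.0 * strain
    return alpha, n, Q, lnA

print(f"sigma ~ {flow_stress(1.0, 1173.0, *constants_at(0.5)):.0f} MPa")
```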
caused by DRX at low temperature, while grain coarsening caused by grain growth occurs at high temperature. The constitutive equation, DRX kinetics and DRXed grain size of 40Cr steel have been modelled based on hot compression tests. The developed equations were then embedded into Deform 3D, and simulation of the hot compression tests was carried out. Fig. 9 compares the experimental result with simulated results, including the distributions of strain, DRX fraction and DRXed grain size in the longitudinal section of the specimen. In the experimental result shown in Fig. 9, a banded structure with bright bands and dark bands can be observed. From the high-magnification image, it can be seen that fine DRXed grains are distributed inside the dark bands, while coarse deformed grains are distributed inside the bright bands. Hence, the dark bands are considered regions where DRX has occurred, and the denser the dark bands, the higher the degree of DRX. Thus, the zone with high DRX fraction is located at the center of the deformed specimen, while the zone with low DRX fraction is located at the contact surfaces between the specimen and the tooling head. Based on the distribution of strain, the specimen can be divided into three regions, viz., the large plastic deformation region I, the hard deformation region II, and the slight deformation region III, as indicated in Fig. 9. Region I has a high DRX fraction, since large plastic deformation offers sufficient energy for DRX. Owing to the shear friction and thermal exchange at the contact surface, deformation in region II is difficult, and only partial DRX occurs. According to the simulated results, the DRX fraction has a close relationship with the DRXed grain size: large grains are located in regions with low DRX fraction, while small grains are located in regions with high DRX fraction. The reason is that grain refinement depends on the degree of DRX. Based on the above analysis, the simulated results show good agreement with the experimental results. The above work mainly focuses on the analysis and modelling of flow stress and DRX behavior. Generally, quenching treatment is employed after the hot deformation process, and a complex phase transformation occurs, resulting in the formation of martensite. The martensite structure strongly depends on the deformation temperature, strain and strain rate. In order to further investigate the effects of deformation conditions on the martensite structure, EBSD analysis was carried out on the hot-compressed and quenched specimens, and the results are shown in Fig. 10. The gray lines represent LABs with misorientation angles between 2° and 15°, and the black lines represent HABs with misorientation angles larger than 15°. As can be seen, all specimens exhibit a complete martensite structure. Clear martensite packets can be observed in Fig. 10, and each packet contains several blocks with entirely different orientations. According to the block orientations, the martensite packets can be divided into two categories. Since the martensite laths within a prior austenite grain group themselves into several packets, the relative size of the prior austenite grains can be confirmed by analyzing the size of the martensite packets. The average sizes of the martensite packets in Fig.
10 are calculated as 29 μm, 20 μm and 13 μm, respectively, which demonstrates that the strain rate affects the transformation from austenite to martensite. Both the deformed and DRXed grains were retained after hot compression, and the deformed grains easily transform to martensite with a large aspect ratio. Comparing the panels of Fig. 10, the aspect ratio of the martensite laths becomes larger at higher temperature. The main reason is that DRX hardly occurs if the deformation temperature is quite low. Moreover, the black masses in Fig. 10 were identified as second phase by EBSD analysis. The percentages of second phase were 8.2%, 0.72%, 0.46% and 1.0%, respectively. Comparing the panels of Fig. 10, it was found that the strain rate has strong effects on the distribution of the second phase: the lower the strain rate, the more easily the second phase separates out. It can also be seen that the second phase separates out more easily with increasing deformation temperature.
Moreover, the effects of the deformation parameters on the martensite structure after hot compression and quenching were analyzed. The results show that the strain-compensated Arrhenius model can accurately predict the flow stress, with an average absolute relative error of 3.89% and a correlation coefficient of 0.992. The fraction and grain size of DRX are strongly related to the deformation conditions, and low strain rate and high temperature are beneficial for the occurrence of DRX. Moreover, the equation for the DRX fraction was developed based on the Avrami model, and the relationship between DRXed grain size and the Zener-Hollomon parameter was also established. The simulated results for true strain, DRX fraction and grain size agree well with the experimental findings. Finally, it was found that the size of the martensite packets became larger at high strain rate, while the aspect ratio of the martensite laths increased at higher deformation temperature.
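For the Avrami-based DRX kinetics and grain-size relations summarized above, a common parameterization is sketched below; the specific constants (and the exact functional forms) are placeholders, since the fitted 40Cr equations are not reproduced here.

```python
import numpy as np

def drx_fraction(strain, eps_c, eps_p, k=0.693, m=2.0):
    """Avrami-type DRX kinetics: X = 1 - exp(-k*((eps - eps_c)/eps_p)**m)
    for eps > eps_c, with critical strain eps_c and peak strain eps_p."""
    x = np.clip((strain - eps_c) / eps_p, 0.0, None)
    return 1.0 - np.exp(-k * x**m)

def drxed_grain_size(Z, a=5.0e3, b=0.15):
    """Power-law link between DRXed grain size and the Zener-Hollomon
    parameter, d = a * Z**(-b)."""
    return a * Z**(-b)

print(drx_fraction(np.array([0.2, 0.5, 0.9]), eps_c=0.15, eps_p=0.4))
```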
This article gives the validation files for the simulation of the CNC milling of a face gear tooth. The data include three video files and one CATIA file, as stated in Table 1. The video files are the simulation videos of the machining process with the given tool paths, cutters and workpiece in the commercial machining software VERICUT. Different views are applied in different videos. When the machining process is simulated in VERICUT, the result, a simulated machined model, can be output as an STL file, which can be input into CATIA to implement the machining error analysis. Based on this idea, the source file and analysis result are given in the provided CATIA file. Both the design model and the simulated machined model are given, and they are compared with the distance analysis function of CATIA. The data of the analysis result are subsequently obtained and shown in the given CATIA file. The generation of the data in this article is described in the following three points. (1) Generation of the CNC milling simulation. The simulation is implemented in VERICUT, which is CNC machining simulation software. The simulation can be implemented with the given tool paths, cutters and workpiece. The tool paths are usually obtained from calculations, or generated by other commercial CNC machining software. The workpiece can be either imported or defined in VERICUT. Usually, if the workpiece has a simple geometry, as in the data ‘Simulation_GeneralView.mp4’, it is directly defined in VERICUT. If the workpiece has a complex geometry, as in the data ‘Simulation_PartialView.mp4’ or ‘Simulation_EnlagredView.mp4’, it is imported from an existing model, which is modeled in commercial CAD software or taken from a previous simulation. (2) Generation of the 3D design model of face gears. With the design parameters of a face gear drive, the tooth surface points can be calculated in an even distribution on the tooth surface. Subsequently, the tooth surface points can be imported into commercial CAD software to be connected as curves and surfaces. With the tooth surface model, the 3D model can easily be obtained in CAD software. The similar process and details can be found in Refs. (3) The machining error analysis. The machining error analysis is implemented in CATIA as the comparison of the design model and the simulated machined model. The design model is built in CATIA based on the tooth surface points calculated according to the given design parameters. The simulated machined model is imported from the result of the VERICUT simulation.
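The distance analysis above is carried out with CATIA's built-in comparison function; as an alternative, scriptable sketch of the same idea (sampling the simulated machined surface and measuring closest-point distances to the design surface), the snippet below uses the trimesh library; the file names and units are assumptions.

```python
import trimesh

# Hypothetical file names: the design model exported from CATIA and the
# simulated machined model exported from VERICUT as STL.
design = trimesh.load("face_gear_design.stl")
machined = trimesh.load("face_gear_simulated.stl")

# Sample points on the machined tooth surface and find, for each, the
# closest point on the design surface: the machining error field.
points = machined.sample(5000)
_, dist, _ = trimesh.proximity.closest_point(design, points)

print(f"max error  = {dist.max():.4f} (model units)")
print(f"mean error = {dist.mean():.4f} (model units)")
```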
This data article gives the validation files for the article "CNC milling of face gears with a novel geometric analysis" [1]. The data concern the simulation and machining error analysis of the CNC milling of a face gear tooth with given tool paths. They include four files. Three of them are simulation videos of the CNC milling process in VERICUT with a general view, a partial view and an enlarged view, respectively. The fourth is the source file of the machining error analysis; it contains the design model of the face gear, the simulated machined model of the face gear, and the machining error analysis based on the comparison of the design model and the simulated machined model.
30% of the unique proteins in FT are part of the top 100 most identified proteins in EVs. On the other hand, only 6% of the unique proteins in P1 are in this list. Furthermore, 47% of the unique proteins in P1 are known to interact with the HIV-1 gag protein or have been found incorporated in HIV-1 gag viruses and virus-like particles. Considering both the particle size distribution and the proteomic analysis, we conclude that HIV-1 gag VLPs are enriched in P1, whereas FT contains a heterogeneous mixture of host cell EVs. In order to perform an adequate comparison of the impurity and p24 contents between both samples, the values obtained in the Bradford, Picogreen and p24 ELISA assays were normalized to 10⁹ particles. The dsDNA content per dose was similar in both samples and already meets the requirements of the regulatory agencies. The total protein content and the p24 content were slightly higher in P1 compared to FT. Despite recent efforts to characterize the role of EVs in retroviral infection, as well as studies regarding the similarities in EV and retrovirus biogenesis, so far no discriminative feature for these particles has been described, except for high-resolution imaging techniques such as cryo-EM. We used the combination of size distribution data and proteomic data to characterize the type of particles present in each sample. This method is simple and may be useful for rapid isolation of extracellular vesicles. This two-step chromatography method, combining a core-shell multimodal flow-through step and a heparin-binding chromatography step, is able to separate the enveloped VLPs from EVs. The first step mainly reduces small molecular impurities, while the second step separates the different particles. The method is scalable and allows fast particle separation within one day. This is in contradiction to other protocols, where the extracellular vesicles are recovered by binding to heparin affinity chromatography. The collection of the fractions must be performed with a detector such as nanoparticle tracking analysis or multi-angle light scattering to track the particles, as the UV signal is not sensitive enough for particle detection. The method solves the crucial problem of VLP and EV separation and quantification on a scalable, robust chromatography platform. In comparison to other methods such as ultracentrifugation, this opens the possibility for large-scale production of VLPs and EVs, as well as the development of a small-scale HPLC-based analytical method in the future.
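The per-dose normalization mentioned above is a simple scaling of each assay readout to a fixed particle number; a minimal sketch (with invented readouts) follows.

```python
def per_dose(measured, particle_count, dose=1e9):
    """Normalize an assay readout (e.g. Bradford total protein, Picogreen
    dsDNA or p24 ELISA) to a dose of 1e9 particles."""
    return measured * dose / particle_count

# Invented example: readouts for a fraction containing 3.2e11 particles.
for name, value in [("protein (ug)", 120.0), ("dsDNA (ug)", 0.8), ("p24 (ug)", 2.5)]:
    print(name, "per 1e9 particles:", per_dose(value, 3.2e11))
```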
Separation of enveloped virus-like particles from other extracellular vesicles is a challenging problem due to the similarity of these bionanoparticles. Without simple and scalable methods for purification and analytics, it is difficult to gain deeper insight into their biological function. A two-step chromatographic purification method was developed. In the first step, virus-like particles and extracellular vesicles were collected and separated from smaller impurities in flow-through mode. The collected flow-through was further purified using heparin affinity chromatography. In the heparin affinity chromatography, 54% of the total particle load was found in the flow-through, and 15% of the particles were eluted during the linear salt gradient. The particle characterization, especially the particle size distribution and mass spectrometry data, suggests that extracellular vesicles dominate the flow-through fraction and that HIV-1 gag VLPs are enriched in the elution peak. This is in part in contradiction to other protocols, where the extracellular vesicles are recovered by binding to heparin affinity chromatography. The developed method is easily scalable to pilot and process scale and allows this separation to be accomplished within one day.
the outcome of the self-selection process, in particular for the distribution of earnings. For individuals who choose employment, the probability of success in either self-employment occupation is below average, whereas their wage in employment is significantly above average. At the same time, for individuals in self-employed occupation 1 the probability of success in occupation 1 is above average, and the probability of success in occupation 2 is below average; the converse holds for individuals in self-employed occupation 2. The wage in employment is below the average pay-off to both self-employment occupations. Individuals who self-select into occupation 1 earn, on average, a higher payoff but with a higher variance than they would in occupation 2. At the same time, individuals who self-select into occupation 2 earn, on average, a higher payoff with a lower variance than they would in occupation 1. This is consistent with self-selection of individuals with lower risk aversion into occupation 1. An understanding of the individual tax compliance decision is important for revenue services, whose aim is to design policy instruments that reduce the tax gap. Empirical evidence demonstrates that a wide range of factors, including social groupings and network effects, may impact upon the individual compliance decision. The research we report in this paper combines ideas from behavioural economics and social networks to model occupational choice and tax compliance in an integrated framework. The analysis is based on the consequences of taxpayers possessing social connections through which information and attitudes relevant to the compliance decision are transmitted. The model accommodates differences in preferences, in productivity, and in opportunities for evasion. Occupational choice operates as a form of self-selection that places those who will evade into situations where evasion is possible. Social interaction results in the subjective probability of audit differing from the objective probability. Combined with a social custom that rewards compliance, this can generate relatively high levels of compliance. The simulations have considered two different processes for the formation of subjective beliefs, distinguished by whether an audit causes an increase or a reduction in the subjective probability. Although these processes are very different, the important qualitative properties of the simulations are the same in both cases. First, taxpayers self-select into occupations according to their degree of risk aversion. Second, the subjective probability of audit can be sustained above the objective probability. Third, the weight attached to the social custom differs across occupations, a finding relevant to the literature on the evolution of social norms. Finally, these factors combine to produce a compliance level that is lower in the riskier occupation. The model has also demonstrated how attitudes and beliefs that differ across sub-groups of the population can emerge endogenously. The population is heterogeneous in characteristics and chooses occupational groups on the basis of those characteristics. Behaviour differs across occupational groups, and this is reinforced by the development of group-specific attitudes and beliefs. A prominent avenue for future work is to relax the assumption of random auditing. In practice, only a small proportion of audits are selected in this manner, with the remainder targeted according to risk-based rules. This extension would allow for an analysis of the
effectiveness of alternative strategies for targeting audits when taxpayers form beliefs about auditing from interaction in a social network. It would also be possible, following Rablen, to incorporate the cost of audit and imperfect audit effectiveness into the model. In this paper we assumed that an audit always detects the full amount of evasion, i.e., that it is perfectly effective. Relaxing this assumption would enable an analysis of the trade-off in audit strategy between audit probability and effectiveness, in particular the effects of introducing so-called "light-touch" audits, which cost a fraction of a traditional tax audit but which may also be less effective.
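As a toy illustration of the agent-based mechanism described in the conclusions (self-selection by risk aversion, network transmission of audit beliefs, and a subjective audit probability sustained above the objective one), the sketch below is deliberately minimal: every parameter, updating rule and threshold is an invented stand-in, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(42)
N, T, P_AUDIT, FINE, GAIN = 200, 50, 0.05, 1.5, 0.5

risk_aversion = rng.uniform(0.5, 2.0, N)        # heterogeneous preferences
# Self-selection: the less risk-averse half takes occupation 0, the riskier
# occupation that also offers the opportunity to evade.
occupation = (risk_aversion > np.median(risk_aversion)).astype(int)
subj_p = np.full(N, P_AUDIT)                    # subjective audit probability
neighbours = [rng.choice(N, 4, replace=False) for _ in range(N)]  # network links

for _ in range(T):
    # Evade when the subjectively expected penalty, scaled by risk aversion,
    # falls short of the gain (social-custom weight folded into GAIN here).
    evade = (occupation == 0) & (subj_p * FINE * risk_aversion < GAIN)
    audited = rng.random(N) < P_AUDIT           # random audit selection
    # Belief updating: audited agents revise upward, others drift down,
    # then everyone averages beliefs with their network neighbours.
    subj_p = np.where(audited, 0.5 * subj_p + 0.5, 0.95 * subj_p)
    subj_p = 0.7 * subj_p + 0.3 * np.array([subj_p[nb].mean() for nb in neighbours])

print("mean subjective p:", round(subj_p.mean(), 3), "vs objective", P_AUDIT)
print("compliance in risky occupation:", round(1 - evade[occupation == 0].mean(), 2))
```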
The paper analyses the emergence of group-specific attitudes and beliefs about tax compliance when individuals interact in a social network. It develops a model in which taxpayers possess a range of individual characteristics - including attitude to risk, potential for success in self-employment, and the weight attached to the social custom for honesty - and make an occupational choice based on these characteristics. Occupations differ in the possibility for evading tax. The social network determines which taxpayers are linked, and information about auditing and compliance is transmitted at meetings between linked taxpayers. Using agent-based simulations, the analysis demonstrates how attitudes and beliefs that differ across sub-groups of the population emerge endogenously. Compliance behaviour is different across occupational groups, and this is reinforced by the development of group-specific attitudes and beliefs. Taxpayers self-select into occupations according to the degree of risk aversion, the subjective probability of audit is sustained above the objective probability, and the weight attached to the social custom differs across occupations. These factors combine to lead to compliance levels that differ across occupations.
the autonomic nervous system, in addition to somatosensation. Second, electro-dermal activity is widely used as a biological marker of autonomic function, reflecting Bayesian surprise in prediction error-based learning systems. In this study, however, autonomic responses such as skin conductance level could not be measured simultaneously with fMRI due to methodological limitations. Analyzing fMRI signals together with skin conductance level would benefit investigations of the neural basis of cognitively induced sensory experiences. Third, we placed the stimulating dot outside of the body in our control condition. One may argue that placing the stimulation dot on the human body would itself draw attention to that body part. Fourth, as we did not include any clinical outcome in this study, we cannot claim that the sensory experience of pseudo-cutaneous electrical stimulation is comparable to that of genuine electrical stimulation. Further studies are necessary to investigate the role of sensory experience in therapeutic effect. Finally, the sensory experiences from pseudo-electrical stimulation were relatively weak in the current study. The intensity and spatial patterns of de qi sensation, however, were significantly different from the control condition in an experimental setting where actual stimulation was absent and the instructions did not require the expectation of intense sensation. As the prediction for somatosensation was constantly modified by Bayesian updating, further study using computational modeling is needed. In conclusion, the cognitive component of cutaneous electrical stimulation elicited significant somatosensation in the body area around the intended acupoint in the absence of actual stimulation. Furthermore, the insula and the pre-SMA are involved in the sensory experience from pseudo-cutaneous stimulation. Given the important role of the salience network in the brain, our findings suggest that prediction error-based probabilistic learning in the salience network is involved in the sensory experience elicited by the cognitive component of cutaneous electrical stimulation.
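As a pointer to the kind of computational modeling called for above, here is a minimal prediction error-based update in the Rescorla-Wagner style, one simple stand-in for full Bayesian updating. The learning rate and intensity values are invented for illustration and are not fitted to the study's data.

```python
# Illustrative prediction error update (Rescorla-Wagner style), a simple
# stand-in for Bayesian updating of expected somatosensation; the learning
# rate and the initial expectation are invented values.

def update_expectation(expected, observed, learning_rate=0.3):
    """Shift the expected sensation by a fraction of the prediction error."""
    return expected + learning_rate * (observed - expected)

expected = 0.8              # strong cue-induced expectation of stimulation
for trial in range(1, 6):   # repeated trials with no actual stimulation
    expected = update_expectation(expected, observed=0.0)
    print(f"trial {trial}: expected intensity = {expected:.2f}")
```

The slow decay of the expectation across unstimulated trials is one way such a model could account for residual sensory experience in the absence of input.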
The brain actively interprets sensory inputs by integrating top-down and bottom-up information. Humans can make inferences about somatosensation based on prior experiences and expectations, even without actual stimulation. We used functional magnetic resonance imaging to investigate the neural substrates of expectations of the sensory experience of cutaneous electrical stimulation on acupoints without actual stimuli. This study included 22 participants who wore sticker-type electrodes attached to three different acupoints on different body regions: CV17 (chest), CV23 (chin), and left PC6 (arm). Participants evaluated de qi sensations after they expected electrical stimulation on those points, presented in random order without actual stimulation. All stimuli were presented with corresponding visual information about the stimulation sites. The control condition included the same visual information but outside the body. The expectation of cutaneous electrical stimuli without actual stimulation on the three acupoints resulted in greater de qi sensation compared to the control condition. The cognitive component of cutaneous electrical stimulation elicited greater brain activation in the anterior insula, pre-supplementary motor area, and secondary somatosensory area. The expectation of acupuncture stimulation produced a distinct experience of somatosensation as well as brain activations in the insula and pre-supplementary motor area. Our findings suggest that the sensory experience of pseudo-cutaneous stimulation may be derived from the predictive role of the salience network in monitoring internal and external body states.
Tuberculosis remains a continuing problem, especially in developing countries, despite the availability of effective chemotherapy. Around 1–2% of tuberculosis patients have involvement of the skeletal system, and 50% of these cases involve the spinal column . Most spinal involvement of tuberculosis occurs in the lumbar spine region, and some of these cases also present with psoas abscess . On the other hand, both gluteal abscess and sacral tuberculosis are extremely rare. Previous studies on sacral tuberculosis were limited to case reports and a few case series . We describe the case of a 51-year-old woman with sacral tuberculosis and bilateral massive submuscular gluteal abscesses. To our knowledge, this is the second case of sacral tuberculosis presenting with submuscular gluteal abscess, which posed a diagnostic challenge and a treatment dilemma. The present case has an interesting magnetic resonance imaging pattern that illustrates the anatomical pathophysiology of abscess dissemination in the sacral and gluteal region. Our case report has been reported in line with the SCARE criteria . A 51-year-old woman presented with a massive painless lump on both of her thighs that had been enlarging for the past 6 months. The patient denied any previous history of trauma, manipulation, or injection around the lumps. She was otherwise healthy despite the lumps. However, she had a history of lymph node tuberculosis of the neck about 25 years earlier and had undergone a tuberculosis chemotherapy regimen for about six months. On local physical examination, we found painless, non-mobile distention of the gluteal and upper femoral regions bilaterally, with some fluctuation and cystic consistency on palpation of the mass. The initial largest diameter of the thigh was 60 cm on the left and 45 cm on the right. There were no signs of inflammation, sinus or fistula around the thighs and buttocks, or any other remarkable findings on physical examination. Laboratory examinations, however, showed elevated levels of ESR and CRP. The Mantoux test was inconclusive due to the previous tuberculosis infection. Radiological examination showed no abnormality besides the expanding soft tissue shadow, especially in the left femoral region. MRI examinations were then performed over the lumbosacral and pelvic regions. Sagittal T2-weighted MR images of the sacrum showed destruction of the anterior lower sacral segments, with a hyperintense anterior lesion and presacral abscess. Axial T2-weighted images confirmed sacral body destruction and extension of the hyperintense lesion involving the insertion of the piriformis muscle. Pelvic axial fat-suppressed T2-weighted images gave another extended view of the lesion, showing lateral extension over the posterior ilium that also extended superiorly and inferiorly, filling the gluteal compartment beneath the gluteus maximus and tensor fascia lata. Involvement of the piriformis muscle and gluteus medius was confirmed on the coronal FS-T2 images of the proximal femur, in which there was a hyperintense bony lesion at the tip of the greater trochanter. The abscess also extended distally through the space around the greater trochanter between the vastus lateralis and tensor fascia lata, without any intracapsular involvement of the hip joint. Distal extension of the abscess reached the level of the midshaft femur beneath the hamstring and tensor fascia lata muscles. Due to the previous history of tuberculosis and the endemic nature of the disease, we initially started anti-tuberculosis treatment for about 2 weeks. Serial laboratory
examination of ESR and CRP showed a remarkable decrease, which substantiated our working diagnosis. Surgical debridement and biopsy were performed. A curved incision centered on the left greater trochanter was used, since this was the convergence point of the abscess on the MRI images. Superficial dissection via the posterior approach to the hip was used, and after dissecting the gluteus maximus muscle, sero-purulent liquid discharged from the plane. Further debridement was performed and about 2.7 liters of pus were evacuated. The muscular structure around the greater trochanter, as well as the short external rotators, was intact, so the debridement was not extended to the hip joint. An additional debridement through a posterior midline incision through the multifidus muscle around the sacral region was also performed but turned out negative. During wound closure a submuscular drain was placed, and minimal production was found during the first two days. The patient was discharged and started her anti-tuberculosis drug regimen due to the high suspicion of tuberculosis. At the two-week follow-up, the surgical wound had healed without any complication and there were no signs of recurrent fluid collection. Despite the negative culture result, the histopathological examination showed necrotic tissue with the appearance of epithelioid cells, lymphocytes, histiocytes, and Langhans multinucleated giant cells, in accordance with chronic inflammation due to tuberculosis. Neither recurrence nor complication was found after 6 months of follow-up. Tuberculosis of the spine is a major health problem, especially in endemic regions. The current incidence of spine tuberculosis is around 50% of all skeletal tuberculosis cases. From previous epidemiological studies, sacral spine involvement accounts for only 0–6.3% of spine tuberculosis cases . The exact explanation for the low incidence of sacral tuberculosis is not clearly described in the previous literature. However, the anatomical vascularity of the sacrum, the local oxygen level and the dissemination pattern of the mycobacteria from the primary foci hypothetically contribute to its rarity . The rarity of this subgroup of the disease creates difficulty, especially in diagnosing the exact etiology of the symptoms, as it may mimic other diseases such as pyogenic infection or malignancy. Gluteal abscess, for example, which is one of the rarest presentations of spinal tuberculosis, is more often associated with pyogenic infection in the perianal region . Cold abscess formation is common in spine tuberculosis, with a prevalence of 55–86%. In the sacral spine, presacral abscess is the most common presentation, accounting for around 40% of cases
Introduction: Both gluteal abscess and sacral tuberculosis are rare entities in spinal tuberculosis cases. Presentation of case: A 51-year-old woman was admitted with massive painless lumps on both of her thighs that had been enlarging for the past 6 months. She had a history of previous tuberculosis treatment. Oral anti-tuberculosis drugs were administered afterwards. There was no recurrence or complication at the final follow-up.
. Sacral tuberculosis presenting as gluteal abscess, however, is very rare. Kumar presented the first case of submuscular gluteal abscess, in a 3-year-old female child, as an extension of sacral tuberculosis . Puthezhath reported another tuberculosis case of the lumbar spine with a gluteal abscess as the only symptom . Based on the CT evaluation in that case, the location of the abscess was very superficial, which was consistent with neither the anatomical extension nor the proposed pathophysiology of the disease. Magnetic resonance imaging is the most sensitive diagnostic procedure; however, it has low specificity in diagnosing sacral tuberculosis, as the findings may be misinterpreted as other infections or malignancy. Despite providing an excellent evaluation of the extent of the disease, MRI alone cannot determine the exact cause of the abscess. Histopathologic examination is the current gold standard for determining the exact diagnosis, provided that an accurate tissue biopsy is performed. Evaluating the extent of the abscess and the involved structures on MRI provides information not only about the site of biopsy but also about the probable pathophysiology of the disease. Based on the anatomic hypothesis from the MRI images, the cold abscess over the anterior vertebral bodies of S2–S4 may spread along the prevertebral space through the origins of the piriformis muscles bilaterally to their insertions on the greater trochanter, which would allow the pus to spread beneath the gluteus maximus proximally and the tensor fascia lata distally, following the course of least resistance through intermuscular planes . The absence of a separating fascia around the gluteal compartment allowed the fluid to spread extensively and form a massive gluteal abscess. Another proposed theory of gluteal abscess formation is hematogenous spread along the branches of the aorta from the lumbar or sacral spine . Under this theory, the final presentation of the cold abscess may extend to the ischiorectal fossa, submuscular gluteal region, psoas sheath, lumbo-dorsal region or even more distally to the medial side of the thigh, the popliteal fossa or the medial side of the Achilles tendon . However, this scenario is unlikely because of the absence of systemic infection and the patient's immunocompetent status. The MRI images also showed that the extension of the abscess occurred mostly on the posterior and lateral sides of the thigh, which is not in accordance with the hematogenous theory. It is the current general consensus that tuberculosis is primarily a medical disease . Anti-tuberculosis drugs, especially isoniazid and ofloxacin, are more than capable of permeating into a cold abscess. Despite their maximum concentrations being significantly lower than the serum concentration, they still reach levels beyond their respective MICs and disappear more slowly than in the serum. However, in massive abscesses such as in our case, the volume of fluid may dilute the anti-tuberculosis drugs and reduce their concentration below the MIC. Thus, surgical debridement and abscess evacuation are indicated in the presence of a massive abscess, despite the lack of general agreement on an actual cut-off for abscess volume. Percutaneous drainage of the abscess is usually recommended in most spinal tubercular abscesses without any indication for spinal stabilization . Open debridement and biopsy are warranted in cases where the diagnosis is still unclear and an adequate tissue sample has to be obtained. The estimated volume of the abscess is also a consideration in
determining the method of evacuation. Although there is no general consensus about the cut-off point, the abscess in our case was massive and too extensive for percutaneous drainage. Dinc et al. performed an analysis of image-guided percutaneous drainage of iliopsoas tuberculous abscesses in 26 abscesses. In their series, the drainage volume ranged from 85 to 1450 mL, significantly lower than in our case. Other factors, such as delayed presentation of the disease, may lead to the formation of granulation tissue, which requires a more aggressive approach . Optimal debridement improves the pharmacokinetic effectiveness of anti-tuberculosis drugs . For patients presenting with submuscular gluteal abscess, consideration of spine tuberculosis as the potential source is mandatory, especially in developing or endemic countries. Early diagnosis and management may prevent further morbidity and improve the patient's outcome. Tuberculosis should be the first differential diagnosis in atypical presentations of sacral lesions and submuscular gluteal abscess. MRI examination combined with a structural anatomical analysis is very useful in evaluating the abscess dissemination process and should be performed on a regular basis. Ethical approval was exempted by our institution. We confirm that written informed consent was obtained from the patient. All authors participated in study concept, data interpretation, and writing and editing the paper. The first author was also responsible for data collection, interpretation, writing and editing the paper. The second and third authors were the main attending physicians for the patient and performed the diagnosis, surgery and follow-up. The fourth and fifth authors were responsible for study concept, data interpretation and editing the paper. All authors contributed to proposal preparation, data collection and follow-up, and final manuscript preparation. First/corresponding author: Yoshi Pratama Djaja. Not commissioned, externally peer reviewed.
Even in an endemic country, this atypical presentation may be the cause of delayed diagnosis and treatment. On MRI examination, a submuscular gluteal abscess, which was an extension of the sacral tuberculosis, was found. Open debridement and biopsy were performed, which confirmed the suspicion of tuberculosis. Discussion: Cold abscess formation is common in spine tuberculosis; however, the formation of a gluteal abscess as an extension of sacral tuberculosis is rare. Although MRI's specificity in determining the underlying cause is poor, it has a great role not only in determining the location and size of the lesion, but also in describing the anatomical pathophysiology of abscess dissemination from sacral tuberculosis. Conclusion: Despite the limitations of the study and the rarity of this case, tuberculosis should be the main differential diagnosis for atypical sacral lesions that occur with submuscular gluteal abscess.
analysis for the C27400, C51900 and C75200 alloys is summarised in Table 4. The results of the EDX chemical composition analysis for selected areas/points on all test samples were grouped in the following order: SEM images of sample cross-sections; qualitative analysis - maps showing the distribution of analysed elements; "multipoint" quantitative analysis along user-defined measurement lines, coupled with graphs showing the concentration changes of the analysed elements; EDS spectrograms and chemical composition tables with microphotographs for each of the analysed areas. After the oxidation process, the revealed structure of the oxide layers formed on the metal substrates was observed using SEM. Figs. 8a–9d show the surfaces of copper C11000 and the C27400, C51900 and C75200 alloys. The Cu2O product layer for C11000 is ≈1.0 μm thick, while for the other alloys it is noticeably thinner: ≈650 nm for C27400, ≈300 nm for C51900, and ≈100 nm for C75200. These values comply with the results of the cathodic reduction tests. Copper and copper alloys were also analysed for the distribution of specified elements at the cross-section surfaces: O and Cu, Zn, Sn, Ni. The relevant maps with detected elements are presented in Fig. 9. The previously observed samples of oxidised layers were thoroughly analysed for the presence of the chosen elements using the multipoint method along a defined measurement line. Figs. 10–13b present these cross-sections of C11000, C27400, C51900, and C75200, respectively. Materials in the form of flat sheets with oxidised surfaces were coupled together for the SEM observations, and a measurement line of 100 points was drawn. Figs. 10–13c present charts showing changes in the concentrations of the analysed elements along the line. The content of Cu, O, and other alloy additives varies across the cross-sections from the core, through the oxide layers in the middle, to the upper substrate. The copper content is highest in the material core and decreases in the direction of the oxide layers at the expense of a build-up of oxygen and other additives. For selected measurement points along the defined line, spectrograms of characteristic X-ray radiation are shown in Figs. 10–13a. After the oxidation, copper C11000 and its alloys C51900, C27400, and C75200 were tested for antimicrobial efficiency in contact with Staphylococcus aureus and Escherichia coli bacteria. Figs. 14–17 present the reduction in bacteria count on non-oxidised and oxidised samples. A summary of the results is shown in Table 5. For oxidised ETP copper, complete reduction of the SA bacteria count occurred after 180 min, while for non-oxidised ETP copper it occurred after 240 min. The reduction time for the EC suspension was the same for this material regardless of its surface condition - 240 min. For non-oxidised and oxidised CuZn37 brass, a 2 log reduction was observed for SA after 300 min, while for EC the total bacteria reduction was by ca.
1 log. No difference in antimicrobial performance between non-oxidised and oxidised brass was found. In the case of CuSn6 tin bronze, the total bacteria count reduction for the oxidised material, both for SA and EC, occurred after 60 min. For the non-oxidised CuSn6 alloy, the total reduction time was much longer for both bacteria types: 120 min for SA and 180 min for EC. For nickel-silver CuNi18Zn20, total bacteria reduction occurred after 240 min for SA on the non-oxidised surface, and for EC after the oxidation process. As shown in the literature analysis, the corrosion products on copper-based touch surfaces are mostly copper oxides, which can be explained by the fact that other chemical compounds are not resistant to repeated wear by human palms. This research is based on the assumption that, by means of atmospheric oxidation, it is possible to produce oxide layers similar to those reported after real-life oxidation by human palms. On the basis of the conducted experiments on atmospheric oxidation and detailed characterisation of the obtained surface layer parameters, it was found that, with properly selected heat treatment parameters, these layers can simulate oxide layers produced by contact with human palm sweat. On the basis of microbiological tests of chosen oxidised and non-oxidised test samples, it was found that oxidation has little to no effect on the antimicrobial efficiency of the test materials. Only in the case of tin bronze can a visible improvement after oxidation be seen in comparison to non-oxidised surfaces. Monika Walkowicz planned the experiments, performed corrosion experiments, performed annealing experiments, analysed and interpreted the data, and was a major contributor to the writing of the manuscript. Piotr Osuch planned the experiments, performed annealing experiments, performed SEM microscopic observations and EDX analysis, analysed and interpreted the data, and prepared artwork for publication. Beata Smyrak performed corrosion experiments, analysed and interpreted the data, and drafted the manuscript. Tadeusz Knych analysed and interpreted the data and drafted the manuscript. Ewa Rudnik performed chronopotentiometric measurements and analysed and interpreted the data. Łukasz Cieniek performed SEM microscopic observations and EDX analysis and analysed and interpreted the data. Anna Różańska performed laboratory microbiological tests, analysed and interpreted the data, and drafted the manuscript. Agnieszka Chmielarczyk performed laboratory microbiological tests, analysed and interpreted the data, and drafted the manuscript. Dorota Romaniszyn performed laboratory microbiological tests, analysed and interpreted the data, and drafted the manuscript. Małgorzata Bulanda analysed and interpreted the data and drafted the manuscript. The authors declare no conflict of interest.
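For readers unfamiliar with the "log reduction" figures quoted above, the arithmetic is straightforward; the colony counts in this sketch are hypothetical and chosen only to illustrate the scale.

```python
import math

# The colony counts below are hypothetical; they only illustrate how the
# "log reduction" values quoted above are computed.

def log_reduction(initial_cfu, final_cfu):
    """Log10 reduction: 1 log = 90% kill, 2 log = 99%, 3 log = 99.9%."""
    return math.log10(initial_cfu / final_cfu)

print(log_reduction(1.0e6, 1.0e4))  # 2.0 -> a "2 log" reduction, as for SA on brass
print(log_reduction(1.0e6, 1.0e5))  # 1.0 -> the ~1 log reduction seen for EC
```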
Copper and its alloys are known for their antimicrobial activity, which makes them appealing materials for various touch surfaces in public facilities.
political relevance and practical application. Often it is only the generic indicators that are useful at larger scales and transferable to some extent. To be meaningful for a particular site, indicators need to be made specific. For example, fish stocks are found across the North Sea and can be used as an indicator of the potential for food provision, but food provision from the Dogger Bank is contributed to by specific stocks such as plaice, turbot and lemon sole. The indicators selected in this work suggest that the community structure of biota is a relevant indicator for a number of regulating ecosystem services, such as waste treatment and assimilation, climate regulation and air purification. To be useful to management, it needs to be specific to the location and context of interest, the service of interest and, in the case of waste treatment and assimilation, the waste of interest. These examples suggest that the scalable and portable criteria may not be of primary concern for the identification of relevant indicators in all cases. They also highlight the need for additional contextual indicators, for example, the quantity of waste or contamination that is entering the system, the quantity that is removed by dilution alone and where it is removed to. Exploring the application of an ecosystem service classification and related ecosystem service indicators to the Dogger Bank identified a number of issues. Ecosystem service classifications can capture the ecosystem services delivered by an offshore marine site, but generic-level classifications, such as TEEB and EEA, need to be tailored to each location. Irrelevant ecosystem services need to be removed and the definitions of each service fine-tuned to better reflect the case study site. Distinctions between ecosystem services, functions and benefits must also be made clear. Using the classification developed, indicators for the full suite of ecosystem services were derived, as well as for associated functions and benefits. This provides a novel contribution, as studies typically focus on only a limited number of services and rarely assess the full complement of ecosystem services, functions and benefits. The relevance and applicability of these indicators, however, cannot always be guaranteed. Data scarcity for the marine environment results in many indicators being unquantifiable. Indicator specificity is a particular problem. Indicators of functions, services and benefits will likely respond to a number of different causes of change. Understanding how a specific location contributes to ecosystem service provision and the benefits they generate, and how they will respond to change, remains a challenge. Tailoring generic-level indicators to a specific case study can be achieved with meaningful results for ecosystem services, functions and benefits. All of these indicators should be assessed in conjunction to obtain a more complete understanding of the implications of ecosystem change. Focusing on just ecosystem service or function or benefit indicators may misrepresent a situation and lead to counterproductive management interventions. Despite these challenges, there is potential to apply ecosystem service indicators to positive effect. With increasing emphasis on marine management, the EU Marine Strategy Framework Directive,1 the EU Biodiversity Strategy,2 and the Intergovernmental Platform on Biodiversity and Ecosystem Services3 are all currently developing indicators to help monitor their implementation and progress. By identifying which indicators can best
describe ecosystem services, functions and benefits, effort can be made to ensure that they become applicable through focused monitoring and evaluation programs.
There is a multitude of ecosystem service classifications available within the literature, each with its own advantages and drawbacks. Elements of them have been used to tailor a generic ecosystem service classification for the marine environment and then for a case study site within the North Sea: the Dogger Bank. Indicators for each of the ecosystem services deemed relevant to the case study site were identified. Each indicator was then assessed against a set of agreed criteria to ensure its relevance and applicability to environmental management. This paper identifies the need to distinguish between indicators of ecosystem services that are entirely ecological in nature (and largely reveal the potential of an ecosystem to provide ecosystem services), indicators for the ecological processes contributing to the delivery of these services, and indicators of benefits that reveal the realized human use or enjoyment of an ecosystem service. It highlights some of the difficulties faced in selecting meaningful indicators, such as problems of specificity, spatial disconnect and the considerable uncertainty about marine species, habitats and the processes, functions and services they contribute to.
is exemplified by elegant studies involving the design of superphysiological TCR affinity under zero force that nonetheless paradoxically yielded reduced functional T cell activation . Mutations in complementarity-determining regions that create superphysiological αβ TCR binding at zero force could readily impair the αβ TCR conformational change under nonequilibrium binding that is important for biological recognition of antigen fostering downstream signaling. Kinetic segregation has been widely invoked in consideration of T cell activation and has also been observed in a reconstituted system . Within the IS, the large ectodomain of the CD45 phosphatase relegates it to the dSMAC, separated from the central SMAC where TCR and pMHC and kinases such as Lck come to reside. This process can be modulated by genetically modifying the molecular size of SMAC proteins . However, elongation of pMHC on the APC can disadvantage the physical force generated from T cell crawling and/or retrograde flow, hindering T cell activation. Importantly, kinetic segregation cannot explain why a transgenic TCR with a CβFG loop deletion expressed at similar copy number to a wild-type TCR fails to negatively select thymocytes in vivo or to trigger mature T cells unless, in the latter case, antigen concentration is increased by orders of magnitude. Collectively, these data suggest that kinetic segregation is a means to amplify TCR triggering rather than to initiate it. Elucidation of T cell mechanobiology principles makes it clear that the targeting of viral or tumor-specific antigens need not exclude candidates expressed at relatively low copy number per cell, assuming potent αβ TCRs are elicited via vaccination or arise naturally. Likewise, there is no requirement for TCRs with a fast off-rate to foster serial engagement. As physical force tunes αβ TCR recognition acuity, TCRs manifesting 1–10-s bond lifetimes are generally efficacious in fostering TCR activation. That said, the ligand-binding approach vector is important in the force transduction process , likely linked to requisite conformational changes in the TCR complex ectodomains and their transfer to transmembrane and cytoplasmic domains, coordinated with changes in vicinal lipids. Immunotherapeutics based upon native αβ TCR as well as chimeric antigen receptor transduction into autologous T cells can be examined by the OT methods described herein for optimization of ligand triggering. Moreover, with regard to immune monitoring, given that ELIspot and tetramer technologies bypass external force application in assessing the quality of the TCR–pMHC bond, it is not surprising that such assays may fail to identify key biomarkers of clinical outcome and/or vaccine responsiveness. When ELIspot and related methods are used to quantify cytokine production, stimulation by antigen generally uses micromolar concentrations of peptide. This concentration is well above the physiological range and fosters TCR cross-reactivity that is less likely to be observed at peptide concentrations in the nanomolar to picomolar ranges . While there are multiple outstanding questions remaining to be answered, implementation of mechanobiology principles in assessing T cell adaptive recognition should be a game changer in the field. Is the TCR–pMHC bond lifetime-force curve a key indicator of TCR quality linked to protective biology, as opposed to conventionally measured functional avidity or tetramer staining? From the molecular perspective, how does the force-dependent transition in the αβ TCR heterodimer influence the quaternary
structure of the αβ TCR complex and the conformations of the CD3 cytoplasmic domains to initiate signaling? What is the structure of the extended and force-loaded relative to the compact and unloaded αβ TCR complex? Can adhesion molecules stabilize a single TCR–pMHC bond to facilitate T cell activation in a digital manner? What is the precise molecular identification of the biochemistry, physiology, and associated protein interactions of the motor protein activated by TCR–pMHC bond formation? On the APC side, how does viscoelasticity of the pMHC embedded in the membrane affect TCR mechanosensing and force-induced T cell activation? Is this altered when dendritic cells go from an immature to a mature state? How does the mechanical force applied on the TCR–pMHC interface initiate actomyosin activity?
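The bond lifetime-force curve referred to above is often summarised with a phenomenological two-pathway catch-slip model; the sketch below uses that generic form with illustrative parameters (not values fitted to any TCR discussed here) to show how lifetime can first rise and then fall with load, peaking in the piconewton range.

```python
import math

# Generic two-pathway catch-slip bond model with illustrative parameters
# (not fitted to any TCR in the text): a catch pathway suppressed by force
# and a slip pathway accelerated by it. kBT in pN*nm, x in nm, k in 1/s.

def bond_lifetime(force_pN, k_catch=20.0, x_catch=1.0,
                  k_slip=0.05, x_slip=0.3, kBT=4.1):
    off_rate = (k_catch * math.exp(-force_pN * x_catch / kBT)
                + k_slip * math.exp(force_pN * x_slip / kBT))
    return 1.0 / off_rate  # mean bond lifetime in seconds

for f in (0, 5, 10, 15, 25, 40):
    print(f"{f:2d} pN -> {bond_lifetime(f):5.2f} s")  # rises, peaks, then falls
```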
T lymphocytes use αβ T cell receptors (TCRs) to recognize sparse antigenic peptides bound to MHC molecules (pMHCs) arrayed on antigen-presenting cells (APCs). Contrary to conventional receptor–ligand associations exemplified by antigen–antibody interactions, forces play a crucial role in nonequilibrium mechanosensor-based T cell activation. Both T cell motility and the local cytoskeleton machinery exert forces (i.e., generate loads) on TCR–pMHC bonds. We review biological features of the load-dependent activation process as revealed by optical tweezers single molecule/single cell and other biophysical measurements. The findings link pMHC-triggered TCRs to single cytoskeletal motors; define the importance of energized anisotropic (i.e., force direction dependent) activation; and characterize immunological synapse formation as digital, revealing no serial requirement. The emerging picture suggests new approaches for the monitoring and design of cytotoxic T lymphocyte (CTL)-based immunotherapy.
Scotland's National Naloxone Advisory Group and co-authored the peer-review paper on the before/after evaluation at 3 years of Scotland's National Naloxone Policy. SMB holds GlaxoSmithKline shares. JRR: JRR chaired Scotland's National Forum on Drugs-Related Deaths, which recommended that Scotland should have a National Naloxone Policy. SW: SW is co-author of the British Association for Psychopharmacology evidence-based guidelines for the management of substance abuse, harmful use, addiction and co-morbidity. The pilot N-ALIVE Trial was grant-funded by the Medical Research Council and co-ordinated by the MRC Clinical Trials Unit at University College London, which core-funds MKBP and AMM. SMB was funded by Medical Research Council programme number MC_U105260794. JS is core-funded by his university, King's College London. The MRC's funding board had no role in data analysis, data interpretation, the decision to cease randomization in the N-ALIVE pilot Trial, the decision to submit for publication this account of how the decision was reached, or the writing of this report. The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication. Elicitation concept and design: SMB. Implications of prisoners' altruism for the feasibility of an individually-randomized N-ALIVE main trial: MKBP. Liaison with the Office for National Statistics to obtain information on opioid-related deaths for England and Wales by calendar year of death, as part of applying Hill's criteria for determining causation in Scotland's NNP: SMB. Case for continued randomization in the wider public interest: JS. Deliberations by independent TS-DMC members: DA, JP, JRR, SW. Oversight of MRC-CTU's minutes on TS-DMC decisions and giving effect to them by liaison with the Research Ethics Committee and with principal investigators at N-ALIVE prisons: AMM. Initial drafting of paper: SMB. Editing, interpretation and discussion: all authors.
The prison-based N-ALIVE pilot trial had undertaken to notify the Research Ethics Committee and participants if we had reason to believe that the N-ALIVE pilot trial would not proceed to the main trial. In this paper, we describe how external data for the third year of before/after evaluation from Scotland's National Naloxone Programme, a related public health policy, were anticipated by eliciting prior opinion about the Scottish results in the month prior to their release as official statistics. We summarise how deliberations by the N-ALIVE Trial Steering-Data Monitoring Committee (TS-DMC) on N-ALIVE's own interim data, together with those on naloxone-on-release (NOR) from Scotland, led to the decision to cease randomization in the N-ALIVE pilot trial and recommend to local Principal Investigators that NOR be offered to already-randomized prisoners who had not yet been released.
liveness in finite traces. However, in the case of liveness, manual inspection is required when the tool reports a potential liveness violation. All these tools for runtime monitoring of LTL are focused on finite traces. The main difference with TJT is the support for cycle detection, due to the way in which the states are abstracted and stored, and the use of Büchi automata. Note that we have not included further experimental comparison of TJT with some of these runtime monitoring tools due to the lack of comparable public examples, or of the tools themselves. We have presented the foundations of TJT, a tool for checking temporal logic properties on Java programs. This tool is useful for testing functional properties on both sequential and concurrent programs. In particular, we explained how the use of Büchi automata, combined with storing the states from runtime monitoring, can be used to check liveness properties in non-terminating executions of reactive programs. Our tool chain includes the model checker Spin and JDI, which are integrated in the well-known development environment Eclipse. The use of JDI instead of instrumented code makes it possible to detect deadlocks and provides wider access to events in the execution of the program, while being completely transparent. Our current work follows several paths. One is to apply static influence analysis to automatically select the variables relevant to the given property, as we proposed in earlier work. The second is to implement methods to explore more schedulings in multithreaded programs for the same initial state. Finally, we plan to take advantage of multicore architectures to speed up the analysis, thanks to the already decoupled interaction between the Spin and JDI modules.
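The cycle-detection idea behind this liveness checking can be illustrated with a simplified sketch: abstracted states are stored as the trace is monitored, and a revisited state closes a lasso whose cycle can then be checked against the property. The pair-based state abstraction and the response property below are hypothetical stand-ins, not TJT's actual representation.

```python
# Simplified sketch of liveness checking with stored states: a revisited
# abstracted state closes a lasso, and the cycle is checked against a
# response property in the spirit of [](req -> <>ack). The state
# abstraction below is a hypothetical stand-in for TJT's representation.

def find_lasso(trace):
    """Return (stem_length, cycle) if the trace revisits a stored state."""
    seen = {}
    for i, state in enumerate(trace):
        if state in seen:
            return seen[state], trace[seen[state]:i]
        seen[state] = i
    return None

def violates_response(trace):
    lasso = find_lasso(trace)
    if lasso is None:
        return False            # no cycle found: the verdict is inconclusive
    _, cycle = lasso
    pending = any(req for req, _ack in cycle)
    answered = any(ack for _req, ack in cycle)
    return pending and not answered   # a request repeats forever unanswered

# Each abstracted state is (request_pending, ack_seen); this trace loops on
# a state with a pending, never-acknowledged request.
trace = [(False, False), (True, False), (True, False)]
print(violates_response(trace))       # -> True: potential liveness violation
```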
This paper presents an approach for the automated debugging of reactive and concurrent Java programs, combining model checking and runtime monitoring. Runtime monitoring is used to transform the Java execution traces into the input for the model checker, the purpose of which is twofold. First, it checks these execution traces against properties written in linear temporal logic (LTL), which represent desirable or undesirable behaviors. Second, it produces several execution traces for a single Java program by generating test inputs and exploring different schedulings in multithreaded programs. As state explosion is the main drawback to model checking, we propose two abstraction approaches to reduce the memory requirements when storing Java states. We also present the formal framework to clarify which kinds of LTL safety and liveness formulas can be correctly analysed with each abstraction for both finite and infinite program executions. A major advantage of our approach comes from the model checker, which stores the trace of each failed execution, allowing the programmer to replay these executions to locate the bugs. Our current implementation, the tool TJT, uses Spin as the model checker and the Java Debug Interface (JDI) for runtime monitoring. TJT is presented as an Eclipse plug-in and it has been successfully applied to debug complex public Java programs.
vitro nephrotoxicity. Pregnancy is a natural condition that involves oxidative and nitrative stress. However, exacerbated ROS and/or RNS generation results in placental redox state changes that have been reported to be related to some pregnancy complications. We evaluated the impact of SCBs on ROS/RNS generation. JWH-018 and JWH-122 treatment resulted in significant ROS production, while UR-144 and THC did not affect the redox state. This again suggests that the activated mechanisms of cell death may vary for the different cannabinoids. In fact, it was previously shown that XLR-11, a synthetic cannabinoid whose chemical structure is similar to that of UR-144, did not generate ROS in a human proximal tubule cell line. In addition, we described, in BeWo cells and primary cultures of human CTs, that WIN-55,212 disrupts trophoblast proliferation and causes cell death by promoting Δψm loss and activation of caspases -3/-7 as well as caspase-9, without ROS generation. On the other hand, we demonstrated that JWH-122 and UR-144 induced rapid ROS/RNS production and ER stress in a telomerase-immortalized human endometrial stromal cell line and in primary human decidual fibroblasts, though these effects were not accompanied by a reduction in cell viability, which emphasizes the influence of the cell type on cannabinoid actions. It should be pointed out that although the concentrations at which the effects were observed are widely used for in vitro assays, these levels are not typically achieved in plasma upon recreational cannabis use. However, the pharmacokinetics of THC are complex and dependent on individual metabolism and adiposity. THC is able to cross the placenta, due to its lipophilicity, and may directly affect the foetus. Importantly, while the pharmacokinetics of THC have been extensively studied in nonpregnant adult subjects, little is known about the maternal-foetal transfer of THC or of SCBs. Since SCBs, like THC, are generally lipophilic compounds, and many have much higher affinity for cannabinoid receptors, their impact on pregnancy outcome should not be neglected. In addition, SCBs may present a prolonged duration of action, as several of their metabolites still exhibit a high affinity for cannabinoid receptors, whereas 11-nor-9-carboxy-Δ9-tetrahydrocannabinol is the most abundant active THC metabolite. Whether or not these metabolites affect placental and foetal development needs further investigation. In conclusion, as the balance of trophoblast turnover plays an important role in placental development, and since THC and these SCBs may lead to cell cycle disruption and apoptosis, their use during pregnancy may impair fundamental gestational cellular processes. Considering the impact that SCB use may have on pregnancy outcome, more studies are needed to better understand how these substances affect reproductive health. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The increasing use of synthetic cannabinoids (SCBs) in recreational settings is becoming a new paradigm of drug abuse. Although the effects of SCBs mimic those of the Cannabis sativa plant, these drugs are frequently more potent and hazardous. It is known that endocannabinoid signalling plays a crucial role in diverse reproductive events such as placental development. Moreover, the negative impact of the phytocannabinoid Δ9-tetrahydrocannabinol (THC) on pregnancy outcome, leading to prematurity, intrauterine growth restriction and low birth weight, is well recognized, which makes women of childbearing age a group sensitive to the developmental adverse effects of cannabinoids. Placental trophoblast turnover relies on regulated processes of proliferation and apoptosis for normal placental development. Here, we explored the impact of the SCBs JWH-018, JWH-122 and UR-144 and of the phytocannabinoid THC on the BeWo cell line, a human placental cytotrophoblast cell model. All the cannabinoids caused a significant decrease in cell viability without LDH release, though this effect was only detected for the highest concentrations of THC. JWH-018 and JWH-122 increased reactive oxygen species (ROS) production, and THC, UR-144 and JWH-122 caused loss of mitochondrial membrane potential. All the compounds were able to induce caspase-9 activation. The involvement of apoptotic pathways was further confirmed through the significant increase in caspase -3/-7 activities. For UR-144, this effect was reversed by the CB1 antagonist AM281; for JWH-018 and THC, the effect was mediated by both cannabinoid receptors CB1 and CB2, while for JWH-122 it was cannabinoid receptor-independent. This work demonstrates that THC and SCBs are able to induce apoptotic cell death. Although they may act through different mechanisms and with different potencies, the studied cannabinoids have the potential to disrupt fundamental gestational events.
IN may have a stronger binding affinity for ALLINIs and EVG, but not for RAL. However, the changes in binding affinities were only minimal, although the binding affinity of ALLINI and EVG for the variant IN appeared to be stronger compared to wild-type IN. The IN gene of the HIV isolate from Papua contains genetic polymorphisms that encode variant amino acids in the IN protein, based on the Stanford HIVdb mutation interpretation database. These variations slightly changed the three-dimensional structure of IN. Wild-type IN contains a longer helix domain than variant IN. However, this structural change had minimal to no effect on IN stability. This result may be due to the fact that the amino acid substitutions are not located in the conserved central core domain of IN . Molecular docking analysis indicated that the ALLINIs, EVG, and RAL did not greatly change their binding affinity for the wild-type or variant IN. These three INIs can bind both the wild-type and variant IN. IN residues 125, 128, 170, 171, and 173 are positioned to make contact with INIs; substitutions at these amino acid positions likely disrupt the inhibitor-mediated interface directly . IN resistance to ALLINI is dependent on the A128T polymorphism , which was not observed in this study. ALLINI bound to residues Glu170 and His171 of both the wild-type and variant IN. ALLINI also interacts with Ala98 and Ala125 of variant IN via a halogen bond and a pi-alkyl interaction, respectively. A study conducted by Lu et al. demonstrated that halogen bonding plays an important role in inhibitor recognition and binding to the targeted protein . The unique chemical characteristics of halogens are beneficial to the design of protein inhibitors and drugs . The ALLINIs showed a relatively higher binding affinity for the variant IN than for the wild-type. The IN polymorphisms T66I/A, V72I, F121Y, T125K, G140C, S147G, Q148H, V151I, S153Y, M154I, and S230R mediate resistance to EVG . The IN variant in this study contained M154I; however, according to the docking analysis, EVG bound more strongly to the IN variant than to the wild-type. The binding affinity between EVG and variant IN was −8.1 kcal/mol, versus −7.2 kcal/mol with the wild-type IN. Residues Glu170 and Thr174 of both wild-type and variant IN form hydrogen bonds with EVG. However, residues Ala124 and Ala125 of the IN variant, but not of the wild-type, formed pi-alkyl interactions with EVG. The polymorphisms Q148H/K/R, N155H, and Y143H of IN have been associated with resistance of HIV-1 to RAL . These polymorphisms were not found in this study; however, the binding affinity between RAL and the variant IN was decreased compared to the wild-type IN, due to the loss of hydrogen bonding between RAL and Thr125 in the variant IN and the loss of several other interactions between RAL and Leu102, Ala128, Ala129, and Trp132. In summary, this study showed that ALLINI and EVG have increased binding affinity for the variant IN compared to the wild-type integrase. The variant IN had a lower affinity for RAL than wild-type IN. This suggests that the variant IN may be less susceptible to the development of drug resistance mutations. However, additional research should be done to further evaluate these results. Hotma Martogi Lorensi Hutapea: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Yustinus Maladan, Widodo: Analyzed and interpreted the data; Contributed materials, analysis tools
or data; Wrote the paper. This research was funded by the Ministry of Health, Republic of Indonesia. The authors declare no conflict of interest. No additional information is available for this paper.
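To put the docking scores discussed above on a more intuitive scale, binding free energies can be converted to approximate dissociation constants via ΔG = RT ln Kd. The sketch below applies this to the two EVG scores quoted in the text; treating docking scores as true binding free energies is a rough approximation.

```python
import math

# Rough conversion of the EVG docking scores quoted above into approximate
# dissociation constants via dG = RT * ln(Kd); treating docking scores as
# true binding free energies is an approximation.
RT = 0.001987 * 298.0  # kcal/(mol*K) * K, i.e. ~0.59 kcal/mol at 25 C

def kd_from_dg(dg_kcal_per_mol):
    return math.exp(dg_kcal_per_mol / RT)   # Kd in mol/L

for label, dg in (("EVG / variant IN", -8.1), ("EVG / wild-type IN", -7.2)):
    print(f"{label}: Kd ~ {kd_from_dg(dg):.1e} M")
# The ~0.9 kcal/mol gap corresponds to a roughly 4-5x tighter variant complex.
```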
IN is a potential target of antiretroviral (ARV) therapeutic drugs such as ALLINI, Raltegravir (RAL), and Elvitegravir (EVG). The effect of IN polymorphisms on its structure and binding affinity to the integrase inhibitors (INIs) is not well understood. The goal of this study was to examine the effect of IN polymorphisms on its tertiary structure and binding affinities for INIs using computational approaches. HIV genomes were isolated from patient blood and the IN gene was sequenced to identify polymorphisms. Protein structures were derived using FoldX, and the binding affinity of IN for ALLINI, RAL, and EVG was evaluated using a molecular docking method. The binding affinities of ALLINI and EVG for wild-type IN were lower compared to an IN variant; in contrast, the binding affinity of RAL for the IN variant was lower compared to wild-type IN. These results suggest that the IN variant interacts with ALLINI and EVG more efficiently than the wild-type, which may mean these polymorphisms do not confer resistance to these drugs. In vitro and in vivo studies should be done to validate the findings of this study.
is considered in the Eg estimation; however, the variations of the individual conduction and valence bands deduced from PBE are still accurate. When the PBE conduction bands are shifted, therefore, all the PBE bands show excellent agreement with the bands approximated by HSE06 . A similarly good agreement has also been observed for GaAs. Since the optical transitions are derived essentially from the band structures, the result of Fig. 7 verifies that the underestimated Eg contribution observed in the PBE spectra can be corrected by simply shifting the PBE spectra toward higher energies using ΔEg. In other words, the validity of our PHS method can be confirmed by simply comparing the conduction-band-shifted PBE band structure with the HSE06 band structure. Since the band-edge light absorption of tetragonal-based semiconductors is determined primarily by the optical transition from the first valence band to the first conduction band , the agreement near the valence band maximum and conduction band minimum is particularly important. We have developed a new DFT approach that can accurately predict material α spectra on a logarithmic scale. In this method, the ε2 spectrum calculated from the PBE functional using a very high k-mesh density is blue-shifted, the underestimated Eg contribution in PBE is corrected using the energy scale determined by an HSE06 calculation, and the ε2 amplitude is corrected by applying the sum rule. We have applied the developed method to the α calculations of five solar cell materials, and the α spectra calculated by our method show remarkable agreement with those observed experimentally. Our scheme satisfies the requirements of high accuracy and low calculation cost simultaneously and allows the direct application of calculated DFT optical spectra to various optical device simulations. Mitsutoshi Nishiwaki: Conceptualization, Formal analysis. Hiroyuki Fujiwara: Supervision, Writing - review & editing.
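The essential steps of the correction just summarised can be sketched as follows. The array names, the toy spectrum, the constant ε1 and the use of a rigid shift plus a single sum-rule rescaling are schematic assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Schematic of the correction: rigid blue shift of the GGA eps2 spectrum by
# dEg, amplitude rescaling so the energy-weighted integral (sum rule) is
# preserved, then alpha from the complex refractive index. The toy spectrum
# and the constant eps1 are illustrative assumptions.

def phs_correct(energy, eps2_pbe, d_eg):
    eps2 = np.interp(energy, energy + d_eg, eps2_pbe, left=0.0)  # blue shift
    target = np.trapz(energy * eps2_pbe, energy)                 # sum rule
    return eps2 * target / np.trapz(energy * eps2, energy)       # rescale

def absorption(energy_eV, eps1, eps2):
    """alpha = 4*pi*k/lambda, with n + ik = sqrt(eps1 + i*eps2)."""
    k = np.sqrt((np.sqrt(eps1**2 + eps2**2) - eps1) / 2.0)       # extinction
    wavelength_cm = 1.2398e-4 / energy_eV                        # hc/E
    return 4.0 * np.pi * k / wavelength_cm                       # in 1/cm

energy = np.linspace(0.5, 6.0, 1200)                 # photon energies (eV)
eps2_pbe = np.clip(energy - 0.8, 0.0, None) ** 0.5   # toy onset at Eg(PBE)
eps2_cor = phs_correct(energy, eps2_pbe, d_eg=0.6)   # onset moves to ~1.4 eV
alpha = absorption(energy, np.full_like(energy, 6.0), eps2_cor)
```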
Theoretical material investigation based on density functional theory (DFT) has been a breakthrough in the last century. Nevertheless, the optical properties calculated by DFT generally show poor agreement with experimental results, particularly when the absorption-coefficient (α) spectra are compared on a logarithmic scale. In this study, we have established an alternative DFT approach (the PHS method) that calculates highly accurate α spectra, which show remarkable agreement with experimental spectra even on a logarithmic scale. In the developed method, the optical function estimated from the generalized gradient approximation (GGA) using a very high-density k mesh is blue-shifted by incorporating the energy-scale correction from a hybrid functional and the amplitude correction from the sum rule. Our simple approach enables high-precision prediction of the experimental α spectra of all the solar-cell materials (GaAs, InP, CdTe, CuInSe2 and Cu2ZnGeSe4) investigated here. The developed method satisfies the requirements of high accuracy and low computational cost simultaneously and is superior to conventional GGA, hybrid functional and GW methods.
is encompassed in the pay more factor: perhaps an explicit item referring solely to choosing on the basis of CSR reasons - as well as boycotting on the basis of environmental reasons - would strengthen the scale. If the scale had been developed using solely UK respondents, then it might have incorporated items pertaining to food and an animal welfare dimension. Reference to animal welfare, dropped because Japanese respondents did not understand the need for such a dimension, should perhaps have remained, because interesting cultural differences would have emerged - although of course this would have compromised measurement invariance. Certainly, there is a need for further research into the societal concern for animal welfare. Finally, the scale does not incorporate any economic dimensions in terms of buying locally produced goods, which is of great importance from an ethical perspective. Perhaps if the scale had utilized qualitative research as its foundation and been developed from scratch rather than building on an existing scale, these issues would have emerged. Conversely, however, it should be noted that studying organic food purchases purely from an ethical perspective is problematic, because many people purchase organic food because they perceive it to be healthier, safer, or better tasting. Clearly, researchers in different nations will still need additional measures in order to encompass a wider range of issues specific to different cultures and countries. Hopefully, other important facets of ethical consumer behavior, such as ethical logos and food miles, are incorporated implicitly in the scale, although explicit and direct items pertaining to such dimensions might further strengthen it. Nevertheless, previous studies have already acknowledged that measures of ethical consumption need continual refinement due to its dynamic nature and increasing importance, and this study has taken steps to refine and modernize the measurement of ethically minded consumer behavior. Future research should utilize a qualitative phase of research in order to ensure that any future improvements or iterations of the scale are as comprehensive as possible. These limitations notwithstanding, the new scale fills an important gap in that it provides future researchers with a measurement instrument ready to use in a variety of nations and in studies with a variety of research objectives, including the modeling of complex relationships among variables. Previously, researchers needed to use a variety of different scales to cover different facets of ethical consumption, or to adapt existing scales such as the SRCB scale, which is no longer fully applicable in today's environment. The new EMCB scale is valid, reliable, easy to understand, and easy to administer. The scale is contemporary, and it views ethical consumption from a wide perspective. The choice of nations and the demographics of the samples therein are also strengths of the study: all too often scale development relies on young samples. The scale demonstrated the maximum possible ranges across all nations; thus, it is assumed that it can differentiate between consumers drawn from “the ranks of the ethically motivated”. There is no obvious reason why the scale is not suitable for use with younger consumers, although further testing of the scale to ensure its use is appropriate with all ages is a recommendation for future research. Certainly, the wide range of scores found within every nation suggests that further investigation of the scale in order to identify its usefulness for
segmentation purposes could be a productive research avenue. Moreover, studies often use scales developed in a different country or culture without checking that the measure is equivalent. This paper has demonstrated that the EMCB scale can be used with confidence across a variety of nations and cultures, providing future researchers and practitioners with a scale that measures the degree to which an individual perceives themselves as ethically minded when making consumption choices. Of course, as is the case with any consumer behavior scale, the instrument relies on the ability of respondents to recollect their behavior and to tell the truth about that behavior. This caveat notwithstanding, the scale has advantages over many alternatives because it at least asks questions about actual purchasing habits as opposed to attitudes and intentions, which is important given that ethical intentions rarely translate into actual purchasing behavior. Looking ahead, ethical consumption may have the potential to become a mass-market phenomenon. If a propensity toward ethically minded consumer behavior is to be considered a major asset for marketing, and indeed for a society, it becomes absolutely necessary to have a valid measure for it. The EMCB provides researchers and practitioners from diverse countries with such a measure.
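As one concrete example of the kind of reliability check that underpins such claims, the sketch below computes Cronbach's alpha for a simulated multi-item scale; the item count and data are invented and do not reproduce the EMCB's actual items or samples.

```python
import numpy as np

# Cronbach's alpha for a simulated multi-item scale; the item count and the
# simulated responses are invented and do not reproduce the EMCB data.

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scale responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))                      # common latent factor
items = trait + rng.normal(scale=0.8, size=(500, 10))  # ten correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")          # high alpha expected
```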
This paper details the development and validation of a new research instrument called the ethically minded consumer behavior (EMCB) scale. The scale conceptualizes ethically minded consumer behavior as a variety of consumption choices pertaining to environmental issues and corporate social responsibility. Developed and extensively tested among consumers (n = 1278) in the UK, Germany, Hungary, and Japan, the scale demonstrates reliability, validity, and metric measurement invariance across these diverse nations. The study provides researchers and practitioners with a much-needed, easy-to-administer, valid, and reliable instrument pertaining to ethically minded consumer behavior.
It is estimated that about 1.1 billion people worldwide practice open defecation as a result of a lack of access to sanitation facilities. Diseases caused by open defecation are preventable and disproportionately affect the poor. Millions of people contract fecal-borne diseases, most commonly diarrhea and intestinal worms, with an estimated 1.7 million people dying each year because of unsafe water, hygiene, and sanitation practices. In Indonesia, 110 million people lack access to proper sanitation and 63 million practice open defecation. Two of the four main causes of death for children under five in Indonesia are fecal-borne illnesses linked directly to inadequate water supply, sanitation, and hygiene. About 11 percent of Indonesian children have diarrhea in any two-week period, and it has been estimated that more than 33,000 die each year from diarrhea. By reducing normal food consumption and nutrient absorption, diarrheal diseases and intestinal worms are also a significant cause of malnutrition, leading to impaired physical growth, reduced resistance to infection, and long-term gastrointestinal disorders. Inadequate sanitation is associated not only with adverse health effects but also with significant economic losses. Inadequate sanitation and poor hygiene in Indonesia are estimated to cost approximately US$6.3 billion, or more than 2.4 percent of the country's gross domestic product. Community-Led Total Sanitation (CLTS) is a program being widely implemented in more than 60 countries throughout Asia, Africa, Latin America, the Pacific, and the Middle East to address the sanitation burden. CLTS aims to create demand for sanitation by facilitating graphic, shame-inducing community discussions of the negative health consequences of existing sanitation practices, rather than through the more traditional approach of providing sanitation hardware or subsidies. The Water and Sanitation Program (WSP) of the World Bank is facilitating the implementation of CLTS widely. As part of a learning agenda to address the burdens associated with poor sanitation, the Bill and Melinda Gates Foundation funded randomized controlled trials of sanitation and hygiene interventions in seven locations around the world. This paper presents the results of the Indonesian evaluation. We report the impact of CLTS on outcomes of interest along the causal chain, as improvements in sanitation have the potential to lead to a decrease in parasitic infestations, a decrease in anemia, and an increase in weight and height for young children. We rely on objective measures of impact: physical inspection of sanitation facilities, blood and fecal samples, and physical anthropometric measures. The evaluation results show that CLTS modestly increased the rate of toilet construction, decreased community tolerance of open defecation, and reduced the prevalence of roundworm infestation. There is no discernible impact on the other health measures, and when we generate an overall health index including roundworm infestation, anemia, height, and weight, there is no significant treatment effect. Allowing for heterogeneous treatment effects shows that the program is less effective among poorer households. In addition to poverty status, we examine two other sources of heterogeneity in program impact. An important component of the intervention is that it sought to create a large-scale sustainable program across the country. WSP contracted resource agencies to implement the program in a set number of villages. The resource agencies were also contracted to train local
government staff to implement the program themselves. During the phase of the project we evaluate, the resource agencies and local governments were implementing the program simultaneously. Approximately half of our treatment villages were treated by the resource agencies (RA) and the other half by local government (LG). This allows us to examine how program impact varies with the identity of the implementer and hence to evaluate the scale-up process. As villages were not randomly allocated to implementing teams, the estimates examining this form of heterogeneity rely on the assumption that there are no unobserved differences between RA and LG villages that could be causing a differential impact. Discussions with WSP suggest there was no systematic process of assignment, and tests of household and village baseline characteristics by implementer status show no significant differences. We find that while statistically significant program impacts are observed in communities where the program was implemented by a resource agency, local government implementation produced no discernible benefits. Second, we examine the role of social capital. As its name suggests, CLTS is a participatory development project in which facilitators are sent to villages to initiate a community analysis of existing sanitation practices and a discussion of the negative health consequences of such practices. The community actively participates in the facilitated meeting and is then left to forge its own plan to improve village sanitation, with only limited follow-up support and monitoring from the program. Given CLTS' emphasis on community involvement, one might expect that it would function best in communities with high pre-existing levels of social capital. We define social capital as the norms and networks that enable collective action. Since social capital is not randomly allocated to villages, estimating these heterogeneous treatment effects assumes that there are no unobserved variables that differ between low and high social capital communities and could be driving the results. While we cannot rule out that this might be the case, the stability of the estimates when a wide array of control variables is included suggests it is unlikely that the estimates are biased due to omitted variables. We find that baseline social capital is an important determinant of program effectiveness. High levels of community participation at baseline are strongly associated with increased toilet construction in treatment communities. However, in communities with low levels of social capital at baseline, significantly fewer toilets are constructed. We present results consistent with villages with higher initial community participation being more able to align community members' objectives with those of the program through the use
The program resulted in modest increases in toilet construction, decreased community tolerance of open defecation, and reduced roundworm infestations in children. However, there was no impact on anemia, height, or weight. We find important heterogeneity along three dimensions: (1) poverty – poorer households are limited in their ability to improve sanitation; (2) implementer identity – scale-up involves local governments taking over implementation from World Bank contractors, yet no sanitation and health benefits accrue in villages with local government implementation; and (3) initial levels of social capital – villages with high initial social capital built toilets, whereas the community-led approach was counterproductive in low social capital villages, with fewer toilets being built.
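The heterogeneity analyses described above amount, in their simplest form, to interacting the treatment indicator with baseline characteristics such as poverty, implementer identity, or social capital, with standard errors clustered at the level of randomization. The Python sketch below illustrates this with simulated data and hypothetical variable names; it is not the paper's actual specification, which may include additional controls and outcomes.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated household data: treatment assigned at the village level.
rng = np.random.default_rng(1)
n_villages, hh_per_village = 160, 20
village = np.repeat(np.arange(n_villages), hh_per_village)
treat = np.repeat(rng.integers(0, 2, n_villages), hh_per_village)
poor = rng.integers(0, 2, n_villages * hh_per_village)

# Outcome: toilet construction, with a weaker treatment effect for the poor.
toilet = (0.2 + 0.15 * treat - 0.10 * treat * poor
          + rng.normal(0, 0.3, n_villages * hh_per_village))

df = pd.DataFrame({"village": village, "treat": treat,
                   "poor": poor, "toilet": toilet})

# The interaction term captures the heterogeneous effect;
# cluster standard errors by village, the unit of assignment.
res = smf.ols("toilet ~ treat * poor", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["village"]})
print(res.summary().tables[1])  # treat:poor < 0 => weaker impact for the poor
```

The same pattern, with `poor` replaced by an implementer dummy or a baseline social capital measure, mirrors the other two heterogeneity dimensions, subject to the identifying assumptions discussed in the text.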
of cPC and aPC is dominated by the conformation of the phycocyanobilin chromophores and how these chromophores interact in oligomers of cPC and aPC monomers due to spatial proximity. When the phycocyanobilin is bound to the apoproteins of either cPC or aPC, the chromophore is in an extended conformation. In this state the absorption spectrum shows a characteristic main peak at 620 nm or 650 nm, respectively. If the protein structure denatures, the phycocyanobilin, a linear tetrapyrrole chromophore, relaxes and assumes a more stable cyclic conformation, which is characterized by a main absorption peak at 360 nm. This shift in conformation can be explained by quantum chemical calculations. Moreover, in aqueous low-ionic-strength conditions, aPC assembles into trimers, whereas cPC forms hexamers, trimers, or other oligomers under these conditions. When disintegrated into monomers, absorption at 620 nm is slightly reduced in the case of cPC, whereas in the case of monomeric aPC the absorption spectrum resembles that of monomeric cPC, resulting in a shift of the main absorption peak from 650 nm to 615 nm. The described losses in the proteins' quaternary structure could help to explain the observed characteristic alterations in absorption spectra with increasing processing time. Yet absorbance at 360 nm would be unaffected by changes in the proteins' quaternary structure alone. There must also have been changes in the tertiary structure of the apoproteins that led to a change of the phycocyanobilin from its extended state into its cyclic form, with an absorption maximum at 360 nm. Hence, it is speculated that the observed degradation in color activity is a combined effect of the disintegration of cPC and aPC into monomers and a denaturation of the monomers, yielding phycocyanobilins in the cyclic conformation. For further insights on a mechanistic level, studies must focus on purified phycocyanin fractions to exclude side reactions and to allow for more specific conclusions. Multiple reactions contributing to the degradation in color activity of the phycocyanin solution would support the identified high empirical reaction order of n = 6. With the modelled apparent reaction order, the reaction rate constant k could be derived for the different processing temperatures. By plotting the logarithm of the reaction rate constant against the inverse of the absolute temperature, the activation energies could be calculated from the Arrhenius equation. In this way, an Ea of 74.4 ± 8.9 kJ mol⁻¹ was derived for the continuous measurements and an Ea of 81.6 ± 10.3 kJ mol⁻¹ for the batch processing experiments. As the Ea values are not significantly different from each other, transferability could be demonstrated from batch processing in capillaries to continuous processing in the MMRS. This allows upscaling of the process through numbering-up approaches, i.e., the parallelization of numerous micro-heating devices. Moreover, the provided knowledge of temperature-time interactions under isothermal conditions could support further upscaling initiatives when combined with the respective heat and mass transfer and flow conditions. Minor discrepancies in Ea could originate from heat transfer differences between the two systems. The continuous system involves conductive and convective heat transfer, whereas batch heating in capillaries is dominated by conductive heat transfer. Residence times and thus processing times were similar in both systems due to the high surface-to-volume ratios, but also because of the narrow residence time distributions reported for low-viscosity liquids
for the MMRS system and its modules. However, the experimental determination of the reaction order, which is the basis for the calculation of the reaction rate constant k, relies on spectrophotometric analysis of the processed phycocyanin solution. Color is the most important techno-functional property of phycocyanin powders for applications in the food industry, but the degradation of its intensity is an indirect description of the underlying chemical reactions, as previously discussed. Improvements in determining the experimentally derived reaction order require insights at a structural level and an understanding of the mechanistic processes. An Arthrospira platensis extract with a high content of cPC and aPC was processed in short-time thermal trials using a batch and a continuous heating system with very high surface-to-volume ratios: 4444 m⁻¹ and 2610 m⁻¹ for batch and continuous operation, respectively. Controlled thermal processing with treatment times of up to 60 s showed a biphasic degradation of phycocyanin's color activity for treatment temperatures of T ≥ 70 °C. The revealed kinetics were faster than most kinetics described in the literature on the thermal degradation of purified cPC, and they were best described with a reaction order of n = 6. Supported by the high empirical reaction order, it is proposed that the degradation in color activity was due to a combination of reactions. Purified fractions of the phycocyanins need to be investigated to elucidate the structural and mechanistic changes of aPC and cPC in HTST processing. This study focused on an impure phycocyanin formulation as applied in the food industry and demonstrated the transfer from batch to continuous processing on a micro-processing scale. Consequently, the insights gained may help to improve multi-category applications of phycocyanin, which is to date the only natural blue food coloring on the market.
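For reference, the kinetic framework used above can be written out explicitly. With c denoting the color activity, n the apparent reaction order (here n = 6), A the pre-exponential factor, and R the gas constant, the rate law and the Arrhenius temperature dependence of k give the linearized form from which Ea is extracted:

\[
-\frac{dc}{dt} = k\,c^{n}, \qquad k(T) = A\,\exp\!\left(-\frac{E_a}{RT}\right) \quad\Longrightarrow\quad \ln k = \ln A - \frac{E_a}{R}\cdot\frac{1}{T}
\]

Plotting ln k against 1/T therefore yields a straight line whose slope is −Ea/R, which is the procedure described for both the batch and the continuous data sets.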
The stability of colorants is a concern for food coloring matrices, particularly for the only natural blue food coloring, phycocyanin. The Spirulina-based microalgal extract is mainly comprised of the heat-sensitive protein-chromophore complexes C-phycocyanin and allophycocyanin. Here, phosphate-buffered phycocyanin solution was heated in batch and emerging continuous processing systems, both characterized by high surface-to-volume ratios allowing isothermal conditions with residence times down to 5 s. Absorption scans revealed a biphasic degradation of phycocyanin color activity to about 30% within 30 s at T ≥ 70 °C. This shows that the decay in phycocyanin color activity is not a single process but encompasses the disintegration and denaturation of C-phycocyanin and allophycocyanin aggregates. Industrial relevance: Central to this study is the color stability of phycocyanin, a high-value component derived from the emerging food source microalgae. Insights could be gained into the color degradation kinetics by treating an industry-relevant formulation in batch and in emerging, scalable continuous systems via micro process engineering. These data will directly support food research and development activities to optimize and minimize blue color losses across multiple product categories.
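A minimal Python sketch of the Ea extraction from rate constants measured at several temperatures follows; the k values are placeholders chosen only so that the illustrative output lands near the ~75 kJ mol⁻¹ reported above, not the measured data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

# Hypothetical rate constants at the processing temperatures (placeholders).
T = np.array([70.0, 80.0, 90.0]) + 273.15   # temperatures in K
k = np.array([2.0e-3, 4.1e-3, 8.4e-3])      # rate constants, arbitrary units

# Arrhenius linearization: fit ln k vs 1/T; the slope equals -Ea/R.
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R

print(f"Ea ≈ {Ea / 1000:.1f} kJ/mol")  # ~75 kJ/mol for these placeholder values
```

In practice each k would itself come from fitting the nth-order decay of color activity at one temperature, and uncertainty in the slope propagates into the ± bounds quoted for Ea.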
are optimally searched for in the accessions of the National Herbarium plus one herbarium in every province in which the species occurs. This finding is noteworthy, as in South Africa there are often 2 to 4 herbaria per province from which one can source data. Our results are important when considering that collections data are routinely used for predicting the distribution and range of species by modelling spatial patterns. These predictions are usually made in conjunction with climatic surface data that incorporate some aspect of seasonality in temperature and precipitation. And, as Gioia and Pigott concluded in their study on the flora of south-western Australia, “in spite of the ability to extrapolate plant distributions … there is no substitute for objective, vouchered data”. Collecting distribution data from herbaria and other supplementary sources was considerably time-consuming and costly in our study, and would have been easier had all the collections been digitised. Our findings indicate the extent to which QDS coverage within a species' geographic range may be under-represented by using only distribution data from PRE, an important consideration when conducting threat assessments or predicting species richness and distributions using extrapolative models. Moreover, had PRE data been used solely in the calculation of range size, then nearly 30% of all threatened and Near Threatened medicinals investigated in this study would have required downgrading to a lower threat category following the incorporation of data from all sources. Relative to the entire South African flora, however, traded medicinal taxa are generally widespread with large range sizes. As such, threat assessments of non-medicinal taxa in South Africa should similarly benefit from augmented distribution data sources, since 91% of the entire flora was evaluated using restricted-range criteria. Accordingly, the current findings indicate that the ranges of some of these taxa could increase beyond the quantitative thresholds of the criteria if supplementary distribution data were sourced. Our findings highlight the importance of regional herbaria, the need to complete the digitisation of their collections, and the value of integrating and amalgamating these data into a well-curated national resource. The current report further emphasises the inherently dynamic nature of threat assessment processes, and their outcomes, as new data are incorporated.
In a megadiverse country such as South Africa, plant locality data are routinely sourced from the South African National Herbarium (PRE). Evidence suggests that large areas of the country remain poorly collected and that locality records are not always adequately represented in PRE. Our aim was to assess whether distribution information obtained exclusively from PRE adequately represented the known range of selected species. We also assessed the relative value of regional herbaria and supplementary sources of locality data. Locality information was sourced from PRE, 17 regional herbaria, sight records, and literature for a subset of 121 ethnomedicinal plant species that are currently regarded as threatened with extinction or of conservation concern according to the IUCN Red List criteria. Geographic range (km²) was calculated using distribution information (Quarter-Degree Squares, QDS) obtained from PRE and non-PRE sources. The species' ranges were examined to compare differences in range size and the overall proportion of QDS records represented in PRE and non-PRE sources. Supplementary data obtained from regional herbaria and other sources increased the number of known QDS records by approximately 45% per species across the various IUCN Red List threat categories, and the ranges increased by approximately 28% per species. As the threat status of a species increased, proportionally more QDS were likely to come from supplementary sources. Rarer species tended to be found only in herbaria within their province of occupancy. ‘Return for effort’ analyses indicated that QDS records should be sourced from PRE plus one other herbarium located within each province in which a species of interest occurs. QDS coverage within species' geographic ranges was under-represented using only data obtained from PRE, reducing the accuracy of species occurrence and distribution estimates that rely solely on information sourced from that repository. We demonstrate that this can impact the accuracy of conservation planning resources such as Red Lists. Our results highlight the relative importance of regional herbaria.
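The comparison of PRE-only and augmented coverage described above reduces to simple set arithmetic over QDS codes per species and data source. The Python sketch below illustrates the idea on invented records; the species, QDS codes, and source labels are hypothetical and do not reproduce the study's data.

```python
from collections import defaultdict

# Hypothetical locality records: (species, quarter-degree square, source).
records = [
    ("Aloe ferox", "3326AA", "PRE"),
    ("Aloe ferox", "3326AB", "PRE"),
    ("Aloe ferox", "3325DD", "Regional herbarium"),  # not held at PRE
    ("Aloe ferox", "3324CC", "Sight record"),
]

pre_qds, all_qds = defaultdict(set), defaultdict(set)
for species, qds, source in records:
    all_qds[species].add(qds)          # union over all data sources
    if source == "PRE":
        pre_qds[species].add(qds)      # PRE-only coverage

for species in all_qds:
    extra = all_qds[species] - pre_qds[species]   # QDS gained from other sources
    gain = 100 * len(extra) / len(pre_qds[species])
    print(f"{species}: PRE={len(pre_qds[species])} QDS, "
          f"+{len(extra)} from supplementary sources ({gain:.0f}% increase)")
```

Range size in km² would then follow by summing the (latitude-dependent) area of each occupied QDS, which is where the ~28% average range increases reported above come from.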