Columns: text (string, 330–20.7k characters) · summary (string, 3–5.31k characters)
The geophytic genus Hesperantha Ker Gawl., now with some 88 species, is one of the larger genera of subfam. Crocoideae Burnett. Its range extends from southwestern Namibia and the Atlantic coast and near-interior of South Africa through eastern southern Africa to Ethiopia and Cameroon. Most diverse in the southern African winter rainfall zone, the genus is also well represented in the Drakensberg and adjacent highlands. It is distinguished by the style dividing near or below the mouth of the perianth tube into relatively long, usually laxly spreading style branches and, with a few exceptions, by woody corm tunics. The genus is taxonomically well understood through the revisionary works of Goldblatt and of Hilliard and Burtt. Field work in the mountains of Northern Cape, South Africa, in the spring of 2014 produced three new species, which we describe here: two new species of sect. Concentricae Goldblatt from the Hantamsberg Massif in the western Karoo and one of sect. Hesperantha from the mountains of the Richtersveld. Hesperantha palustris Goldblatt & J.C. Manning, from high-altitude marshes, is one of the tallest species in the genus, producing lax spikes of one to three evidently unscented flowers that open after dark, and relatively large capsules up to 14 mm long. Hesperantha filiformis Goldblatt & J.C. Manning is distinguished by the presence of only two basal leaves and a third, partly sheathing cauline leaf, and spikes of one or two flowers with tepals shorter than the ± 11 mm long perianth tube. Hesperantha eremophila Goldblatt & J.C. Manning, from the Vandersterrberg in the remote Richtersveld National Park of Northern Cape, has a bell-shaped corm with a flat base, placing it in sect. Hesperantha. We also treat populations of Hesperantha falcata Ker Gawl. with diurnal flowers and a yellow perianth as subsp. lutea Goldblatt & J.C. Manning. The subspecies is restricted to clay soils in the southern part of Western Cape, whereas subsp. falcata, with white or creamy white, crepuscular and scented flowers, has a wider range entirely encompassing that of subsp. lutea and occurs mostly on sandy soils or rock outcrops. Field studies during 2012–2014 in western South Africa also resulted in the discovery of some significant range extensions for two Hesperantha species, Hesperantha rivulicola and Hesperantha secunda, of the western Karoo, an important centre of diversity for the genus. We provide revised keys to sect. Concentricae in the southern African winter rainfall zone and to sect. Hesperantha, making it possible to identify the new species. A complete key to the genus will be provided in our account of Hesperantha for Flora of southern Africa, currently in preparation. The new species were described from living plants collected in the field, where flowering phenology and habitat details were noted. Pressed herbarium specimens were made of a sample of plants gathered in the wild. A search in herbaria containing significant holdings of southern African flora revealed no collections matching the new species described here. In addition, two of us have extensive experience of the taxonomy and biology of Hesperantha in the field and laboratory and are thus confident that the three species described here are indeed new and most likely have not been collected before. Author names are abbreviated according to Brummitt and Powell.
Plants with outer floral bract margins free to base; corm with rounded base, usually flattened on one side, and concentric, sometimes splitting into vertical sections tapering to points above:
  Flowers nodding to ± pendent with perianth tube curved at apex:
    Leaves finely hairy; main vein and margins raised and thickened, the margins forming wing-like ridges … H. secunda Goldblatt & J.C. Manning
    Leaves glabrous; main vein and margins only slightly raised, margins thus not winged … Hesperantha bachmannii Baker
  Flowers erect with straight perianth tube:
    Plants acaulescent or stem barely extending above ground and sheathed by leaf bases:
      Flowers white; tepals 8–10 mm long … Hesperantha hantamensis Schltr. ex R.C. Foster
      Flowers yellow or pink to red-purple; tepals 13–25 mm long:
        Flowers yellow; crepuscular, opening at sunset; tepals 13–15 mm long … Hesperantha flava G.J. Lewis
        Flowers pink to red-purple, diurnal, closing at night; tepals 17–25 mm long … Hesperantha humilis Baker
    Plants with aerial stem:
      Leaves pilose to scabrid-ciliate on veins and margins, sometimes only microscopically; stems also often sparsely pilose:
        Leaves hollow or ± solid and with narrow longitudinal grooves, finely scabrid-ciliate along groove edges:
          Leaves ± round in cross section; flowers white to cream, crepuscular, opening in late afternoon, closing late at night … Hesperantha teretifolia Goldblatt
          Leaves oval in cross section; flowers pink, mauve, or purple, diurnal, closing in early afternoon … Hesperantha pilosa Ker Gawl.
        Leaf blade flat, sometimes with margins and main vein somewhat thickened or raised; pilose along sheath and blade margins and veins:
          Plants dwarf, to 50 mm high, with 1-flowered spikes; flowers magenta; leaves sparsely hairy … Hesperantha glabrescens Goldblatt
          Plants mostly at least 100 mm high, with spikes usually of more than 2 flowers; flowers white, blue, magenta or mauve; leaf blades and sheath of third leaf sparsely to densely hairy:
            Leaves sword-shaped to oblong; scale-like leaf below spike mostly 12–20 mm long, often pilose … Hesperantha pseudopilosa Goldblatt
            Leaves linear to narrowly sword-shaped; scale-like leaf below spike 3–5 mm long, usually hairless:
              Leaf blades usually conspicuously hairy, usually acute; plants of open sandy slopes and flats … H. pilosa Ker Gawl.
              Leaf blades sparsely hairy, obtuse; plants of rocky cliffs … Hesperantha malvina Goldblatt
      Leaves and stems hairless:
        Flowers predominantly shades of pink, mauve or purple:
          Perianth tube 20–35 mm long, longer than tepals:
            Leaves linear, 2–3 mm wide; perianth tube 25–30 mm long and filaments ± 8–10 mm long … Hesperantha oligantha Goldblatt
            Leaves sword-shaped,
The southern and tropical African genus Hesperantha, now with some 88 species, is distinguished in subfam. Crocoideae by the style dividing near or below the mouth of the perianth tube into relatively long style branches and, with a few exceptions, by woody corm tunics. Hesperantha palustris (sect. Concentricae), from marshes at high elevation, and Hesperantha filiformis (sect. Concentricae), from rocky sites, were both collected on the Hantamsberg. H. palustris is one of the tallest species in the genus, reaching up to 480 mm, with moderate-sized white flowers opening after dark. The third new species, Hesperantha eremophila, has the flat-based corm of sect. Hesperantha.
reaching to the base or lower fourth of the anthers in the closed flower. The slightly succulent foliage leaves are sparsely hairy and have thickened margins and main veins. We discovered a second population of the species in September 2013 on southern slopes of the Keiskie Mtns SE of Calvinia. Flowers in plants maintained in water opened ± 19:00 and produced a strong, somewhat acrid scent. We confirmed this phenology in a second population close to the type locality, Farm Knechtsbank, in a wet meadow at the top of Perdekloof. H. secunda was common at the site and grew together with H. pseudopilosa Goldblatt, then in fruit. The population included at least 150 mature plants in flower or early fruit. A short distance away, but in drier, well-drained places, the closely related, pink-flowered H. pilosa subsp. bracteata Goldblatt & J.C. Manning was in flower. South Africa. NORTHERN CAPE. 3119: Keiskie Mtns, gentle S-trending rocky slope in shale, 1 258 m, 25 Sept. 2013, Goldblatt & Porter 13906. 3120: northern Roggeveld Escarpment, Farm Knechtsbank, wet sandy meadow, 1 470 m, 26 Sept. 2014, Goldblatt & Porter 14027. Hesperantha lutea var. luculenta R.C. Foster in Contr. Gray Herb. 166: 19. Type: South Africa, hills at Berg River Bridge, Piketberg, 4 Sept. 1894, R. Schlechter 5261. Like subsp. falcata except flowers diurnal, pale or deep yellow, outer tepals usually flushed brown on reverse, unscented. Flowering time: mainly late August and September; flowers opening late morning or midday and closing ± 16:00. Distribution and ecology: extending from near Caledon to Mossel Bay in southern Western Cape, on clay or clay-loam slopes and flats, usually in renosterveld. Diagnosis: subsp. lutea is readily recognized in sect. Hesperantha by the pale or deep yellow flowers. The flowers are diurnal, opening in the morning and closing in the late afternoon. Populations with deep yellow flowers occur in the south, from near Caledon to Mossel Bay and George, whereas populations in the west, from the Kobee Valley and Citrusdal to Porterville and Piketberg, have pale yellow flowers. The subspecies occurs on clay soils in renosterveld, whereas subsp. falcata is most often found on sandy ground or sandstone outcrops. The only other yellow-flowered populations in the section belong to a rare pale yellow-flowered morph of H. pauciflora from the Bokkeveld Mtns, distinguished from H. falcata by the corms bearing prominent lateral spines. H. sufflava has pale, watery yellow flowers and differs in consistently having only three leaves, all basal, and a style dividing in the middle rather than just below the mouth of the perianth tube. South Africa. WESTERN CAPE. 3139: Kobee valley, 1 Sept. 2001, Goldblatt & Porter 11794. 3218: Olifants River Valley near Rondegat, 26 July 1974, Goldblatt 2187. 3318: Porterville, 1 Sept. 1957, Loubser 462. 3419: Caledon Swartberg and the Baths, without date, Ecklon & Zeyher Irid 236; Stanford–Caledon road, at foot of Steenboksberg, Aug. 1976, Goldblatt 4097. 3420: Suurbraak, 3 Oct. 1971, Rycroft 3124. 3421: Weltevreden near Albertinia, Aug. 1913, Muir 997. 3422: Mossel Bay, at turnoff to Mossindustria from N2, 20 Sept. 2010, Goldblatt & Porter 13563. Additional exsiccatae are cited in Goldblatt under H. falcata and marked therein with an asterisk. Although first collected in the 1920s and early 1930s around Nieuwoudtville on the Bokkeveld Escarpment, H. rivulicola was described only in 1984, after additional populations were found near Calvinia at the Akkerendam Nature Reserve, ± 980 m.
Exploration of the western end of the Hantamsberg in 2014 on the Farm Tierkloof resulted in the discovery of an additional population of the species on Sandkop. Plants were common along streams on shale, growing in moss pads, and at the shallow edges of marshes on a dolerite sill at elevations of ± 1 380 m. The extended population consisted of over 200 plants. The Tierkloof populations extend the altitudinal range of H. rivulicola significantly and indicate that the species is more common than previously known. Leaves of these plants were rectangular, with the margins almost square, to oblong-elliptic in cross section and lightly striate. Elsewhere the leaves have been described as elliptic with rounded margins and shallowly grooved. South Africa. NORTHERN CAPE. 3119: Hantamsberg, Farm Tierkloof, near the top of Sandkop, along shallow stream in moss pads, 1 380 m, 27 Sept. 2014, P. Goldblatt & L.J. Porter 14034.
We describe three new species from Northern Cape, South Africa, discovered in September 2014. Hesperantha palustris (sect. Concentricae), from marshes at high elevation, and Hesperantha filiformis (sect. Concentricae), from rocky sites, were both collected on the Hantamsberg. We also recognize Hesperantha falcata subsp. lutea for populations with yellow flowers opening during the day and closing in late afternoon. Typical plants of H. falcata have white flowers opening in the late afternoon. We also record two range extensions: an additional population of the Roggeveld Escarpment endemic, Hesperantha secunda, from the Keiskie Mtns SW of Calvinia; and high-altitude populations of Hesperantha rivulicola from 1 380 m on the Hantamsberg, otherwise known from lower elevations near Nieuwoudtville and Calvinia (760–980 m).
We propose a fully convolutional conditional generative model, the latent transformation neural network, capable of view synthesis using a lightweight neural network suited for real-time applications. In contrast to existing conditional generative models, which incorporate conditioning information via concatenation, we introduce a dedicated network component, the conditional transformation unit, designed to learn the latent-space transformations corresponding to specified target views. In addition, a consistency loss term is defined to guide the network toward learning the desired latent-space mappings, a task-divided decoder is constructed to refine the quality of generated views, and an adaptive discriminator is introduced to improve the adversarial training process. The generality of the proposed methodology is demonstrated on a collection of three diverse tasks: multi-view reconstruction on real hand depth images, view synthesis of real and synthetic faces, and the rotation of rigid objects. The proposed model is shown to exceed state-of-the-art results in each category while simultaneously reducing the computational demand required for inference by 30% on average.
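The conditional transformation unit is described above only at a high level. As a rough, purely illustrative sketch of the underlying idea (conditioning applied as a learned transformation of the latent code rather than by concatenation), the following PyTorch-style Python snippet applies one learned 1x1 convolution per target view to an encoder feature map and adds an L2 consistency term against the encoding of the ground-truth target view; all module names, shapes and the choice of a 1x1 convolution are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only: a latent-space "conditional transformation unit"
    # mapping an encoder feature map to the latent code of a requested target view.
    # Names, shapes and layer choices are assumptions, not the paper's implementation.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ConditionalTransformationUnit(nn.Module):
        def __init__(self, channels: int, num_views: int):
            super().__init__()
            # One lightweight 1x1 convolution per conditioning view label.
            self.transforms = nn.ModuleList(
                [nn.Conv2d(channels, channels, kernel_size=1) for _ in range(num_views)]
            )

        def forward(self, z: torch.Tensor, view: int) -> torch.Tensor:
            # z: encoder feature map of shape (batch, channels, height, width)
            return self.transforms[view](z)

    def consistency_loss(z_transformed: torch.Tensor, z_target: torch.Tensor) -> torch.Tensor:
        # Encourage the transformed latent to match the encoding of the true target view.
        return F.mse_loss(z_transformed, z_target)

    # Hypothetical usage with toy feature maps:
    ctu = ConditionalTransformationUnit(channels=64, num_views=8)
    z_src = torch.randn(4, 64, 16, 16)   # encoding of the input view
    z_tgt = torch.randn(4, 64, 16, 16)   # encoding of the ground-truth target view
    loss = consistency_loss(ctu(z_src, view=3), z_tgt)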
We introduce an effective, general framework for incorporating conditioning information into inference-based generative models.
To balance the fish demand of a growing population while respecting fishing quotas, the aquaculture sector has expanded rapidly over the last 30 years. Lately, recirculating aquaculture systems (RAS) have been developed, making it possible to drastically reduce the water consumption per kilo of fish produced. Integrating aquaculture with other food production methods is under development in order to reuse the aquaculture wastes and to close the nutrient cycle. The potential for more efficient use of resources through the tightening of nutrient cycles and reuse of aquaculture wastewater may explain the increasing interest in aquaponics (AP). The AP concept is to combine RAS and recirculated hydroponic systems (RHS) for fish and horticultural plant production. The integration of RAS and RHS aims to reuse the aquaculture wastewater for irrigating the crops while converting the otherwise wasted nutrients excreted by the fish into valuable plant biomass. Hence, the reuse of fish wastewater could significantly reduce the environmental impact of aquaculture and hydroponic plant production. The most common design of an AP system is the integration of hydroponic beds into the water loop of a RAS and may be called a single recirculating aquaponic system (SRAP). SRAP can be complex to manage, as three different biological systems are merged in a single water loop. Decoupled aquaponic systems (DAPS), or double recirculating aquaponic systems, present an alternative design to overcome the disadvantages of SRAP. In these decoupled systems, the RAS wastewater goes to the hydroponic part and does not return to the fish. The water then leaves the hydroponic part only by evaporation and plant transpiration. Optimal growing conditions can then be established in each production part, avoiding compromises. In the hydroponic part, the RAS wastewater is complemented with macro- and micronutrients to obtain concentrations, pH and EC equivalent to those of standard RHS nutrient solutions. With this design the production and some sanitary aspects can be more easily controlled, making the system better suited to commercial farming operations. Also, existing professional techniques can be easily used in the respective fish and plant parts without major technological innovations. Only a few AP designs have been tested, and only under laboratory conditions or at small scale; studies on large-scale systems remain scarce, especially for DAPS. Moreover, comparisons of production in AP and hydroponics under the same conditions and with the techniques used in professional operations are lacking. Despite a few pioneering publications, the effects of fish water on the yields and quality of the plant products remain unknown, which hinders the adoption of AP by professionals of the horticultural sector. The feasibility of properly complementing the fish water with conventional commercial fertilisers needs to be investigated. Therefore, this study aimed to compare the tomato production of a decoupled aquaponic system to that of a conventional recirculated hydroponic system at a semi-practice scale. Special attention was given to plant health and physiological disorders such as blossom-end rot (BER). The macro- and micronutrient content of the nutrient solutions was closely monitored, with specific regard to sodium and chloride levels. It is the first time that a reliable comparison of tomato production in complemented RAS wastewater has been achieved and repeated for 3 years. The experiments were carried out in the facilities of the Inagro research institute located in Rumbeke-Beitem, Belgium. The experimental setup was a semi-practice combination of tomato and pikeperch production. It aimed to combine the techniques already used by professional farmers in Flanders. Part of the fish wastewater that would otherwise have been flushed to the sewage was used for irrigating the tomatoes. The pikeperch were reared in an indoor recirculating aquaculture system operated as a professional farm. The facility had a surface area of 700 m² and a total water volume of 160 m³. The RAS had an average fish load of 15 kg/m³ and was able to produce 2000–4000 kg of pikeperch per year. The fish were fed a fish protein-based meal containing 56% protein and 16% fat. The average daily water exchange rate of 15% was relatively high because the system was operated in such a way that the water discharged to the sewage had a NO3 content lower than 2.42 mmol/L, in order to meet the regional wastewater disposal regulations. The wastewater delivered by the drum filters was decanted and the supernatant was stored in a 15 m³ tank prior to being discharged to the sewage. At regular intervals, part of this water was pumped to the tomato greenhouse facility. This water was then filtered with a TAF filter and complemented with macro- and micronutrients to reach the standard hydroponic concentrations. The nutrient solution prepared in this way was stored in a feeding sump tank of 0.4 m³. The water from this sump was used to irrigate the tomato plants, and this treatment was called the aquaponic (AP) treatment. The tomato production in the AP treatment was compared to that of a standard hydroponic (HP) treatment in which the plants were irrigated with a conventional standard hydroponic nutrient solution (NS). The drain water of each treatment was collected separately in a 1 m³ drain water storage tank. Intermittently, the entire content was sand-filtered and UV-disinfected. The disinfected drain waters were stored in their corresponding disinfected drain water storage tanks. The filtration event only happened when a disinfected drain water storage tank became empty. The same sand filtration and UV disinfection device was used for the AP and HP drain waters, but the system automatically cleaned and flushed the filter and pipes with the new incoming solution before each use. The disinfected drain solution was thus reused in its respective fertigation loop each time a batch of new feeding solution was made. The tomato plants were cultivated in a compartment of a Venlo-type climate-controlled greenhouse of 352 m². The greenhouse had a column height
Decoupled aquaponic systems (DAPS) use the wastewater of recirculated aquaculture systems (RAS) as the water source for plant production in recirculated hydroponic systems. RAS wastewater is complemented with macro- and micronutrients to obtain concentrations and pH equivalent to those of standard hydroponic nutrient solutions (NS). Unlike in single recirculating aquaponic systems, optimal growth conditions can be established in each production part of a DAPS (i.e. the fish and plant parts), avoiding compromises. The DAPS design seems better suited to commercial farming operations, but feasibility studies on large-scale systems are lacking. Therefore, the production of tomatoes (Solanum lycopersicum L., cv.
apparent resistance to salinity. The most plausible explanation is that the RAS wastewater used in the AP treatment contained factors that were not present in the HP water. Indeed, RAS water is charged with a variety of microorganisms and dissolved organic molecules. The latter are released by physicochemical degradation of the fish feed and the fish excretions, or by microorganisms. Hence, a broad variety of DOM can be present, such as peptides, amino acids, and phytohormone-like or humic acid-like organic molecules. Microorganisms and/or DOM may have acted as tomato plant biostimulants, helping the plants overcome the salinity stress and keep productivity as high as, and sometimes higher than, in the HP treatment. At the end of each cultivation period, slightly more roots were observed on the side of the AP slabs; however, for technical reasons it was not possible to weigh or score them. Such an increase in root development is in accordance with the observations of Delaide et al. for lettuce in complemented RAS water. Indeed, microorganisms settled in the rhizosphere can beneficially interact with the plants and promote root development. Further experiments with close monitoring of the root mass, plant-biostimulating microorganisms and DOM presence are needed to confirm our assumptions. Interestingly, the Mn concentration differed significantly between treatments in the slabs. No technical intervention could explain this difference. Mn was always lower in the AP slabs, and this was repeated through the years. On average, Mn was 1.9 times less concentrated in the AP than in the HP slabs. Compared to the feeding solution, Mn was slightly more concentrated in the HP slabs but 1.7 times less concentrated in the AP slabs. No significant differences were found between the two treatments for the concentration of Mn in the feeding NS, nor for the pH in the slabs, thus excluding the possibility of Mn loss by conversion to an insoluble form. As such, the lower concentration of Mn in the AP slabs indicates that the AP tomato plants assimilated more Mn than the HP ones. This higher Mn assimilation may then be due to microorganisms that colonized the rhizosphere and improved nutrient use efficiency or increased the root mass. Silber et al. and Aktas et al. demonstrated that a higher Mn content in fruit was correlated with reduced BER symptoms, and as we observed reduced BER and high Mn uptake, we suspect the tomato plants to have had a higher Mn content in their fruits. Unfortunately, in our study the Mn content in tomato leaves and fruits was not quantified. This should be done in further research to verify our assumption. Finally, as some authors have claimed the low Ca concentration in the fruit tip accompanying BER to be a consequence of a metabolic disorder related to an increase in reactive oxygen species, it is possible that microorganisms and/or DOM interacted with the tomato plant metabolism and promoted resistance against BER in other ways than by only enhancing Mn uptake. In our study, we compared the production of tomatoes grown in a NS based on complemented RAS wastewater to that of tomatoes grown in a conventional hydroponic NS. In particular, we succeeded, for the first time, in accurately complementing RAS water with macro- and micronutrient concentrations equivalent to the hydroponic NS, and this during 3 seasons of production. Our results clearly indicate the suitability of complemented pikeperch RAS wastewater as feeding water for professional HP tomato production using drip irrigation, as similar growth and yields were obtained in the AP and HP treatments. While a significantly higher EC was recorded in the AP treatment, due to intermittent NaCl addition to the RAS water, the tomato yields did not diminish. Interestingly, symptoms of BER were even reduced in this treatment. As RAS water contains a diversity of microorganisms and DOM, it is presumed that some of these beneficially interacted with the plant metabolism and mitigated the salinity stress and the BER symptoms. This study also delivered evidence that this was achieved by the promotion of Mn uptake, which has been reported as a BER-reducing factor. Further experiments with close monitoring of the root mass and of the nutrient content in leaves and fruits, especially Mn, would be needed to confirm our assumptions. The simultaneous identification and counting of microorganisms present in the plant root zone using advanced technology such as flow cytometry and metagenomic techniques should be envisaged. Methods to identify plant-promoting DOM should also be applied. This research was done within the framework of the INAPRO project funded by the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement number 619137. This research was also partly funded by the European Regional Development Fund via Aquavlan2.
The production of tomatoes (Solanum lycopersicum L., cv. Foundation) grown in a NS based on complemented pikeperch RAS wastewater (i.e. the AP treatment) was compared to that of tomatoes grown in a conventional hydroponic NS (i.e. the HP treatment). Our results clearly indicate the suitability of complemented pikeperch RAS wastewater as feeding water for professional HP tomato production using drip irrigation in a DAPS. As RAS water contains a diversity of microorganisms and dissolved organic matter, it is assumed that some of these acted as plant biostimulants and mitigated the salinity stress and the BER symptoms.
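The complementation step used in the AP treatment (topping up the RAS wastewater so that each macro- and micronutrient reaches its standard hydroponic target concentration) amounts to a simple per-nutrient balance. The Python sketch below illustrates that calculation with hypothetical target and measured concentrations in mmol/L; the nutrient list and all values are placeholders, not the concentrations used at Inagro.

    # Illustrative sketch: how much of each nutrient to add to RAS wastewater so that
    # the feeding solution reaches standard hydroponic concentrations. All numbers
    # are hypothetical placeholders, not values from the study.
    def complement(target: dict, measured: dict) -> dict:
        """Concentration of each nutrient to supplement (never negative)."""
        return {n: max(target[n] - measured.get(n, 0.0), 0.0) for n in target}

    target_hydroponic = {"NO3": 16.0, "K": 9.5, "Ca": 5.4, "Mg": 2.4, "PO4": 1.5}  # mmol/L
    ras_wastewater = {"NO3": 2.4, "K": 1.1, "Ca": 1.8, "Mg": 0.6, "PO4": 0.3}      # mmol/L
    for nutrient, conc in complement(target_hydroponic, ras_wastewater).items():
        print(f"add {conc:.1f} mmol/L of {nutrient}")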
implicated in skeletal muscle disease and desmin-related cardiomyopathy, e.g. caused by the R120G mutation in CRYAB. Overexpression of the latter in a mouse model activates cardiac hypertrophy, and Bag3 has been found to regulate contractility and calcium homeostasis in cardiac cells. It is plausible that activation of the Bag3 protein complex could contribute to pathogenic activation of hypertrophic signalling cascades also in the Csrp3 KI/KI mice. Our recent work has shown that MLP acts as an endogenous inhibitor of PKCα activity in non-failing hearts, potentially by providing an abundant cytoplasmic substrate competing with the activating auto-phosphorylation of the kinase. In pathological settings such as heart failure, PKCα is chronically activated and MLP cannot dampen PKCα activation sufficiently. Simultaneous induction of Carps leads to recruitment of a complex of Carps and activated PKCα to the intercalated disk. It is speculated that chronic PKCα activity in this compartment is detrimental to the heart, e.g. by affecting adrenergic signalling. In our mouse model, protein depletion through UPS activity results in a lack of functional MLP, and we have further shown that this is a feature of all three HCM-causing MLP mutations investigated in our cellular experiments. At the same time, these three HCM mutations were found to be hypo-phosphorylated by PKCα. However, any direct effect of the mutant MLP on PKCα activity is irrelevant, as the lack of functional MLP through UPS-mediated protein depletion overrides it. As a consequence, PKCα becomes chronically activated and may cause aberrant induction of heart failure signalling. As all three HCM mutations affect conserved residues in the same region of the protein, namely the second zinc finger of the first LIM domain, it is likely that all three affect the protein structure in a similar way and lead to partial protein unfolding, as demonstrated previously for MLP C58G. These unfolded proteins will be recognised by protein quality control systems and subsequently be targeted for degradation by the UPS. The mode of action appears to differ for DCM-associated mutations: K69R and G72R are located in the intrinsically unstructured glycine-rich region, hence an unfolded protein response in the presence of these mutations is unlikely. Instead, these mutations affect MLP's ability to inhibit PKC activity. In addition, a role of amino acids 64–69 in nuclear shuttling of MLP and acetylation of K69 are potentially disturbed by both DCM-associated mutations. In conclusion, our newly generated Csrp3 KI mouse model of non-sarcomeric HCM, combined with extensive cell-based work, provides important insights into molecular mechanisms underlying pathogenic effects of HCM-associated CSRP3 mutations. The authors have no conflict of interest to declare.
Cysteine and glycine rich protein 3 (CSRP3) encodes Muscle LIM Protein (MLP) and is a well-established disease gene for Hypertrophic Cardiomyopathy (HCM). MLP, in contrast to the proteins encoded by the other recognised HCM disease genes, is non-sarcomeric and has important signalling functions in cardiomyocytes. To gain insight into the disease mechanisms involved, we generated a knock-in (KI) mouse model carrying the well-documented HCM-causing CSRP3 mutation C58G. In vivo phenotyping of homozygous KI/KI mice revealed a robust cardiomyopathy phenotype with diastolic and systolic left ventricular dysfunction, which was supported by increased heart weight measurements. Transcriptome analysis by RNA-seq identified activation of pro-fibrotic signalling, induction of the fetal gene programme and activation of markers of hypertrophic signalling in these hearts. Further ex vivo analyses validated the activation of these pathways at transcript and protein level. Intriguingly, the abundance of MLP decreased in KI/KI mice by 80% and in KI/+ mice by 50%. Protein depletion was also observed in cellular studies for two further HCM-causing CSRP3 mutations (L44P and S54R/E55G). We show that MLP depletion is caused by proteasome action. Moreover, MLP C58G interacts with Bag3 and results in a proteotoxic response in the homozygous knock-in mice, as shown by induction of Bag3 and associated heat shock proteins. In conclusion, the newly generated mouse model provides insights into the underlying disease mechanisms of cardiomyopathy caused by mutations in the non-sarcomeric protein MLP. Furthermore, our cellular experiments suggest that protein depletion and proteasomal overload also play a role in the other HCM-causing CSRP3 mutations that we investigated, indicating that reduced levels of functional MLP may be a common mechanism for HCM-causing CSRP3 mutations.
anaesthetic will be disregarded, violating the autonomy principle, in the interest of not harming him/her, preserving the no-harm principle. However, a patient might express the desire not to be resuscitated if his/her heart stops. Under these special circumstances, observing the principle of autonomy should be placed as more ethical in the policy than the principle of doing no harm. There are many ways in which a policy can be updated with respect to a special context; we mention two. The simplest way would be to provide a replacement policy. This method is only possible if the context is predetermined, and it also represents significant effort, since all principles have to be re-ordered. Another method is to alter only the part of the policy concerning specific principles. For this method, update procedures need to be developed. One way to develop policy update procedures is to consider them as a special case of belief update and use existing belief update operators. Clearly, whether the requirements for belief update operators are fully suited to operators used to update ethical policies is an issue that merits further exploration. A further hindrance to using belief update to update ethical policies could arise from research in belief update being normative, studying which properties a belief update operator should satisfy, rather than operational, i.e., designing belief update operators. Finally, while our examples have been from the UA domain, the approach and principles are general enough to be relevant across autonomous systems. Consequently, we aim to extend this work to the formal verification of domestic/healthcare robotics and driverless cars in the future.
Autonomous systems such as unmanned vehicles are beginning to operate within society. All participants in society are required to follow specific regulations and laws. An autonomous system cannot be an exception. Inevitably, an autonomous system will find itself in a situation in which it needs not only to choose whether or not to obey a rule, but also to make a complex ethical decision. However, there exists no obvious way to implement the human understanding of ethical behaviour in computers. Even if we enable autonomous systems to distinguish between more and less ethical alternatives, how can we be sure that they would choose right? We consider autonomous systems with a hybrid architecture in which the highest level of reasoning is executed by a rational (BDI) agent. For such a system, formal verification has been used successfully to prove that specific rules of behaviour are observed when making decisions. We propose a theoretical framework for ethical plan selection that can be formally verified. We implement a rational agent that incorporates a given ethical policy in its plan selection and show that we can formally verify that the agent chooses to execute, to the best of its beliefs, the most ethical available plan.
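As a rough illustration of plan selection against an ordered ethical policy (most important principle first), the Python sketch below prefers the plan that violates the fewest and least important principles according to the agent's beliefs; the policy, the candidate plans and their violation annotations are hypothetical, and the lexicographic comparison is a simplification for illustration, not the formal framework's definition.

    # Illustrative sketch: choose the "most ethical" available plan given a policy
    # ordered from most to least important principle. Policy, plans and violation
    # annotations are hypothetical; the comparison rule is a simplification.
    def severity(violations, policy):
        # One violation count per principle, ordered by importance.
        return tuple(violations.count(p) for p in policy)

    def most_ethical_plan(plans, policy):
        # plans: plan name -> principles the agent believes that plan violates.
        # The lexicographically smallest severity vector violates the most
        # important principles the least.
        return min(plans, key=lambda name: severity(plans[name], policy))

    policy = ["do_not_harm_people", "respect_autonomy", "obey_traffic_rules", "protect_property"]
    plans = {
        "keep_flying": ["do_not_harm_people"],
        "land_on_road": ["obey_traffic_rules"],
        "land_in_field": ["protect_property"],
    }
    print(most_ethical_plan(plans, policy))  # -> land_in_field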
fragments, as opposed to being fragment ions of TMTH itself. In this preliminary study we used the shotgun ESI-MS and ESI-MS/MS method to evaluate the abundances of steroid metabolites in human urine. TOF-MS and MS/MS analysis of human urine samples revealed abundant peaks corresponding to protonated TMTH-derivatised glucuronides. The identifications were based on exact mass and fragmentation patterns. The major steroids were characterised as glucuronides by the neutral loss of 176 Da. By fragmenting the positively charged ions generated in the ESI source in a pseudo MS/MS/MS experiment and comparing the resulting spectra to those of the unconjugated standards, it was possible to identify the steroids present to be glucuronides of androsterone and/or etiocholanolone, hydroxylated androsterone and/or hydroxylated etiocholanolone, tetrahydrodeoxycortisol and/or tetrahydrocorticosterone, tetrahydrocortisone, and tetrahydrocortisol and/or cortolone. In conclusion, we present a simple, mild and fast derivatisation method for oxosteroid analysis by shotgun ESI-MS and MS/MS. We applied the method to thirty oxosteroids, whose ionization efficiencies increased from 14- up to 2755-fold when compared to their underivatised equivalents. TMTH tags were tested on urine samples, showing that direct analysis of steroid conjugates is possible.
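The glucuronide assignment described above rests on a simple spectral rule: a fragment ion sitting roughly 176 Da (the mass of the glucuronic acid residue lost) below its precursor. The Python sketch below shows a minimal version of that check; the peak list, the mass tolerance and the assumption of singly charged ions are illustrative choices rather than details taken from the study.

    # Illustrative sketch: flag MS/MS fragments consistent with a glucuronide
    # neutral loss (~176 Da) from a singly charged precursor. Peak values and
    # the tolerance are hypothetical examples, not data from the study.
    GLUCURONIDE_LOSS = 176.032  # Da, monoisotopic mass of the C6H8O6 residue

    def glucuronide_loss_fragments(precursor_mz, fragment_mzs, tol=0.02):
        """Return fragments lying ~176 Da below the precursor m/z."""
        return [f for f in fragment_mzs if abs((precursor_mz - f) - GLUCURONIDE_LOSS) <= tol]

    precursor = 756.40                      # hypothetical derivatised glucuronide ion
    fragments = [580.37, 482.30, 331.20]
    print(glucuronide_loss_fragments(precursor, fragments))  # -> [580.37]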
Here we report a new method for oxosteroid identification utilizing the "tandem mass tag hydrazine" (TMTH) carbonyl-reactive derivatisation reagent. TMTH is a reagent with a chargeable tertiary amino group attached through a linker to a carbonyl-reactive hydrazine group. Thirty oxosteroids were analysed after derivatisation with TMTH by electrospray ionization mass spectrometry (ESI-MS) and were found to give high ion currents compared to the underivatised molecules. ESI-tandem mass spectrometry (MS/MS) analysis of the derivatives yielded characteristic fragmentation patterns with specific mass reporter ions derived from the TMT group. A shotgun ESI-MS method incorporating TMTH derivatisation was applied to a urine sample. © 2014 The Authors. Published by Elsevier Inc.
The non-stationary characteristics of solar power render traditional point forecasting methods less useful due to large prediction errors. This results in increased uncertainty in grid operation, thereby negatively affecting reliability and increasing the cost of operation. This research paper proposes a unified architecture for multi-time-horizon solar forecasting for short- and long-term predictions using Recurrent Neural Networks. The paper describes an end-to-end pipeline to implement the architecture, along with methods to test and validate the performance of the prediction model. The results demonstrate that the proposed method based on the unified architecture is effective for multi-horizon solar forecasting and achieves a lower root-mean-squared prediction error than the previous best-performing methods, which use one model per time horizon. The proposed method enables multi-horizon forecasts with real-time inputs, which have a high potential for practical applications in the evolving smart grid.
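The unified architecture itself is only described at a high level above. As a rough sketch of the central idea (a single recurrent model emitting forecasts for several horizons at once instead of one model per horizon), the PyTorch-style Python snippet below maps a sequence of past observations to a vector of multi-horizon predictions; the layer sizes, the number of horizons and the choice of an LSTM cell are assumptions for illustration, not the paper's configuration.

    # Illustrative sketch: one recurrent network producing forecasts for several
    # time horizons simultaneously, rather than one model per horizon.
    # Sizes, horizons and the LSTM choice are assumptions.
    import torch
    import torch.nn as nn

    class MultiHorizonForecaster(nn.Module):
        def __init__(self, n_features: int, hidden: int = 64, horizons: int = 4):
            super().__init__()
            self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
            self.head = nn.Linear(hidden, horizons)  # one output unit per horizon

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, past time steps, input features)
            _, (h_n, _) = self.rnn(x)
            return self.head(h_n[-1])                # (batch, horizons)

    model = MultiHorizonForecaster(n_features=6)
    past = torch.randn(8, 24, 6)                     # 8 samples, 24 past steps, 6 features
    print(model(past).shape)                         # torch.Size([8, 4])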
This paper proposes a Unified Recurrent Neural Network Architecture for short-term multi-time-horizon solar forecasting and validates the forecast performance gains over the previously reported methods.
and O3. The results are better for NO and PM, but we still observe variability from sensor to sensor. It is clear that for most citizen applications data quality does not need to reach the same standards necessary for air quality management by authorities or for research. We have computed the match score for the 24 sensor nodes co-located at the Kirkeveien AQM station during the period between April and June 2015. The aim is to see to what extent the sensor platform is able to provide an indication of the air pollution level, i.e., whether the air pollution is low, medium, or high. This is similar to the information that authorities offer to citizens based on an Air Quality Index, which aggregates information from the reference monitoring stations. Table 5 shows the results of the match score analysis. NO and PM10 show very good results, with average match scores close to 0.8 and 0.9, respectively. For NO2, CO, O3 and PM2.5 the match score is below 0.5, indicating that the agreement between the sensor platform and the station is not good. Fig. 5 shows the daily variation of NO and PM10 concentrations. The sensor platform is capable of reproducing the time variation measured at the reference station. Thus, even if their data uncertainty is too high for use for legislative purposes, some sensors are still capable of offering interesting information to concerned citizens. We have evaluated the performance of commercial low-cost sensors measuring four gaseous pollutants and particulate matter. We performed the tests in the laboratory, against traceable gas standards and under controlled ambient conditions, and in the field, where 24 AQMesh nodes were co-located with reference instruments and tested under real-world conditions for 6 months. We found high correlations for all the gaseous pollutants in the laboratory when the sensors were tested under steady temperature and relative humidity conditions, while in the field the correlations were significantly lower. Our results clearly show that good performance in the laboratory is not indicative of good performance under real-world conditions. Particulate matter measurements were only evaluated in the field. Our results show better agreement at sites with low traffic than at high-traffic sites. This might be related to the conversion factors applied by the AQMesh platform manufacturer to the OPC data when converting the measured particle number concentrations to mass concentrations. Use of realistic conversion factors adapted to the location might help to improve this issue. In carrying out this evaluation, we identified a main technical challenge associated with current commercial low-cost sensors, regarding sensor robustness and measurement repeatability. Our results show that laboratory calibration is not able to correct for real-world conditions and that it is necessary to perform a field calibration for each sensor individually. Moreover, the calibration parameters might change over time depending on the meteorological conditions and the location, i.e., once the nodes are deployed it will be difficult to determine whether they are under- or over-estimating the pollutant concentrations. Thus, it is necessary to evaluate low-cost sensor platforms rigorously under diverse environmental conditions. The evaluation of the sensors' uncertainty revealed that for some pollutants and nodes, such as NO, PM10 and PM2.5, the expanded uncertainty meets the DQO criteria as defined in the Air Quality Directive. However, other pollutants, e.g., CO, NO2 and O3, show a high expanded uncertainty that exceeded the DQO for indicative methods. The high sensor-to-sensor variability of the performance measures and the major variations in the nodes' response to varying weather conditions or emission patterns make them currently unsuitable for air quality legislative compliance applications or applications that require high accuracy, precision and reliability, such as scientific evaluations of exposure estimates. However, recent studies show that the application of field calibrations based on machine learning techniques can reduce the expanded uncertainty. The outlook for commercial low-cost sensors is promising, and our results show that currently some sensors, i.e., for NO and PM10, are already capable of offering coarse information about air quality, indicating whether the air quality is good or moderate or the air is heavily polluted. This type of information could be suitable for applications that aim to raise awareness or engage the community by monitoring local air quality, as such applications do not require the same accuracy as scientific or regulatory monitoring.
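The match score reported above is, in essence, the fraction of time intervals in which the sensor node and the reference station fall into the same coarse air-quality category (low, medium or high). The Python sketch below shows a minimal version of that calculation; the category breakpoints and the example concentrations are hypothetical placeholders, not the thresholds or data used in the evaluation.

    # Illustrative sketch: match score as the fraction of intervals in which a
    # low-cost sensor and the reference station fall in the same coarse category.
    # Breakpoints and example values are hypothetical placeholders.
    def categorize(value, breakpoints=(25.0, 50.0)):
        """Map a concentration to a category index: 0 = low, 1 = medium, 2 = high."""
        return sum(value > b for b in breakpoints)

    def match_score(sensor, reference, breakpoints=(25.0, 50.0)):
        pairs = list(zip(sensor, reference))
        same = sum(categorize(s, breakpoints) == categorize(r, breakpoints) for s, r in pairs)
        return same / len(pairs)

    sensor_pm10 = [12.0, 30.5, 61.0, 18.0, 44.0]      # hypothetical ug/m3
    reference_pm10 = [10.0, 28.0, 55.0, 27.0, 40.0]
    print(f"match score: {match_score(sensor_pm10, reference_pm10):.2f}")  # 0.80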
The emergence of low-cost, user-friendly and very compact air pollution platforms enables observations at high spatial resolution in near-real time and provides new opportunities to simultaneously enhance existing monitoring systems and engage citizens in active environmental monitoring. This provides a whole new set of capabilities for the assessment of human exposure to air pollution. However, the data generated by these platforms are often of questionable quality. We have conducted an exhaustive evaluation of 24 identical units of a commercial low-cost sensor platform against CEN (European Committee for Standardization) reference analyzers, evaluating their measurement capability over time and over a range of environmental conditions. Our results show that their performance varies spatially and temporally, as it depends on the atmospheric composition and the meteorological conditions. Our results also show that the performance varies from unit to unit, which makes it necessary to examine the data quality of each node before its use. We have implemented and tested diverse metrics in order to assess whether the sensors can be employed for applications that require high accuracy (i.e., to meet the Data Quality Objectives defined in air quality legislation, or epidemiological studies) or lower accuracy (i.e., to represent the pollution level on a coarse scale, for purposes such as awareness raising). Data quality is a pertinent concern, especially in citizen science applications, where citizens are collecting and interpreting the data. In general, while low-cost platforms present too low an accuracy for regulatory or health purposes, they can provide relative and aggregated information about the observed air quality.
The use of alcohols in combination with gelling agents has been increasing in different industrial fields. In the food industry, hydrocolloids are often used with ethanol in both food and beverage products to provide functional properties, such as system homogeneity and stability over time, viscosity enhancement and improved soft tribology. Moreover, alcohols might be found in drying processes, such as the supercritical-fluid drying of both gel systems and entire food products. In the biomedical sector, alcohols may be used to dry and sterilise hydrogels and to prepare them for microscopy characterisation. Furthermore, in tissue engineering ethanol is widely used in combination with biopolymers during scaffold preparation and for the decellularisation process. Some hydrocolloids can still form a gel network if alcohols are added to the hot solution during the preparation stage. In this case, the term used is alcogel. However, the tolerable alcohol percentage depends on the specific gelling agent and can be limited. For instance, hydroxypropyl cellulose is soluble in aqueous solution with an ethanol concentration of around 50%, while a xanthan gum water solution can contain up to 60% ethanol. Furthermore, the ethanol addition can affect some gel properties, like transparency, and promote gelation at lower temperatures. In this context, a common gelling agent used in the food industry is LA gellan gum, a microbial polysaccharide produced by the microorganism Sphingomonas elodea in a fermentation process. The primary structure is a tetrasaccharide unit composed of glucuronic acid, rhamnose and glucose, with a molecular weight range between 100 and 200 kDa. In the adapted Fig. 1, both the high-acyl and deacylated gellan gum chains are shown. The two gellan gum types can be blended to provide synergistic properties to the system, especially in terms of mechanical properties; the ratio and total solid content depend on the specific application. Morris et al. reported that the gelation mechanism for gellan gum starts with the formation of double helices, after which the ion-induced association of the double helices leads to junction zone formation. In other words, the gellan network consists predominantly of flexible, disordered chains, with a few ordered junction zones between the helices. For these zones, stabilising forces such as hydrogen bonds, electrostatic forces, hydrophobic interactions, Van der Waals attractions and molecular entanglement are defined by the solvent conditions and polymer structure. It is reported that low-acyl gellan gum requires cations, acid, soluble solids or combinations of these additives. Since gellan gum is not soluble in ethanol, yet a product formulation may contain both ingredients, this work proposes a method to widen the possible alcohol content in gelling agent systems. LA gellan gum was chosen as a gel model system, representing other hydrocolloids with a similar gelation mechanism. The characterisation at the molecular scale was performed by mDSC and FTIR, as Sudhamani, Prasad, and Udaya Sankar reported for gellan gum gels without alcohols, whereas the mechanical properties were investigated by texture analysis. The alcohol–hydrocolloid interaction was assessed once the gel had already been produced. In particular, quiescent gels were evaluated to study the effect on the molecular/network structures. However, this study is also applicable to smaller aggregates, e.g. gel particle suspensions. Double-distilled water, obtained by a water still system, was heated up to 85 °C and then the low-acyl gellan gum powder was slowly added to avoid clump formation. The polymer concentration was 2% w/w in order to have a stable quiescent gel block that was not too dense. To ensure complete hydration, the solution was stirred for two hours at constant temperature. No salts were added to strengthen the gel, in order to avoid introducing a further variable into the system and potentially affecting the results on the solvent interaction with the gel network in both mechanical and chemical properties. The solutions were poured into sample moulds, which were covered with a plastic film to prevent evaporation. A cooling rate of around 0.5 °C/min down to room temperature was recorded. This specific cooling rate was kept constant during all the experiments to minimise changes in the gel structure, as Nitta, Yoshimura, and Nishinari suggested. After the gel setting, the moulds were stored at room temperature for 24 h. Afterwards, the obtained gel samples had dimensions of 13.5 mm in diameter and 10 mm in height. Similarly, 2% w/w gelatin and 2% w/w k-carrageenan gels were prepared to compare the solvent quality with LA gellan gum. Once the gels were formed, different alcohols were separately used to assess their effect on the gel properties. Ethanol, 1-propanol and 2-propanol (isopropanol) were used as pure solvents or diluted to different concentrations with distilled water to perform a gradual alcohol treatment. In this specific case, solutions at 25, 50 and 80 wt% were prepared. When a gradual treatment was applied, the gel samples were left, stepwise, in each alcoholic solution for 6 h. Finally, the treated gels were submerged in the pure solvent. The last step of the gradual treatment was 24 h long. On the other hand, if the treatment was not gradual, the samples were directly left for 24 h in the specific alcoholic solution or pure alcohol. Mechanical properties were evaluated by analysing the material texture. In particular, both the Young's modulus and the gel strength produced by a compression strain of 50% were assessed. The texture analyser was a TA.XT.plus, fitted with a 40 mm diameter cylindrical aluminium probe. In this way, the sample diameter was always kept at least twice as small as the diameter of the probe. After the application of a thin layer of silicone oil on the probe plates, a compression rate of 2 mm/s was set. All the measurements were carried out in triplicate for the statistical analysis. The gel strength value is
This work focuses on understanding the interaction of alcohols with gel systems during solvent exchange following gel formation. A method of widening the possible alcohol content in formulations is proposed, as most hydrocolloids have a low tolerance of high alcohol concentrations and in some cases gelation is completely prevented. Once the CPKelco LA (low-acyl or deacylated) gellan gum gel was produced, different alcoholic solvents (ethanol, 1-propanol, 2-propanol) were used to remove water from the material and replace it, investigating the effect on the gel network as a function of the alcohol chain length. Specifically, the interaction of the alcoholic solvents with both the polymer chains and the three-dimensional network was evaluated by characterising the physical and mechanical gel properties throughout the alcohol treatment.
ethanol, as could be seen when a gradient up to 100% isopropanol/1-propanol was used. Similarly, the measured fracture true strain was 40.4% ± 2.8% for the alcogel treated with a 1-propanol gradient up to 100 wt% and 39.5% ± 3.2% for isopropanol. Since the gel shrinkage for isopropanol and 1-propanol was comparable with that for gradual treatment with ethanol, the slight strength decrease might depend on a different polymer chain mobility and tribology as a function of the solvent molecule length, as Mills et al. suggested. Moreover, the different solvent viscosity might play a role in the gel mechanics. In agreement with the previous results, Fig. 10 compares the true stress and true strain as a function of the employed solvent after complete gradual solvent treatment. The mechanical property results for the untreated LA gellan gum were in agreement with Norton et al. It was noticed that at around 10% true strain the EtOH alcogel started to increase its resistance to compression considerably, since there was an increment in true stress. On the other hand, this stress increase was slightly shifted to higher strain values for both 1-PrOH and 2-PrOH alcogels. In Fig. 10 the curve for gellan/water is also reported: these gel samples were submerged in pure water for 24 h before the texture analysis. A slight decrease in true stress was measured in comparison with the untreated gel. Combining this observation with the slight volume expansion when more water was added during the treatment, it seemed that water tended to open the gel structure. More free water led to gel softening, as shown in Fig. 10, in a sort of "network dilution". It is noteworthy that the error bars collapse onto the experimental points due to the wide experimental range of true stress. The increase in true stress as well as in Young's modulus with the solvent concentration suggested an entangled and packed structure when solvents were used gradually, raising the final stiffness value. Nevertheless, the network was less ordered, as discussed in the mDSC section. The error bars in Fig. 11 become more evident with increasing solvent concentration. Although the effect of the solvents on the quiescent gel shape distortion was considerably less pronounced than for the non-gradually-treated alcogels, it could affect the texture results. In general, the gel strength increased as a function of the solvent concentration due to the molecular interactions between the alcohol and the polymer, as the FTIR analysis suggested. It seems that alcohols do not affect the M+ site available along the gellan gum chain, since this is in contrast to the HA gellan gum mechanical properties. The acyl substituents are well known to lead to a softer and more flexible gel. Specifically, the glycerate provides stabilisation by adding new hydrogen bonds, yet disrupts the binding site for cations by changing the orientation of the adjacent carboxyl group and consequently altering the junction zones. On the other hand, the acetate hinders helix aggregation. However, the acetyl groups do not modify the overall molecular network and the double helix structure, unlike the alcohols. As evidence, and to further validate the considerations on the molecular level for LA gellan gum, an mDSC evaluation of k-carrageenan and gelatin was performed. These hydrocolloids were investigated as additional models, since they are respectively similar and different to LA gellan gum in terms of gelation and molecular configuration. The collected mDSC results suggested that k-carrageenan behaved similarly to LA gellan gum, as shown in the thermographs, whereas gelatin preserved its thermal peaks. This trend was expected for k-carrageenan, considering that its gelation is equivalent. A further study might investigate the effect of ethanol on a gelatin-LA gellan gum gel mixture to assess which trend would be predominant. The addition of alcohol to gels was found to lead to alteration of the water network, which irreversibly affected the gel properties at both the molecular and macroscopic scales, depending on the solvent type and concentration. The reason for this behaviour is likely to be related to the interaction between the gel network and the solvent molecules. At the macroscopic level, the presence of alcohols led to an increase in compression strength and stiffness due to network fixation. Furthermore, depending on the alcohol molecule length, the mechanical properties changed slightly, probably due to a different polymer chain mobility. A gradual treatment allowed a more successful retention of both volume and shape with respect to the direct use of solutions with high alcohol content. Therefore, it is recommended when gelling agents are combined with alcohols. Finally, it seems that this study can be extended to other gelling agents, and similar results are likely if the gels are comparable to gellan gum, like k-carrageenan, or different results if the polymer and its gelation are dissimilar.
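For reference, the true stress and true strain quantities discussed above are commonly obtained from the measured force F and the imposed compression of a cylindrical sample of initial height H_0 and initial cross-section A_0, assuming uniform, volume-conserving uniaxial compression; this is the standard convention for gel texture analysis rather than a relation taken from the paper:

    \varepsilon_E = \frac{\Delta H}{H_0}, \qquad
    \varepsilon_T = -\ln\left(1 - \varepsilon_E\right), \qquad
    \sigma_T = \frac{F}{A(t)} = \frac{F}{A_0}\left(1 - \varepsilon_E\right), \qquad
    E \approx \left.\frac{d\sigma_T}{d\varepsilon_T}\right|_{\varepsilon_T \to 0}

where the area correction A(t) = A_0/(1 - \varepsilon_E) follows from assuming constant sample volume during compression, and the Young's modulus E is estimated from the initial slope of the true stress-true strain curve.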
This is the first time that a research paper has considered high-alcohol/gellan gum systems at both the molecular and macroscopic scales, proposing the link between them. From this study, the solvent effect on the gelling agent is evident, leading to structure shrinkage and distortion due to the high stress induced on the gellan gum network. As evidence, and to further validate this study on LA gellan gum, both k-carrageenan and gelatin alcogels were investigated, since their gelation mechanism and molecular configuration are respectively similar and different to LA gellan gum. It was found that k-carrageenan reproduced the LA gellan results, unlike gelatin.
combination of a significantly reduced chipping wear on the flank face and the minimal crater wear on the rake face obtained with the M-steel in this study indicates that a higher CBN content tool grade can be used with this steel.High CBN content PCBN grades are often considered tougher, yet more susceptible to wear by chemical dissolution than that of the tool grade used in this study.An option with the M-steel would therefore be to not increase the tool life but to further reduce the risk of chippings and edge fractures.These phenomena are stochastic to their nature and undesired from a robust production point-of-view.However, prior to a full-scale introduction of the Ca-treated steel two aspects must be dealt with.The modified metallurgy of Ca-treatment must be verified with respect to the fatigue strength in component tests and compared to available data of the reference steel and specification.There is a multi-source requirement of steel suppliers that can offer Ca-treated steels with the desired steel quality.Guaranteed deliveries from several suppliers and with repeatable properties is necessary to make use of the improved machinability of the M-steel.The role of non-metallic inclusions on the tool wear and PCBN cutting tool life in fine machining of carburising steel grades was investigated.A Ca-treated carburising steel grade was compared to a standard steel grade.Furthermore, tool life tests was conducted in order to study the active degradation mechanisms at the end of the tool life.Also, additional machining tests were made with an interruption prior to the tool life aiming at studies of the initial wear mechanisms.From the findings in this work, the following conclusions can be made:The hard part machinability of a standard carburising steel was improved with by 110% by the Ca-treatment.The improved machinability corresponds to a reduced tooling cost of 50% at the middle cone production of gearboxes at Scania.Therefore, to implement the M-steel on a wider range of components would be economically beneficial for the gearbox production at Scania.This can lead to a significantly reduced cost per produced component.The most valuable benefit of the M-steel in this study is the reduced expansion of the chipping on the flank face of the secondary cutting edge and the more controlled progressive flank wear.Consequently, the M-steel enables a more robust machining process than the R-steel.Thanks to its beneficial tool wear characteristics, a Ca-treated steel can be combined with a higher CBN content tool grade in hard part turning to further minimise the risk of stochastic chippings and edge fractures, thereby promoting the production robustness.It is believed that the Ca-enriched barrier of slag deposits protects the tool against a diffusion-induced wear at the rake face of the PCBN cutting tool.Thus, this would minimise the material transfer to the PCBN edge.In addition, without the lubricating slag layer, a higher friction between the tool edge and the chip flow is expected.This, in turn means a higher temperature at the tool-chip interface.Therefore, Fe-rich compounds from the chip flow may penetrate the tool surface and attack the CBN grains, which will result in an edge chipping.It is also believed that the extremely thin slag deposits form also on the edge line and on the flank face.These films reduce the affinity of the tool-chip contact and they impede transfer of workpiece material on to the cutting edge line.Thereby explaining the reduced chipping wear in the machining 
tests with the M-steel. The improved machinability of the M-steel is linked to the Ca-enriched barrier of (Mn,Ca)S and (Ca,Al)(O,S) non-metallic inclusions that is built up on the PCBN rake face crater during hard part turning. The presence of (Mn,Ca)S and (Ca,Al)(O,S) phases at the rake face crater of the PCBN edge generates a characteristic wear pattern of ridges. Without the Ca-treatment, no protective non-metallic inclusions are formed, which may lead to a faster degradation of the PCBN cutting tool as a consequence of diffusion-induced chemical wear.
This study describes the influence of the steel characteristics of Ca-treated carburising steel grades during hard part turning of synchronising rings in gearbox production. The main focus was on the chemical composition of the non-metallic inclusions in the evaluated workpieces and their effect on the PCBN tool wear. In addition, a Ca-treated carburising steel grade was compared to a standard steel grade. Machining tests were performed at the transmission machining site at Scania in order to evaluate the PCBN cutting tool life as defined by the generated surface roughness during actual production. The Ca-treated steel showed a tool life more than double that of the standard steel grade. The superior machinability was linked to the formation of a Ca-enriched slag barrier composed of (Mn,Ca)S and (Ca,Al)(O,S). It is believed that the stability of the protective deposits is essential to minimise diffusion-induced chemical wear of the PCBN tool.
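A back-of-the-envelope sketch of how the reported tool-life gain translates into tooling cost per component is given below; the insert price and baseline parts-per-tool figures are hypothetical placeholders, and only the 110% tool-life improvement and the resulting ~50% cost reduction come from the text.

```python
# Illustrative only: the insert price and baseline parts-per-tool-life are
# hypothetical; the 110% tool-life gain and ~50% tooling-cost reduction are
# the figures reported in the text.

def tooling_cost_per_part(tool_price: float, parts_per_tool_life: float) -> float:
    """Tooling cost attributed to each machined component."""
    return tool_price / parts_per_tool_life

tool_price = 40.0                    # hypothetical price of one PCBN insert
parts_r_steel = 100.0                # hypothetical parts per tool life, reference R-steel
parts_m_steel = parts_r_steel * 2.1  # 110% longer tool life with the Ca-treated M-steel

cost_r = tooling_cost_per_part(tool_price, parts_r_steel)
cost_m = tooling_cost_per_part(tool_price, parts_m_steel)
print(f"R-steel: {cost_r:.2f} per part, M-steel: {cost_m:.2f} per part")
print(f"Tooling-cost reduction: {100 * (1 - cost_m / cost_r):.0f}%")  # approx. 52%
```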
as only one sample from site M met the reliability criteria, this flow was not included in our results. No samples from site T, T and S2 met all the reliability criteria. The selected samples have shown corresponding distinct directional components, positive pTRM checks, no significant curvature of the Arai plots and little or no zig-zagging of the Arai or Zijderveld plots, supporting the conclusion that the samples have not been influenced by lab-induced alteration or multi-domain behavior. In this study, the success rate for paleointensity determination is 56%. The accepted microwave paleointensity results from this study are combined with some of the thermal Thellier-type and Wilson results from previously published studies. For the Maymecha and Yubileinaya sites, the published thermal Thellier-type results are consistent with the new microwave results. In contrast, there is a large degree of within-site dispersion when all of the results are combined for the Truba and Sytikanskaya sites, with site standard deviations of up to 55% of the site mean. Close analysis of the two accepted Truba flows reveals that the thermal paleointensity estimates are approximately double the value of the microwave results. One possible explanation for this discrepancy is that multidomain behavior was enhanced in one set of experiments over the other. In particular, we note that in the thermal Thellier experiments performed by Shcherbakova et al., no checks for MD behavior were performed and fraction values from three out of the four estimates were less than 0.5. The use of the IZZI protocol in the microwave experiments and the resulting increased quality values leads us to favour the new results over the old ones and thereby exclude the significantly higher thermal estimates from these two site means. For the Sytikanskaya kimberlite pipe, some of the thermal results are consistent with the microwave results while others are approximately twice as high. Blanco et al. divided the accepted thermal paleointensity results into two categories, 'A' and 'B': the 'A' category results met all the reliability criteria defined by Selkin and Tauxe, whereas results fell into the 'B' category if they failed one of these criteria but otherwise fell within the following limits: 10% < meanDEV < 25%, 20% < pTRM tail check < 25% and 30% < f < 60%. Since the 'B' category results are more prone to biasing from laboratory-induced alteration and/or MD effects, we exclude them from the site means for the Sytikanskaya and Yubileinaya kimberlite pipes. The mean geomagnetic field intensity obtained from the four northern extrusive sites is 13.4 ± 12.7 μT. This is slightly lower than the Sytikanskaya mean and substantially lower than the Yubileinaya site mean. Furthermore, a nonparametric Mann-Whitney U test, based on the individual specimen estimates, rejects the null hypothesis of equality of medians between any two of the three regions at the 99% significance level. A similar regional discrepancy has been pointed out earlier by Blanco et al.
and was suggested to be a consequence of bias from multidomain behavior in northern specimens. The present study does not support this explanation, as the discrepancy remains even within a result set that showed little evidence of zigzagging and generally lower curvature parameters. Another possible cause that we rule out is crustal magnetic anomalies, as these are weak in the region considered. Our preferred explanation is simply that the regional discrepancy reflects slightly different time intervals within the 0.1–2 Myr emplacement event. Pavlov et al. estimate that the formation of the Norilsk and Maymecha-Kotuy sections "did not exceed a time interval on the order of 10,000 years" based on secular variation analysis of the directions from the Truba section and the Norilsk section. Therefore, in the context of rates of secular variation such as those seen in the last 2 Myr, it is perfectly feasible that the units from the northern, Sytikanskaya and Yubileinaya sites were emplaced during time periods perhaps a few tens or hundreds of kyr apart, when the field was in different intensity regimes. It is also worth noting that Pavlov et al. suggest that thick parts of the sequence towards the base of the Norilsk section, from the upper part of the Ivakinskii Formation to the lower part of the Nadezhdinsky Formation, record a transitional and/or excursional field. The published paleointensity results from these formations seem to be in agreement with this analysis, as the VDM results are consistently lower than those from the same section in a distinct polarity zone. None of the samples from this transitional part of the section have been used for microwave analysis and we exclude these published results from our composite analysis outlined in the next section. In this study, the overall mean paleointensity calculated using all seven site means is 19.5 ± 13.0 μT, which corresponds to a mean virtual dipole moment of 3.2 ± 1.8 × 10²² Am². Our results therefore support the conclusion that the average magnetic field intensity during these short intervals was significantly lower than the present geomagnetic field intensity. There are currently five published paleointensity studies for the Permo-Triassic Siberian Traps listed in the PINT database that have not been superseded by another publication. All of the sites listed in these publications, along with the sites in this study and that of Shcherbakova et al. (2015), have been collated and assessed. Each site mean VDM value was assigned a QPI value based on the number of criteria that the estimate passed. Supplementary Table provides the directions, intensities, and the complete breakdown of the estimation of QPI values for all the published studies along with this one. The sites cover
The samples display corresponding distinct directional components, positive pTRM checks and little or no zig-zagging of the Arai or Zijderveld plot, providing evidence to support that the samples are not influenced by lab-induced alteration or multi-domain behavior.The accepted microwave paleointensity results from this study are combined with thermal Thellier-type results from previously published studies to obtain overall estimates for different regions of the Siberian Traps.It demonstrates that the overall mean paleointensity of the Siberian Traps is 19.5 ± 13.0 μT which corresponds to a mean virtual dipole moment of 3.2 ± 1.8 × 1022 Am2.
three regions: the two northern regions, which have distinct but correlatable stratigraphy, and the southeastern region, which contains the sills from the areas around the kimberlite pipes Sytikanskaya, Yubileinaya and Aikhal. To test the robustness of the geomagnetic means from these regions, sites were filtered out based on their QPI values to see how the site mean changed as less reliable sites were removed. For the northern localities, both of the regions have similar median paleointensities and show minimal variation with QPI filtering, as shown in Fig. 6. For up to QPI ⩾ 5, the Norilsk section has a much greater range due to the larger number of sites associated with this locality. There is much greater variation in the median with QPI filtering for the southeastern localities because there are very few sites, but the geomagnetic mean always remains significantly higher than that of the northern localities. The northern sites represent ∼ 90% of the total sites studied, indicating that the overall median is likely to be heavily biased by the potentially short-lived and extreme secular variation represented by the northern sites. Nevertheless, we point out that the simple average of the median northern and eastern regional results would still yield a dipole moment of only approximately half the present-day value. Dipole moments based on different rock types for the Permian to Cretaceous are shown in Fig. 7 to allow investigation of the extent of the MDL behavior. Here, the paleointensity data of 55 previously published studies, archived in the 2015 version of the PINT database, and this study are analyzed. As geomagnetic field intensities vary across geographic locations, the VDM or VADM record is used for this analysis. It is obvious that there is a degree of variability of dipole moment between different materials, such as volcanic rocks, submarine basaltic glasses, plutonic rocks, etc. Geomagnetic field strength recorded in submarine basaltic glasses, plutonic rocks and single silicate crystals is high relative to volcanic rocks and baked sedimentary rocks. The mean VDM/VADM across all rock types for the Permian is higher than the present-day value, whereas it is lower for the other three time intervals – PTB, Jurassic and Cretaceous. The mean VDM/VADM has changed during the last 300 Ma, indicating a period of low dipole moment during the Mesozoic, at least for the Jurassic, which, notwithstanding a ∼50 Myr gap in the record during the Triassic, might now extend to the PTB.
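For readers unfamiliar with the conversion from a site-mean paleointensity to a virtual dipole moment, the sketch below applies the standard dipole relations; the inclination value used in the example is a hypothetical placeholder, while the 19.5 μT mean and the ∼3 × 10²² Am² moment are the figures quoted above.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7      # vacuum permeability (T·m/A)
R_EARTH = 6.371e6           # Earth radius (m)

def vdm_from_paleointensity(b_anc_uT: float, inclination_deg: float) -> float:
    """Virtual dipole moment (A·m²) from a paleointensity estimate and the
    characteristic inclination, using the dipole relation tan(I) = 2·cot(theta_m)."""
    b_anc = b_anc_uT * 1e-6                       # convert µT to T
    inc = np.radians(inclination_deg)
    theta_m = np.arctan2(2.0, np.tan(inc))        # magnetic colatitude
    return (4 * np.pi * R_EARTH**3 / MU0) * b_anc / np.sqrt(1 + 3 * np.cos(theta_m)**2)

# Example: the overall mean of 19.5 µT quoted above, with a hypothetical steep
# inclination of 75° as might be expected for high-latitude Siberian sites,
# gives roughly 3e22 A·m², consistent with the reported mean VDM.
print(f"{vdm_from_paleointensity(19.5, 75.0):.2e} A·m²")
```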
The quantity of igneous material comprising the Siberian Traps provides a uniquely excellent opportunity to constrain Earth's paleomagnetic field intensity at the Permo-Triassic boundary.There remains however, a contradiction about the strength of the magnetic field that is exacerbated by the limited number of measurement data.To clarify the geomagnetic field behavior during this time period, for the first time, a microwave paleointensity study has been carried out on the Permo-Triassic flood basalts in order to complement existing datasets obtained using conventional thermal techniques.Samples, which have been dated at ∼250 Ma, of the Permo-Triassic trap basalts from the northern extrusive (Maymecha-Kotuy region) and the southeastern intrusive (areas of the Sytikanskaya and Yubileinaya kimberlite pipes) localities on the Siberian platform are investigated.These units have already demonstrated reliable paleomagnetic directions consistent with the retention of a primary remanence.Furthermore, Scanning Electron Microscope analysis confirms the presence of iron oxides likely of primary origin.The mean geomagnetic field intensity obtained from the samples of the northern part is 13.4 ± 12.7 μT (Maymecha-Kotuy region), whereas from the southeastern part is 17.3 ± 16.5 μT (Sytikanskaya kimberlite pipe) and 48.5 ± 7.3 μT (Yubileinaya kimberlite pipe), suggesting that the regional discrepancy is probably due to the insufficient sampling of geomagnetic secular variation, and thus, multiple localities need to be considered to obtain an accurate paleomagnetic dipole moment for this time period.Results indicate that the average magnetic field intensity during Permo-Triassic boundary is significantly lower (by approximately 50%) than the present geomagnetic field intensity, and thus, it implies that the Mesozoic dipole low might extend 50 Myr further back in time than previously recognized.
level of the discrete quantitative variable 'Age' has been analysed by comparing means through Snedecor's F distribution without assuming equality of variances. For the rest of the nominal and ordinal variables, as purely quantitative analyses could not be carried out, a Chi-squared test was conducted on contingency tables to test whether or not a relationship exists between variables, and Somers' D was used to reflect the strength and direction of the association between variables. Finally, decisions regarding significance were made with confidence levels of 99% and 95% in the collected results. Table 1 presents the frequency distribution obtained through the studied sample in the nominal/dichotomous variables and in the ordinal variables assessed during the study. In a first simple analysis of the results, Table 1 shows that the majority options are the following: to be a man; to have at least university studies; not to have a direct relationship with education; to have previously tried advanced virtual reality viewers; to be users of video game console viewers; to have acquired a viewer during the last year; to use virtual reality at least once a week; not to use virtual reality as a learning tool; not to have interest in its use as a learning tool; to have interest in learning through virtual reality in the future; and not to have optimism regarding its future pedagogical possibilities. Concerning the discrete quantitative variable 'age', the global mean obtained was M = 36.91 with a standard deviation of SD = 6.39. In relation to its combination with the 'gender' variable, it can be observed in Fig. 1 below that the presence of men is significantly higher than that of women. However, the age difference according to gender is not statistically significant, as can be seen in Table 2. Only the 'Optimism regarding the future pedagogical possibilities of virtual reality' variable presents a different performance in combination with the 'age' variable. Those who feel more optimistic are significantly younger than those who do not feel that way. The significance of the observed differences between the various nominal and ordinal variables of this study may be analysed from Table 3. This table shows the values obtained from the contingency tables using the Chi-squared test, which shows the significance of the correlations between two variables, and Somers' D, which shows the significance and direction of the correlations observed. It should be taken into account that, in order to establish a direction in significance, some nominal variables were given different scores, thus becoming ordinal variables. Therefore, the relationship between the man/woman category and the rest of the assessed variables can be easily observed. Otherwise it would not be possible to establish which gender has a direct or inverse association with the rest of the measured variables. The same conversion was carried out for every dichotomous variable, giving the higher score to the category associated with the answer 'YES' in each one of those variables. From the Chi-squared test and Somers' D conducted on the contingency table, it can be concluded that some combinations of variables show statistically significant relationships. For instance, the 'gender' variable shows that women have a significantly higher educational level, a higher number of women are related to formal education and a higher number of women use virtual reality as a learning tool. On the other hand, men have more frequently tried the advanced viewers and
used virtual reality viewers. The current direct relationship with formal education is also significantly and directly associated with the educational level, and significantly and inversely associated with having tried an advanced virtual reality viewer, with the level of the private virtual reality viewer and with the frequency of use of virtual reality. In addition to the aforementioned significant relationships, the frequency of use is significantly and directly associated with having tried an advanced virtual reality viewer and with the level of the private virtual reality viewer. The same variable is significantly and inversely associated with the educational level. Another strong and direct significant relationship is found between having tried an advanced virtual reality viewer and the level of the private viewer. Regarding the variables directly related to the use of and interest in virtual reality as a learning tool, it can be generally observed that strong and positive significant correlations exist. The affirmative answers of having interest in the use of virtual reality as a learning tool and in learning through virtual reality in formal education in the future are significantly and directly associated with each other. They are also associated with the two other variables considered. Results in the previous contingency table also show a statistically significant and nonlinear or second-degree association, which implies combinations of variables with significant values through the Chi-squared test but not through Somers' D. This situation exists when some of the categories of a variable produce a partial influence over another variable, as with the 'Number of years using virtual reality' variable; its performance is different depending on the associated variable because, for the categories in which users have only recently started using virtual reality, their interest in this technology is still at the stage of discovering its possibilities, so their frequency of use is unusually high and their interests are very varied as they try out all the possibilities of the new technology. Regarding the 'Level of the private viewer', the category 'Video game console' has a different performance in relation to the 'gender' variable and to 'Current use of virtual reality as a learning tool'. The fact that among the users
The relationship between twelve variables was analysed comparing means through the Snedecor's F distribution and the contingency tables through the Chi-squared test and Somers’ D. Among other issues, it was concluded that the virtual reality user profile at present corresponds to a person older than 36, mainly men, with higher education and having acquired their viewer no longer than one year ago.
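As an illustration of the contingency-table analysis described above, the following sketch runs a Chi-squared test and computes Somers' D on a hypothetical 2 × 2 cross-tabulation; the counts are invented for the example and are not the study's data.

```python
# Hypothetical 2x2 cross-tabulation of gender against "uses VR as a learning
# tool" (counts invented for illustration, not taken from the study).
import numpy as np
from scipy.stats import chi2_contingency, somersd

table = np.array([[112, 23],   # men:   no / yes
                  [18, 14]])   # women: no / yes

chi2, p, dof, _ = chi2_contingency(table)
print(f"Chi-squared = {chi2:.2f}, dof = {dof}, p = {p:.4f}")

# Somers' D on the same table gives the strength and direction of the
# association once both variables are 0/1 coded (treated as ordered).
res = somersd(table)
print(f"Somers' D = {res.statistic:.3f}, p = {res.pvalue:.4f}")
```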
and necessary. Without them, these first green shoots could wither at short notice and this relationship could cool and be wasted in the future. Another problem to be faced in the future use of virtual reality as a learning tool is its accessibility to groups of students. Currently, the teachers participating in this study prefer cheap equipment and sporadic use. If prices fall, teachers are likely to end up using better equipment and increasing the time of use, opening up greater possibilities for virtual reality as a learning tool. Discussions about curricular quality and the adaptation of schools to the reality of the 21st century are an urgent challenge with significant repercussions for the international political-pedagogical debate. Numerous organizations are urging the need to reformulate pedagogical culture in order to build educational institutions where ICT become genuine pedagogical tools and where, more generally, students can take advantage of them for their full development, instead of producing mere mechanical responses to problems disconnected from reality. At this point, virtual reality will play an important role, since it is the technology destined to bring the educational sphere closer to reality. It can offer experiences so close to reality that they generate emotions and sensations very similar to those generated by reality itself, narrowing the distance between educational simulation and reality until they almost touch. The correct use of virtual reality technology in the educational world is now part of these challenges. Software and all kinds of tools and applications that take students inside the content are spreading, allowing the reconstruction and direct experience of any imaginable situation inside the classroom. Virtual reality is the technology best placed to bring the student closer to learning situations that approach reality, without the need to assume the risks involved. The possibilities of offering a higher-quality education are multiplied exponentially, as is the motivation of students towards learning. Thus, of all the new technologies applicable to education, there are good reasons to consider the development of virtual reality applications for education a priority. Nevertheless, because their pedagogical use is still emerging, it is not currently possible to reflect soundly on every educational possibility that their correct use in teaching would offer. It will be necessary, at least, to wait until this technology settles down before evaluating the limits of its potential reach. This study has described the social and demographic profile of the early adopters of virtual reality in Spain and has assessed their interest in this technology. Together with other studies, a body of rational evidence and considerations is emerging, step by step, that gives us a clearer and broader idea of the real possibilities of the virtual reality now arriving in classrooms. To explore contexts, experience sensations, travel through time and live experiences in the classroom that were unthinkable a few years ago opens a new world of possibilities that is changing classrooms, educational institutions, pedagogical stereotypes and, ultimately, the educational world as we have known it until now. Roberto Sánchez-Cabrero: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the
data; Contributed reagents, materials, analysis tools or data; Wrote the paper.Oscar Costa-Román, Francisco Javier Pericacho-Gómez, Miguel Ángel Novillo-López: Performed the experiments; Wrote the paper.Amaya Arigita-García, Amelia Barrientos-Fernández: Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data.This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.The authors declare no conflict of interest.No additional information is available for this paper.
This study describes the social and demographic profile of the first generation of users of marketed virtual reality (VR) viewers in Spain and, subsequently, it assesses the interest in its use as a learning tool.Concerning the interests of virtual reality users as a learning tool, only a few of them currently use virtual reality for this aim, but they mainly show an interest in using the virtual reality as a learning method and they feel optimism regarding the future use of this technology as a learning tool.Finally, it can be stated that current use as a learning tool among teachers and students is occasional and preferably via smartphones.
In this paper, we present an approach to learn recomposable motor primitives across large-scale and diverse manipulation demonstrations.Current approaches to decomposing demonstrations into primitives often assume manually defined primitives and bypass the difficulty of discovering these primitives.On the other hand, approaches in primitive discovery put restrictive assumptions on the complexity of a primitive, which limit applicability to narrow tasks.Our approach attempts to circumvent these challenges by jointly learning both the underlying motor primitives and recomposing these primitives to form the original demonstration.Through constraints on both the parsimony of primitive decomposition and the simplicity of a given primitive, we are able to learn a diverse set of motor primitives, as well as a coherent latent representation for these primitives.We demonstrate, both qualitatively and quantitatively, that our learned primitives capture semantically meaningful aspects of a demonstration.This allows us to compose these primitives in a hierarchical reinforcement learning setup to efficiently solve robotic manipulation tasks like reaching and pushing.
We learn a space of motor primitives from unannotated robot demonstrations, and show these primitives are semantically meaningful and can be composed for new robot tasks.
s and over 1000 ng/mL samples). Based on these results, we decided to mix samples and the ISD pretreatment reagent for 180 s. The within-assay and day-to-day assay precision for 3 different whole blood controls were tested. Day-to-day assay precision was investigated by performing the assay on 10 days over a two-week period. Duplicate assays were performed each time, and the mean values are shown in Table 1 and Table 2. The within-assay CV ranged from 1.8% to 3.6% at CsA concentrations between 94 and 1238 ng/mL and from 2.1% to 3.9% at TAC concentrations from 2.1 to 17.8 ng/mL, whereas the day-to-day CV ranged from 3.0% to 4.1% at CsA concentrations between 92 and 1240 ng/mL and from 2.8% to 3.9% at TAC concentrations from 2.0 to 17.5 ng/mL. CsA and TAC linearities were examined using whole blood samples supplemented with Sandimmun® or Prograf® (Astellas Pharma, Tokyo, Japan). We measured 10-step dilutions of 2000 ng/mL and 40 ng/mL whole blood pooled samples with drug-free whole blood as the diluent. Each assay was performed in duplicate. The dilution curves were linear up to a CsA concentration of at least 2000 ng/mL and a TAC concentration of at least 40 ng/mL. We examined the calibration stability by measuring 3 different whole blood controls over 21 days. Each assay was performed in duplicate, and mean values were within the variation range of the day-to-day assay precision. CsA and TAC calibrations were both found to remain stable for 21 days. We measured serial dilutions of 85 ng/mL and 2.7 ng/mL whole blood pooled samples with drug-free whole blood as the diluent for five days running. Each assay was performed in triplicate and the LOQ was defined as the concentration at which the CV was approximately 10%. The lower LOQs obtained were 16 ng/mL (CsA) and 0.95 ng/mL (TAC). There was no significant interference (recovery within ±10%) caused by the addition of 211 mg/L conjugated bilirubin, 211 mg/L unconjugated bilirubin, 2800 IU/L rheumatoid factor or 1490 formazine turbidity units of chyle material to control whole blood pooled samples. The effects of the hematocrit on this assay for CsA and TAC were also evaluated by using whole blood samples from healthy volunteers. The hematocrit was assayed using the fully automated hematological analyzer Sysmex XN-1000. The hematocrit of the blood samples was adjusted to 30%, 40%, 50%, 60%, and 70%. After each blood sample was centrifuged, the plasma was removed. Then, by adding the same amount of Sandimmun® or Prograf® as the removed plasma, we examined series with different hematocrit values at constant concentrations of CsA and TAC. Variations in the hematocrit up to 70% had only a minor, insignificant influence. We examined the relationship between CsA values obtained by the Elecsys® Cyclosporine assay and data obtained with ACMIA CsA by a standardized major axis regression analysis. We also examined the relationship between TAC values obtained by the Elecsys® Tacrolimus assay and data obtained with the CLIA TAC assay in the same manner. Furthermore, we compared differences between the Elecsys® Cyclosporine assay and data obtained with ACMIA CsA, and between the Elecsys® Tacrolimus assay and data obtained with the CLIA TAC assay, using the Bland-Altman technique. Each regression line is shown in Figs.
5A and 6A. The correlation of the CsA assay between ACMIA and ECLIA was r = 0.995, y = 0.924x − 1.175, n = 200, while that of the TAC assay between CLIA and ECLIA was r = 0.994, y = 1.080x − 0.197, n = 200. When the extent of the agreement between the methods was assessed using the Bland-Altman technique, the mean difference was −17.51 for the Elecsys® Cyclosporine assay versus ACMIA, whereas it was 0.51 for the Elecsys® Tacrolimus assay versus CLIA. However, the Elecsys® Cyclosporine assay showed lower concentrations than ACMIA when samples containing more than 500 ng/mL CsA were examined. In the present study, we evaluated the analytical performance of the Elecsys® Cyclosporine and Elecsys® Tacrolimus assays. Since CsA and TAC are largely distributed in red blood cells and bound to proteins, a one-step manual pretreatment is performed to release the analytes from the proteins. This step is very important for accurate measurements of CsA and TAC concentrations. Regarding the mixing time of samples and the pretreatment reagent, this was only described as "vortex" in the package leaflet. We compared mixing times of 10, 60, 180, and 300 s. We demonstrated that a pretreatment mixing time of at least 180 s is necessary in order to obtain stable results. The Elecsys® Tacrolimus concentrations were elevated at higher hematocrit values. A previous study indicated that negative correlations between the hematocrit value and tacrolimus concentration were caused by Micro Particle Enzyme Immunoassay optics. However, our results were completely the opposite. TAC is distributed in red blood cells more than CsA; therefore, high hematocrit values have no effect on measurements of CsA concentrations, but influence those of TAC concentrations. Concerning the influence of the hematocrit, if TAC is completely extracted from samples, its concentration at a high hematocrit may be greater than that at a low hematocrit. Since we did not examine the influence of the hematocrit using completely hemolysed samples in the present study, the cause of this remains unclear, but it was not considered to be a problem because this tendency is within a reproducible error range. The Elecsys® Tacrolimus assay showed good precision with a reasonable LOQ, good linearity and no interference except for the hematocrit effect, and its correlation with the CLIA method was also good. The Elecsys® Cyclosporine assay showed good precision with a reasonable LOQ, good linearity, and no interference. However, the Elecsys® Cyclosporine assay showed lower concentrations than the ACMIA method when samples containing more than 500 ng/mL CsA were examined. This result may be attributed to the measuring mode. The linearity of the ACMIA method extends only to 500 ng/mL; we measured
Results Within-assay coefficients of variation were 1.8−3.6% (CsA: 94−1238 ng/mL) and 2.9−3.9% (TAC: 2.1−17.8 ng/mL), whereas day-to-day coefficients of variation ranged between 3.0−4.1% (CsA) and 2.8−3.9% (TAC).A method comparison using a standardized major axis regression analysis of ACMIA and ECLIA was r=0.995, y=0.924x −1.175, n=200 (CsA), while that of CLIA and ECLIA was r=0.994, y=1.080x −0.197, n=200 (TAC).
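The method comparison reported above rests on two standard computations, a standardized (reduced) major axis regression and a Bland-Altman difference analysis; the sketch below illustrates both on simulated paired measurements, so the numerical output is illustrative only and does not reproduce the published figures.

```python
# Sketch of the method comparison: standardized (reduced) major axis
# regression plus a Bland-Altman mean difference. The arrays are simulated
# placeholders; in the study each held 200 paired patient results.
import numpy as np

rng = np.random.default_rng(0)
acmia = rng.uniform(50, 1500, size=200)                     # hypothetical CsA by ACMIA (ng/mL)
eclia = 0.92 * acmia - 1.2 + rng.normal(0, 20, size=200)    # hypothetical CsA by ECLIA (ng/mL)

r = np.corrcoef(acmia, eclia)[0, 1]
slope = np.sign(r) * np.std(eclia, ddof=1) / np.std(acmia, ddof=1)   # SMA slope
intercept = np.mean(eclia) - slope * np.mean(acmia)
print(f"r = {r:.3f}, y = {slope:.3f}x {intercept:+.3f}")

# Bland-Altman: mean difference (bias) and 95% limits of agreement
diff = eclia - acmia
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"mean difference = {bias:.2f}, limits of agreement = [{bias - loa:.2f}, {bias + loa:.2f}]")
```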
over 500 ng/mL samples using the high-concentration measurement mode "CSAE", which is the reagent for measuring high concentrations of CsA. If measurements are performed using a reagent immediately after its dissolution, the results may fluctuate considerably. Thus, our results may be explained by the CSAE mode, although the exact cause remains unclear. Between the ACMIA and ECLIA methods, a slight difference in the measured results occurs due to differences in the reactivity of the antibodies contained in the reagents. As a result, a significant difference may be observed in the high concentration range. We consider that further verification of the difference in reactivity of the antibodies in each method is necessary. For conversion between TAC and CsA, the simultaneous measurement of TAC and CsA concentrations is necessary in order to estimate suitable doses and target trough levels. The Elecsys® Cyclosporine and Tacrolimus assays can measure the blood concentrations of tacrolimus and cyclosporine simultaneously with a single pretreatment, and the measurement time is shorter than with the current ACMIA and CLIA methods. Thus, the Elecsys® Cyclosporine and Elecsys® Tacrolimus assays may be suitable for routine therapeutic drug monitoring. Although care should be taken when measuring patient samples with high CsA concentrations, CsA and TAC concentrations may be simultaneously measured using a single pretreatment. Furthermore, the measuring time of the Elecsys® Tacrolimus assay is shorter than that of the CLIA method. The results of the present study indicate that the Elecsys® Cyclosporine and Tacrolimus assays correlate well with established methods, that the LOQ and linearity are acceptable for use in the patient population to be tested, and that the precision is clinically acceptable. The authors wish to confirm that there are no known conflicts of interest associated with this publication. There has been no significant financial support for this work that could have influenced its outcome.
Background Cyclosporine (CsA) and tacrolimus (TAC) are immunosuppressant drugs that are often used to treat autoimmune diseases and as transplantation therapy; therefore, their concentrations need to be monitored carefully.We herein evaluated the analytical performance of the Elecsys® Cyclosporine and Elecsys® Tacrolimus assay kits, which have been newly developed to measure CsA and TAC concentrations in the whole blood.Methods We used residual whole blood samples from autoimmune disease and transplantation patients who were being treated with CsA or TAC.CsA concentrations were measured using an affinity chrome-mediated immunoassay (ACMIA) and an electrochemiluminescence immunoassay (ECLIA).TAC concentrations were measured using a chemiluminescence immunoassay (CLIA) and ECLIA.We investigated assay precision, linearity, lower limit of quantitation (LOQ), stability of calibration, influence of interference substances and the hematocrit, correlation of ACMIA with ECLIA, and correlation of CLIA with ECLIA.The limits of quantitation were defined as the concentration at which the CV was approximately 10%.Each lower LOQ obtained was 16 ng/mL (CsA), and 0.95 ng/mL (TAC).CsA and TAC calibrations were stable for at least 21 days.Neither the presence of conjugated bilirubin, unconjugated bilirubin, chyle, and rheumatoid factor nor the hematocrit affected these assays.Conclusions The analytical performances of the Elecsys® Cyclosporine and Elecsys®Tacrolimus assays were acceptable.Furthermore, CyA and TAC concentrations may be simultaneously measured using a single pretreatment which is of benefit if patients have to undertake conversion between these two drugs.Additionally, it benefits the workflow in the clinical laboratory.Thus, the Elecsys® Cyclosporine and Elecsys® Tacrolimus assays may be suitable for routine therapeutic drug monitoring.
the thoracic target with the size of the error being more prominent in the AEP and PEP.In the pelvis, all three LBP subgroups overestimated the target, with a pelvis tilted anteriorly as a result compared to no-LBP.The baseline classifier was created from the repositioning sense data of the no-LBP individuals.The Cardiff DST method classified LBP from no-LBP with 96.61% accuracy, with 7 out of 87 LBP cases classified in non-dominant regions indicating a higher level of uncertainty.None of the LBP cases were misclassified as dominant healthy.When classifying LBP subgroups from no-LBP, the Cardiff DST method classified FP with an accuracy of 93.83%, AEP with an accuracy of 98.15% and PEP with an accuracy of 97.62%.5 out of 45 of FP, 1 out of 24 AEP and 1 out of 14 PEP cases were classified as uncertain with no cases mis-classified within the dominant healthy region.Discrimination between the LBP subgroups was more variable.FP and PEP were discriminated with high accuracy of 98.44%, where 6 out of 49 FP and 6 out of 24 PEP individuals were classified in non-dominant regions and none were misclassified in dominant regions.FP and AEP were classified with 90.41% accuracy with 9 out of 49 of FP, and 6 out of 24 AEP subjects classified in non-dominant regions.One FP case was misclassified as dominant AEP.Lower level of discrimination accuracy of 70.27% was detected between AEP and PEP subgroups with 2 out of 14 in PEP misclassified as AEP and 4 out of 24 AEP misclassified as PEP.All measured variables were ranked on the basis of the discrimination accuracy percentage.When discerning LBP from no-LBP, lumbar AE during sitting and standing were the top two discriminators.When discerning LBP subsets from no-LBP, the most accurate discriminator for FP was lumbar AE during sitting and for AEP it was the lumbar CE during standing.For PEP there were three joint top discriminators all at 95.24%; lumbar AE, lumbar CE and pelvic VE during standing.Finally, discriminating between the LBP subgroups, pelvic VE during standing was the most accurate discriminator for discerning FP from PEP and FP from AEP with accuracy of 96.88% and 87.67%, respectively.Lumbar CE during sitting was the main discriminator between AEP and PEP with the lowest level of discrimination accuracy of 75.68%.This study employed a DST Classifier, an objective classification method, to discern between clinical subsets of LBP based on objective measures of repositioning sense in the spine and pelvis during sitting and standing.This is the first time the Cardiff DST Classifier has been used to discern between subsets of a single condition.This is of particular importance in LBP, the heterogeneity of which is recognised as an important factor attributing to low treatment success.This study had three main findings: First, within a sample of 115 participants, the Cardiff DST Classifier discriminated no-LBP from LBP with accuracy ranging between 93.83% and 98.15% with no mis-classified individuals falling within the dominant region of the simplex plot.Second, discrimination between the LBP subsets was more variable with the highest accuracy of 96.8% discriminating LBP flexion from the passive extension subset, 87.7% accuracy discriminating flexion from active extension and 70.27% accuracy discriminating the two extension subsets.Third, the ranking analysis revealed lumbar AE in sitting as the principal variable to discriminate LBP from no-LBP and flexion from no-LBP, whilst lumbar CE in standing most accurately discriminated the LBP extension 
subsets from no-LBP. The level of LBP classification accuracy reached by the Cardiff DST Classifier is of significant clinical importance. Clinical classification of LBP involves a complex synthesis of a large amount of subjective and objective information upon which a clinical judgement is made about the LBP subtype. A substantial amount of clinical training is required to reliably classify LBP, which is deemed a barrier to successful implementation within the Health Service. The 90%+ classification accuracy reached by the DST Classifier in this study exceeds the inter-examiner agreement of expert clinicians who have undergone >100 h of LBP classification training (Kappa of 0.82). Even the lowest Cardiff DST Classifier accuracy of 70.27%, when discerning between the LBP active and passive extension subsets, is higher than the agreement between practitioners who have undergone up to 100 classification training hours. This demonstrates that repositioning sense function analysed with the DST Classifier can identify LBP subsets with accuracy comparable with that of a clinical expert, with the potential to be of significant clinical value, assisting in classification and helping practitioners to design and deliver individualised exercise therapies. Assessment of motor function forms an important part of the clinical classification process. This was corroborated by biomechanical investigations identifying that, compared to no-LBP, flexion pattern individuals sit and move nearer the end-range of lumbar flexion whilst extension pattern individuals tend to operate with their lumbar spine in relative extension. These findings were born out of generating a significant amount of spinal-pelvic biomechanical data, including proprioception measures and bilateral trunk muscle electromyography, during different functional tasks. This study demonstrated that the DST Classifier can identify LBP subsets on the basis of a repositioning test. This may have clinical implications, potentially focussing the LBP assessment on accurate evaluation of proprioceptive function. The DST Classifier also ranked the repositioning sense variables by their discriminatory power, identifying the lumbar AE and CE as the most powerful variables when discerning LBP function from no-LBP. This is in agreement with previous research where both the lumbar AE and CE were greater in LBP compared to pain-free controls. Identifying variables that best characterise function within each subset may serve as a therapy target and means
87 LBP subjects, clinically subclassified into flexion (n = 50), passive extension (n = 14), and active extension (n = 23) motor control impairment subgroups and 31 subjects with no LBP were recruited.From no-LBP, subsets of flexion LBP, active extension and passive extension achieved 93.83, 98.15% and 97.62% accuracy, respectively.Classification accuracies of 96.8%, 87.7% and 70.27% were found when discriminating flexion from passive extension, flexion from active extension and active from passive extension subsets, respectively.Sitting lumbar error magnitude best discriminated LBP from no LBP (92.4% accuracy) and the flexion subset from no-LBP (90.1% accuracy).
of evaluating the impact of therapies on clinically important function outcomes. Interestingly, the nature of the repositioning sense deficits reflected the LBP subsets' pain presentation patterns. For example, the highest ranking RE in AEP and PEP was in standing, the posture principally reported as most pain-provoking in these subsets, whilst in FP the highest ranking RE was in sitting, typically the most pain-provoking posture in FP. This may indicate that the DST method could be sensitive to clinically important parameters, ranking them as higher discriminators. This may have implications for establishing the therapy and monitoring focus when evaluating the intervention effect in the clinical environment. The 70.27% discrimination accuracy between the two LBP extension subsets was relatively low compared to the other comparisons made. Although the two subsets are distinct from each other, in that AEP patients tend to actively adopt a hyper-lordotic posture whilst PEP patients tend to passively sway forward into extension, both subsets report extension tasks as the principal pain-provoking postures and flexion tasks as pain-easing. This clinical similarity in pain presentation may reflect a similarity in the repositioning sense deficits, resulting in the Cardiff DST Classifier being unable to distinguish between PEP and AEP in nearly 30% of cases. This AEP and PEP subset similarity was demonstrated previously in repositioning sense and muscle activity, potentially indicating that, despite the two subsets appearing to be different clinical entities, the exercise approach may not need to differ substantially. The main strength of this study is the sample size, more than double the number of participants used in previous Cardiff DST Classifier studies. This sample size allowed an STV ratio in the region of 10:1 to be established, which significantly improved the predictive ability of the DST analysis and allowed the first single-condition subset analysis to date. The limitation is that the repositioning sense data were obtained using a laboratory-based 3-D Vicon motion capture system, potentially reducing the clinical applicability. Nevertheless, there are rapid developments in portable motion analysis devices which can be utilised in the clinical setting. Such devices are capable of harnessing volumes of high-quality biomechanical data in the clinical setting. This study provides an important step towards using well-designed objective classification methods to analyse the biomechanical evidence that could be obtained from such portable devices in the clinical setting. Further research is required, however, utilising portable devices to obtain data, and longitudinal assessment to evaluate the stability of the measures over time as well as the classifier's ability to track any changes in function in response to intervention or disease progression. Regarding clinical implications, LBP is considered a highly heterogeneous condition, with a targeted approach to management long advocated as essential to improve treatment efficacy. This study demonstrated that the DST objective classification method can i) accurately distinguish between different subtypes of LBP on the basis of biomechanical measures of proprioceptive function and ii) rank the variables by discriminatory power. Such information can be used to personalise exercise and rehabilitation protocols for each individual patient. For example, an exercise protocol for an LBP patient classified by the Cardiff DST Classifier as flexion pattern, with the lumbar proprioceptive deficit in sitting ranked the highest
discriminator, may focus on lumbar proprioceptive training in sitting. The Cardiff DST Classifier utilised biomechanical evidence of spinal and pelvic proprioceptive function to successfully distinguish clinically recognised subsets of LBP from healthy controls. This is the first example to date of discerning between clinically important subtypes of a single condition, achieving a classification accuracy comparable to that of an expert clinician. The repositioning sense variables that best characterise the studied LBP subsets were also identified, providing a potential target for management. This study demonstrated the potential of the DST method to simplify the classification process for complex health conditions such as LBP, assisting practitioners to better target treatments with a clear focus on clinically important outcomes for each individual.
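The core operation behind a Dempster-Shafer classifier of this kind is the combination of basic probability assignments from individual variables. The following minimal sketch combines two hypothetical mass functions over the frame {LBP, no-LBP} using Dempster's rule; the mass values are invented for illustration, and the full Cardiff DST Classifier derives them from the control-group data and uses a simplex-plot decision stage not shown here.

```python
# Minimal sketch of Dempster's rule of combination over the frame
# {LBP, no-LBP}, with 'theta' denoting the uncertainty (the whole frame).
# Mass values are invented for illustration only.

def combine(m1: dict, m2: dict) -> dict:
    """Combine two basic probability assignments over {'LBP', 'noLBP', 'theta'}."""
    focal = ["LBP", "noLBP", "theta"]
    def intersect(a, b):
        if a == "theta":
            return b
        if b == "theta":
            return a
        return a if a == b else None            # None marks the empty set (conflict)
    combined = {f: 0.0 for f in focal}
    conflict = 0.0
    for a in focal:
        for b in focal:
            prod = m1[a] * m2[b]
            target = intersect(a, b)
            if target is None:
                conflict += prod
            else:
                combined[target] += prod
    return {f: v / (1.0 - conflict) for f, v in combined.items()}

m_sitting = {"LBP": 0.55, "noLBP": 0.15, "theta": 0.30}   # e.g. from lumbar AE in sitting
m_standing = {"LBP": 0.40, "noLBP": 0.20, "theta": 0.40}  # e.g. from lumbar CE in standing
print(combine(m_sitting, m_standing))
```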
Background: Low back pain (LBP) classification systems are used to deliver targeted treatments matched to an individual profile, however, distinguishing between different subsets of LBP remains a clinical challenge.Methods: A novel application of the Cardiff Dempster–Shafer Theory Classifier was employed to identify clinical subgroups of LBP on the basis of repositioning accuracy for subjects performing a sitting and standing posture task.Thoracic, lumbar and pelvic repositioning errors were quantified.Findings: In discriminating LBP from no LBP the Classifier accuracy was 96.61%.Standing lumbar error best discriminated active and passive extension from no LBP (94.4% and 95.2% accuracy, respectively).Interpretation: Using repositioning accuracy, the Cardiff Dempster–Shafer Theory Classifier distinguishes between subsets of LBP and could assist decision making for targeted exercise in LBP management.
The intracellular organizations of all eukaryotic cells are variations on a basic theme of membrane compartmentalization. Within this theme there is remarkable diversity across the eukaryotic tree of life. Many microbial eukaryotes, especially parasites with extreme lifestyle adaptations, have particularly exotic endomembrane systems. Some of this diversity is contributed by taxon-specific modifications of endosymbionts, including plastids and mitochondria, but most organelles have a non-endosymbiotic origin and are part of a dynamic endomembrane system. Peroxisomes play varied metabolic roles across eukaryotes, and have evolved to contain distinct types of metabolic machinery in different taxa. Particularly divergent examples of peroxisomes are the glycosomes of kinetoplastids such as Trypanosoma and Leishmania, which contain an extended glycolytic pathway. The post-Golgi pathway is extensively modified for secretion in many organisms, especially parasites. For example, the rhoptries, micronemes and dense granules of the apicomplexans Toxoplasma and Plasmodium are specialized secretory organelles essential for host cell invasion. Apart from the core secretory and endocytic pathways, these embellishments do not fit any simple formula: each new species presents new surprises. Parasites provide particularly well-studied instances of complex traffic systems, but this pattern repeats across the eukaryotes. This complexity is the starting point for our discussion. Should we think of it as arising from niche-specific selective pressures? Must traffic systems be wired up in intricate ways in order to perform particular biochemical functions? To understand the mechanistic and evolutionary origins of these diverse systems, it is important to study not just the endomembrane organelles, but also the processes by which proteins and other cargo are sorted and transported between them. The endomembrane system is dynamic, and compartment compositions are determined as the outcome of the exchange of material in vesicles.
.Vesicle traffic is driven by a common molecular repertoire across eukaryotes .At its center are the twin processes of vesicle budding and fusion.Coatomers and adaptors load specific subsets of cargo from the source compartment onto budding vesicles.By cargo sorting, multiple vesicles of distinct compositions can be generated from the same source compartment .Tethers and SNARE proteins drive the fusion of these vesicles into specific target compartments .ARF and Rab GTPases regulate multiple steps of vesicle budding and fusion, ensuring the correct specificity and timing of events.Compartment and vesicle behavior are governed by transmembrane molecules in conjunction with cytoplasmic molecules.The very process of budding or fusing a vesicle requires the vesicle to carry certain transmembrane molecules, and necessarily causes changes to the compositions of source and target compartments.Therefore, it is impossible to study any one pathway of a vesicle traffic system without considering the flow of material across the entire system.Perturbations at any point in such an interconnected network can have cascading effects throughout the cell.If the state of a cell appears invariant over time, this is only because a large number of budding and fusion events balance one another out.In this context, mathematical modeling has proved useful in connecting basic molecular events, such as budding and fusion, to dynamic processes, such as cargo transport, and finally to the compartmental compositions of the endomembrane system .These models require the specification of a large number of parameters.Typically, many of these parameter values are estimated indirectly, and very few are obtained from prior experimental measurements.Such analyses can be applied to study a specific system of interest, but do not provide a natural framework to study diversity.It would be useful to be able to “try out” the behavior of millions of hypothetical cells over millions of possible parameter combinations, and see the spectrum of resulting cellular properties.For this to be possible the model itself has to be made very simple.This is precisely the goal of so-called Boolean models, where the state of the system is described by a series of 1s and 0s representing the presence and absence of molecular components.Boolean models are efficient to simulate on a computer, and therefore allow the rapid exploration of a large parameter space.Such approaches have been extremely useful in revealing the inner workings of gene regulatory networks and signaling networks .Here we present the outlines of such a Boolean model when applied to vesicle traffic.Our discussion is pedagogical: we consider specific examples in detail, and avoid focusing on mathematical derivations.Our goal is to introduce the reader to a new way of thinking about what models are and what they can do.We propose that Boolean models can be used to understand the vesicle traffic system and its diversity across eukaryotes.We want to model a cell containing multiple distinct compartments exchanging molecules via vesicles.The basic question is: given some underlying model and its parameters, what are the different compartments that arise and how are they connected?,The most general traffic model would include the spatial location, shape, size and molecular compositions of these compartments, and would account for stochasticity in vesicle budding and fusion events.As a reasonable simplification we could ignore the spatial location and shape of compartments, and focus 
instead on their size, composition, and connectivity alone.Models that focus on size distributions allow for a continuum of structures from transport vesicles to large compartments, and account for fission and fusion of all such structures .Many models focus on compartments and the complex rules of molecular specificity, but do not treat transport vesicles explicitly and do not account for inter-vesicle fusion .In such models, compartment compositions are specified by the concentrations of various molecules, and vesicle traffic is represented by a series of ordinary differential equations.This approach has been applied to consider the detailed case of vesicle exchange
Microbial eukaryotes present a stunning diversity of endomembrane organization.From specialized secretory organelles such as the rhoptries and micronemes of apicomplexans, to peroxisome-derived metabolic compartments such as the glycosomes of kinetoplastids, different microbial taxa have explored different solutions to the compartmentalization and processing of cargo.The basic secretory and endocytic system, comprising the ER, Golgi, endosomes, and plasma membrane, as well as diverse taxon-specific specialized endomembrane organelles, are coupled by a complex network of cargo transport via vesicle traffic.It is tempting to connect form to function, ascribing biochemical roles to each compartment and vesicle of such a system.
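To make the idea of a Boolean vesicle traffic model concrete, the toy simulation below represents compartments as bit vectors, buds rule-defined subsets of molecules into vesicles, and fuses each vesicle with the compartment sharing the most fusion determinants. The rules, sizes and fusion criterion are arbitrary choices for illustration and are not the published model.

```python
# Toy Boolean vesicle-traffic simulation: compartments are bit vectors of
# molecular components; budding moves a rule-defined subset of bits into a
# vesicle; the vesicle fuses with the compartment sharing the most "fusion"
# bits. All rules and sizes here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
N_MOL, N_COMP, STEPS = 12, 4, 2000

# Randomly sampled molecular rules: which molecules a compartment can load
# into a bud, and which molecules act as fusion determinants.
bud_rule = rng.integers(0, 2, size=(N_COMP, N_MOL)).astype(bool)
fusion_mol = rng.integers(0, 2, size=N_MOL).astype(bool)

state = rng.integers(0, 2, size=(N_COMP, N_MOL)).astype(bool)   # initial compositions

for _ in range(STEPS):
    src = rng.integers(N_COMP)
    vesicle = state[src] & bud_rule[src]          # cargo selected by the budding rule
    if not vesicle.any():
        continue
    # The vesicle fuses with the compartment sharing the most fusion determinants.
    overlap = (state & vesicle & fusion_mol).sum(axis=1)
    dst = int(np.argmax(overlap))
    state[src] &= ~vesicle                        # material leaves the source...
    state[dst] |= vesicle                         # ...and joins the target

for i, comp in enumerate(state):
    print(f"compartment {i}: {comp.astype(int)}")
```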
possible genomes. Most such genomes are not viable, but some result in functional traffic rules through molecular interactions. These rules then generate the steady-state traffic system, with some set of compartments interconnected by vesicles. We cannot fully know the complexities of mutation, selection and drift, nor the sequence–function relationship of proteins. Nevertheless, we can directly analyze the middle layer of rules. By sampling molecular rules at random, we are testing an evolutionary null hypothesis. Due to the hourglass nature of vesicle traffic, rule-generated traffic systems have many properties which would be unexpected had we sampled the bottom layer at random. The requirement that molecules must be transported in vesicles means that traffic systems have a large number of constraints, and consequently a large number of special properties not expected under random compartmental dynamics. Compartmental maturation has emerged as a recurring theme in cell biology. In some eukaryotic species including metazoans, the cisternae of the Golgi apparatus undergo compositional maturation. This allows the processing of large cargo which cannot fit in small transport vesicles. The early endosomal compartments of many eukaryotes undergo a series of compositional changes before finally fusing with the lysosome. From these diverse examples it might appear that maturation is a specifically selected process driven by specialized molecular machinery. We would argue the opposite: chains of compartmental maturation arise in randomly generated vesicle traffic systems through pure mass balance constraints, in the absence of selective pressure. Indeed, it would be surprising if real cells did not show evidence of maturation. Of course, once mutation has provided an embellished traffic system, mistargeting of enzymes and cargo into new pathways could provide a substrate for selection. Thus, a complex structure that initially arose non-adaptively could be exapted for new function. For example: more so than free-living organisms, parasites are engaged in a constant arms race with their hosts. For Toxoplasma, rhoptry-secreted factors are a key determinant of virulence and thus under constant selective pressure. It is striking that the secretory system of Toxoplasma appears to have "repurposed" the endosomal maturation system for the production of rhoptries. Compartmental maturation is an evolutionarily available source of organellar variation, providing a rich substrate upon which selection can act to modify parasitic secretory pathways. We hope this discussion serves to highlight the utility of simple models, which can help to identify general properties of biological systems even when they do not contain enough detail to be predictive in specific cases. For most biological systems, even under the most optimistic scenarios, there will never be enough information to constrain a complex model, and most of its parameters will be mere guesses. In this situation of incomplete information, the modeler must take a systematic approach. First: decide the level of detail at which we would like to describe some phenomenon of interest. In this case, we are interested in the inter-organelle connectivity of a vesicle traffic network, not in features such as spatial location. One reason this is a good choice is that information about connectivity can be obtained from genetic and cell-biological measurements. Second: determine the least number of assumptions necessary to reproduce known features of the system at the chosen level of detail. If the model fails at this stage, change the assumptions. For example, the movement of single SNARE proteins is beyond the purview of the model; but compartmental maturation is fair game. If we had found that our vesicle traffic model could not account for maturation, a common feature of parasite traffic, it would have signified a serious error and we would have had to return to the drawing board. Third: once the model passes the basic check of reproducing a known set of features, find out what else it predicts. This is more an art than a science. Let us subject our model to the test. What has it taught us? One important contribution is to rule out previously proposed vesicle traffic mechanisms whose only justification was to explain compartmental maturation. For example, biophysical models have invoked a protein gradient caused by rapid SNARE decay as a mechanism to set up Golgi maturation. We have shown that no assumptions other than the basic mechanisms of vesicle budding and fusion are needed for maturation to arise. Unless some independent evidence exists for rapid SNARE decay, the observation of maturation on its own is not enough to justify such a hypothesis. Apart from ruling out existing proposals, a good model makes new predictions. We have undertaken precisely this type of detailed analysis in the mathematical follow-on paper, but we can already highlight one interesting prediction: we find that cargo richness increases as one moves from early to late stages of a maturation chain. In principle this is measurable by organelle proteomics, and therefore falsifiable. Many successful models in physics and biology have been explanatory rather than predictive, providing a unified and elegant resolution to a disparate set of known observations. It is often the case that complex and detailed models provide an illusion of rigor, but have outcomes that are rarely falsifiable. The best test of a model is to ask: are we getting out more than we put in? The simpler the model, the better the chance that the answer is "yes". MT conceived the project. SM and MT designed the simulations. SM performed the simulations and analyzed the results. MT wrote the paper.
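As an aside, the idea of randomly sampled traffic rules constrained only by mass balance can be made concrete with a toy sketch. This is not the authors' model: the number of compartments and molecular species, the form of the budding fractions, and the iteration scheme are all illustrative assumptions. The point is simply that randomly chosen transfer rules plus conservation of material are enough to fix distinct steady-state compartment compositions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_comp, n_mol = 4, 6                       # compartments, molecular species (arbitrary)
C = rng.random((n_comp, n_mol))            # initial compositions (arbitrary units)

# Randomly sampled "traffic rules": fraction of species k budded from
# compartment i and delivered to compartment j in each time step.
bud = 0.1 * rng.random((n_comp, n_comp, n_mol))
for i in range(n_comp):
    bud[i, i, :] = 0.0                     # no self-directed vesicles

for _ in range(20_000):                    # iterate to an approximate steady state
    flux = bud * C[:, None, :]             # flux[i, j, k]: amount of k moving i -> j
    C = C - flux.sum(axis=1) + flux.sum(axis=0)

# Total mass is conserved by construction, yet the compartments settle into
# distinct steady-state compositions determined purely by the random rules.
print(np.round(C, 3))
```

Per species, the update is a stochastic matrix acting on the compartment compositions, so the iteration converges; nothing in the sketch is selected for, which is the sense in which such structure is an evolutionary null expectation.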
Here we argue that traffic systems of high complexity could arise through non-adaptive mechanisms via purely physical constraints, and subsequently be exapted for various taxon-specific functions.Sampling at random from among such rules represents an evolutionary null hypothesis: any properties of the resulting cells must be non-adaptive.We show by example that vesicle traffic systems generated in this random manner are reminiscent of the complex trafficking apparatus of real cells.
on a different epitope from the capture antibody. A conjugated enzyme was added to the assay. After the incubation periods and wash steps specified by each supplier to remove unbound antibody from the plate, a substrate solution was added in order to obtain a measurable signal. The intensity of this signal was proportional to the concentration of the protein present in the CM. Assays were performed in triplicate, and absorbance at 450 nm was read on a plate reader. 70% confluent cells on a 6-well plate were washed twice and scraped in PBS. The cell suspension was centrifuged at 850 g for 10 min to collect the cells. Cells were then lysed by a 15 min incubation in hypotonic buffer supplemented with detergents. After this incubation, nuclei were collected by centrifugation and the supernatant was recovered as the cytosolic fraction. The pellet, consisting mainly of intact nuclei, was lysed on a rocking platform for 30 min with gentle agitation, and nuclear soluble fractions were collected after centrifugation. Microarray data of control MCF10A.PLK4 cells and MCF10A.PLK4 cells treated with DOX to induce centrosome amplification are publicly available at ArrayExpress, accession number E-MTAB-6415. Appropriate statistical tests were applied as described in each legend using GraphPad Prism 5.0. Briefly, Student's t-tests were used for comparisons between two groups. One-way ANOVA with Tukey post hoc test was used for comparisons of three or more groups with one independent variable. ∗P < 0.05, ∗∗P < 0.01, ∗∗∗P < 0.001, ∗∗∗∗P < 0.0001, ns not significant.
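The group comparisons just described were run in GraphPad Prism; purely as an illustration, the same tests can be sketched in open-source tooling. The array values and group labels below are placeholders, not data from the study.

```python
# Minimal sketch of the comparisons described above, using SciPy and statsmodels
# instead of GraphPad Prism; all numbers here are placeholder values.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
ctrl = rng.normal(1.0, 0.2, 6)     # e.g. triplicate readings from two control plates
dox = rng.normal(1.6, 0.2, 6)      # DOX-treated (centrosome amplification)

# Two groups: unpaired Student's t-test
t, p = stats.ttest_ind(ctrl, dox)
print(f"t-test p = {p:.4f}")

# Three or more groups: one-way ANOVA followed by Tukey's post hoc test
third = rng.normal(1.2, 0.2, 6)
f, p_anova = stats.f_oneway(ctrl, dox, third)
values = np.concatenate([ctrl, dox, third])
groups = ["ctrl"] * 6 + ["dox"] * 6 + ["third"] * 6
print(pairwise_tukeyhsd(values, groups))   # pairwise comparisons, adjusted p values
```

The Tukey table reported by pairwise_tukeyhsd corresponds to the adjusted pairwise comparisons that Prism's post hoc test provides.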
Centrosomal abnormalities, in particular centrosome amplification, are recurrent features of human tumors. Enforced centrosome amplification in vivo plays a role in tumor initiation and progression. However, centrosome amplification occurs only in a subset of cancer cells, and thus, partly due to this heterogeneity, the contribution of centrosome amplification to tumors is unknown. Here, we show that supernumerary centrosomes induce a paracrine-signaling axis via the secretion of proteins, including interleukin-8 (IL-8), which leads to non-cell-autonomous invasion in 3D mammary organoids and zebrafish models. This extra centrosomes-associated secretory phenotype (ECASP) promotes invasion of human mammary cells via HER2 signaling activation. Further, we demonstrate that centrosome amplification induces an early oxidative stress response via increased NOX-generated reactive oxygen species (ROS), which in turn mediates secretion of pro-invasive factors. The discovery that cells with extra centrosomes can manipulate the surrounding cells highlights unexpected and far-reaching consequences of these abnormalities in cancer. This study uncovered a non-cell-autonomous function for centrosome amplification in cancer. Cells with extra centrosomes induce paracrine invasion via secretion of pro-invasive factors. Altered secretion is partly regulated by elevated reactive oxygen species in cells with extra centrosomes. This work highlights far-reaching consequences of centrosome amplification in cancer.
to improve parts of the CRISPR system; new producer strains can be obtained from marine sources and modified/improved using CRISPR. CRISPR/Cas systems can contain genes that encode highly divergent proteins. All of these are a valuable source of parts for synthetic biology. At least two studies have presented CRISPR-based transcriptional cascades for synthetic circuits, such as transcriptional activators and repressors. Voltage-dependent changes led to changes in fluorescence resonance energy transfer signals or in endogenous protein fluorescence. While this was done in mammalian cells, the approach should also be transferable to prokaryotes and other cell types. The shift from observation to application is beginning to take place. Synthetic biology opens the door to a new era of producer improvement that could even lead to the creation of new organisms. Advances in oligonucleotide synthesis, which now yields longer fragments, are enabling the recreation of the entire genomic DNA of certain cells. In addition to the development of pathways and components, biological parts can be standardized, and the functional units are being introduced into organisms, with the potential to construct entire organisms de novo. In 2000, a 9.6 kbp hepatitis C virus genome was synthesized. Two years later, a synthetic 7.7 kbp poliovirus genome was completed. The next year, the synthesis and construction of a 5.4 kbp bacteriophage Phi X 174 genome took only two weeks. In 2006, the J. Craig Venter Institute constructed a synthetic genome for the newly designed minimal bacterium Mycoplasma laboratorium. In 2010, Venter demonstrated the synthetic assembly of a 1.08 Mbp Mycoplasma mycoides genome. Eukaryotic algal chromosomes of up to 500 kb have been assembled in yeast at the Craig Venter Institute. It is likely that fully synthetic eukaryotic producers will soon be designed and assembled. Other developments include expanding the natural genetic code with an extra base pair. The production of new life forms, and of new proteins built from previously unknown or newly developed amino acids, is expected in the future. The first synthetic cell created by Venter cost 40 million USD. However, as DNA synthesis and sequencing technologies improve and prices fall, we can expect not only further discovery of naturally occurring organisms, but also the development of more engineered microorganisms. Many companies are involved in creating synthetic cells to capture CO2 and produce renewable fuels, such as Joule Unlimited in Cambridge, LS9 Inc. in San Francisco, Amyris Biotechnologies in California, Synthetic Genomics Inc.
in California and the Exxon Mobil Corporation in Texas. Synthetic biology stands to benefit from marine metagenomics because many of the components for synthetic biology are derived from marine microorganisms. With the constant advancement of sequencing technologies, delivering longer reads at lower costs, we can expect a flood of information about the genes, pathways and whole genomes of microorganisms in various environmental populations. The limiting step will be reaching extreme locations to collect some of the most specially adapted organisms, to discover not only new genomes but also potentially undiscovered branches on the tree of life. High-throughput screening utilizing microfluidic droplet technology will allow us to select the best enzymes and the best biomolecular producers. Many opportunities remain to catalog genomes and pathways for the simplified selection of necessary parts for genetic engineering. Researchers at the Massachusetts Institute of Technology recently presented CRISPR-based biological parts that perform logical functions for synthetic biology. Both CRISPR-based transcriptional activators and repressors were designed and demonstrated to work in synthetic circuits in mammalian cells; very likely this technology will soon be applied to microbial cells. Standardization of both sample collection techniques and bioinformatics pipelines will speed up data production and analysis. Large quantities of data are expected to become a challenge for bioinformatics; solutions so far include storing reference genomes with their possible variations. We will see more synthetic organisms using environmental pollutants or industrial waste to produce food, livestock feed, fuels, small molecules and pharmaceuticals. Considerable interest from industry is expected to speed up solutions for scaling up production. Interest from both industry and academia promises to move developments in marine metagenomics along at an impressive pace.
This review summarizes the use of genome-editing technologies in metagenomic studies, which are used to retrieve and modify valuable microorganisms for production, particularly in marine metagenomics. Novel genes, pathways and genomes can be deduced. Therefore, metagenomics, together with genome engineering and systems biology, allows for the enhancement of biological and chemical producers and the creation of novel bioresources. With natural resources rapidly depleting, genomics may be an effective way to efficiently produce quantities of known and novel foods, livestock feed, fuels, pharmaceuticals and fine or bulk chemicals.
are an accredited laboratory and therefore have rigorous quality assurance procedures. Briefly, soils were air dried at 35 °C and sieved to <2 mm. These dried and sieved samples were then composited, digested in concentrated HNO3/HCl, and analysed for 'total' heavy metals using ICP-MS. While the detection limits for individual elements in dried soil were <0.05 mg kg−1, we used a quantifiable limit of 0.1 mg kg−1 for all elements because concentrations below this value are unlikely to be biologically relevant. Data that were log-normally distributed were log-transformed for statistical analyses. Data were considered lognormal when the geometric mean was closer to the median than the arithmetic mean. Statistical analyses were conducted using IBM SPSS 23 software. The significance of differences among the parameters was tested by one-way analysis of variance followed by Tukey's test, at the 95% confidence level. Correlations between concentrations of selected HMs were determined using the Pearson correlation coefficient. Table 2 shows that suburban garden soils from Christchurch have significantly higher HM concentrations than rural soils, which is consistent with reports that elevated HM concentrations are generally associated with urbanization. Specifically, concentrations of As, Cu, Pb, Zn and Hg were significantly higher in the urban garden soils than in soils of the rural gardens. Mercury and Pb mean concentrations were >7 times greater, followed by Cu, As and Zn mean values some threefold greater. Wei and Yang also reported the largest ratios of urban-to-rural concentrations for Pb, Cu, Zn as well as Cd in soils of different areas of China. Here, mean concentrations of selected HMs in the urban garden soils were also higher than those in the HAIL sites. Mean Cd, Hg, and Pb concentrations were respectively 10, 5 and 3 times higher in the urban garden soils than in the HAIL sites. This may be because the activities listed in the HAIL do not include common practices that contaminate soils, such as the widespread use of lead-based paint, the use of galvanised steel, or Cu-based pesticides. Concentrations of Cu, Zn, Hg, Cr and Ni in all samples taken from the rural gardens were within the natural background ranges, whereas the Cd concentrations in 20% of these samples exceeded the rural national Cd standard, with a maximum value of 9.7 mg kg−1. Cadmium concentrations in eight urban garden soil samples also exceeded the urban national Cd standard. While we did not measure mobility in this study, Li et al.
reported that Cd was the most mobile element in urban soils from Lianyungang, China, and that this element posed the greatest ecological risk. The maximum concentrations of all other HMs were significantly higher than their respective background values in all three land categories. One urban garden contained a Hg concentration of 308 mg kg−1, compared to its respective background concentration of 0.18 mg kg−1. This concentration is thirtyfold higher than the Dutch Standard for Hg. Similarly, there were two HAIL sites containing high concentrations of Cu or Cd compared to the Dutch and New Zealand Standards, respectively. Some 46% of the urban garden samples had Pb concentrations higher than the national standard of 210 mg kg−1. The maximum Pb concentration found was 40 times higher than its background concentration, and more than 12 and 5 times greater than the national Pb standard and the Dutch Standard for Pb, respectively. Seven HAIL sites also exceeded the national standard for Pb. Moreover, 20% of the urban garden samples and 5% of the HAIL site samples exceeded the national standard for both As and Zn. The highest As concentration in the urban garden samples was higher than both the national and Dutch Standard concentrations. Similarly, the highest Zn concentrations among the HAIL and urban garden soil samples exceeded the Zn standard by 9 and 2 times, respectively. Most concentrations of HMs in rural gardens were within their respective background concentration ranges. However, the rural gardens contained significantly elevated Cd concentrations, with some samples in the range of 10 mg kg−1, which is well above the New Zealand Cd standard. This is most likely due to extensive application of Cd-rich phosphate fertilisers in rural areas. Correlation between HM concentrations across all samples showed that Zn was positively correlated with Cu and Cd concentrations in both the garden soils and the HAIL sites. This may be associated with the application of fertilisers, which can contain these three HMs, or may more generally represent the influence of human activity on soil contamination. Likewise, significant positive correlations between Ni-As, Ni-Cd and Cr-Pb occurred at the HAIL sites. In the garden soils, only Cr and Hg, as well as Cd and Cr, Hg or As contents, were not significantly correlated. Gulan et al. also reported significant positive correlations between Zn and Pb for urban soils in Pristina, Kosovo. However, unlike our findings, this study also reported positive correlations between As and Cd. This difference is likely due to contrasting provenances of HMs in Christchurch and Pristina. HMs of anthropogenic origin would be expected to accumulate in soil over time, due to the activities listed in Table 1. Our data are consistent with this hypothesis because, in general, soils from older neighbourhoods had significantly higher concentrations of HMs than those in younger neighbourhoods. Of particular note are the significantly higher Pb concentrations in the pre-50s gardens, where the mean concentration was well above the New Zealand Pb guideline. These soil Pb concentrations are similar to those found in Baltimore, Boston, and Chicago in the USA, and Naples, Palermo, and Rome in Italy. Leaded gasoline and Pb-based paint were reported as the two main historical sources of the soil Pb contamination in the Boston gardens. As with
Some 46% of the urban garden samples had Pb concentrations higher than the residential land use national standard of 210 mg kg−1, with the most contaminated soil containing 2615 mg kg−1 Pb.Concentrations of As and Zn exceeded the residential land use national standards (20 mg kg−1 As and 400 mg kg−1 Zn) in 20% of the soils.
other cities, a significant number of suburban soils from Christchurch gardens were above the national standards for HM concentrations, particularly soils from older neighbourhoods. Unlike in some other cities, however, many suburban soils in Christchurch are used for food production. While we did not test food crops from suburban gardens in this study, there is clearly an increased risk that such crops may contain HM concentrations above food safety standards, which may result from plant uptake or attached soil particles. HM contamination has been found in vegetables grown in a Pb-contaminated community garden in Omaha, Nebraska. Lead concentrations in samples of leafy greens, eggplant, okra, and tomato grown in soils containing more than 100 mg Pb kg−1 exceeded the limit of 0.5 mg kg−1 established by the U.S. Food and Drug Administration for Pb. While the most concerning contaminants (As, Hg and Pb) are not readily taken up by plants into their aerial tissues, contaminated dust may become attached to edible portions and is difficult to remove, even by repeated washing. Root vegetables may accumulate significant concentrations of HMs. High soil HM concentrations may present a risk to local populations through the entry of dust indoors and its subsequent inhalation and ingestion. Tong found elevated concentrations of Pb and Cu in both indoor and outdoor dust among residential properties in Cincinnati. She did not find a significant relationship between indoor and outdoor dust concentrations across the 121 sites analysed; however, these concentrations were higher than the background soil concentrations and were related, among other things, to the age of the building and the neighbourhood. A subsequent review by Laidlaw and Filippelli suggested that elevated blood Pb levels in children may be linked to soil transported into residences as dust; they also highlighted the importance of variables that influence the likelihood of wind erosion contributing to indoor dust levels. Christchurch is regularly subject to strong seasonal winds and low levels of precipitation, which promote the formation of airborne soil particles. Moreover, there has been considerable demolition and reconstruction work in the Christchurch area subsequent to the earthquakes, which may have contributed to additional dust generation in some neighbourhoods. We did not analyse indoor dust samples, so we cannot comment on whether this may have increased the risk posed by high HM concentrations in household dust; however, this should be investigated further. Human exposure to soil-borne HMs can be limited by using soil conditioners that reduce HM bioavailability or by selecting plants whose edible portions are unlikely to contain significant concentrations of HMs. Tree-borne fruits are unlikely to have significant amounts of attached soil, as are fruits or vegetables that are peeled or contained within a capsule or pod. The highest risks would be associated with low-growing leafy vegetables or unpeeled root vegetables. Soil HM concentrations should be key criteria when determining the future land use of former residential areas that have been demolished because of the earthquakes in 2010 and 2011. Redeveloping these areas as parklands or forests would result in less human HM exposure than agriculture or community gardens, where food is produced and bare soil is exposed. The likely risk of any soil being contaminated is higher for older neighbourhoods. Future research could investigate HM concentrations in fruits and vegetables grown in suburban gardens in Christchurch. Communication of the results of this and subsequent studies to the public should be done in a way that avoids unwarranted alarm: there is no evidence that Christchurch residents are suffering more ill effects of HMs than the populations of other cities.
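As an aside, the statistical approach described in the methods above, log-transformation when the geometric mean is closer to the median than the arithmetic mean is, followed by parametric tests and Pearson correlations, can be sketched outside SPSS as follows. The concentration arrays below are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
pb = np.exp(rng.normal(np.log(100), 1.0, 50))   # placeholder Pb concentrations, mg/kg
zn = np.exp(rng.normal(np.log(150), 0.8, 50))   # placeholder Zn concentrations, mg/kg

def is_lognormal(x):
    """Heuristic from the methods: treat data as log-normal when the geometric
    mean is closer to the median than the arithmetic mean is."""
    gm, am, med = stats.gmean(x), np.mean(x), np.median(x)
    return abs(gm - med) < abs(am - med)

pb_t = np.log(pb) if is_lognormal(pb) else pb   # log-transform before parametric tests
zn_t = np.log(zn) if is_lognormal(zn) else zn

r, p = stats.pearsonr(pb_t, zn_t)               # Pearson correlation between metals
print(f"log-transform Pb: {is_lognormal(pb)}, Pearson r = {r:.2f} (p = {p:.3f})")
```

The same transformed values would feed the one-way ANOVA and Tukey comparisons across land-use categories reported in the results.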
Numerous studies have shown that urban soils can contain elevated concentrations of heavy metals (HMs).Christchurch, New Zealand, is a relatively young city (150 years old) with a population of 390,000.Most soils in Christchurch are sub-urban, with food production in residential gardens a popular activity.Earthquakes in 2010 and 2011 have resulted in the re-zoning of 630 ha of Christchurch, with suggestions that some of this land could be used for community gardens.We aimed to determine the HM concentrations in a selection of suburban gardens in Christchurch as well as in soils identified as being at risk of HM contamination due to hazardous former land uses or nearby activities.Heavy metal concentrations in suburban Christchurch garden soils were higher than normal background soil concentrations.Older neighbourhoods had significantly higher soil HM concentrations than younger neighbourhoods.Neighbourhoods developed pre-1950s had a mean Pb concentration of 282 mg kg−1 in their garden soils.Soil HM concentrations should be key criteria when determining the future land use of former residential areas that have been demolished because of the earthquakes in 2010 and 2011.Redeveloping these areas as parklands or forests would result in less human HM exposure than agriculture or community gardens where food is produced and bare soil is exposed.
In this paper, we investigate learning deep neural networks for automated optical inspection in industrial manufacturing. Our preliminary results show a striking performance improvement from transfer learning, even though the source domain (ImageNet) is completely dissimilar. Further study aimed at demystifying this improvement shows that transfer learning produces a highly compressible network, which is not the case for a network learned from scratch. Experimentally, the network learned by transfer learning suffers a negligible accuracy drop even when the number of convolution filters is reduced to 1/128 of the original. In contrast, the network trained without transfer learning loses more than 5% accuracy at the same compression rate.
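The kind of filter-level compression discussed here can be illustrated with a short sketch. This is not the authors' procedure: the layer sizes, the keep ratio, and the L1-norm ranking criterion are illustrative assumptions, and only a single convolutional layer is shown.

```python
import torch
import torch.nn as nn

# Toy convolutional layer standing in for one layer of an AOI classifier.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, padding=1)

# Rank filters by the L1 norm of their weights and keep only the top fraction;
# if transfer learning concentrates useful signal in few filters, aggressive
# pruning of the remainder should be nearly lossless.
keep_ratio = 1 / 8
l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))          # one score per filter
n_keep = max(1, int(conv.out_channels * keep_ratio))
keep_idx = torch.argsort(l1, descending=True)[:n_keep]

pruned = nn.Conv2d(conv.in_channels, n_keep, kernel_size=3, padding=1)
with torch.no_grad():
    pruned.weight.copy_(conv.weight[keep_idx])
    pruned.bias.copy_(conv.bias[keep_idx])

print(conv.weight.shape, "->", pruned.weight.shape)          # (128,64,3,3) -> (16,64,3,3)
```

In a full network the input channels of the following layer would also have to be pruned to match; the fragment above shows only the ranking-and-copy step.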
We show experimentally that transfer learning induces sparse features in the network and thereby produces a more compressible network.
which are needed to develop the best drugs possible for Chagas patients. Given how much remains unknown in the field, this is not an easy task. Box 2 outlines a few priorities that might be worth considering for the future. The immediate priority should be to better characterize the two drugs currently available for the disease, with the specific aim of decreasing side-effects and allowing better compliance. Clinical trials looking at reduced treatment length and doses of benznidazole are due to start shortly. In addition, results from recent studies will validate or invalidate the current murine disease models. A global and concerted effort is needed to identify surrogate markers of treatment efficacy and possibly also predictors of disease progression. This effort is key for the development of new drugs despite the many associated challenges, such as the need for extended patient follow-up or access to well-preserved serum samples in trials. This is worthwhile for practitioners too, for whom being able to give patients rapid and appropriate information on treatment outcome, with the ultimate aim of improving compliance, is a priority. Clearly, a switch from the currently fragmented basic and clinical research landscape to a broader collaborative approach with a clear focus is needed. This is required in order to generate solid data answering the key Chagas questions, which would enable the design of the best drugs for Chagas patients as well as the assessment of their efficacy. There is still a significant gap in the Chagas disease R&D landscape and numerous hurdles to overcome. We are dealing with a complex parasite and a complex disease, with a lot still unknown. Not enough clinical research is feeding back into drug discovery, and the lack of markers of cure or treatment efficacy is a major limitation to current R&D efforts. This all adds to the challenge of developing drugs, a process which is already very complex, and results in an extremely high attrition rate for Chagas drug discovery. Although major changes have shaken the Chagas R&D landscape during the last 5 years and there is more confidence today, a lot remains to be achieved. There is a need to redefine R&D priorities for Chagas disease and act accordingly; in particular, care must be taken in the design of studies to make sure that solid data are generated, answering key questions relevant to a better understanding of the disease and the potential outcome for the patient. We have to challenge current thinking, and define the next goals and priorities for Chagas disease as well as the strategy to attain these goals. All this needs not only a broader collaborative approach but also a concerted effort, given the limited available resources and the range of questions to answer. In short, this means more basic and clinical research to solve the puzzle piece by piece, which will require the Chagas research community as a whole pulling together in a spirit of collaboration. Finally, Chagas disease is also a political challenge that will need to be addressed; serious measures and processes are required to overcome the current situation and the barriers to access to treatment, as currently too few patients are being treated. This is a sine qua non condition to ensure that CD patients will benefit from new R&D developments for this disease.
Chagas disease, or American trypanosomiasis, is the result of infection by the parasite Trypanosoma cruzi. Although it was first identified more than a century ago, only two old drugs are available for treatment, and many questions related to disease progression and its pathologies, not to mention the assessment of treatment efficacy, are subject to debate and remain to be answered. Indeed, the current status of the evidence and data available does not allow any absolute statement to be made about treatment needs and outcomes for Chagas patients. Although there has been some new impetus in research and development (R&D) for Chagas disease following recent clinical trials, there is a scientific requirement to review and challenge the current status of evidence and to define basic and clinical research priorities and next steps in the field. This should ensure that the best drugs for Chagas disease are developed, but it will require a focused and collaborative effort of the entire Chagas disease research community.
benefits to such an alternative. We have focused upon decisions relating to a supplier newly integrated into a company's supply base, in a context where the lead times allow for both learning and improvement activities to be initiated before the regular supply of parts starts. However, a related problem is that of developing existing suppliers with whom the company has a past relationship. Conceptually, our Bayesian stochastic modelling framework supports decisions regarding existing suppliers, since it is possible to determine appropriate probability distributions using relevant historical data for the supplier of interest. We have assumed a Gamma prior distribution. Our choice is aligned with our underlying probability model, which is sufficiently flexible to represent many epistemic uncertainty scenarios. We make the common assumption that non-conformances follow a Poisson distribution. These assumptions support the mathematics of the methods developed and can be validated using standard statistical model checks. However, now that our framework has been articulated, a future challenge is to develop a wider class of probability models that might be suitable to capture different supplier data patterns. This might be especially useful if we extend the set of performance characteristics beyond quality to, for example, late deliveries, or consider situations where there is anticipated improvement in supplier quality, as might be expected for start-up companies or new production technologies. The EVPI can be expanded to assess the value of learning about the effectiveness parameter γ. Assessing the uncertainty about γ may be complicated by the confounding effect of the supplier's willingness to engage in development activities. Additionally, when there is value in knowing the effectiveness of an intervention prior to engagement, then learning about both the non-conformance and effectiveness rates is needed to assess the net impact. Developing a bivariate model to simultaneously assess the EVPI for both the non-conformance rate and improvement effectiveness would allow the synergies of learning within activities and the dependency between the uncertainties to be analysed. Modelling the epistemic uncertainty in the effectiveness rate within the model also presents additional challenges for elicitation of the prior. We express the buyer loss due to a non-conforming part supplied as an unknown parameter, which is typical in the literature. For example, Ketzenberg, Rosenzweig, Marucheck, and Metters find that few studies in an inventory management context report the total costs of the scenarios considered in value of information analyses. We made this modelling choice partly because of the challenge of accessing financial data and estimating such costs accurately, but also because we found that expressing choices relative to this loss is more useful to supply chain managers, since it accords with their practice on penalties. There is a need to provide further guidance in the articulation of these costs, even if only to support applications, since we know from our theoretical and empirical work that they will also impact the optimal decision.
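A minimal sketch of the Gamma–Poisson and value-of-information machinery discussed above is given below. The prior hyperparameters, the loss per non-conformance, the programme cost, and the treatment of γ as a simple multiplicative reduction of the rate are all illustrative assumptions, and the decision is reduced to a two-action version (invest or not) of the paper's framework.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative assumptions (not calibrated to the paper's case study)
a, b = 2.0, 400.0           # Gamma prior on the non-conformance rate per part: mean a/b
N = 10_000                  # parts to be supplied
loss = 500.0                # buyer loss per non-conforming part
K = 20_000.0                # cost of the supplier development programme
gamma = 0.4                 # assumed effectiveness: rate reduced to gamma * lambda

def expected_cost(lam, invest):
    return K + gamma * lam * N * loss if invest else lam * N * loss

# Decision under current epistemic uncertainty: costs are linear in lambda,
# so evaluating them at the prior mean gives the prior-expected costs.
lam_mean = a / b
cost_wait, cost_invest = expected_cost(lam_mean, False), expected_cost(lam_mean, True)
best_now = min(cost_wait, cost_invest)

# Expected value of perfect information about lambda: average cost of the best
# decision taken after lambda is revealed, compared with deciding now.
lam = rng.gamma(a, 1 / b, 200_000)
best_later = np.mean(np.minimum(lam * N * loss, K + gamma * lam * N * loss))
print(f"invest now: {cost_invest < cost_wait}, EVPI = {best_now - best_later:,.0f}")
```

With these numbers the EVPI is positive because some prior draws of the rate are high enough to flip the decision toward investing, which is exactly the situation in which resolving epistemic uncertainty before committing has value.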
We consider supplier development decisions for prime manufacturers with extensive supply bases producing complex, highly engineered products.We propose a novel modelling approach to support supply chain managers decide the optimal level of investment to improve quality performance under uncertainty.We develop a Poisson–Gamma model within a Bayesian framework, representing both the epistemic and aleatory uncertainties in non-conformance rates.Estimates are obtained to value a supplier quality improvement activity and assess if it is worth gaining more information to reduce epistemic uncertainty.The theoretical properties of our model provide new insights about the relationship between the degree of epistemic uncertainty, the effectiveness of development programmes, and the levels of investment.We find that the optimal level of investment does not have a monotonic relationship with the rate of effectiveness.If investment is deferred until epistemic uncertainty is removed then the expected optimal investment monotonically decreases as prior variance increases but only if the prior mean is above a critical threshold.We develop methods to facilitate practical application of the model to industrial decisions by a) enabling use of the model with typical data available to major companies and b) developing computationally efficient approximations that can be implemented easily.Application to a real industry context illustrates the use of the model to support practical planning decisions to learn more about supplier quality and to invest in improving supplier capability.
used as a screener and not as a gold-standard measure of child development. However, these measures have been complemented with more reliable direct measures such as the Woodcock-Muñoz and anthropometric measures. Third, we did not collect daily attendance data, which prevents us from studying more rigorously the effects of the actual duration of exposure to the program. We have tried to overcome this issue by presenting heterogeneous effects by rainfall, which proxies for delays in center opening. The results of this study show that although modernization of early childhood services is desirable, it is critical that quality in some specific dimensions, related to processes, is specifically targeted and monitored during the transition. The findings in this study and the comparative costs of the new centers and the existing HCBs suggest that the strategy should be thought through carefully. Alternatives might have to be tested, including improvements to HCBs and training of MCs. Different pedagogical models should also be assessed, and adult–child ratios carefully considered. However, these results do not necessarily imply that center-based childcare cannot have positive effects, if it is implemented with the characteristics required to guarantee key elements of structural and process quality, and if programs are adequately supported during the transition in order to optimize procedures and ensure quality. Some examples include the results presented by Nores, Bernal, and Barnett, who assess a high-quality center-based childcare service in Colombia. The aeioTu program is characterized by high-quality components such as high qualification requirements for staff, pre- and in-service training, strong support staff and services, a child monitoring and information system, family and community participation, and the use of a structured, developmentally oriented curriculum. The authors reported positive effects on language and cognitive development after only 8 months of program attendance. Similarly, Attanasio, Baker-Henningham, Bernal, Meghir, and Rubio-Codina reported positive effects on children's cognition and language as a result of the implementation of a structured early stimulation curriculum, in addition to training and coaching of paraprofessional personnel, in a parenting program in rural areas of Colombia. Both are public programs targeted to poor households. Thus, it is important that, in implementing strategies to improve childcare services for vulnerable children at scale, key factors such as the provision of teacher pre-service and in-service training, child assessment and monitoring, and a strong curricular foundation are kept as priorities. This study was funded by the Colombian Family Welfare Agency through interinstitutional agreements number 059-2010, 309-2011, 483-2011 and 2424-2012. The funding source supported the implementation of this study, as the programs under evaluation are managed and funded by this institution. However, the institution did not participate in the study design; in the collection, analysis or interpretation of data; in the writing of the report; or in the decision to submit the article for publication. Professor Attanasio and Professor Vera-Hernández also received funding for this project from the European Research Council under the European Union's Horizon 2020 research and innovation programme.
Colombia's national early childhood strategy launched in 2011 aimed at improving the quality of childcare services offered to socio-economically vulnerable children, and included the possibility that children and their childcare providers could transfer from non-parental family daycare units to large childcare centers in urban areas.This study seeks to understand whether the offer to transfer and the actual transfer from one program to the other had an impact on child cognitive and socioemotional development, and nutrition, using a cluster-randomized control trial with a sample of 2767 children between the ages of 6 and 60 months located in 14 cities in Colombia.The results indicate a negative effect of this initiative on cognitive development, a positive effect on nutrition, and no statistically significant effect of the intervention on socioemotional development.We also explored the extent to which these impacts might be explained by differences in the quality of both services during the transition, and report that quality indicators are low in both programs but are significantly worse in centers compared to community nurseries.
adhesion and needs further investigation. A comparative analysis of patterned and random nanotopographies affecting the bactericidal activity could enhance the current state of understanding in this field. Aside from surface roughness, the other critical parameters that should be studied and correlated with the bactericidal activity include surface wettability and surface energy. Not all hydrophobic surfaces resist cells, and not all hydrophilic surfaces attract them. Therefore, more investigations are warranted to understand the dependence of bactericidal activity on the surface energetics of such topographical features. To better understand the adhesion behavior, bacterial strains of various shapes and membrane compositions should also be studied. A simple etching method to generate multi-scale roughness on the surfaces of Al alloys was developed. The etching step can be performed rapidly on large substrates. The resulting surface is highly bactericidal and displays antibiofouling properties against rod-shaped and coccoid bacterial cells. The antibacterial activity is ascribed to the surface topography. It is proposed that a combination of various roughness parameters with similar values will be able to generate a highly bactericidal surface. Particularly effective values of the surface parameters for lysing a majority of the attached bacterial cells were identified, and these could form the basis for engineering antibacterial surfaces. The multi-scale topography was also found to be effective against several drug-resistant strains isolated from hospital environments. Such etched Al surfaces with excellent antibacterial activity are expected to benefit several industries, in particular hospital environments, by minimizing the spread of nosocomial infections.
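The text does not list which roughness parameters turned out to be significant; purely as an illustration, the standard amplitude parameters (Ra, Rq, skewness Rsk and kurtosis Rku) that are typically extracted from AFM height maps can be computed as follows. The height map here is synthetic, not AFM data from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.normal(0.0, 50.0, (256, 256))      # synthetic height map, nm (placeholder for AFM data)

h = z - z.mean()                           # heights relative to the mean plane
Ra = np.mean(np.abs(h))                    # arithmetic mean roughness
Rq = np.sqrt(np.mean(h**2))                # root-mean-square roughness
Rsk = np.mean(h**3) / Rq**3                # skewness of the height distribution
Rku = np.mean(h**4) / Rq**4                # kurtosis of the height distribution

print(f"Ra={Ra:.1f} nm  Rq={Rq:.1f} nm  Rsk={Rsk:.2f}  Rku={Rku:.2f}")
```

Correlating a panel of such parameters with measured kill rates across surfaces is the kind of analysis that would identify the "particularly effective values" referred to above.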
Toward minimizing bacterial colonization of surfaces, we present a one-step etching technique that renders aluminum alloys with micro- and nano-scale roughness. Such a multi-scale surface topography exhibited an enhanced antibacterial effect against a wide range of pathogens. The multi-scale topography of commercial-grade pure aluminum killed 97% of Escherichia coli and 28% of Staphylococcus aureus cells, in comparison to 7% and 3%, respectively, on the smooth surfaces. The multi-scale topography on the Al 5052 surface was shown to kill 94% of adhered E. coli cells. The microscale features on the etched Al 1200 alloy were not found to be significantly bactericidal, but were shown to decrease the adherence of S. aureus cells by one-third. The fabrication method is easily scalable for industrial applications. Analysis of roughness parameters determined by atomic force microscopy revealed a set of significant parameters that can yield a highly bactericidal surface, thereby providing the design basis to make any surface bactericidal irrespective of the method of fabrication. The multi-scale roughness of the Al 5052 alloy was also highly bactericidal to nosocomial isolates of E. coli, K. pneumoniae and P. aeruginosa. We envisage the potential application of engineered surfaces with multi-scale topography to minimize the spread of nosocomial infections.
authors contributed to the review and drafting of the paper. We declare no competing interests. In this case series analysis, we examined all melioidosis cases diagnosed by the Microbiology Laboratory of Mahosot Hospital in Vientiane, Laos, from October, 1999, to August, 2015, and all culture-confirmed melioidosis cases presenting to the Angkor Hospital for Children in Siem Reap, Cambodia, from February, 2009, to December, 2013. We identified patients with melioidosis using hospital microbiology records, and they were defined as patients in whom B pseudomallei had been isolated from at least one clinical specimen. We stratified patients by age, sex, diabetic status, primary occupation, clinical presentation, and organ involvement. Localised infection was defined as a single anatomical focus of infection, whereas disseminated infection was defined as two or more discrete anatomical areas of infection or B pseudomallei bacteraemia. We used the GPS coordinates of patients' home villages or communes to extract weather data from the nearest reliable weather station to the patient's home, using a global climate data repository that sources data from local meteorological stations and has been used in other epidemiological studies.16,23–25 We extracted data from 15 weather stations in Laos and two weather stations in Cambodia. If weather data from the nearest weather station were unavailable at the time of patient presentation, we extracted data from the next nearest weather station. We collected weather data for the 4 weeks leading up to a patient's presentation to the hospital, including minimum, maximum, and mean temperature, precipitation, mean humidity, visibility, wind speed, and maximum sustained wind speed. Meteorological visibility refers to the transparency of the air, and is subject to humidity and air-pollution levels. This study was approved by the Oxford Tropical Ethics Committee, the Lao National Ethics Committee for Health Research, and the Angkor Hospital for Children Institutional Review Board. We aggregated counts of patients with melioidosis by week and by month to allow sufficient resolution to generate a conditional estimate of the incubation period, and sufficient sample sizes to detect annual trends. We averaged or summed weather variables over corresponding units of time. To identify factors associated with melioidosis cases, we performed univariable and multivariable negative binomial regressions by week and by month, with weather variables as the independent variables and the number of cases as the outcome. We used weather data from the site with the most patients for regression analyses, using aggregated country-wide patient counts. For all other analyses, we used weather data from patient home villages. Variables that were significantly associated with melioidosis cases in univariable regression analyses were examined simultaneously in multivariable regression models. We performed a correlation analysis among significant variables to rule out multicollinearity. We also calculated the odds of melioidosis for different subpopulations, by stratification factor, during months of low, medium, and high humidity, and months of low, medium, and high wind speed, to identify subpopulations that were especially susceptible to fluctuations in these variables. We selected cutoffs for humidity and maximum wind speed to obtain roughly equal case numbers in the low and high categories for each variable. We fitted negative binomial regression models to examine hypothetical timing of B pseudomallei
exposure.We estimated the date of exposure by subtracting a range of hypothetical incubation periods from the date of case presentation.By examining the likelihood scores corresponding with models fitted with different assumed incubation periods, we generated a conditional estimate of the melioidosis incubation period.We performed all statistical analyses using R.For regressions, we used the glm.nb function in the MASS package, with a log-link function.We performed a likelihood-ratio test to determine that the negative binomial regression model was required instead of a standard Poisson model due to overdispersion in the count data.We examined residuals and found that errors were not skewed across seasons.The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report.The corresponding author had full access to all the data in the study and had final responsibility for the decision to submit for publication.There were 870 patients diagnosed with melioidosis in Laos, and 173 patients diagnosed in Cambodia during the study periods.Of the 870 patients in the Lao cohort, 357 of 869 were female, 330 of 840 had a medical history of diabetes, and 254 of 564 adults listed rice farming as their primary occupation.Median blood glucose on admission was 226 mg/dL for patients with a history of diabetes and 98 mg/dL for those with no history of diabetes.480 of 866 patients presented with disseminated infection as opposed to localised infection, and the most commonly infected sites were the lung, skin and soft tissue, parotid gland, bone or joint, lymph node, spleen, liver, and urinary tract.The Cambodian cohort was comprised entirely of children, with a median age of 5·7 years, and 74 were girls.The majority of Cambodian patients presented with localised infection as opposed to disseminated infection, especially of the skin and soft tissue, parotid gland, and lungs, and their clinical features have been previously reported in detail.22,Weather data from the nearest weather station were unavailable for 6% of patients, for whom data were extracted from the next nearest.Vientiane in Laos and Siem Reap in Cambodia had the most patients, so these sites were used in the country-wide regression analyses.Melioidosis cases peaked annually during the rainy season in both Laos and Cambodia, and declined during the dry season.In the univariable regression analyses, melioidosis admissions in Laos were significantly associated with humidity, precipitation, low minimum temperature, mean temperature, low visibility, maximum wind speed, total days with rain, and total days with thunderstorms.When
Methods: We examined the records of patients diagnosed with melioidosis at the Microbiology Laboratory of Mahosot Hospital in Vientiane, Laos, between October, 1999, and August, 2015, and all patients with culture-confirmed melioidosis presenting to the Angkor Hospital for Children in Siem Reap, Cambodia, between February, 2009, and December, 2013.We also examined local temperature, humidity, precipitation, visibility, and wind data for the corresponding time periods.We estimated the B pseudomallei incubation period by examining profile likelihoods for hypothetical exposure-to-presentation delays.Findings: 870 patients were diagnosed with melioidosis in Laos and 173 patients were diagnosed with melioidosis in Cambodia during the study periods.
these variables were examined simultaneously in a multivariable regression model, only humidity, low visibility, and maximum wind speed remained significantly associated with melioidosis admissions.In univariable regression analyses, melioidosis admissions in Cambodia were significantly associated with humidity, minimum temperature, maximum wind speed, total days with rain, and total days with thunderstorms.In a multivariable regression model examining humidity and total days with rain, both variables remained significantly associated with cases.When examined in a multivariable model with humidity, maximum wind speed also remained significantly associated with admissions.When we ran multivariable models with the Lao cohort stratified by age, humidity, low visibility, and maximum wind speed were independent predictors of adult admissions, whereas humidity, maximum wind speed, and low minimum temperature were independent predictors of paediatric admissions.A correlation analysis of significant climate variables revealed a moderate positive correlation between weekly humidity and precipitation, and weak or negligible correlations among other variables.The large sample size of the Lao cohort enabled comparison of patients by demographic and clinical features to determine which subpopulations were most strongly affected by seasonal variables.Specifically, we examined the odds of infection during months of low, medium, and high monthly humidity, by each stratification factor.Children were almost three times more likely than were adults to become infected during months of high humidity, relative to low humidity months.The odds of infection during months of high humidity were not significantly different between women and men, or rice farmers and other professions.Patients with blood glucose concentrations of 150mg/dL or more at presentation were not at significantly greater odds of presenting with melioidosis during humid months than those with blood glucose levels lower than 150 mg/dL.Patients with a medical history of diabetes were less likely to present during high humidity months than patients without a medical history of diabetes.Localised infections were more likely to occur than were disseminated infections during months of high humidity.Lung and skin and soft tissue infections were also less common in months of high humidity.We examined these same subpopulations with respect to low, medium, and high maximum wind speeds, and identified a near-significant increased odds of lung infection in months of high wind speed.The odds of developing a disseminated infection were significantly higher than those of a localised infection during months of high wind speed.Plots of the mean humidity and visibility 0–4 weeks before melioidosis presentation in Laos showed a clear pattern of increased humidity and decreased visibility in the 2 weeks before presentation.We incorporated hypothetical exposure-to-presentation delays in our regression model to link melioidosis cases in Laos with weekly humidity, and compared the goodness of fit of these models for delays of 0–7 weeks to generate an estimate of the melioidosis incubation period that was conditional on the validity of the association with humidity.We selected humidity because it was the strongest predictor of cases in regression analyses.The likelihood scores for the regression model incorporating incubation periods of 0–4 weeks indicate a maximum likelihood estimate of the melioidosis incubation period of 1 week.Elucidating the climatic factors that 
contribute to the marked seasonal incidence of melioidosis is crucial to understanding the epidemiology of this deadly disease, and to developing evidence-based preventive strategies for communities living in endemic areas.In this study, we examined climatic variables to identify drivers of infection in two highly endemic low-income countries, Laos and Cambodia.We uncovered several new features of melioidosis epidemiology, which have potential implications for prevention.Several studies have examined the climatic predictors of melioidosis incidence,6,8–10 but to our knowledge this is the first study to do so at high spatial and temporal resolution in Laos and Cambodia.Additionally, we believe this study is the first to examine the association between climatic variables and melioidosis for specific demographic and clinical subpopulations, including children.There is precedent for climatic factors driving the incidence of infectious disease, either by affecting host and vector dynamics or by directly affecting the pathogen.Notable examples of seasonal infections include influenza, malaria, bacterial meningitis, and penicilliosis.16,27–29,Our findings are consistent with proposed mechanisms of B pseudomallei spillover from the environment into human populations.We suspect that humidity, rain, and high-speed winds play an important part in exposing susceptible populations to B pseudomallei, partly by promoting the formation and spread of contaminated aerosols, or by facilitating growth of B pseudomallei at the soil surface.The strong association between melioidosis incidence and low visibility in Laos is consistent with this hypothesis, because high aerial water content would be expected to impair visibility.Our findings are consistent with other studies examining climatic drivers of melioidosis in Australia, Singapore, and Taiwan.6,8,9,In these settings, melioidosis case clusters were identified after periods of heavy rainfall and high humidity, and in Taiwan, wind speed also seemed to play a part, supporting the hypothesis that severe weather events drive exposure.It is alarming that children are three-times more likely to become infected during months of high humidity, compared with adults, and this finding might reflect increased environmental exposure for children or immunological naivety, since B pseudomallei seroprevalence rates among children have been shown to increase with age.30,Ingestion of contaminated drinking water or swimming in contaminated bodies of water during these months could also account for this association.31,Alternatively, children could have a shorter incubation period and therefore present to the hospital sooner after exposure than adults.We were surprised to find that patients with a history of diabetes were less likely to present during months of high humidity than those without such a history, because diabetes is a known risk factor for melioidosis
Melioidosis incidence is highly seasonal.Melioidosis cases were significantly associated with humidity (p<0.0001), low visibility (p<0.0001), and maximum wind speeds (p<0.0001) in Laos, and humidity (p=0.010), rainy days (p=0.015), and maximum wind speed (p=0.0070) in Cambodia.Compared with adults, children were at significantly higher odds of infection during highly humid months (odds ratio 2.79, 95% CI 1.83–4.26).The maximum likelihood estimate of the incubation period was 1 week (95% CI 0–2).
and one might suspect that these patients would be particularly susceptible during months with high environmental exposure.32,33,One possible explanation would be that patients with known diabetes had better controlled disease than did those with potentially undiagnosed diabetes; however, significantly higher average blood glucose levels on admission among those with a history of diabetes seem to suggest otherwise.A more intriguing but perhaps less likely explanation for this paradoxical finding is that patients with a history of diabetes might have become infected with B pseudomallei in the past.For a subset of these patients, admission to the hospital could have represented reactivation of latent infection rather than primary infection, which would not be expected to follow a seasonal pattern.Another explanation could be that poor glycaemic control lowers the required inoculum to cause disease, or increases the risk of transformation of B pseudomallei infection to overt disease, enabling year-round exposures that lead to disease and therefore a non-seasonal pattern.Finally, although diabetes is not thought of as a seasonal illness, several reports from diverse climates have shown that haemoglobin A1c concentrations increase during cold and dark months.34–37,Although such a study has not been done in Laos or Cambodia, theoretical seasonal fluctuations in HbA1c could also help to explain our finding that patients with a history of diabetes were less likely to present than were patients without a history during the months of presumed highest exposure.We found a non-significant association between lung infection and high wind speed, which is consistent with the hypothesis that wind contributes to inhalational exposure to B pseudomallei.We suspect that the OR for lung infection during high wind speed months did not reach significance because of the relatively low sample sizes.The finding that disseminated infection was more likely to occur during months of high wind speed could reflect higher rates of dissemination from the lung, as opposed to other foci.It was somewhat surprising that lung infections were less likely to occur than non-lung infections in very humid months.This result, in conjunction with the finding that localised infections were more likely than were disseminated infections during months of high humidity, might suggest that high humidity facilitates non-airborne exposure to B pseudomallei, perhaps by facilitating growth or survival in topsoil or air-exposed surfaces.Our estimate of the melioidosis incubation period is consistent with previous estimates,6,9 and underscores the speed with which melioidosis can progress following environmental exposure.Our study is limited in that it did not include data on all known risk factors for melioidosis, or exposure history.These data would have helped rule out potential confounders and provide an additional means of testing hypotheses about how patients become exposed to B pseudomallei.The rainy season is the time when people are most likely to be performing agricultural work, and this might have confounded some of our results.Although only 45% of adults in Laos reported rice farming as their primary occupation, the true proportion of study participants who farm rice was probably higher because many adults in Laos farm rice as a secondary occupation or hobby.This might explain our finding that rice farmers were not at increased risk of infection during humid or windy months.However, our finding that climate factors predicted case 
numbers at the weekly level suggests that these associations are real, since agricultural work would not be expected to vary by the week within the rainy season, or to increase during humid weeks.Another limitation is the fact that our study probably only included a fraction of the total population affected by melioidosis in Laos and Cambodia, because of limited access to health care.Additionally, our analysis did not account for potential fluctuations in population size over time and across regions, which could have affected the number of individuals at risk of infection at any given time.This study benefited from a large sample size and detailed clinical and climatic data over a 16-year period in Laos and a 4-year period in Cambodia, enabling precise quantification of associations between melioidosis incidence and climatic variables.The study was also strengthened by inclusion of data from multiple study sites, and from multiple demographic groups, including children.Our findings should help guide prevention strategies in these endemic settings.In particular, during highly humid or windy weeks, it is reasonable to presume that the risk of exposure is significantly increased.Children seem to be at especially high risk on these occasions.Our results suggest that regular screening and treatment to control diabetes in adults and children, and avoidance of soil contact during humid and windy weeks, might reduce incidence.However, these approaches would be difficult to achieve in low-income settings such as Laos and Cambodia.Improved awareness of the risk of infection among at-risk communities and clinicians, especially during weeks of high humidity or wind speed, could reduce mortality due to melioidosis in these resource-limited settings.Further study will be needed to identify the specific human activities, and the specific ecological or environmental changes occurring in the soil, water, and air, that drive exposure to B pseudomallei, as well as the potential impact of climate change.
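The weekly-level association referred to above can be illustrated with a simple count-regression sketch. This is not the authors' analysis; the data frame, the column names (cases, humidity, max_wind, rainy_days) and the choice of a negative binomial model are assumptions made for illustration only, with simulated values standing in for real surveillance and climate data.

# Hedged sketch: relate weekly melioidosis case counts to weekly climate summaries.
# 'weekly' is a hypothetical data frame; replace the simulated columns with real data.
library(MASS)  # provides glm.nb for negative binomial regression

set.seed(1)
weekly <- data.frame(
  humidity   = runif(520, 50, 95),   # assumed mean relative humidity (%)
  max_wind   = runif(520, 5, 60),    # assumed maximum wind speed (km/h)
  rainy_days = rpois(520, 3)         # assumed rainy days per week
)
# Simulated counts so the example runs end to end.
weekly$cases <- rnbinom(520, mu = exp(-3 + 0.04 * weekly$humidity +
                                       0.01 * weekly$max_wind), size = 2)

# Negative binomial regression of weekly counts on climate covariates.
fit <- glm.nb(cases ~ humidity + max_wind + rainy_days, data = weekly)
summary(fit)

# Rate ratios (exponentiated coefficients) with 95% confidence intervals.
exp(cbind(RR = coef(fit), confint(fit)))

A regression of this general form, fitted at the weekly rather than monthly scale, is one way to check that climate–incidence associations are not simply an artifact of broad seasonal agricultural activity.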
We aimed to identify the climatic drivers of infection and to shed light on modes of transmission and potential preventive strategies.Lung and disseminated infections were more common during windy months.Our findings highlight the risks of infection during highly humid and windy conditions, and suggest a need for increased awareness among at-risk individuals, such as children.
the pertinent tumor–immune interface. One study reported 92% concordance between biopsy and resection specimens, while another found a much lower concordance rate, with underestimations in small biopsies. The latter study used SP142, a slightly less sensitive antibody, and also evaluated immune cells. Because in practice more biopsies than resection specimens are stained for PD-L1, some inaccuracy in PD-L1 testing due to sampling of heterogeneous tumors is unavoidable. The maximally attainable accuracy of PD-L1 IHC in daily practice is therefore below 100%, if only because of heterogeneity in PD-L1 expression. The clinically used thresholds in fact impose categories on a continuum, and the readings requested of pathologists will inherently show some variation in interpretation. The maximally attainable agreement between pathologists from around the world in reading PD-L1 may be clarified by data from the upcoming 'Blueprint 2' study. Reference images are likely to be helpful in this new field of histopathology. Cytology was not used in the NSCLC immunotherapy phase III trials and is therefore not clinically validated. A recent, methodologically careful study showed comparable results for cytological and histological PD-L1 staining. Although promising, more studies are needed, and care should be taken with alcohol-fixed samples, which do not necessarily give the same results with PD-L1 IHC protocols validated for formalin-fixed samples. Overall, when PD-L1 IHC is performed as a predictive biomarker, a clinically validated assay should be used, the pathologist should have reference images for the various PD-L1 thresholds available in daily practice, and the laboratory should regularly and successfully participate in external quality assessment schemes.
In summary, the PD-L1 biomarker discussed here highlights the importance of understanding the practice of IHC, which, used appropriately, can be turned to the patient's advantage. From a methodologic point of view, the currently available literature on PD-L1 IHC has not shown that the different assays are comparable. The validation route for laboratory-developed tests and commercial tests is the same: challenging and complex. Executing this process along proper methodologic lines is needed to ensure that patients receive the most accurate and representative test results.
Periostin is a 90 kD matricellular protein with well-described roles in osteology, tissue repair, oncology, cardiovascular and respiratory systems, as well as inflammatory processes .Periostin interacts with signaling pathways and controls the expression of downstream genes that regulate cellular interactions within the extracellular matrix .Periostin has been implicated as a potential systemic biomarker for a number of diseases, including asthma .Asthma is a complex heterogeneous disorder, estimated to affect up to 334 million people worldwide .Type 2 asthma is a subtype characterized by release of the cytokines interleukin-4, IL-5 and IL-13 .These cytokines are thought to play a central role in the inflammatory process and symptoms in asthma and are considered to be important targets for the management of patients with severe uncontrolled disease .IL-13 is a key mediator of Type 2 inflammation and, as such, is being explored as a target for new treatment options .Periostin is expressed in lung fibroblasts and bronchial epithelial cells and has recently been identified as a candidate biomarker downstream of IL-13 signaling; its expression is upregulated in patients with Type 2-driven asthma .Periostin is associated with increased eosinophilic airway inflammation, increased fibrosis and mucus composition , and can be detected in peripheral blood .The Roche Elecsys® Periostin immunoassay is an electrochemiluminescence assay developed for the in vitro quantitative determination of periostin in human serum.A clinical trial version of the assay has been used to stratify patients with moderate-to-severe asthma into ‘periostin-low’ and ‘periostin-high’ groups .The 50 ng/mL cut-off was derived in the lebrikizumab MILLY study as a cut-off to predict benefit from the anti-IL-13 blocker lebrikizumab , and was pre-specified in subsequent trials with lebrikizumab.The Elecsys® Periostin immunoassay was developed to provide an automated assay platform to help ensure that reproducible measurements of serum periostin are obtained across different laboratories worldwide.The clinical trial version of the assay has previously demonstrated robust technical performance and precision .Here, we characterize performance of the final Elecsys® Periostin immunoassay.This study included multi-lot analysis of repeatability, intermediate precision and reproducibility in a multicenter evaluation of the assay.The Elecsys® Periostin assay is a fully automated immunoassay operated on the e601 module of the cobas 6000 system equipped with software version 05–01 or higher.The assay employs two monoclonal antibodies that target different epitopes of periostin in a sandwich immunoassay configuration .Detection is based on electrochemiluminescence technology .The assay has a total turnaround time of 18 min and requires serum for testing.The measuring range spans 10–160 ng/mL .Calibration is performed using the Periostin CalSet®, consisting of two lyophilized calibrators at approximately 0 and 50 ng/mL, and a master curve provided via the reagent barcode.As there is currently no international standard or established in vitro diagnostics assay for periostin available, the value assignment was based on weight using recombinant human periostin.For the described studies, four reagent lots of the final assay were available for testing.For the method comparison experiment one lot of the clinical trial version was used.Limit of blank, limit of detection, and limit of quantification were determined according to Clinical and Laboratory 
Standards Institute document EP17-A2 .LoB was determined over 3 days.Analyte-free equine serum was used for assessing LoB.For LoD, five different human serum samples with low periostin concentrations were each diluted with equine serum to cover a range from the LoB to a level approximately 3-fold higher than the LoB.Samples were analyzed in duplicate using one instrument and Lot A, on two runs per day over 3 days.For LoQ, eight different human serum samples were diluted with equine serum to yield samples covering the concentration range from LoB to approximately eight times LoD.Samples and dilutions were tested over 5 days on one instrument using Lot A.As there is no reference method or material available, the LoQ was estimated as the concentration with an intermediate precision of 20%.Linearity was assessed according to CLSI document EP06-A using Reagent Lot A. Briefly, two human serum samples with periostin concentrations just above the anticipated measuring range and one sample from a patient with asthma spiked with recombinant periostin were used.For each sample, a dilution series was prepared to below the measuring range using equine serum.Periostin in each diluted sample was measured in triplicate on one instrument.The measured values were plotted on the y-axis against the expected values on the x-axis.Regression of the measured values against the expected values was performed using a first, second, or third order polynomial function.Spiking experiments were performed to examine various substances for potential interference with the assay.A serum sample with a periostin concentration in the range of ~ 50 ng/mL was spiked with various interferents and drugs.Common interferents tested included serum albumin, bilirubin, biotin, hemoglobin, immunoglobuline A, IgG, IgM, Intralipid® and rheumatoid factor.In addition, 34 common general drugs and asthma drugs were tested.Recovery of periostin compared with a non-spiked reference sample was conducted for each compound.Each sample was measured in triplicate or 5-fold on one instrument, using Lot A.Serum was obtained from 13 healthy volunteers under informed consent.Recovery of periostin in serum under different storage conditions and handling conditions was evaluated using measurements in fresh serum as reference.For the lowest concentration in this experiment, one serum sample was diluted with multi-assay diluents; to achieve the highest concentration, one sample was spiked with recombinant periostin.All measurements were performed with one instrument, using Lot A.A method comparison between the final and the clinical trial version of the assay was performed according to CLSI guideline EP09-A3 .In total, 129 samples were
Objective The multifunctional cytokine IL-13 is thought to play a central role in Type 2 inflammation in asthma.Serum periostin has been explored as a candidate biomarker for evaluating IL-13 activity in the airway.We describe the technical performance characteristics of a novel, fully automated immunoassay for the determination of periostin in serum.
to keep the specificity of the assay.Furthermore, standardization procedures were implemented to ensure a comparable readout with both assay versions.The method comparison experiment indicates that there are no relevant systematic differences between both versions of the assay.For the final version of the Elecsys® Periostin immunoassay, the measured values for LoB, LoD and LoQ outperformed the pre-specified values.Using the clinical trial version of the assay, the lowest sample observed in MILLY , LUTE and VERSE studies was measured at approximately 23 ng/mL.Fingleton et al. reported values from 15 to 165 ng/mL in 386 adults with symptomatic airflow obstruction using the same assay .In adults without asthma or chronic obstructive pulmonary disease, periostin values ranged from 28.1 ng/mL to 136.4 ng/mL .Periostin values in several hundreds of commercial serum samples from presumably healthy volunteers were always above 10 ng/mL.Hence, sensitivity of the assay and the anticipated lower end of the measuring range of 10 ng/mL are sufficient to detect periostin in human serum samples.Samples reading above the measuring range of 160 ng/mL can be diluted with Elecsys® multi-assay diluent, based on analyte-free equine serum.Our study showed satisfactory linearity using this diluent.No significant differences in the recovered periostin concentrations were observed in the presence of a broad range of potentially interfering substances and drugs.There is a limitation for N-acetylcysteine, which interferes at very high testing concentrations recommended by the CLSI guideline EP7-A2 , leading to an over-recovery of periostin beyond the acceptance limit.However, at the tested concentration of 15 μg/mL, which is at least 3-fold higher than the maximum pharmacologic peak concentration observed after standard oral dosage of 600 mg/day , recovery was within the acceptance limits.We speculate that reducing agents such as NAC induce the dissociation of presumably disulfide-linked periostin dimers, generating a higher concentration of immunologic active species.Addition of dithiothreitol, a commonly used reducing agent, to serum samples did induce a similar effect.Periostin is stable in serum and suitable for use in routine clinical practice.However, serum must be allowed to clot, centrifuged and separated from the clot within 4 h after collection.Extended storage of clotted, non-centrifuged serum can lead to an over-recovery of periostin.Sufficient precision and accuracy are required for Periostin testing to ensure that patients are correctly classified as “low” or “high”, given the relatively small distribution and the 50 ng/mL cut-off used for patient stratification in lebrikizumab studies.An external reproducibility study performed at three participating laboratories using three different reagent lots confirmed good assay performance under field conditions.Two of the samples were chosen such that their concentrations were close to the cut-off used in past studies.However, the observed CV of all samples tested is essentially constant, indicating very little concentration dependency of assay precision and reproducibility across the measuring range.In conclusion, the results of the present study demonstrate that the Elecsys® Periostin immunoassay is accurate, precise and shows no significant interferences in the range of values important for routine clinical use.The following are the supplementary data related to this article.Limit of Quantification of the Elecsys® Periostin immunoassay determined across 
eight serum samples at 20% CV.Supplementary data to this article can be found online at http://dx.doi.org/10.1016/j.clinbiochem.2016.10.002.The Roche Elecsys® Periostin immunoassay is not yet approved for clinical use.Elecsys® is a trademark of Roche.SP, RK, GK, and RPL are all employees of Roche Diagnostics.RHC has acted as consultant for Roche Diagnostics.SAJ and REO have nothing to declare.
Design and methods: Limit of blank [LoB], limit of detection [LoD] and limit of quantitation [LoQ], linearity, and precision and reproducibility across sites and lots were evaluated according to Clinical and Laboratory Standards Institute guidelines. Interferences and sample stability were also investigated. Results: The pre-specified values for LoB (2 ng/mL), LoD (4 ng/mL) and LoQ (10 ng/mL) were met. The assay was linear throughout the measuring range (10–160 ng/mL), with recoveries within ± 10% of target at concentrations > 30 ng/mL and within ± 3 ng/mL at concentrations ≤ 30 ng/mL. Recovered periostin concentrations were also within ± 10% of target in the presence of 43 potentially interfering substances and drugs. Samples were stable across various storage conditions and durations (24 h at room temperature, 7 days at 4 °C, 12 weeks at − 20 °C, and 3 freeze/thaw cycles). The final assay correlates with the assay version used in previous clinical trials (Pearson's r = 0.998, bias at 50 ng/mL = 1.2%). Conclusion: The performance evaluation of the Elecsys® Periostin immunoassay, including a multicenter precision analysis, demonstrated that the assay is suitable for measuring serum periostin at clinically important concentrations around 50 ng/mL.
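The LoQ criterion described earlier (the lowest concentration measured with an intermediate precision of 20% CV) can be illustrated with a short sketch. The serum panel, CV values and the simple interpolation below are illustrative assumptions only; they are not the CLSI EP17-A2 procedure as actually executed, nor the evaluation data for this assay.

# Hedged sketch: estimate LoQ as the concentration at which intermediate
# precision reaches 20% CV, using a hypothetical panel of diluted sera.
panel <- data.frame(
  conc_ng_ml = c(2.5, 4, 6, 8, 10, 14, 20, 30),    # assumed mean concentrations
  cv_percent = c(38, 27, 24, 22, 19.5, 15, 11, 8)  # assumed intermediate precision (%CV)
)

# Fit a simple power-type precision profile (CV decreasing with concentration)
# and interpolate the concentration at which the fitted CV equals 20%.
fit <- lm(log(cv_percent) ~ log(conc_ng_ml), data = panel)
loq <- exp((log(20) - coef(fit)[1]) / coef(fit)[2])
cat(sprintf("Estimated LoQ at 20%% CV: %.1f ng/mL\n", loq))

# Plot the precision profile with the 20% CV criterion for visual checking.
plot(panel$conc_ng_ml, panel$cv_percent, log = "xy",
     xlab = "Periostin (ng/mL)", ylab = "Intermediate precision (%CV)")
curve(exp(predict(fit, newdata = data.frame(conc_ng_ml = x))), add = TRUE)
abline(h = 20, lty = 2)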
Air pollution monitoring requires high-precision instruments, which are expensive and often limit the number of locations observed. In addition to financial constraints, the time constraints of monitoring multiple sites with a limited number of monitors often prevent long-term continuous monitoring, which is required to accurately estimate long-term air pollution at a given location. To address these challenges, researchers often deploy portable monitors for short periods to observe air pollution at many different locations. The goal of portable monitor use is to obtain spatially and temporally diverse data. The data collected from these short-term monitoring campaigns have been used to estimate long-term concentrations and to develop predictive models of air pollution. Long-term estimates are necessary because these are the values most often used in epidemiological studies of health effects from air pollution. Aside from calculating the raw average of the short-term samples and treating it as the long-term concentration estimate, multiplicative temporal adjustments are often employed to predict long-term exposure. Air pollutants are often log-normally distributed, with many low-concentration observations and fewer high-concentration observations (see Fig. 1). The log median-scaled adjustment has been demonstrated to improve estimates compared with other multiplicative adjustments. Chastko and Adams simulated mobile monitoring campaigns with air pollution observations from three different cities and eight pollutants. Mobile pollution samples were adjusted using multiple temporal adjustment approaches to predict long-term concentrations for each pollutant. This analysis revealed that the log median-scaled adjustment was more accurate than all other temporal adjustments included in the study. For full details on model validation, see Chastko and Adams. The method-fusion approach can be applied to any mobile air pollution monitoring dataset and can produce more accurate long-term estimates than existing temporal adjustments. These estimates are more accurate because the method-fusion approach controls for the inflation of the central tendency produced by the log-normal distributions often present in air pollution data. The following example demonstrates how to apply the log median-scaled adjustment to a sample of air pollution data. This workflow is presented in the programming language R.
For this example, mobile data will be represented by a subset of data from a stationary air pollution monitor in Paris, France. To access the various functions used in this demonstration, the required R libraries are first loaded. To access the sample data, a CSV file containing hourly nitrogen dioxide (NO2) observations from 2016 in Paris, France is loaded into R. These data are hosted on GitHub and were originally obtained from AirParif. The loading step assigns the data to a variable named air.pollution.data. A sample of the air pollution data is provided in Table 1; it is a collection of air pollution observations from two air pollution monitors, with a time signature for each observation. With the data loaded into R, a sample of air pollution data can be taken from one of the monitors to represent the mobile data. In this example, 24 h of air pollution observations will be used. Table 2 displays the sample data and the temporally corresponding reference data that will be used to adjust each observation in the mobile sample. Station 1 is used as the reference station and Station 2 as the sample station. Additionally, the annual median NO2 concentration is calculated for the reference monitor. Fig. 2 displays the time slice of mobile data in relation to the entire time series of NO2 values observed at Station 2. Now that all the required data have been selected, we can apply the temporal adjustment. The R implementation of the adjustment is the LogMedianScaled function (sketched below). This function requires a vector containing the reference data, a vector containing the mobile data, and the annual median pollutant concentration calculated from the entire reference dataset. The LogMedianScaled function returns a single value representing the annual average air pollution concentration estimated from the mobile sample, obtained by averaging each adjusted value in the mobile sample. Table 3 shows the adjusted values calculated by the temporal adjustment and the raw input values used to adjust the data. In this example, the raw data produce a long-term estimate of 42.62 ppb, the LogMedianScaled adjustment produces an annual NO2 estimate of 37.15 ppb, and the actual long-term value was 30.6 ppb. By applying the log median-scaled adjustment, the estimation error was reduced from 12.02 ppb to 6.55 ppb using only a 24-h sample to estimate the annual average. To visualize the accuracy of the temporally adjusted estimate, we can plot the observed average annual NO2 value, calculated from the stationary data, alongside the raw sample average, the temporally adjusted average and the sample data. Fig. 3 shows that the temporally adjusted average is more accurate than the raw sample's average.
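The R code referenced in the workflow above did not survive into this text, so the following is a hedged reconstruction rather than the published implementation. It assumes the adjustment shifts each mobile observation on the log scale by the difference between the reference monitor's annual median and the temporally matched reference value, then back-transforms and averages; the exact form used by Chastko and Adams may differ in detail. The vectors reference.sample, mobile.sample and annual.median below are placeholder values standing in for the AirParif data described above.

# Hedged reconstruction of a log median-scaled temporal adjustment.
LogMedianScaled <- function(reference, mobile, annual_median) {
  stopifnot(length(reference) == length(mobile))
  # Shift each mobile value on the log scale by (log annual median - log matched reference),
  # back-transform, then average to obtain the long-term estimate.
  adjusted <- exp(log(mobile) + (log(annual_median) - log(reference)))
  mean(adjusted, na.rm = TRUE)
}

# Usage with hypothetical 24-hour vectors (replace with values taken from
# air.pollution.data as described in the workflow above):
set.seed(42)
reference.sample <- runif(24, 20, 60)  # Station 1, temporally matched hours (ppb)
mobile.sample    <- runif(24, 25, 70)  # Station 2, the 24-h "mobile" sample (ppb)
annual.median    <- 31                 # annual median NO2 at Station 1 (ppb)

LogMedianScaled(reference.sample, mobile.sample, annual.median)
mean(mobile.sample)  # raw, unadjusted estimate for comparison

The same call pattern applies to the real AirParif data: the raw 24-h mean and the adjusted estimate can then be compared against the observed annual average as in Fig. 3.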
Mobile air pollution monitoring is an effective means of collecting spatially and temporally diverse air pollution samples. These observations are often used to predict long-term air pollution concentrations using temporal adjustments based on the time series of a fixed-location monitor. Temporal adjustments are required because the time series is incomplete at each spatial location. We describe a method-fusion temporal adjustment that has been demonstrated to improve the accuracy of long-term estimates from incomplete time-series data. Our adjustment approach combines a log transformation, which brings the air pollution samples to a near-normal distribution, with the long-term median of a reference monitor, which mitigates the estimate inflation created by outliers in the data. We demonstrate the approach with hourly nitrogen dioxide observations from Paris, France in 2016. Method-fusion benefits: log transformations control for the estimate inflation created by log-normally distributed data; adjusting data with the long-term median, rather than the mean, further controls for estimate inflation; and the approach produces more accurate long-term estimates than other adjustments, independent of the pollutant being estimated.
were saved into a separate folder for quantification.Systematic random sampling and unbiased stereological methods were used for quantification as adopted from previously published studies .Six raw tiles were chosen from area of interest for quantification, the first raw tile was always included, and 5 other raw tiles were selected at regular intervals according to the number of tiles collected for the section.Number of cells and staining intensity were accounted for as follows.A counting frame was superimposed onto the raw tile, the right and upper boundary of the tile were “forbidden lines,” the left and bottom boundary were “acceptance line.,Number of cells was counted only if they either lay entirely within the counting frame or cross an acceptance line without also crossing a forbidden line .The staining intensity was estimated with Image J.The data obtained were added together to form an estimate of the number of POMC-, MOR-, and ENK-positive cells or fibres.This process was repeated for each animal.For the purpose of presentation, images were also captured on a Leica DMIRE fluorescent microscope with a TCS SP1 confocal head and a 40× oil immersion objective.All individual data points were represented as mean ± SEM.EMG data were normally distributed.Comparisons were made between the baseline values and those obtained following intra-PAG microinjections within each individual animal using repeated-measures one-way analysis of variance with Bonferroni posttests.Statistical comparisons between the age groups and drugs were made using 2-way ANOVA with Bonferroni posttests.Statistical comparison between the age groups for the expression of various endogenous opioid targets in TaqMan RT-PCR and immunohistochemical experiments were made by one-way ANOVA with Bonferroni posttests.Baseline EMG activity in P10, P21, and adult rats were not significantly different.As neonatal animals have significantly lower mechanical thresholds than older animals, different ranges of stimuli were used in the different age groups.Fig. 
2A and B shows typical raw EMG traces evoked by mechanical stimulation of the plantar hind paw using vFh in each age group before and after spinal administration of DAMGO and CTOP.Direct application of DAMGO onto the L4-5 spinal cord of lightly anaesthetised rats produced a reduction in spinal reflex excitability .Significant reductions in spinal excitability were observed at all age groups between saline- and DAMGO-treated animals.The degree of inhibition observed was significantly greater in the P10 rats compared to adults, illustrating the greater potency of opioidergic ligands in younger animals.The MOR antagonist CTOP applied spinally had no effect on reflex excitability in all ages when compared to saline or to predrug responses .Changes in reflex excitability were also accompanied by parallel changes in mechanical threshold.Mechanical thresholds significantly increased after DAMGO was applied spinally .Comparisons between the different drug treatment groups showed that DAMGO significantly increased thresholds compared to saline in P10 and P21 rats, but not in adults.As with changes in EMG excitability, comparisons between the age groups showed that DAMGO-mediated increases in threshold were larger in P10 and P21 rats when compared to DAMGO-treated adults.Spinal application of CTOP had no effect on mechanical threshold at any age when compared to saline-treated groups or predrug responses, but in P10 and P21 animals, post-CTOP responses were significantly different from post-DAMGO responses.Previously, we have shown that DAMGO applied to the RVM is pronociceptive in P21 rats whilst being analgesic in adults .The PAG is the major source of afferent input to the RVM and we were therefore interested in investigating whether this excitatory effect of DAMGO was also present in the immature PAG.Fig. 
3A and B shows typical raw EMG traces evoked by mechanical stimulation of the plantar hind paw using vFh in each age group before and after intra-PAG microinjection of DAMGO and CTOP, respectively.Administration of DAMGO and CTOP produced differential responses in the different age groups with respect to reflex excitability and mechanical thresholds .In P10 rats, neither DAMGO nor CTOP had any effect on spinal excitability when compared to saline-treated animals or predrug responses .DAMGO, however, significantly facilitated spinal excitability in P21 rats and this was accompanied by a significant reduction in mechanical thresholds when compared to saline-treated rats.The comparisons of the effects of intra-PAG DAMGO between P21 and both P10 and adult rats showed that DAMGO is pronociceptive in P21 rats only.Spinal reflex excitability after DAMGO injection into the PAG in P21 was significantly higher when compared to P10 and adult rats.Mechanical threshold was also significantly lower in P21 when compared to adults.CTOP significantly increased spinal excitability in adult rats.In P21 rats, reflex excitability was increased compared to predrug responses only, and this effect was small.No effect was observed in P10 when compared to saline-treated age-matched controls or predrug responses.The increase in adult spinal excitability was accompanied by a significant reduction in mechanical threshold when compared to saline or changes in threshold in P10 or P21 rats.These data show that MOR-mediated signalling in the PAG produces differential responses over postnatal development.At P21, but not P10, DAMGO in the PAG is pronociceptive, whereas it is inhibitory in adult rats.Furthermore, antagonising MOR has significant effects in adult rats, indicating the presence of a tonic opioidergic tone within the mature PAG.The switch from net facilitatory to tonic inhibitory output in opioidergic signalling within the PAG over postnatal development may reflect underlying changes in the expression of MOR and opioidergic peptides.To test this hypothesis, TaqMan RT-PCR was employed to assess whether expression of endogenous opioidergic targets are changed during the postnatal period.TaqMan RT-PCR for POMC, ENK, and MOR was performed in the PAG and
Microinjection of the μ-opioid receptor (MOR) agonist DAMGO (30 ng) into the PAG of Sprague-Dawley rats increased spinal excitability and lowered mechanical threshold to noxious stimuli in postnatal day (P)21 rats, but had inhibitory effects in adults and lacked efficacy in P10 pups. A tonic opioidergic tone within the PAG was revealed in adult rats by intra-PAG microinjection of CTOP (120 ng, MOR antagonist), which lowered mechanical thresholds and increased spinal reflex excitability.
anaesthetised rats, which are routinely used in studies investigating pain control in neonates .This experimental preparation removes confounding factors such as handling stress, novel testing environment, and perhaps most importantly, the effects of a protracted period of maternal separation, which are known to significantly affect pain responding in neonates via an opioid-dependent pathway .Our electrophysiological studies showed that within the PAG, tonic opioidergic control of neurotransmission exists in adult rats.Although a small increase in spinal reflex excitability was seen in P21 rats when CTOP was injected into the PAG, there were no significant differences between post-CTOP and postsaline responses in both reflex excitability and mechanical threshold.On the other hand, in adult rats, CTOP increased spinal nociceptive reflex excitability, and mechanical thresholds were also significantly decreased in adult rats after the drug was injected into the PAG.These results suggest that supraspinal MORs are tonically active in the healthy, mature pain-modulation circuit.It is also evident that, in our anatomical studies, the expression of POMC in the PAG increased as the animals aged.Together, these physiological and anatomical data suggest that opioid-related activity in the mature PAG is higher when compared to neonates.Neonatal injury is known to induce changes in the functioning of pain modulation in later life and increase opioidergic tone from the PAG .Neonatal pain experience therefore shapes pain responding in adulthood, and supraspinal opioidergic systems are central to this process.Application of DAMGO to the spinal cord produced profound analgesia in all ages tested.This effect of DAMGO was significantly greater in P10 pups when compared to adults.These findings suggest that opioid-mediated signalling in the spinal cord is stronger in the younger rats when compared to adults, which is in agreement with findings from previously published studies .Opioid receptors and related peptides are present on both primary sensory afferents and intrinsic neurons within the adult DH .These are often co-localised with calcitonin-gene related peptide and substance P .In neonatal animals, MOR binding sites in the spinal cord are equally concentrated in the superficial and deeper laminae, and as animals age, their expression becomes refined to the superficial DH .Calcium imaging in cultured dorsal root ganglia has shown that neonatal sensory neurones of all fibre types are sensitive to morphine, whereas only small-calibre fibres are sensitive in adult dorsal root ganglia .It is perhaps this functional reorganisation of MORs that leads to the differences in sensitivity and selectivity of opioidergic actions in the spinal cord between neonatal and adult rats.Our data failed to show a significant increase in mechanical threshold following DAMGO administration to the adult spinal cord.Baseline thresholds in adult rats are significantly higher than in either P21 or P10 rats , and a ceiling effect was observed whereby thresholds increased beyond the range of mechanical stimuli available to us.It should be noted that spinal excitability in adults was significantly decreased.The switch in PAG MOR-mediated activity may also reflect the developmental regulation of the underlying cellular expression and distribution of opioid receptors within the circuit, and/or epigenetic control of genes that code for downstream signalling proteins .It has been suggested that sufficient levels of endogenous opioids are 
needed for the maturation of pain pathways during a critical period between P21 and P28 .The endogenous opioid system has also previously been implicated to play a role in the regulation of cell growth and differentiation in the developing brain .One explanation for the increase in expression of opioidergic target may be that MOR-mediated activity is necessary for synaptogenesis within the descending pain modulatory circuit.The majority of previous studies investigating the ontogeny of the opioid peptides have relied upon SYBR Green or radioimmunoassay assays, but both have lower spatial resolution compared to immunohistochemistry, and lack the sensitivity of TaqMan RT-PCR used in this study .Both immunofluorescence and TaqMan RT-PCR illustrated that expression level of POMC was highest at P21, which agrees with our previously described opioid-dependent critical period in the maturation of top-down pain control .The expression of MOR shows a similar trend: increasing numbers of MOR immunopositive cells were observed in the DH as the animals aged, and highest level of expression was found at P21.The significant increase in POMC immunoreactivity around P21 coincides with the critical period of the development of supraspinal pain pathways , whereby the output of activating these pathways changes from being primarily facilitatory to increasingly inhibitory.Immunofluorescence data show that ENK significantly increased in the DH as the animals aged, but the TaqMan RT-PCR data show that mRNA copy numbers were the highest in P10 DH.Although one may expect the increase in mRNA expression to precede protein translation, this is an unlikely explanation for these discrepancies.Instead it should be remembered that tissue taken for TaqMan will not include the cell bodies of the cells whose ENK-positive fibres are so prominent in the adult DH.The origin of these fibres is currently unknown, yet it is likely that these fibres are primary afferent nerve terminals rather than terminals of descending fibres.These results altogether show that the endogenous opioid system undergoes crucial refinements in the descending pain modulatory pathway during postnatal development.The authors declare that they have no conflicts of interest with any of the work presented in this manuscript.
Significant opioid-dependent changes occur during the fourth postnatal week in supraspinal sites (rostroventral medulla [RVM], periaqueductal grey [PAG]) that are involved in the descending control of spinal excitability via the dorsal horn (DH).Spinal adminstration of DAMGO inhibited spinal excitability in all ages, yet the magnitude of this was greater in younger animals than in adults.We found that pro-opiomelanocortin peaked at P21 in the ventral PAG, and MOR increased significantly in the DH as the animals aged.These results illustrate that profound differences in the endogenous opioidergic signalling system occur throughout postnatal development.
and FAO on the 21st of February.This document provided detailed information about the evolution and expected impacts of the drought.Critical food security and the need of humanitarian assistance were repeatedly confirmed in the following months and persisted at the time of writing this manuscript.A number of improvements have been developed and are currently being tested.The cropland and rangeland masks have been updated using an optimal region-specific selection of recent available global and regional land cover products.The MetOp NDVI time series has been replaced by the Moderate Resolution Imaging Spectroradiometer NDVI filtered for optimal noise removal in near real-time applications, a processing that reduces the uncertainty and noise contamination in near real-time NDVI data.MODIS has also replaced SPOT-VEGETATION for the calculation of the phenological parameters.Copernicus biophysical products at 1 km spatial resolution and 10-day temporal frequency form the reprocessed SPOT-VEGETATION archive and Proba-V observations are now available and could be used as a back-up in the case of MODIS failure.Additional improvements have been already scheduled and will become available in 2018.In particular, the SPI1 indicator will be replaced by the standardized anomaly of the Global Water Requirement Satisfaction Index, a soil water balance model conceptually similar to that of Popov and Frere and aligned with the ASAP phenology.In addition, the ASAP system will integrate information coming from feedback provided by the food security analyst community and will extend the set of indicators provided to analysts and users as ancillary information.Among these indicators, the Heat Magnitude Day index will be implemented to monitor the effect of heat waves, shown to have a significant impact on agricultural production worldwide.Finally, we plan to replace the current ECMWF precipitation data with the new ECMWF reanalysis product ERA5.Compared to the current settings in which we use different models for the historical archive and the NRT data, ERA5 will deliver homogeneous data with 31 km spatial resolution.System updates are announced in the “news” section of the website and documented in the updates of the ASAP manual.Since 2001 MARS has developed agricultural monitoring methods for food security early warning outside Europe and has produced in house, or contributed together with other agencies, to a long series of food security assessments and bulletins.More recently, different global remote sensing based products and monthly expert analysis have been combined into a new early warning platform for the identification of agricultural production hotspot countries on a monthly basis.The hotspot country assessment, the ASAP unit level anomaly warnings and the weather anomaly data used, do directly feed into multi-agency global crop monitoring efforts such as the GEOGLAM Crop Monitor activities, and provide inputs to more detailed food security assessments such as those implementing the IPC/Cadre Harmonisé framework.The hotspot assessment is largely based on the described automated warning classification system, that monitors crops and rangelands status at a 10-day time step, globally and in near real-time.But it also includes analysis of other multi-resolution remote sensing indicators and of other information sources such as media monitoring.During recent major droughts that affected Southern Africa in 2015–2016 and the Horn of Africa in 2016–2017, the ASAP automatic warnings were found to be timely 
and accurate in detecting onset and spatial extent and the concerned countries have been identified as agricultural production hotspots in a timely manner.The ASAP system has been tested since mid-2016 and has been officially launched at the European Development Days in Brussels in June 2017, and is since then used in a fully operational mode.The selection and processing of input indicators is constantly under improvement, as is the online visualization platform.
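As background to the automatic warnings mentioned above, anomalies of dekadal (10-day) indicators such as NDVI or a water requirement satisfaction index are typically expressed as standardized departures from each dekad's historical distribution. The sketch below shows only that generic calculation; the data frame, column names and warning threshold are assumptions, and it is not the ASAP implementation.

# Hedged sketch: standardized anomaly of a 10-day (dekadal) indicator
# relative to its own historical archive, with a simple warning flag.
set.seed(7)
ts <- expand.grid(year = 2003:2017, dekad = 1:36)        # hypothetical archive
ts$value <- 0.4 + 0.2 * sin(2 * pi * ts$dekad / 36) + rnorm(nrow(ts), 0, 0.05)

# Dekadal climatology: mean and standard deviation per dekad over the archive.
clim_mean <- tapply(ts$value, ts$dekad, mean)
clim_sd   <- tapply(ts$value, ts$dekad, sd)

# Standardized anomaly for each observation: z = (x - mean_dekad) / sd_dekad.
ts$z <- (ts$value - clim_mean[ts$dekad]) / clim_sd[ts$dekad]

# Flag strongly negative anomalies (threshold assumed for illustration).
ts$warning <- ts$z < -1.5
head(ts[ts$warning, ])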
Rainfall estimates provide an outlook of the drivers of vegetation growth, whereas time series of satellite-based biophysical indicators at high temporal resolution provide key information about vegetation status in near real-time and over large areas.The new early warning decision support system ASAP (Anomaly hot Spots of Agricultural Production) builds on the experience of the MARS crop monitoring activities for food insecure areas, that have started in the early 2000's and aims at providing timely information about possible crop production anomalies.The information made available on the website (https://mars.jrc.ec.europa.eu/asap/) directly supports multi-agency early warning initiatives such as for example the GEOGLAM Crop Monitor for Early Warning and provides inputs to more detailed food security assessments that are the basis for the annual Global Report on Food Crises.The second step involves the monthly analysis at country level by JRC crop monitoring experts of all the information available, including the automatic warnings, crop production and food security-tailored media analysis, high-resolution imagery (e.g.Landsat 8, Sentinel 1 and 2) processed in Google Earth Engine and ancillary maps, graphs and statistics derived from a set of indicators.
management is not likely to succeed unless there are significant tangible benefits to forest dependent communities in the vicinity of these forests.Furthermore, the relationships between the ability to obtain a forest based income and engagement in forest management groups are not always linear and need to be further investigated.Education and relative wealth are two of the socioeconomic characteristics of households that influence participation in forest management groups in our study, as in the case of Burkina Faso and Sri Lanka.Education levels are important not only as indicators of formal knowledge, but as channels for empowerment to create the capacity to participate in group decision making and being heard, thus creating facilitating conditions for individuals to participate in forest management groups.Relative wealth influences everything from power dynamics to the ability to invest in the financial or human capital as seen from several studies around the world.These factors ultimately influence an individual’s capacity to engage in business ventures to gain from tourism or other activities like honey production.Through this research we contribute to the wider body of knowledge on forest ecosystem services, expanding the understanding of storm hazard mitigation services of dry, deciduous forests, the livelihood benefits derived by smallholder farmers from these forests, and how these influence local support for forest management initiatives.We show that not only do farmers derive livelihood benefits from seasonally dry tropical forest fragments in Madagascar, but they also perceive a positive role for forest cover in hydrological processes including sedimentation and flood hazard control.However, this extensive use of forest services does not overwhelmingly translate into a willingness to take action to support forest management, and this is an important implication for forest policy and management.We demonstrate the importance of factors such as securing complementary income from forest based honey production and tourism, relative wealth and the education levels to participation in forest management.These results reflect the heterogeneous nature of different households comprising a community and consequently the differing abilities of individuals to take advantage of institutions and structures established to manage forest resources and derive benefits.We suggest that efforts to improve and broaden support for forest management should focus attention on the beneficiaries of forest-dependent income generating activities and identify steps to broaden participation in these ventures.Based upon our results, we suggest two main research areas to investigate further.The first is the relationship between the use of provisioning services and the acknowledged benefits of regulating ecosystem services, While hazard reduction services are widely perceived as benefits from forest cover we did not find a significant relationship between these variables and other ecosystem uses/benefits or with supporting forest management.A second area for further investigation is whether the local understanding of forests and provision of hazard mitigation services is obtained through observations and experiences or through exposure to external projects and education programs.Further examination of how the knowledge of forest-hydrological linkages is formed and how it is translated into decision making on forest and land use management by farmers is important for both forest policy and for considering how 
broader land use policy can integrate rainfall linked hazard mitigation services provided by forests in such settings.Finally, we found only two out of the fourteen ecosystem services valued by study respondents significantly influence participation in forest management groups, again bringing to bear the question of what motivates collective action and participation in forest management.Unless we are better able to identify what type of ecosystem service benefits motivate involvement in forest management and what the barriers to participation may be, it is unlikely that appropriate forest management institutions will emerge in the case study region and elsewhere.Local knowledge of ecosystem services and the rationale behind household decision making around forest use is important for effective policy interventions in forest management, the long-term sustainability of forest resource use and conservation and land use policy.
We address this research gap by quantifying the broader suite of ecosystem services that support small holder farmers and identifying farmers’ knowledge of storm hazard reduction benefits provided by forest fragments in Madagascar.Using multivariate statistics, results show a heavy dependence on forests for food and raw materials and a majority of the respondents holding a positive view of hazard mitigation services provided by forest fragments.Education levels, earning an income from forest based tourism and honey production are the only predictors of participation in forest management.Positive view of the hazard reduction benefits derived from forests could be due to external influences or personal observations, and together with barriers to participation in forest management need to be further investigated to better link forest management to reduced hazards risks.These findings are significant for forest management policy, as local knowledge and rationale for decisions are instrumental in the success of decentralized forest management and maintenance of vital forest benefits to farmers.
begin to appear.Rapid progress is being made in molecular stratification for IBD, which is now beginning to allow prediction of disease severity and the risk of developing complications.Currently the best potential stratification strategies utilise serological and faecal markers.Few transcriptomic and metabolomic studies have shown sufficiently promising stratification results, but with further research they can be expected to produce more effective ways to stratify in the future.Development of new forms of stratification will not soon replace the current methods of diagnosis, but will likely facilitate non-invasive disease monitoring and potentially enable more effective choice of the best treatments.Thus, if a patient is predicted to have severe disease together with short remission periods, more aggressive biologics and more frequent monitoring would be recommended to prevent surgery and complications.Patients predicted to have milder disease could potentially follow the step-up therapy model.Biologics targeting TNF, α4β7 or IL-12/IL-23 are effective for many, but not all patients.Since not every patient responds to these drugs, molecular stratification is desirable to predict the likelihood of response.Candidate biomarkers to predict therapeutic response include molecules involved in Th1, Th2 and Th17 pathways.The expression of oncostatin M and mTNFα in the gut, for instance, were reported to be reliable for differentiating between responders and non-responders prior to anti-TNF treatment.In addition, studies of stool samples identified microbial biomarkers to predict the response towards ustekinumab.Furthermore, measurements of the level of α4β7 integrin expressing cells in peripheral blood can predict responses to the anti-α4β7 biologic vedolizumab.Unfortunately, none of these biomarkers are yet proven to be suitable for use in routine clinical practice.Despite progress in the field there are still not biomarkers available that can be used in the clinic for prognosis or to predict treatment response, and the clinical need for such biomarkers is high.Ideally, IBD patients would receive tailor-made treatments upon diagnosis.Although this may currently be possible for rare cases of monogenic IBD, more studies elucidating the immune pathways contributing to polygenic IBD are needed before such personalised medicine can be generally applicable.Although we are still far away from truly personalised medicine in IBD, molecular stratification is already beginning to be used.We are optimistic that by expanding the use of molecular analyses beyond serological markers, both clinical and cost effectiveness of IBD therapy can be further enhanced in the future.S Milling received speaker’s fees from Janssen and participated in medical board meetings with Pfizer.He also receives fees in his role as Editor in Chief of Immunology.The rest of the authors have no conflicts of interest to disclose.
Inflammatory bowel disease (IBD) is a debilitating chronic inflammatory disease of the gastrointestinal (GI) tract.IBD is highly heterogeneous; disease severity and outcomes in IBD are highly variable, and patients may experience episodes of relapse and remission.However, treatment often follows a step-up model whereby the patients start with anti-inflammatory agents (corticosteroids or immunosuppressants) and step-up to monoclonal anti-tumour necrosis factor-α (TNFα) antibodies and then other biologics if the initial drugs cannot control disease.Unfortunately, many patients do not respond to the costly biologics, and thus often still require gut-resective surgery, which decreases quality of life.In order to decrease rates of surgery and ineffective treatments, it is important to identify markers that accurately predict disease progression and treatment responses, to inform decisions about the best choice of therapeutics.Here we examine molecular approaches to patient stratification that aim to increase the effectiveness of treatments and potentially reduce healthcare costs.In the future, it may become possible to stratify patients based on their suitability for specific molecular-targeted therapeutic agents, and eventually use molecular stratification for personalised medicine in IBD.
Dioscorea schimperiana is one of the nine commonly consumed yams in Cameroon. This yam is an integral crop for food security in the western regions where it is grown, and it is the only yam with a traditional and well-defined drying technology. It therefore became a concern when it was considered an endangered yam species despite its well-established drying technology. In an effort to increase production and diversify its end use, a yam-based complementary food was formulated as an alternative to the commonly used maize complementary food. Maize is the common cereal used during the weaning period in Cameroon. However, the major disadvantages of cereal-based foods remain their low protein content, their limitation in some essential amino acids such as lysine, and the presence of antinutrients such as phytates, tannins and phenolics. Root- and/or tuber-based complementary foods therefore offer a nutritional advantage because of their low phytate content compared with cereal-based formulations. Tubers such as cassava, sweet potato and yam are often used as complementary foods in the weaning period, and the targeted nutritional benefit is generally the energy supply. To avoid nutritional deficits, tubers are often blended with protein- and fat-rich foods such as soybean and/or groundnuts to meet the accompanying nutritional goals. Traditional weaning foods also tend to be bulky, and alpha-amylase-rich sources are often used to increase the energy density. Micronutrients are essential for growth, development and the prevention of illness in young children. Proper weaning foods should be able to supply vitamins and minerals not present in breast milk while providing additional calories. Local foods rich in micronutrients, such as carrot or egg shell, may be used as micronutrient suppliers to meet the necessary requirements. Beside its nutritional value, weaning flour can offer health benefits due to the presence of many bioactive compounds; malnutrition is associated with both a reduction in antioxidant defense systems and an overproduction of free radicals due to repeated exposure to toxins and infectious agents. Although yam-based formulations have made significant advances in research, there has been no formulation with D. schimperiana. D. schimperiana is a lesser-known yam species in Cameroon. The flesh color varies from yellow to red in mature yams. Traditionally dried yam slices, sold on local markets, have been limited to the preparation of pounded yam. However, dietary habits, availability and nutritional value may justify its use in feeding young children in areas affected by childhood malnutrition and poverty. Dried yam slices are even more attractive because of the increase in energy density achieved by drying. Production of a dried-slice-based complementary food from D. schimperiana is innovative, and success in making a balanced diet requires taking into account parameters such as energy, carbohydrate, protein, lipid, vitamin and mineral content. The objective of this study was to formulate complementary foods from three different flesh-colored D.
schimperiana using protein, fat, vitamin and mineral complementation to supply the requirements not met by milk.Soy bean flour, groundnuts paste, germinated maize flour, carrot flour and egg shell flour were used to fortify the yam flour with adequate nutrient density and high mineral bioavailability to meet the recommended macro and micronutrients specification of standards and for easy replication at the household level.Specifically, it aims at investigating the effect of the yam flesh color on nutrient composition and comparing nutritional value of the optimal blend formulations with standards.The antioxidant capacity of the blends was also investigated.Maize, soybeans, groundnut, egg, carrot were purchased from local market in Douala city, Cameroon.Dried yam slices were bought in Baham local market, in the west region of Cameroon.The choice of this locality was derived from the results published by Leng et al. , which showed that they are more suitable for baby flour formulation according to their pasting properties.Yam samples were collected in bamboo bags and transported to the laboratory.They were then divided in three groups according to the flesh color: yellow, red and yellow with red dots.All the samples were kept at room temperature before processing.Cleaning and washing: Samples were manually sorted to remove impurities.Groundnuts were toasted.Soybeans were completely de-hulled and roasted over low heat during 20 min to reduce fiber and antinutrients.Seeds were subsequently crushed, varnished and stored in plastic bags before grinding.Eggs shells were cleaned with tap water, boiled for 20 min in hot water before drying.Carrots were scraped to remove the outer cortex, sized and cut into 0.5 cm thick slices before drying.Germination of maize: Maize sample was used to produce germinated maize as sources of alpha amylase to increase the nutrient density of the food.Germination was carried out according to the technique described by Ariahu et al. .Drying and milling: Samples were dried at 45 ± 5 °C in a cross flow cabinet dryer to a moisture content less than 10%.All dried samples were ground into fine flour using a hammer mill or a robot blender.Dried yam slices were grounded in a milling machine equipped with a 1 mm sieve.Flour was then sieved through a sieve of 500 µm, packaged in an air tight polyethylene bags and stored at −18 °C until chemical analyses were conducted.Formulation of yam and soybean complementary foods: The methodology of mixture design as a mathematical approach was used to calculate the proportions of ingredients needed in order to have balanced composite flour .Optimal mixtures of ingredients in formulating balanced flour are shown in Table 1.Moisture and ash were determined by AOAC method .Crude protein has been analyzed according to Kjeldahl method .Total fat content was quantified according to Weibull–Stoldt method .Total dietary fiber
A key challenge remains the difficulty of formulating nutritionally adequate diets. Flours from dried yam slices bought on the local market were fortified with soybean and groundnut paste to develop a balanced diet. Carrot and egg shell were added as reinforcing sources of micronutrients. The nutritional content and antioxidant properties of the flour blends were then assessed. They can also offer health benefits due to the presence of many bioactive compounds.
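The mixture-design step mentioned in the methods (calculating ingredient proportions for a balanced composite flour) can be illustrated with a small optimisation sketch. This is not the authors' actual mixture-design workflow, and every nutrient value, target and bound below is a hypothetical placeholder rather than data from the study; it only shows how proportions that sum to one can be selected against assumed macronutrient targets.

```python
# Illustrative only: choose proportions of the six flours that sum to 1 and
# meet assumed macronutrient targets.  All per-100 g values are placeholders.
import numpy as np
from scipy.optimize import linprog

ingredients = ["yam", "soybean", "groundnut", "germinated_maize", "carrot", "egg_shell"]
protein = np.array([ 6.0, 38.0, 25.0,  9.0, 1.0, 0.0])   # g/100 g (assumed)
fat     = np.array([ 0.5, 20.0, 48.0,  4.0, 0.2, 0.0])   # g/100 g (assumed)
energy  = np.array([340., 430., 560., 380., 35., 0.])    # kcal/100 g (assumed)

# Assumed targets for a 100 g blend: protein >= 15 g, fat >= 10 g, energy >= 400 kcal,
# yam as the main ingredient (>= 50 %), minor ingredients capped.
A_ub = np.vstack([-protein, -fat, -energy])
b_ub = np.array([-15.0, -10.0, -400.0])
A_eq = np.ones((1, len(ingredients)))
b_eq = np.array([1.0])
bounds = [(0.50, 1.0), (0.0, 0.3), (0.0, 0.2), (0.0, 0.2), (0.0, 0.1), (0.0, 0.02)]

# Objective: maximise the yam fraction (minimise its negative) subject to the targets.
c = -np.eye(len(ingredients))[0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
if res.success:
    for name, x in zip(ingredients, res.x):
        print(f"{name:18s} {100 * x:5.1f} %")
```

In the study itself the optimal mixtures are those reported in Table 1; the sketch is only meant to make the idea of constrained blend proportions concrete.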
to increase the sweetness. Correlational analysis was carried out to assess the linear relationship between the color of the yam flesh and the carotenoid and alpha-tocopherol contents. There was a positive correlation between the color of yam and the levels of β-cryptoxanthin and zeaxanthin. This linear correlation was significant for alpha- and beta-carotene and total provitamin A carotenoid levels. This indicates that when the coloration of the flesh varies from yellow to red, the contents of these nutrients increase. The amount of carotenoids and α-tocopherol in yam-based formulations depends upon the color of the yam flesh. Beta-carotene is the major carotenoid in yam flesh, followed by α-carotene. This finding is in conformity with the results reported by Gouado et al. Red-fleshed yam had the highest content in α-tocopherol, lutein, zeaxanthin, α- and β-carotene, while yellow-fleshed yam with red dots had the lowest. The values of α-tocopherol in yellow- or red-fleshed yam samples were higher than those reported by Chandrasekara and Kumar for yam, showing the additive effect of supplementation. The values of zeaxanthin, α- and β-carotene were higher than the range of values reported by Gouado et al. for zeaxanthin, α- and β-carotene in yellow and red D. schimperiana raw yam. These results again underline the beneficial effect of supplementation. The levels of zeaxanthin and α-tocopherol in the yellow-fleshed with red dots blended flour were two times lower. Its diminished content in provitamin A carotenoids explains the lowest retinol activity equivalent expressed, despite its highest content in β-cryptoxanthin. The weaning flour contains lutein, zeaxanthin and lycopene as non-provitamin A carotenoids. These substances have received particular attention during the last decade due to their action as antioxidants in preventing chronic diseases and in promoting eye health. The values of lycopene and β-cryptoxanthin were lower than the range of values reported by Gouado et al. for lycopene and β-cryptoxanthin in yellow and red D. schimperiana raw yam, showing the adverse effect of traditional drying on these nutrients in yam. The retinol activity of all the composite flours did not meet the requirement of the FAO/WHO specification. The yam-based formulations satisfied 46.8%, 39.4% and 47.0% of the requirement. The levels of total phenols and antioxidant activity in yam depend upon the color of the yam flesh. There is a positive correlation between the color of yam and the levels of total phenols. This linear correlation is significant for antioxidant activity. The phenol composition and TEAC activity of the formulated yam-based complementary food are shown in Table 2. The level of total phenols in yam-based formulations depends upon the color of the yam flesh. The value of total phenols recorded was in the range of values reported by Leng et al. for raw and dried D.
schimperiana flours sold in local markets. The TEAC values obtained were higher than the range of values reported by Adetuyi and Komolafe for plantain flour fortified with okra seed. The yellow-fleshed yam-based complementary flour had the highest content in polyphenols and comparatively the lowest antioxidant activity. In contrast, the yellow-fleshed yam with red dots flour blend had the highest antioxidant activity and the lowest content in total phenols. This result shows that there is no correlation between polyphenol content and antioxidant activity. Indeed, the yellow-fleshed with red dots composite flour, with the highest antioxidant activity, has the lowest content in bioactive compounds such as α-tocopherol and carotenoids. The highest antioxidant activity expressed by this formulated flour may be attributed to other substances acting synergistically with these compounds, thereby raising the antioxidant activity. Correlational analysis carried out to assess the linear relationship between the color of the yam flesh and the content in zinc and iron indicated a negative correlation only between the color of yam flesh and the content in iron. This result showed that when the intensity of the coloration of the yam flesh increased, the level of iron in the yam flesh decreased. The zinc and iron composition of the formulated yam-based complementary meals is shown in Table 2. There are differences in the levels of zinc and iron between the flours depending on the coloring of the flesh. Yellow-fleshed yam with red dots had the highest content in iron, and red- and yellow-fleshed yam the highest content in zinc. The levels of iron and zinc recorded were lower than the range of values reported by Soronikpoho et al. for iron and zinc in yam-based weaning food fortified with soy and vegetable mineral sources. Complementary foods are expected to have sufficient energy and nutrient density and to be consumed in small amounts to provide a breastfed growing child with the adequate daily energy requirement to complement the 0.7 kcal/ml of breast milk for normal, term infants. The energy expected from complementary foods for infants varies with age. In a developing country, a 6–8 months old infant consumes 200 kcal/day; 300 kcal/day is necessary for a 9–11 months old infant and 550 kcal/day will be needed for a 12–23 months old child, in addition to the daily energy coming from breast milk. The 200 kcal/day energy requirement can be covered by feeding a 6- to 8-month infant with 49.81 g, 49.54 g or 49.92 g of the three formulated flours, respectively. A daily portion intake of 74.72 g, 74.32 g or 74.87 g is needed to cover the 300 kcal/day of daily energy requirement from complementary food for a 9–11 months old child. Feeding a 12–23 months old child with 136.99 g, 136.25 g or 137.27 g would satisfy the daily energy requirement. Because of the high energy density of the yam composite flours, children from 6 to 23 months old would be able to satisfy their daily energy needs from yam complementary foods if they received one meal per day, which would be a possible task considering the
This study was carried out to investigate the effect of Dioscorea schimperiana flesh color on the nutritional composition and antioxidant activity of formulated yam-based complementary food. The amounts of carotenoids, α-tocopherol, total phenols, zinc and iron in yam-based formulations depended on the color of the yam flesh. Beta-carotene was the major carotenoid present. The yellow-fleshed yam flour blend had the highest content (10 mg/100 g) of polyphenolic compounds but comparatively lower antioxidant activity. In contrast, the yellow-fleshed with red dots flour blend had the highest antioxidant activity (0.218 M Trolox equivalents/100 g) and the lowest content (8 mg/100 g) of the responsive bioactive compounds.
size of an infant's stomach. Their daily protein and lipid requirements would also be satisfied when eating the formulated flours, since the estimated daily protein and fat intakes from the yam-based complementary food are higher than the suggested intakes for the age range of 6–24 months. WHO/FAO has established levels of Reference Nutrient Intakes as a guide for the amounts of vitamins and minerals that should be supplied when a formulated complementary food is eaten. This nutritional guideline suggests that a daily ration of a formulated complementary food should supply at least 50% and up to 100% of the WHO RINL98 daily total quantity of each of these vitamins and/or minerals. Considering the lower levels, the daily minimum suggested intakes of vitamin A and α-tocopherol were not satisfied at any of the serving portions of yam composite flour, with the extent varying according to the age range. Feeding a 6–8 months old child with 49.54–49.92 g of yam complementary food would satisfy only 23–28% of the 200 µg retinol activity and 5.44–11.08% of the 2.5 mg α-tocopherol daily minimum requirement from a formulated complementary food. A daily portion intake of 74.32–74.87 g of yam complementary food would satisfy only 35–42% of the 200 µg retinol activity and 8.2–16.6% of the 2.5 mg α-tocopherol daily minimum requirement for a 9–11 months old infant, and feeding a 12–23 months old healthy child with 136.25–137.27 g of yam-based complementary food would satisfy about 64–77.4% of the minimum requirement value for retinol activity and 15–30.48% of the 2.5 mg α-tocopherol requirement. The WHO RINL98 daily total quantity of zinc and iron required from a formulated complementary food depends on the percentage of nutrient bioavailability in the food matrix. Feeding a 6–8 months old child with a 49.54–49.92 g daily serving portion of yam-based complementary food could satisfy only 68–70% of the average daily intake of zinc and 63–65% of the iron daily requirements, considering high zinc dietary bioavailability and 15% dietary iron bioavailability in the respective food matrices. Children of 9–11 months old receiving 74.32–74.87 g, respectively, could meet 100% of the daily average intake of zinc only under high dietary bioavailability and 94–97.5% of iron at 15% dietary iron bioavailability. Feeding a 12–23 months old child with a 136.25–137.27 g daily serving portion of yam-based complementary food could satisfy 100% of the average daily intake of zinc and iron under medium and high zinc dietary bioavailability, assuming 10% and 15% dietary iron bioavailability of the food matrix, respectively. An attempt was made to produce a yam-based complementary food with high energy density from D.
schimperiana to diversify the end uses, previously limited to producing the common pounded yam from dried slices. From the above results, yam-based formulations have great potential as weaning flours since they satisfy the recommended energy and macronutrient requirements according to the standards. They can be used in managing protein-energy malnutrition by preparing adequate weaning food with flour blended according to the percentages of ingredients stipulated in the optimal formulas. Proper weaning foods should be able to supply vitamins and minerals not present in breast milk. Taking micronutrients into consideration, we focused on iron, zinc, calcium and vitamin A because these are considered key ‘problem’ nutrients in many developing countries. The average daily intake of these micronutrients increases with the serving portion, which is related to the age of the child. The color of the yam flesh has an impact on the iron and zinc content. The red-fleshed yam had the highest content in these micronutrients while the yellow-fleshed yam with red dots had the lowest. Taking into consideration their contribution to meeting the suggested reference values for daily micronutrient intake, the highest daily serving portion of 136.25–137.27 g, intended for a 12–23 months old child, is the most suitable. People should be encouraged to produce weaning food from traditional dried yam slices, since food-based approaches to combat micronutrient malnutrition are more likely to be sustainable in the long term, and children will be receiving other foods in addition to formulated complementary foods. The results of the present work are very promising and suggest that an optimization of the drying process could improve the quality of the yam flour, since the traditional processing of yam often leads to poor product quality due to the leaching of minerals and other nutrients into the water during boiling before drying. Dried yam from Dioscorea schimperiana can be successfully used to produce weaning flour with adequate nutrient density using mixture design. The formulated yam flour blends can be used in managing protein-energy malnutrition. The yam flour blends possess antioxidant activities which can prevent the negative effects of free radical reactions, being beneficial for the consumers. Micronutrient density appears to be the drawback. Taking into consideration their contribution to meeting the suggested reference values for daily micronutrient intake, the highest daily serving portion of 136.25–137.27 g, intended for a 12–23 months old child, is the most suitable.
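The serving-portion arithmetic in the preceding passages follows directly from the energy density of the blends. A minimal sketch is given below; the energy densities are back-calculated from the quoted portions (about 4 kcal/g), the mapping of blends to flesh colors is not asserted, and the vitamin A density of 0.94 µg RAE/g is an illustrative assumption chosen so that a roughly 50 g portion covers about 23% of the 200 µg reference, as in the text.

```python
# Illustration of the serving-portion arithmetic: portion = energy requirement / energy density.
energy_density = {"blend_1": 200 / 49.81, "blend_2": 200 / 49.54, "blend_3": 200 / 49.92}  # kcal/g, back-calculated
energy_requirement = {"6-8 months": 200, "9-11 months": 300, "12-23 months": 550}          # kcal/day from complementary food

for age, kcal in energy_requirement.items():
    portions = ", ".join(f"{blend}: {kcal / dens:.1f} g" for blend, dens in energy_density.items())
    print(f"{age:12s} -> {portions}")

# Micronutrient coverage scales with the portion, e.g. vitamin A against the
# 200 µg RAE minimum expected from a complementary food (assumed density below).
rae_per_g = 0.94      # µg RAE per g of flour, assumed placeholder
portion_g = 49.8
print(f"vitamin A coverage at {portion_g} g: {100 * portion_g * rae_per_g / 200:.0f} % of the 200 µg reference")
```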
Childhood malnutrition is a current and perpetual public health concern in many African countries. Results indicate that the flour blends satisfy the recommended energy and macronutrient requirements according to set standards. Thus the flour blends can be used for managing protein-energy malnutrition. The red-fleshed yam-based complementary food had the highest content in these micronutrients. Feeding 6–11 months old children with yam-based complementary food would satisfy 23–77.4% of the 200 µg daily minimum suggested reference retinol activity intake of vitamin A and 8.20–30.48% of the 2.5 mg α-tocopherol requirement. At this stage of growth, yam-based complementary food would satisfy 68–100% of the average daily minimum reference intake of zinc and 63–100% of the iron daily requirements, according to the daily serving portion needed to satisfy the 200–550 kcal/day energy requirement. Taking into consideration their contribution to meeting the suggested reference values for daily micronutrient intake, the weaning flour is most suitable for a 12–23 months old child.
urea formaldehyde, used here as the binder for printing, is not suitable for products intended for outdoor use because prolonged moisture exposure leads to a breakdown of bond-forming reactions. However, there are other formaldehyde-based adhesives, such as phenol and melamine formaldehyde, which do not break down when exposed to moisture and could be used as an alternative to produce printed products for outdoor applications. In this study, wood – binder – glass fibre composites have been additively manufactured/3D printed using wood waste as the feedstock material and urea formaldehyde as the binder. Mechanical testing has demonstrated that these products have improved mechanical properties compared to the non-printed samples manufactured in this study. Furthermore, these printed products have mechanical properties comparable to or even better than commercial panel products such as particleboard and fibreboard. The superior mechanical properties of the printed composites compared to non-printed composites are due to both a densification of the paste as it is extruded/deposited through the nozzle and fibre alignment induced by the printing process. It is anticipated that the resolution of the deposited material and, therefore, its effect on the degree of densification and fibre alignment, will have a considerable effect on the mechanical properties of the product. Printing of wooden products superior to those created in this study remains a possibility. The use of natural resins such as lignin and tannin as adhesives is also an interesting avenue of research, and there has been work on the manufacture of blends and part substitutions of formaldehyde resins with bioresins to provide environmentally friendly adhesives for use in the wood industry. With regard to suitable applications for wooden products created by AM, it is believed that the small complex/bespoke components typically created by various AM techniques would not be of use to many engineering/industrial applications requiring a wooden composite. This material and process would have more value in the creation of on-site large bespoke products for the construction industry. Further to this, the products could be made lightweight by designing honeycomb voids within the structure, a feature difficult to achieve with conventional processing of wood. Furthermore, electronic devices such as sensors could be embedded during the layer-wise manufacture and used to detect early signs of failure or to interact with the environment. Though not completely sustainable due to the use of resins, this process and material have the potential to integrate an environmentally friendly solution within the manufacturing industry through the utilisation of a recycled and more sustainable feedstock.
In contrast to subtractive manufacturing techniques, additive manufacturing processes are known for their high efficiency with regard to feedstock utilisation. However, the various polymer, metallic and composite feedstocks used within additive manufacturing are mainly derived from energy-consuming, inefficient methods, often originating from non-sustainable sources. This work explores the mechanical properties of additively manufactured composite structures fabricated from recycled, sustainable wood waste, with the aim of enhancing mechanical properties through glass fibre reinforcement. In the first instance, samples were formed by pouring a formulation of wood waste (wood flour) and thermosetting binder (urea formaldehyde), with and without glass fibres, into a mould. The same formulations were used to additively manufacture samples via a layered deposition technique. Samples manufactured using each technique were cured and subsequently tested for their mechanical properties. Additively manufactured samples had superior mechanical properties, with up to a 73% increase in tensile strength compared to moulded composites, due to densification of the feedstock/paste and in-situ directional alignment of the fibres.
after 3 years. Finally, the slope of the measured average vertical strain curves in July 2016 was fitted by changing the contribution from the creep, i.e. the μ∗/λ∗-ratios. This process was repeated until a good fit was obtained. The effect of changing the different parameters is demonstrated in Fig. 11. The figure shows the effect of increasing the equivalent permeability kekv by 50%, λ∗ by 20% and μ∗ by 20% compared to the parameters that gave the best fit to the average vertical strain between Magnets 2 and 3. The final parameters used in the Class C prediction are presented in Table 3. By simulating CRS tests, however starting from the initial effective stresses, it is demonstrated that the modified parameters given in Table 3 still agree with the CRS tests from Inclo2, as shown in Fig. 12. Fig. 13 shows that the Class C back-calculation gives time-settlement curves that perfectly fit the measured settlement curves at Mex1. From Fig. 6, it is seen that in order to improve the results compared to the Class A prediction, the total settlement after 3 years needed to be increased, and the rate of settlement needed initially to be reduced and after about 1 year to be increased. Based on Fig. 8, it is seen that the strain in the upper part of the estuarine clay first of all needed to be increased. From Tables 1 and 3, it is seen that λ∗ was increased from 0.1 to 0.15. This correction could be justified by too few tests within the actual depths and a correspondingly large uncertainty in the actual parameter. In addition, it was necessary to compensate for the fact that the material model, SSM, underestimates the shear deformation in this layer. The increased settlement rate after 1 year is obtained by a reasonable increase in the creep contribution, i.e. μ∗/λ∗ is increased from 0.03 to 0.04 in the upper part of the estuarine clay and to 0.05 between 4.5 and 11 m. This means that the creep strain after 3 years may be as large as ɛcreep = μ∗·ln(t/τ) = 0.05⋅0.2⋅ln(t/τ) ≈ 7%. This means that the contribution due to creep for this clay is significant. In addition, the equivalent permeability between 4.5 and 11 m depth is reduced from 1E−3 m/day to 0.02–0.04E−3 m/day. The parameters used in the Class C prediction can still be justified by the values reported in the literature. Fig. 14 shows that the horizontal displacement in July 2016 is also increased at the periphery of the embankment and agrees significantly better with the measurements at Inclo1 compared to the Class A prediction. In order to obtain better results, a material model other than the SSC model is recommended, as discussed in Section 3.2. Fig.
15 shows that the calculated pore pressure histories also fit the measured histories better. The main reason for this is the lower equivalent permeabilities used, as discussed above. Furthermore, the plane-strain idealization of the drains may overestimate the pore pressure dissipation in the middle between the drains. The paper describes the Class A prediction and Class C back-calculations of the test embankment at the NFTF near Ballina in Australia using the FE program Plaxis 2D. The Soft Soil Creep Model was used for the deformation calculation of the soft estuarine clay. The Class A prediction underestimated the measured settlement 3 years after construction by about 20%. This was due to uncertainties in the creep index of the soft estuarine clay and the stiffness of the soil above and below the soft clay. SSCM also underestimated the shear deformation of the soft estuarine clay. In addition, the horizontal permeability was overestimated, based on wrong assumptions regarding the anisotropy ratio and on neglecting the effect of the reduction due to void ratio decrease below the yield stress. This effect could have been accounted for by using a void ratio dependent permeability formulation available in Plaxis. However, the use of equivalent horizontal permeabilities, which account for remoulding during installation of the drains and for the idealization of the 3D flow pattern by a 2D model, made it difficult to use this feature. Nevertheless, it was checked that the uncorrected vertical permeabilities used in the analysis agree with the void ratio dependent permeabilities in Fig. 11a in Pineda et al. In the Class C back-calculation, it was possible to obtain a perfect match with the measured settlements by reasonable modifications of the input parameters. SSCM is thus generally well suited for modelling settlements of embankments on soft clay, including the important contribution from creep. However, SSCM may underestimate the shear deformations for shear stress ratios above the KoNC-line. This can be mitigated by lowering the top point of the CamClay cap surface given by M and reducing the friction angle below the measured value from the triaxial tests. Alternatively, a model that accounts for this effect by an input parameter that controls the curvature of the yield surface could be used.
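The quoted creep-strain estimate for the 4.5–11 m layer can be checked with a few lines, assuming the logarithmic creep expression with a reference time of one day; the reference time is an assumption made here for illustration, not a value stated in the text.

```python
# Rough check of the quoted creep-strain estimate: eps_creep = mu* * ln(t/tau).
# mu*/lambda* = 0.05 and the factor 0.2 are taken from the quoted expression;
# the reference time tau = 1 day is an assumption made for this sketch.
import math

mu_over_lambda = 0.05
lambda_star = 0.2
mu_star = mu_over_lambda * lambda_star        # modified creep index, 0.01

t_days = 3 * 365.0                            # 3 years after construction
tau_days = 1.0                                # assumed reference time
eps_creep = mu_star * math.log(t_days / tau_days)
print(f"creep strain after 3 years ~ {100 * eps_creep:.1f} %")   # ~7 %, as quoted
```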
The paper describes the Class A prediction and Class C back-calculations of the Ballina test embankment using the finite element program Plaxis and the Soft Soil Creep Model (SSCM). The prediction underestimated the measured settlement 3 years after construction by about 20%. This was mainly due to too high a stiffness in the transition zone beneath the clay and to the fact that SSCM underestimated the shear deformation of the clay. Furthermore, the horizontal permeability of the clay was overestimated. In the back-calculation, it was possible to obtain an excellent match with the measured settlements by reasonable modifications of the input parameters.
The dataset provides raw descriptive and inferential statistics on the relationship between Ethical Leadership and Corporate Reputation. Figs. 1–4 provide data on selected characteristics of the sample, such as gender, age, years spent on the job and highest education level attained. Table 1 provides the validity test results on the research instrument, Table 2 provides data on correlations for the variables used in the empirical analysis, and Table 3 provides data on the estimates of the regression specification: Corporate reputation=f. The dataset presented focuses on the influence of ethical leadership on the corporate reputation of selected Deposit Money Banks in Nigeria. Data was gathered from employees in selected Deposit Money Banks with the aid of a close-ended questionnaire designed by the authors, based on the works of Refs. We got responses from five hundred and seventy-three participants who duly completed the administered questionnaire. Responses from the questionnaire were extracted into Microsoft Excel, and subsequently coded and inputted into the Statistical Package for the Social Sciences Version 22. We performed data analysis by applying inferential statistical tests, which involved multiple regression analysis, and descriptive analysis. The study population comprised fourteen thousand one hundred and forty-seven employees based in the Lagos offices of the eight selected banks, whilst the sample size obtained through the sample size calculator was seven hundred and forty-three employees. Five of the eight banks accounted for 53.7% of the total deposit base and 53.68% of the total asset base of all Deposit Money Banks in Nigeria as at 31 December 2016. A multi-stage sampling technique involving proportional-to-size, stratified and purposive sampling was adopted for the study. Lagos was chosen as the scope of the study because seventy percent of commercial activities take place in Lagos State, whilst all the selected banks have their headquarters in Lagos State. The questionnaire was self-administered to the respondents, who voluntarily completed the research instrument. Ethical issues such as prior consent, anonymity, and confidentiality of respondents, among others, were taken into consideration. The authors established that the respondents were knowledgeable about the background, purpose, and study variables of this research. The study identified three key measures of ethical leadership from the literature, and three indicators of corporate reputation. The population comprised employees from the level of trainee to executive director, across all the stratified job functions in the selected banks under consideration. A cross-sectional survey design using a questionnaire instrument was used to elicit data from the respondents. Similar works that have used a field survey instrument to obtain data can be found in the literature. The questionnaire was divided into three parts: demographic variables, ethical leadership factors and corporate reputation indicators. Demographic factors reported include respondents' gender, age, marital status, highest education level, job function, job position, and years spent on the job. A five-point Likert scale of equal intervals (from strongly disagree to strongly agree) was used as the measure of responses. The data was analyzed by multiple regression using the Statistical Package for the Social Sciences software Version 22. The dataset is useful for bank managers to understand the key factors required to enhance ethical leadership and consequently their firms' corporate reputation. Table 3 above presents
estimates of the analysed data based on the model specification. Estimates show that ethical culture and ethical programs have a positive and significant influence on corporate reputation, while CEO's ethics show a positive but non-significant effect on corporate reputation. The dataset provides useful insights for bank practitioners, regulators and other stakeholders to understand the role of different measures of ethical leadership in influencing corporate reputation.
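A minimal sketch of the regression specification described above, corporate reputation as a function of ethical culture, ethical programs and CEO's ethics, is shown below. The authors used SPSS Version 22; the Python code and the column and file names are hypothetical stand-ins, included only to make the model structure concrete.

```python
# Sketch of the specification corporate_reputation = f(ethical_culture, ethical_programs, ceo_ethics).
# The CSV file and column names are hypothetical placeholders for the Likert-scale survey scores.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("ethical_leadership_survey.csv")   # hypothetical survey data file

X = sm.add_constant(df[["ethical_culture", "ethical_programs", "ceo_ethics"]])
y = df["corporate_reputation"]

model = sm.OLS(y, X).fit()
print(model.summary())   # coefficients, t-statistics and p-values, analogous to Table 3
```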
Banking institutions play a critical role in any economy, and their stability is crucial to the economic development of a nation. The wave of corporate scandals that rocked the industry left the public with a loss of confidence. Efforts have since been channeled by banks towards developing their corporate governance mechanisms, except that the aspect of ethical leadership and how it translates to a bank's corporate reputation has not received sufficient attention. The dataset presents the perceptions of employees in selected deposit money banks in Nigeria. A multistage sampling technique was used to elicit data from the employees. Inferential statistics such as correlation and regression analysis were adopted. The data collected focused on the impact of ethical leadership on corporate reputation. It also provided information on the significant factors affecting ethical leadership as well as the measures of corporate reputation. The survey data, when analysed, can be a pointer in determining the unique ethical leadership predictors that could enhance a bank's reputation.
report and measure. Specifically, onset delay is a construct addressing timing, given an intention to act. Even if there may be various specific reasons for delaying onset, the result as perceived by the actor is a failure to start as intended. In contrast, delays during task execution may be due to diverse and complex factors that work indirectly to create delay. For example, low willpower can make situational temptations appear attractive, causing impulsive diversions from planned action. Although such diversions are themselves detectable, delays in task execution because of them may not be equally obvious to the individual. Hence, as we are notoriously poor at monitoring ourselves attentively when we attempt to self-regulate, delays during task pursuit may be more difficult to self-report. Furthermore, the fact that delayed onset often makes it necessary to increase goal-striving effort to catch up may indicate to procrastinators that they work as hard as, or even harder than, others, even non-procrastinators. Future research should assess these issues, also in situations that are not compromised by differences in onset. Turning to the limitations of the study, the sample size used in Study 1 is quite small. However, the factor loadings are generally high. Studies have shown that moderate to high loadings were the major factor in determining reproducibility. For instance, Guadagnoli and Velicer suggested that a sample size of N≥100 was sufficient if four or more variables per factor had loadings above .60. Further, the present research is based on two student samples, suggesting that future studies should assess the factor structure as well as the relations between onset delay and delay in sustained goal striving in samples from the general population as well. Such studies should also assess scale properties across gender, age, and even cultures. Items used to assess sustained goal striving were inspired by the work of Schouwenburg, 1995, and would profit from scrutiny and possibly adding other items. In this respect, a limitation in Study 2 is the post-hoc modification to improve the CFA model fit indices. Several measurement errors were allowed to correlate among indicators of the sustain-subscale, which, as pointed out by Hermida, moves the model testing from being confirmatory to becoming an exploratory analysis. Error correlations are likely due to sampling error, which may restrict cross-validation of the structure in future studies. Further, the underlying structure may be masked if a relevant omitted variable is estimated through measurement error. However, the model fit of the non-modified three-factor model was generally in the range of acceptable model fit. In particular, the alternative three-factor model produced a good fit to the data. This again suggests that the items would profit from scrutiny and potentially adding other items, and future research should address cross-validation and replication. Another limitation of the present research is that we have utilized only one general procrastination scale, the IPS, as a reference. Other scales may demonstrate different relations to the subscales discussed in the present paper, even more so because different scales focus on different facets of procrastination. Hence, future studies should include other procrastination scales to verify the close relation observed between overall procrastination and onset procrastination, and the relatively weak relation between overall procrastination and procrastination in the goal-striving phase. Another important
step in validating the differentiation between onset procrastination and procrastination in the goal-striving phase is to investigate whether the two facets relate differently to motivational and volitional variables.As noted, the motivational forces operating during the beginning of goal pursuit are not necessarily the same as those important during later goal striving.Thus, motivational variables, such as expectancies and values, should be related strongly to onset procrastination, whereas volitional variables, such as the ability to shield distractions or willpower in general, should relate more strongly to procrastination in the goal-striving phase.In a future study, we will relate the scale of the two facets to instruments measuring the different forms of motivation, strategies of regulation of motivation, volition, and energy.
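The exploratory and confirmatory factor-analytic steps discussed above can be sketched as follows. The data file, item names and estimation settings are hypothetical and do not reproduce the authors' exact analyses; the sketch only shows how a three-factor structure with an oblique rotation might be inspected.

```python
# Sketch of the kind of factor analysis reported in Studies 1 and 2.
# Item-level data and column layout are hypothetical placeholders.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("procrastination_items.csv")    # hypothetical item responses

# Exploratory step: extract three factors with an oblique rotation, e.g. onset delay,
# delay in sustained goal striving, and overall (IPS) procrastination items.
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(items)

loadings = pd.DataFrame(efa.loadings_, index=items.columns)
print(loadings.round(2))              # loadings above ~.60 support reproducibility

variance, proportion, cumulative = efa.get_factor_variance()
print(proportion.round(2))            # share of variance explained per factor
```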
We trace this failure to an imprecise understanding of “delay,” another core concept in procrastination.In two studies (aggregated N = 465) we demonstrate, using exploratory and confirmatory factor analysis, that although onset and sustained action procrastination measures correlate, they are still separate facets of implemental procrastination.Implications, as well as suggestions for further research, are discussed.
uncertainty are spectral interference, counting statistics, and energy dependence of the detection efficiency, mainly due to backscattering.For an accurate determination of the emission probabilities, a correction for energy dependence of the detection efficiency should be applied.In this work, Monte Carlo simulations with EGSnrc and Penelope were used to estimate the energy and angular dependence of the detection efficiency in a silicon detector.Polynomial functions were defined to reproduce the simulation results.This correction is of less importance when analysing relative intensities in the L subshell and M+ parts of the spectra separately, and also the quality of the spectral fit is better in a narrow energy region because there is less variation in peak shape.A good functional representation of the asymmetrical ICE peaks is essential, and is expected to be even more crucial when analysing spectra with less good energy resolution.The continuum fractions of the spectrum can be fitted as the result of tailing of the ICE peaks, which seems to yield good spectral fits and realistic relative peak areas.Alternatively, a polynomial function can be fitted to account fully for the continuum part, with the additional risk of distorting the relative areas of the peaks.Possibly, there is room for improvement through an intermediate approach.In a mixed source with presence of a beta emitter, an algorithm for baseline subtraction is deemed unavoidable.
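A minimal sketch of the efficiency-correction step described above is given here: a low-order polynomial is fitted to simulated full-energy-peak efficiencies, and the fitted ICE peak areas are divided by it before forming relative intensities. The "simulated" efficiencies, line energies and peak areas are invented placeholders, not actual EGSnrc or Penelope output.

```python
# Sketch: fit a polynomial to simulated detection efficiencies and correct peak areas.
import numpy as np

energy_keV = np.array([20., 40., 60., 80., 100., 120.])       # placeholder energy grid
efficiency = np.array([0.92, 0.90, 0.88, 0.86, 0.85, 0.84])   # placeholder simulated efficiencies

coeffs = np.polyfit(energy_keV, efficiency, deg=2)             # low-order polynomial, as in the text
eff_model = np.poly1d(coeffs)

# Correct fitted peak areas for the energy dependence of the efficiency,
# then form relative intensities within, e.g., one subshell group.
peak_energy = np.array([33.2, 37.1, 41.5])                     # hypothetical ICE line energies, keV
peak_area = np.array([12000., 5400., 2100.])                   # hypothetical fitted peak areas
corrected = peak_area / eff_model(peak_energy)
relative_intensity = corrected / corrected.sum()
print(np.round(relative_intensity, 3))
```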
Internal conversion electron (ICE) spectra of thin 238,239,240Pu sources, measured with a windowless Peltier-cooled silicon drift detector (SDD), were deconvoluted and relative ICE intensities were derived from the fitted peak areas.Corrections were made for energy dependence of the full-energy-peak counting efficiency, based on Monte Carlo simulations.A good agreement was found with the theoretically expected internal conversion coefficient (ICC) values calculated from the BrIcc database.
In 1967, an outbreak of smallpox occurred in the Nigerian town of Abakaliki.The vast majority of cases were members of the Faith Tabernacle Church, a religious organisation whose members refused vaccination.A World Health Organization report describes the outbreak, with information on not only the time series of case detections but also their place of dwelling, vaccination status, and FTC membership.The outbreak has inherent historical interest as it occurred during the WHO smallpox eradication programme initiated in 1959.Although smallpox was declared eradicated in 1980, it regained attention as a potential bioterrorism weapon in the early 2000s and continues to be of interest due to concerns about its re-emergence or synthesis.Public health planning for potential future smallpox outbreaks requires estimates of the parameters governing disease transmission, and thus being able to accurately obtain such quantities from available data is of considerable importance.Within mathematical infectious disease modelling, the Abakaliki smallpox data set has been frequently cited, the first appearance being Bailey and Thomas.The data are almost always used to illustrate new data analysis methodology, but in virtually all cases most aspects of the data are ignored apart from the population of 120 FTC individuals and the case detection times, while the models used are not particularly appropriate for smallpox.In Ray and Marzouk a more realistic smallpox model is used and account taken of the compounds where individuals lived, but again all non-FTC individuals are ignored.The main objective of this paper is to present a Bayesian analysis of the full data set.To our knowledge, the only previous analysis of the full Abakaliki data is that of Eichner and Dietz, where the authors define a stochastic individual-based transmission model that considers not only the case detection times but also the other aspects of the data.Their model takes account of the population structure, the disease progression for smallpox, the vaccination status of individuals and the introduction of control measures during the outbreak.The model parameters are then estimated by constructing and maximising a likelihood function which is itself constructed using various approximations.Specifically, the true likelihood of the observed data given the model parameters is practically intractable, since it involves integrating over all possible unobserved events, such as the times at which individuals become infected.Eichner and Dietz tackle this problem by first using a back-calculation method to approximate the distribution of unobserved event times for a given individual, and then by making various assumptions about independence between individuals in order to construct an approximate likelihood function.An alternative solution to the intractable likelihood problem is to use data-augmentation methods to produce an analytically tractable likelihood, which can then be incorporated in a Bayesian estimation framework by using Markov chain Monte Carlo methods along the lines described in O’Neill and Roberts and Gibson and Renshaw.We adopt this approach to carry out a full Bayesian analysis of the Abakaliki smallpox data, whilst also assessing how well the Eichner and Dietz approximation method works in this setting.Our approach provides results which can be directly compared with those of Eichner and Dietz, specifically estimates of model parameters, estimates of associated quantities of interest such as reproduction numbers, and the 
sensitivity of the results to the disease progression assumptions.In addition, we also estimate quantities derived via data-augmentation, such as who-infected-whom and the time of infection for each individual, carry out various forms of model assessment to see how well the model fits the data, and explore particular aspects of the model via simulation.None of these additional elements feature in the Eichner and Dietz analysis.The paper is structured as follows.In Section 2 we describe the data, stochastic transmission model and method of inference.Section 3 contains results and details of sensitivity analysis and model-checking procedures.We finish with discussion in Section 4.The supplementary material contains details of some likelihood calculations and the MCMC algorithm.The outbreak is described in detail in Thompson and Foege and Eichner and Dietz.There were 32 cases in total, 30 of which were FTC members.All of the infected individuals lived in compounds, which were typically one-storey dwellings built around a central courtyard, and capable of housing several families.The FTC members frequently visited one another and were somewhat isolated from the rest of the community, which is one reason why most previous data analyses only consider FTC members.Although FTC members refused vaccination, many of them had been vaccinated prior to joining FTC as described below.Table 1 contains details of the 32 cases of smallpox recorded during the outbreak, specifying the date of onset of rash, compound identifier, FTC membership status and vaccination status.Note that we set a timescale by setting day zero of the outbreak to be the first onset of rash date.The composition of the affected compounds is provided in Table 2, where the total numbers of vaccinated and non-vaccinated FTC and non-FTC members within each compound are listed.Note that on day 25, four FTC individuals from compound 1 moved to compound 2.In addition, quarantine measures were put in place in Abakaliki, but not until part way through the outbreak.The exact time these measures were introduced was not recorded.We suppose that the residents of Abakaliki form a closed population with N = 31,200 individuals, labelled 0, 1, …, N − 1.Individuals 0, 1, …, ncom − 1 are those inside the compounds, where ncom = 251 is the number of people within the compounds."Any individual k = 0, …, N − 1 may be categorised as type, where ck = 1, …, 9 is
The data themselves continue to be of interest due to concerns about the possible re-emergence of smallpox as a bioterrorism weapon.We present the first full Bayesian statistical analysis using data-augmentation Markov chain Monte Carlo methods which avoid the need for likelihood approximations and which yield a wider range of results than previous analyses.
the compounds, could well have been rather less than that assumed in the model. According to Thompson and Foege, the FTC community was largely isolated from the community at large, although several of its adult members were involved in trading activities in and around Abakaliki. Consequently, a model in which some fraction of FTC members had contact with the outside community might be more realistic, although there are no data to directly inform this. Our model also takes no account of any prior immunity within the population at large, meaning that some fraction of the population might have been previously exposed to smallpox and no longer susceptible. Another aspect that is missing from our model is that of age categories; Thompson and Foege states that the highest attack rates were among children. However, there do not appear to be sufficient data on compound composition to accurately incorporate age categories, and it seems likely that a model with age-specific transmission rates may be over-parameterised. We also do not explicitly model potential transmission between individuals in the same compound who are of a different confession, other than via the general λa rate. However, Table 2 shows that most compounds are almost entirely made up of either FTC or non-FTC individuals, and so including an additional term in the model is unlikely to have a material impact on the results. The compound that is most heterogeneous is compound 5, comprising seven FTC and fifteen non-FTC individuals, but here there were only four cases, all of whom were FTC. This in turn suggests that there is little in the data to inform estimation of a cross-confession transmission parameter. It seems likely that the advent of control measures at time tq played a crucial role in bringing the outbreak to its conclusion rapidly. Under the model assumptions, control measures reduce the rash period from an average of 16 days to just 2 days, which in turn reduces the number of new infections. Interestingly, the posterior mean of R0 after tq is around 1.5, but this in itself is insufficient to permit further large-scale spread due to the depletion of susceptibles within the compounds, and the fact that the epidemic in the population outside the compounds is sub-critical. Expanding the latter point, if we define pre- and post-control measure reproduction numbers for spread within compounds, FTC and the wider population (λa, etc.), then posterior mean estimates show that within compounds, the epidemic is super-critical before and after tq; within the FTC community, the epidemic switches from super- to sub-critical; in the wider population, the epidemic is always sub-critical. Despite this, increasing the value of tq in simulations was found to increase the outbreak size; for instance, setting tq to be 50, 100 and 200 gave mean outbreak sizes of around 24, 44 and 64, respectively. However, with no restrictions, we found the average outbreak size to be around 86, which underlines the fact that the epidemic was sub-critical in the wider population. It is of interest to see that our results are fairly similar to those obtained by Eichner and Dietz. The most plausible explanation for this is the fact that the distributions used for the length of time in each disease stage do not have particularly large variances, which in turn means that the model is not all that different from one in which all event times are assumed known. For such a model, the approximation method used by Eichner and Dietz gives the true likelihood, essentially because the
distributions used to approximate uncertain event times collapse to point masses around the true values.A further point of interest is that the Eichner and Dietz approximation produces a likelihood function which is numerically but not analytically tractable, specifically because it involves integrals that must be evaluated numerically.Although this is sufficient for optimization purposes such as maximum likelihood, in practice such likelihood functions can be computationally prohibitive for use within MCMC algorithms since they must be repeatedly evaluated.It would therefore be of interest to develop analytically tractable approximate likelihood functions.
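The contrast drawn above between an approximate, numerically integrated likelihood and an analytically tractable augmented-data likelihood can be illustrated with a deliberately simplified Markovian SIR model; this is not the multi-type, staged smallpox model of the paper. Once unobserved infection times are imputed, the complete-data likelihood has a closed form, and the infection and removal rates even admit conjugate gamma updates that alternate with Metropolis-Hastings moves on the imputed times. All event times and hyperparameters below are invented for illustration.

```python
# Toy illustration: closed-form complete-data likelihood for a Markovian SIR model
# with imputed infection times, plus conjugate Gibbs draws for beta and gamma.
# NOT the multi-type, staged smallpox model of the paper; all numbers are invented.
import numpy as np

N = 120                                        # population size (toy value)
i = np.array([0.0, 2.5, 4.0, 6.5, 9.0])        # imputed infection times (augmented data)
r = np.array([7.0, 10.0, 12.0, 15.0, 18.0])    # observed removal times
n = len(i)                                     # note: each non-initial infection must
                                               # occur while someone is infectious

def integral_SI(i, r, N):
    """Integral of S(t)*I(t) dt via the standard double-sum formula,
    treating never-infected individuals as having infection time +inf."""
    i_all = np.concatenate([i, np.full(N - len(i), np.inf)])
    return sum(np.minimum(rj, i_all).sum() - np.minimum(ij, i_all).sum()
               for ij, rj in zip(i, r))

def log_likelihood(beta, gamma, i, r, N):
    """Complete-data log-likelihood given parameters and imputed infection times."""
    kappa = np.argmin(i)                                   # initial infective
    ll = 0.0
    for j in range(n):
        if j == kappa:
            continue
        I_minus = np.sum((i < i[j]) & (r > i[j]))          # infectives just before i_j
        S_minus = N - np.sum(i < i[j])                     # susceptibles just before i_j
        ll += np.log(beta / N * S_minus * I_minus)
    ll += -beta / N * integral_SI(i, r, N)                 # no-infection exposure term
    ll += n * np.log(gamma) - gamma * np.sum(r - i)        # exponential infectious periods
    return ll

print(log_likelihood(0.3, 0.1, i, r, N))

# With Gamma(shape, rate) priors, the full conditionals are conjugate; in a full
# sampler these draws alternate with Metropolis-Hastings updates of the imputed times.
rng = np.random.default_rng(1)
a_beta = b_beta = a_gamma = b_gamma = 1e-3
beta = rng.gamma(a_beta + n - 1, 1.0 / (b_beta + integral_SI(i, r, N) / N))
gamma = rng.gamma(a_gamma + n, 1.0 / (b_gamma + np.sum(r - i)))
print(beta, gamma)
```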
The celebrated Abakaliki smallpox data have appeared numerous times in the epidemic modelling literature, but in almost all cases only a specific subset of the data is considered.The only previous analysis of the full data set relied on approximation methods to derive a likelihood and did not assess model adequacy.Our findings suggest that the outbreak was largely driven by the interaction structure of the population, and that the introduction of control measures was not the sole reason for the end of the epidemic.We also obtain quantitative estimates of key quantities including reproduction numbers.
The decision on the construction of the European Spallation Source initiated a number of studies on the instrumentation for a long pulse neutron source.It will provide an exciting opportunity to design a reflectometer of the next generation to meet the increasing demand and anticipated scientific challenges .In several meetings and in two specialized workshops carried out in 2012 and 2013, internationally recognized experts in the field of soft and hard matter discussed the science case for neutron reflectometry at the ESS.They identified the scientific drivers in which neutron reflectometry will assist in gaining valuable and unique information when the ESS starts its operation.The anticipated research topics comprise a wide range of scientific disciplines, ranging from thin film magnetism and novel topological phases in confined geometries, over the functionality and properties of hybrid materials in both soft and hard matter science to the structural biology of membrane proteins.A particular focus was put on the increasing complexity of thin film samples involving depth resolved two-dimensional patterning to enhance performance and create new functionalities as well as on the realisation that nanoscale lateral structures of “natural” interfaces in soft matter are essential for their functionality.Despite the growing number and steady advancements of neutron reflectometers in the last decades all around the world neutron reflectometry research still experiences serious restrictions that handicap the use of this technique for addressing the discussed science case in its full extent.The biggest drawback here is the generally low neutron flux available even at the nowadays strongest neutron sources which limits the accessible Q-range and thus the spatial resolution of the information gained from specular as well as from off-specular neutron reflectivity.Another restriction consists in the limited flexibility of the existing instruments allowing in general only one application of a particular operation mode in one experimental setup of the instrument.If, for example, the 3-d structure of a thin film sample is investigated, it is desired to provide a fast and easy switch between specular, off-specular and GISANS mode.Moreover, for free liquid samples it is important to allow for measurements in a wide Q-range, i.e. 
by applying different angles of incidence, without being forced to move the sample, since any additional movement may influence the properties of the sample or require long waiting times for the relaxation to the original state. The requirements on a neutron reflectometer free from the above limitations can be summarized as follows: it should provide the highest possible intensity for the GISANS and standard reflectivity modes and be built in the horizontal sample geometry to account for, besides other systems, the increasing demand for studies of free-standing liquid interfaces. Furthermore, it should foster the compatibility of the different types of measurements carried out on one sample at set external parameters. It should have a flexible wavelength resolution from 1% to 10% for tuning the Q-resolution with respect to the scientific needs. It should make it possible to relax the resolution to gain flux for Q-resolutions in the regime of small length scales, but it should leave the Q-resolution adjustable for specular reflectivity of thicker layers, for GISANS of larger lateral structures, or for allowing precise depth sensitivity in the TOF-GISANS mode. Its wavelength band should be chosen as wide as possible for the investigation of fast reactions and processes, to probe e.g. the kinetics of self-assembly of colloidal particles, the folding of proteins or the in-situ investigation of the exchange processes in membranes. It should have full polarisation capability, i.e. a polarised beam with subsequent polarisation analysis, which is required for the measurements of magnetic samples. Moreover, this polarisation capability may also be available for soft matter samples to reduce incoherent background or to enhance the contrast by using a magnetic reference layer. The beam size needs to be optimized for typical sample sizes of 10×10 mm2 and 5×5 mm2, the typical sizes of high quality samples produced by molecular beam epitaxy or pulsed laser deposition. Since soft matter, but also certain hard matter, samples will anyway be available with larger surface areas, the instrument should also allow one to make use of the larger amount of scattering material. Thanks to a new flat cold moderator planned to be installed at the ESS, the neutron beams will have 2.5–3 times higher brilliance in comparison with standard cold TDR moderators. As a result, the peak intensity of the ESS is expected to be about 60–75 times higher than the intensity of the time-of-flight instruments at the ILL. This opens an exciting opportunity to design a reflectometer of the next generation to meet the increasing demand and anticipated scientific challenges. HERITAGE is an instrument concept dedicated to studies of 3-dimensional structures in thin films, fulfilling the list of the above requirements with best performance. Being designed for studies of both free-liquid and solid interfaces, HERITAGE also possesses all operation modes conceivable for a liquid reflectometer, including fast kinetic studies of liquid samples without the necessity of sample motion in either case of illumination-from-above or -below. As we will show below, the unprecedentedly high flux of HERITAGE, due to its optimized focussing elliptic neutron guide in the horizontal plane, allows for studies of very thin films and interfacial regimes, down to 5 Å. The high flux can be traded for the very high Qz-resolution required for example in depth-profiling, which is currently only possible by X-rays and is a blind spot for neutrons due to the limited intensity of present neutron instruments. The broad overlap of
the accessible lateral length scales of multibeam focussing GISANS and off-specular
It constitutes a new class of reflectometers achieving an unprecedentedly high flux for classical specular reflectometry combined with off-specular reflectometry and grazing incidence small-angle scattering (GISANS), thus resulting in a complete 3-d exploration of lateral and in-depth structures in thin films. The use of multiple beam illumination allows for reflectivity and GISANS measurements at liquid interfaces both from above and below without a need to move the sample.
reflectometry permit studies of lateral structures in an extremely wide range from 0.4 to 30 µm.All the above allow for a gap-less exploration of the Q-space of thin interfaces in all 3 directions.The full polarisation analysis in combination with the high flux will push the limits of signal to noise ratio for samples with inherent incoherent scattering since those studies were not possible before due to background issues.With these characteristics the HERITAGE design meets most requirements formulated in the science case of ESS reflectometry.Apart from very small samples the present HERITAGE design satisfies all foreseeable ambitions of the biological, hard matter and soft matter community.Recent moderator developments at the ESS resulted in the discovery of low-dimensional para-H2 pancake moderators allowing for a brilliance gain of up to an order of magnitude in comparison with a standard moderator.Such pancake moderators should certainly provide a significant intensity gain for neutron instruments with high Q-resolution, as e.g. for small-angle scattering diffractometers and reflectometers which require highly collimated beams.Neutron reflectometry relies on a beam of smooth and flat divergence profile in the scattering plane.The low dimensional ESS moderator provides neutrons for a number of neutron beams, consequently its optimum form is of a flat disc with a height of about 3 cm .Using a neutron guide with the same height, however, results in an incomplete filling of the phase space of neutrons accepted by the neutron guide.Moreover it has a rather irregular divergence profile at the exit of the neutron guide.Such an irregular divergence profile is hardly usable for neutron reflectometry.This obstacle is in general overcome by reducing the height of neutron guide by keeping it 3–5 times smaller than the height of the moderator.We achieve this by splitting the 3 cm height neutron guide into 5 equidistant horizontal channels with heights of 6 mm each.Such a design results in a smooth divergence profile necessary for the use of the multibeam illumination system.It further allows for taking advantage of the enhanced brilliance of the flat moderator, while keeping a large total cross-section of the neutron guide in the flat direction and thus accepting a high neutron flux from the source.The layout of HERITAGE is presented in Fig. 
3. In the vertical plane, the instrument guide is S-shaped, made of two 10 m long curved neutron guides with a 7 m long straight section around the inflection point. The 2nd line of sight is lost at 24 m distance from the moderator, which is 10 m upstream of the sample position, so that the direct view of primary and secondary radiation sources is well avoided. The neutron guide is inclined by 2° with respect to the floor to allow for studying free liquid surfaces. The channel structure of the neutron guide will have a gap in the centre of the straight section, 16 m downstream along the neutron beam, where the movable transmission polariser is placed. In the vertical plane the neutron guide ends 4.5 m upstream of the sample to provide a 4 m long slot for the collimation options of the beam. In the horizontal plane, the elliptic neutron guide extends up to 0.5 m upstream of the sample. The last 4 m of this section can be modularly exchanged to provide different collimation options for the operational modes of HERITAGE described in Section 8. Basic guide parameters are listed in Table 1. In the horizontal plane, the neutron guide is of elliptic shape with the focal points on the moderator and the sample. It allows focussing a divergent neutron beam onto the sample in the horizontal plane. The ellipse is designed for the maximum brilliance transfer from the cold source to samples with surface areas between 5×5 mm2 and 10×10 mm2. The focussing effect is demonstrated in Fig. 4, showing the intensity distributions at the focal point at the sample position for different lengths of the end section of the elliptic guide. Fig. 4 compares the presented setup with the situation where the horizontal elliptic guide is replaced by a constant cross-section guide with the same coating, demonstrating a clear factor of 4 increase of the beam intensity for elliptic focussing. The performance of the neutron beam delivery system of HERITAGE is presented in Fig. 5. The left panel shows the intensity-wavelength distribution simulated for a low wavelength resolution of Δλ/λ=10% at the entrance of the collimation base. The spectrum of the beam formed by the collimation slits S1 and S2, both of 12 mm height at 4 m distance, i.e. at a beam collimation of 3 mrad, is depicted in the middle panel. The intensity integrated over the full wavelength spectrum amounts to 7.6·10⁹ n/cm2/s. The results from the simulations above are illustrated by the following simple estimations: compared with the ILL, the average flux at the ESS with the TDR moderator is about equal; the chopping of the beam would result in losses of the useful neutron intensity due to the blocking of the neutron beam between the neutron pulses by the choppers; the opening time is defined by the ratio of the pulse width to the pulse period, which is approximately equal to 1/25; such a pulse structure is naturally produced by the ESS, so no losses related to the time structuring of the neutron beam will occur; the flux gain due to the “pancake” moderator in comparison to the TDR moderator is about 2.5. Therefore, assuming that other beam parameters are similar, the expected gain is about 60. The actual gain may be even higher since a more advanced neutron optics
The instrumental concept of HERITAGE – a reflectometer with a horizontal sample geometry – well fitted to the long pulse structure of a neutron source is presented.This is achieved by specially designed neutron guides.In the horizontal direction (perpendicular to the scattering plane) the guide's elliptic shape focusses the neutrons onto the sample.In the vertical direction a multichannel geometry provides a smooth divergence distribution at the sample position while accepting the entire beam from a compact high-brilliance flat moderator.
60 provided by the HERITAGE design, as compared to today's strongest neutron sources under comparable conditions, offers great opportunities in the field of neutron reflectometry. It may lead to a drastic reduction of the measurement time, allowing for instance a detailed mapping of a phase diagram of a sample, which is not feasible today. In the field of kinetics, processes in the sub-second regime become accessible, which is particularly interesting for a number of soft matter projects. The increase of neutron flux in combination with focussing techniques further allows small samples, down to a sample surface of 1 mm² or below, to be investigated. Another big opportunity in neutron reflectometry inherent in such a drastic increase of the incident neutron flux is the ability to access lower reflectivities and a consequent extension of the accessible Q-range. Data from a larger reciprocal-space regime will deliver direct and more accurate information about smaller length scales than is possible today. If we consider the ratio of the background level b to the incident neutron intensity I0 as a combined parameter and the time as another crucial parameter, the relationship between the lowest necessary counting time and a signal of a certain intensity can be illustrated as shown in Fig. 20. Knowing the incident neutron flux and the background level b, the lowest measurable reflectivity can be estimated. With a neutron flux of 10⁸ n/s and a background level of 10⁻⁷, for example, reflectivities of R = 2·10⁻⁹ are already measurable in 1 h, or reflectivities of R = 5·10⁻¹⁰ in 12 h. A loss in incident neutrons can be proportionally compensated by a reduction in the background level if one wants to keep the measurement time constant. Having more incident neutrons, on the other hand, allows for a correspondingly higher background as long as the ratio between b and n remains constant. These considerations imply that, with the huge increase in neutron flux at the ESS and particularly with the HERITAGE concept, it is possible to measure reflectivities of the order of 10⁻⁹ or even below if the background level can be kept as low as assumed here. Theoretical intensity calculations for a sample system are illustrated in Fig. 21 to show how reflectivity levels of 10⁻⁹ can be measured in five hours or even less. It should be noted, however, that such low reflectivity levels require additional conditions to be fulfilled. First, the incident neutron flux n on the sample is much lower at small than at large incident angles because of the sample illumination condition. In the case of samples with rough interfaces, the reflectivity therefore drops, already at very small incident angles, to levels where the incident neutron flux is still too low to achieve a reasonably low b/n ratio that enables one to detect the signal. Second, and a very crucial point: these estimates ignore any systematic background components or fluctuations on time scales comparable to the typical measurement time; such contributions must be avoided, since they make it impossible to distinguish the background from a weak signal. This latter condition, of course, is not limited to reflectometry, but must be fulfilled at any instrument at any neutron source. Therefore it is essential to place the experiment in an extremely stable environment with respect to e.g.
temperature or humidity changes that occur during the day/night cycle, or other sources of long-term external fluctuations. Moreover, it is essential to shield the experiment very well against parasitic neutrons stemming from neighbouring instruments, a possible major source of non-systematic, non-statistical background. The reflectometer concept presented here not only supports static and kinetic measurements on all possible types of interfaces, but also combines unprecedentedly high flux with high lateral resolution, allowing for the first time a complete 3D structural study of a thin-film sample within one experimental session. The extremely high flux of 7.6·10⁹ n/cm²/s is achieved not only by the outstanding performance of the ESS pancake moderator but is complemented by the modern neutron guide concept in the horizontal direction, which focusses the beam onto the sample via an elliptically shaped neutron guide. The intensity gain may be used to achieve a significantly higher dynamic Q-range than is feasible today, for kinetic studies on considerably shorter time scales, for a higher Q-resolution, for the exploration of the 3D structure of thin films, for shorter measurement times, or for studies of very small samples within a reasonable time. The strength of the HERITAGE concept lies in its potential to investigate all kinds of interfaces through its ability to switch quickly between the available operational modes inside the collimation section, while keeping the complexity of the instrument low. HERITAGE operation ranges from single- to multi-beam setups illuminating the sample from above or below, allowing measurements of specular and off-specular reflectivity as well as GISANS, combined with the full variety of λ-resolution, polarisation analysis and kinetic measurement options. The realisation of such a reflectometer concept provides an extremely high flux, far exceeding the values at the currently best reflectometers in the world. A gain factor of about 2 orders of magnitude is expected in comparison with reflectometers at the ILL and SNS. Moreover, in comparison with the reflectometer design FREIA for the ESS, the gain amounts to a factor of about 5 for 4×4 cm² liquid samples and to a factor of about 8 for 1×1 cm² solid samples.
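Returning to the counting-statistics argument above (incident flux, relative background level b, counting time), the following minimal sketch estimates the detection significance of a weak reflectivity under simple Poisson assumptions. The detection criterion, the assumption of an equally long background measurement and the numerical inputs are illustrative choices that merely reproduce the order of magnitude of the quoted example (10⁸ n/s, b = 10⁻⁷, R ≈ 2·10⁻⁹ in about 1 h, giving a few σ); they are not the model behind Fig. 20.

```python
import math

def significance(R, flux, b, t_hours):
    """Approximate detection significance of a reflectivity level R.

    R        -- reflectivity level to be detected
    flux     -- incident neutron flux on the sample (n/s)
    b        -- background level relative to the incident intensity
    t_hours  -- counting time in hours

    Assumes Poisson counting and a separately measured background of equal
    duration, so the variance of the background-subtracted signal is
    (signal + 2 * background) counts.
    """
    t = t_hours * 3600.0
    signal = R * flux * t          # reflected counts
    background = b * flux * t      # background counts under the signal
    return signal / math.sqrt(signal + 2.0 * background)

# Reproduce the order of magnitude of the example quoted in the text:
for R, t in [(2e-9, 1.0), (5e-10, 12.0)]:
    s = significance(R, flux=1e8, b=1e-7, t_hours=t)
    print(f"R = {R:.0e}, t = {t:4.1f} h  ->  {s:.1f} sigma")
```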
This concept assures the delivery of the maximum possible and usable flux to the sample in both reflectivity and GISANS measurement regimes. The presented design outperforms all present-day reflectometers and GISANS setups, as well as those already planned for the ESS, in flux and in measuring time for standard samples.
and the water droplet distribution of the impact depends on the area covered by the water droplets. To analyze the uniformity of the water distribution, the nozzle spacing Sn and the height H between the nozzle and the coil are adjusted. Table 1 shows the parametric ranges used in this work to investigate the water spray uniformity of the sprinklers. Besides, to examine the influence of the flow rate generated by the nozzle opening, three kinds of nozzles with different opening lengths and opening widths are tested, as shown in Table 2. In the separate experimental runs, it was found that when the working fluid is sprayed from a nozzle and from the opposite nozzle, an impact is generated. The impacting streams do not dissipate immediately and cause secondary impacts with the adjacent jets until the impacting force completely disappears. Therefore, to examine the effects of the different parameters, 76 experimental cases are conducted to evaluate the spray uniformity. In this work, the Power Analyzer is used to record the power consumption of the pump, with a current range of 0.5–500 A and an accuracy of ±0.2%. To observe the influence of the different parameters on the spray uniformity, the height of the water collection containers is changed by a lifting trolley. Besides, an electronic balance is used to measure the water weight in each water collection container. The nozzle opening length and opening width are the two key geometric parameters that affect the water spray uniformity and the collection ratio. For a nozzle with a narrow opening width and a large opening length, the sprayed water droplets are strong with a long spray range when the flow rate is high. Therefore, many droplets splash directly onto the chamber wall. When the flow rate is small, the flow velocity is low and the droplets are excessively concentrated in the middle region. Therefore, in this experiment, three kinds of nozzles with various opening widths and lengths are tested. In the separate experimental runs, flow observations of the water droplets for the various nozzle designs are tested first. Nozzle 2 is designed according to the maximum flow rate through the opening cross-sectional area, with the flow velocity controlled at 1.2 m/s. The amount of water received is large and the root mean square error of the water collection rate can be reduced. Nozzle 3 has a small opening length, and it is observed that the droplet spraying range can be narrowed to achieve a better effect. Fig. 5 shows the effects of the height H on the root mean square error of the water collection rate and on the water collection ratio for the three kinds of nozzles at a water flow rate of 125.1 LPM and a nozzle spacing of 17 cm. It is clear in Fig.
5 that higher water collection ratios are noted for Nozzle 2 and Nozzle 3. However, as measured in the separate runs at a flow rate of 125.1 LPM and a nozzle spacing of 17 cm, the flow velocity of Nozzle 1 is 1.34 m/s, while the velocities of Nozzle 2 and Nozzle 3 are 0.97 m/s and 1.53 m/s, respectively. Although a smaller water collection ratio is noticed for Nozzle 1, the root mean square error of its water collection rate is much smaller, i.e. the distribution is more uniform. The spray from Nozzle 2 tends to concentrate in the intermediate water collection containers because the flow velocity is too low. As for Nozzle 3, due to the smaller opening length, the flow velocity becomes higher and the droplet spray range is reduced, which in turn causes the water impact to concentrate in the central water collection containers. It can be observed from the figure that the standard deviation of the collection rate gradually decreases with an increase in the height H, indicating that the splash of the droplets is affected by gravity after the first and second impacts, so that the standard deviation of the collection rate does not rise. However, the water collection ratio is also reduced when the height is increased, owing to droplets splashing along the wall. In this work, the experimental study was carried out at different flow rates, nozzle spacings and nozzle geometric designs. The results can be summarized as follows: in the watering efficiency of the evaporative condensing chiller, the cross-sectional area of the nozzle opening and changes of the flow rate have a great influence on the uniformity. If the flow rate through the nozzle can be properly adjusted, droplet loss and impact losses can be reduced. In this work, Nozzle 2 has better capacity and uniformity than Nozzle 1 at high flow rate. At low flow rate, Nozzle 1 provides a better impacting effect with a nozzle spacing of 17 cm, yet Nozzle 2 performs better with a nozzle spacing of 15 cm.
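As a concrete illustration of the uniformity metrics used above, the following sketch computes a water collection ratio and the root mean square error of the per-container collection rate from a set of measured container weights. The assumed definitions (RMSE about the mean collection rate; collection ratio as collected mass over sprayed mass) and all numbers are hypothetical and only follow the general description in the text.

```python
import math

def uniformity_metrics(container_grams, sprayed_grams, duration_s):
    """Return (collection_ratio, mean_rate, rmse_of_rate).

    container_grams -- water mass collected in each container (g)
    sprayed_grams   -- total mass sprayed by the nozzles during the test (g)
    duration_s      -- test duration (s)
    """
    rates = [m / duration_s for m in container_grams]      # g/s per container
    mean_rate = sum(rates) / len(rates)
    rmse = math.sqrt(sum((r - mean_rate) ** 2 for r in rates) / len(rates))
    ratio = sum(container_grams) / sprayed_grams
    return ratio, mean_rate, rmse

# Hypothetical example: 10 of the 50 collectors, a 60 s test
weights = [82, 95, 110, 150, 210, 205, 148, 112, 90, 80]   # grams
ratio, mean_rate, rmse = uniformity_metrics(weights, sprayed_grams=2000,
                                            duration_s=60)
print(f"collection ratio = {ratio:.2f}, "
      f"mean rate = {mean_rate:.2f} g/s, RMSE = {rmse:.3f} g/s")
```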
This study aims to investigate the water spray uniformity and collection ratio of the sprinklers in an evaporative condenser of a water chiller. Experiments on the water droplet distribution are conducted with 50 water collectors during the tests. Three different combinations of nozzle opening length and width were tested at flow rates of 135 LPM and 176.4 LPM. Measured results show that the cross-sectional area of the nozzle opening and the flow rate significantly affect the water spray uniformity. In this work, at high flow rate, Nozzle 2 with an opening of 4 cm in length and 1 cm in width has better water spray uniformity compared to Nozzle 1 with an opening of 4 cm in length and 0.7 cm in width. On the other hand, at low flow rate, Nozzle 1 provides a better impacting effect with a nozzle spacing of 17 cm, yet Nozzle 2 performed better with a nozzle spacing of 15 cm. The latter case, with the smaller nozzle spacing and bigger nozzle opening size, led to a shorter impact distance of the spraying flow from the two facing nozzles. Subsequently the spattering of water droplets was more pronounced and distributed more uniformly.
marine fuel combustion, is this not an opportunity to address the challenges of sulphur and CO2 together? Links between SOx and CO2 emissions mean the sector runs the risk of taking a very short-sighted approach if it chooses to tackle SOx emissions without thought for the carbon repercussions. Addressing the co-benefits would reduce the chances of infrastructure and marine engine lock-in, as well as reducing the potential lock-out of future low carbon fuels. Failing this, and continuing to pursue only sulphur regulation, means the sector is likely to have to again make changes to its fleet and fuel infrastructure in the coming decades. The argument of lock-in is not just made in the shipping industry; it is also an argument frequently made in the energy sector when it considers low carbon pathways. Whilst it is clear that one alternative fuel or technology measure will not be applicable for the entire fleet, there are a range of technologies that lend themselves to certain types of vessels and markets. With the help of industrial stakeholder input, our research is currently exploring technology roadmaps for a range of shipping vessels. For example, whereas small vessels operating in coastal waters could achieve large-scale decarbonisation through the use of energy storage and fuel cells, tankers operating on the high seas have the potential to exploit wind, given their greater flexibility with regard to available deck space. In exploring the potential benefits and challenges of any new developments or retrofit options, the vessels should, as a minimum, seek to satisfy the sulphur regulation in the short term but ensure that such measures do not limit the potential for low carbon technologies in the longer term. For example, LNG infrastructure should be capable of storing either biogas or hydrogen in the future. A more controversial response to this challenge could be to delay the implementation of ECAs, so that the sector can develop and introduce lower carbon propulsion from the outset, and so deliver these co-benefits. Arguably, the move towards low sulphur propulsion is missing the opportunity to tackle the wider systemic issue of climate change. This option would be premised on the implementation of a meaningful global CO2 reduction strategy in the coming years to incentivise low-carbon technology development. Such a suggestion could help the sector to: move away from technology measures that only provide incremental CO2 savings; develop and implement more radical, step-change technologies such as wind power, batteries, fuel cells and biogas from the outset; and reap the co-benefits of addressing cumulative CO2 emissions and the localised impacts of SOx and NOx. Let us not forget that many of these lower carbon forms of propulsion are seen as cost-effective by industry itself, and there are already pioneers in the industry exploiting such measures, like B9 Shipping, Sky Sails and Enercon. Of course, developing a meaningful global CO2 strategy in the interim is very challenging and, from following discussions at the IMO's Marine Environment Protection Committee to date, it could take considerable time to reach agreement. Furthermore, the agreement could lead to unintended consequences such as loss of economic competitiveness and trans-modal shift. However, referring back to discussion at the SEAaT event, such challenges are just as apparent when addressing the sulphur regulations and the sector is moving forward with these. Whilst the stricter sulphur regulations in ECAs are impending,
the widespread agreement amongst the scientific community is that climate change is here and the regulations surrounding a reduction in CO2 emissions are only going to tighten.In response to this, rather than taking a short-sighted approach, the shipping industry should consider the choices that it makes in the coming years with regard to dealing with sulphur emissions.The sector should be open to the idea that addressing CO2 and SOx emissions simultaneously is an opportunity to embrace the wider issues – to take a systems view of the role of shipping in addressing not just local pollutants, but climate change too.This in turn could secure a more sustainable future for the industry, rather than one that increases its costs by only meeting one regulation at a time.
Although this is demanding, a greater challenge for all sectors is climate change.With a deep-seated change to the type of fuel burnt in marine engines, this should be seen as an opportunity to explore co-benefits of sulphur and carbon reduction - instead of taking a short-sighted approach to the problem.It is argued here that the upcoming sulphur regulations should be postponed and instead, a co-ordinated suite of regulations should be implemented that tackles cumulative CO2 emissions and localised SOx emissions in chorus.This would ensure that less developed, yet more radical, step-change forms of propulsion such as wind, battery and biofuels are introduced from the outset - reducing the risks of infrastructure lock-in and preventing the lock-out of technologies that can meaningfully reduce absolute emissions from the sector.
in-stent restenosis.Of particular relevance is the ability of EPAC1 to induce SOCS3 gene expression, as SOCS3 exerts multiple protective effects in both cell types, while immunohistochemical studies have shown that neointimal lesions from a pig coronary artery injury model have significantly lower SOCS3 expression levels within proliferating neointimal smooth muscle cells versus those in normal media .Thus, SOCS3 can inhibit VSMC migration, via inhibition of IL-6-mediated induction of matrix metalloproteinase-2 and -9 , and proliferation in vitro, via inhibition of STAT3-mediated induction of cyclin D1 and NH in vivo .In addition, SOCS3 overexpression can inhibit VSMC inflammation in vitro by inhibiting STAT3 activation , while multiple studies have demonstrated that EPAC1-inducible SOCS3 can limit proinflammatory JAK–STAT and ERK1/2 signalling by IL-6 trans-signalling complexes and leptin in VECs .Coupled with the well-described ability of EPAC1 to enhance endothelial barrier function , localised activation of EPAC1 would be anticipated to suppress NH via inhibition of endothelial inflammation, VSMC proliferation and migration, and remodelling.The ongoing development of drug-eluting and bioabsorbable polymer-eluting stents for PCI also provides an obvious route through which strategies to activate EPAC1 locally at the site of stent deployment could be achieved, thereby minimising any adverse effects of EPAC1 activation in non-damaged tissue.Testing these types of approach in additional disease models, coupled with the development of EPAC1-selective small molecules, would also allow an informed assessment of whether the potential for such approaches can be realised in a range of therapeutic indications.
Pharmaceutical manipulation of cAMP levels exerts beneficial effects through the regulation of the exchange protein activated by cAMP (EPAC) and protein kinase A (PKA) signalling routes.Recent attention has turned to the specific regulation of EPAC isoforms (EPAC1 and EPAC2) as a more targeted approach to cAMP-based therapies.For example, EPAC2-selective agonists could promote insulin secretion from pancreatic β cells, whereas EPAC1-selective agonists may be useful in the treatment of vascular inflammation.By contrast, EPAC1 and EPAC2 antagonists could both be useful in the treatment of heart failure.Here we discuss whether the best way forward is to design EPAC-selective agonists or antagonists and the current strategies being used to develop isoform-selective, small-molecule regulators of EPAC1 and EPAC2 activity.
Providing transparency of AI planning systems is crucial for their success in practical applications.In order to create a transparent system, a user must be able to query it for explanations about its outputs.We argue that a key underlying principle for this is the use of causality within a planning model, and that argumentation frameworks provide an intuitive representation of such causality.In this paper, we discuss how argumentation can aid in extracting causalities in plans and models, and how they can create explanations from them.
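The abstract names argumentation frameworks as the representation of causality but gives no formal details here; purely as an illustrative aside, the sketch below shows a minimal Dung-style abstract argumentation framework and the computation of its grounded extension. The argument names are hypothetical planning statements, not taken from the paper.

```python
# Minimal Dung-style abstract argumentation framework (AF) with a grounded
# extension computation. Purely illustrative; argument names are hypothetical.

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function F(S)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(a, s):
        # a is acceptable w.r.t. s if every attacker of a is attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in arguments if defended(a, extension)}
        if new == extension:
            return extension
        extension = new

args = {"goal_needs_door_open", "plan_opens_door", "door_is_jammed"}
atts = {("door_is_jammed", "plan_opens_door")}   # attack relation
print(grounded_extension(args, atts))
# Both unattacked arguments are accepted; "plan_opens_door" is defeated,
# which could be surfaced to the user as an explanation of plan failure.
```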
Argumentation frameworks are used to represent causality of plans/models to be utilized for explanations.
practice’.Farmers are aware that they often overuse anthelmintic drugs, that this seems to be a product of inherited practices over the years, and that it might mean that their practices are unsustainable.The process of perpetuating established social norms in farming thus play a key role in maintaining the rules of the game of deworming in livestock systems.This context evokes what Bourdieu calls the doxa, i.e. the knowledge of what is taken-for-granted on a field that sets social boundaries and limits our social behaviour."For farmers' deworming habitus to change, it is necessary for the game and its rules to also change. "If farmers' use of anthelmintics is part of a longstanding tradition of food production intensification and low genetic diversity in livestock that increases infectiousness, pathogenicity of pathogens, and animal health deterioration, overuse of anthelmintics as well as other drugs will persist.In fact, resistance does not represent a sufficient short-term risk for farmers, as compared to their ethical responsibility, and the sustainability and resilience of their businesses and local profit margin costs.Moreover, the low costs of anthelmintics that are easily accessible from drug sellers who are non-health professionals and who are themselves in need of selling their products, may hinder attempts to influence patterns of anthelmintic use, particularly considering the fact that animal medication can mask other deficiencies of the system that could be more expensive to solve.For instance, in Europe, a ban in the use of antibiotics as growth promoters directly added to feed has contributed, in some countries, to a 50% increase in the therapeutic requirements of antibiotics due to the emergence of other diseases caused by poor animal management and biosecurity.Different spheres of drug practices exist within animal and human health systems, whereby rules and norms are created and persist."The study discussed in this paper provides insights into farmers' deworming habitus within the particular context of livestock production. "They reveal that aspects such as time and resource constraints, and concerns over animal care translate farmers' deworming habitus into immediate risk mitigation and routine-based practices.This echoes the observations in the context of antimicrobial use in animal agriculture and human hospital settings where similar financial and operational pressures exist.What this means is that drugs are more than technical commodities and reflect cultural processes, particularly in the case of production systems such as livestock farming."Understanding farmers' logics of practice is crucial to clarify the drivers behind drug use practices in farming and to develop more comprehensive strategies - beyond regulations - against drug resistance.In the case of animal agriculture, this also means that, besides farmers, we need to explore the roles of other players, such as the food industry and consumers, who are also responsible for defining the structures of the system and the ‘value’ of livestock animals, something that, ultimately, influences the emergence of diseases, the assessment of risks, the practices related to animal medication and drug resistance itself.
Farm practices related to drug use in animal agriculture play an important role in the development of drug resistance.In this paper, I use Bourdieu's theory of practice to explore the field of deworming and how the use of deworming medications, also called anthelmintics, by farmers may be a pragmatic choice within the habitus of livestock farming.Drawing on 42 in-depth interviews with livestock farmers across England I show how farmers prioritise farm productivity and animal health and welfare to the detriment of an adequate use of anthelmintics, which may lead to an increase in drug resistance.I also discuss some of the particularities regarding the engagement between farmers, veterinarians and the industry, and how expert advice is commonly limited to one-way flow of information.As a strategy to address drug resistance in livestock, mainstream policy approaches to drug management in the farm have prioritised the development and dissemination of technical guidelines.However, these guidelines are usually disconnected from the farming context, do not take into account the complexity and challenges of farm everyday practices and are eventually rejected by farmers.Although there has been increased interest from the social sciences in studying the intersection between drug resistance and farmers' perceptions and behaviour, there is still a need for unpacking the often hidden dynamics and logics of farm practices, understanding how they shape animal health management and, more specifically, drug use.I argue that farm practices related to drug use are situated within a larger context of intensive animal production systems, which themselves contribute to the emergence of animal diseases, the medicalisation of animal production and drug resistance.
while amygdala activity increased.Activity in the secondary somatosensory cortex, posterior insula, ACC, and caudate nucleus was correlated with the heat stimulation, while only right anterior insula activity correlated with the heat stimulation during tVNS.The anterior insula has been implicated in modulating the perception of pain.Specifically, shifting attention away from pain decreases pain intensity and pain-evoked activity in the anterior insular cortex.Inflammatory conditions that produce or are induced by pain may also improve with VNS as descending vagal signals activate anti-inflammatory pathways that suppress secretion of pro-inflammatory cytokines, which could subsequently ameliorate associated pain.Yuan and Silberstein provide an up-to-date review on VNS and the anti-inflammatory response.The studies summarized in Tables 1 and 2 provide evidence that stimulation of vagal afferents produces significant, and in some cases, clinically significant analgesic effects in healthy participants and patient populations.Nevertheless, some discrepancies exist and further work is necessary in order to fully elucidate the mechanism by which VNS modulates pain perception in both favorable and unfavorable directions.To this end, we propose a new model that includes psychological variables that have previously been shown to modulate pain perception and that are also modulated by vagal stimulation.With few exceptions, investigators have focused on the effects of VNS on psychological factors or on pain.Future studies investigating vagal modulation of pain should include measures of psychological factors, such as mood and attention, as they significantly interact with the way pain is perceived.Fig. 2 depicts the direct effects that VNS has on pain perception, as well as the psychological variables that can differentially modulate the components of pain.Inclusion of these factors may better characterize the role of the vagus nerve in the perception of pain, and may help resolve the discrepancies in the literature.Additional factors to consider that may also result in discrepant findings are the duration, parameters, and location of VNS stimulation.It is worth noting that many of the VNS studies described above are testing acute effects of vagal stimulation, which may produce immediate significant changes in pain responsive brain regions but not necessarily behavioral responses, as observed in the study by Usichenko et al.Four-weeks of tVNS reduced clinical symptoms in MDD patients and produced resting-state functional connectivity changes in associated neural-networks.Thus, more longitudinal studies are required to gain a better understanding of the potential persisting affects of vagal stimulation.While increased pain perception in response to VNS may be a function of stimulation parameters, it is also likely that possible increased attention, alertness, and vigilance in response to VNS may increase attention to evoked pain and thereby increase pain intensity.An occurrence of increased pain perception in response to VNS is, indeed, a response and evidence of vagal modulation of pain.Thus, participants with such reactions should not be deemed “unresponsive”, as that would, by definition, indicate that VNS neither increased nor decreased pain perception.Pro-nociceptive effects in response to vagal stimulation are intriguing phenomena that warrant further examination.It is also possible that in a true case of unresponsiveness, as measured by pain threshold or pain intensity ratings, VNS may, indeed, 
be having an effect on a component of pain that is not being measured, i.e., the unpleasantness associated with pain, which can be independent of the perceived pain intensity.In other words, it is possible that in cases where VNS either increases or has no effect on pain, it may also be decreasing the unpleasantness associated with the pain, as noted in the anecdotal comments of one of the patients in a previous study who was no longer “bothered” by his existing pain.Furthermore, a decrease in pain unpleasantness in response to VNS may be attributed to improved affect, which evidently has not been reliably measured in studies investigating the effects of VNS on pain.To this effect, based on the findings reported in Usichenko et al., one could argue that tVNS may have possibly increased mood and decreased pain unpleasantness as the activity in the secondary somatosensory cortex, which is associated with coding the affective component of pain, i.e., unpleasantness, was no longer associated with the painful heat stimulation during tVNS.In conclusion, the present studies discussed provide preliminary evidence in humans that stimulation of the vagus nerve alters the way pain is perceived and the findings corroborate early reports in animals.The present studies also provide evidence that the vagus nerve alters psychological processes that are known to modulate pain perception differentially.It remains unclear as to whether VNS-induced psychological effects partially mediate vagal pain modulation.However, evidence of vagal-induced analgesia and vagal-induced improvement in mood, together with the known psychological effects on pain perception strongly indicate a need for including psychological measures in future studies investigating the effects of vagus nerve stimulation on pain.This research was supported by the Intramural Research program of the NIH, National Center for Complementary and Integrative Health.There is no conflict of interest for any of the authors.
There is preclinical and clinical evidence that vagus nerve stimulation modulates both pain and mood state.Mechanistic studies show brainstem circuitry involved in pain modulation by vagus nerve stimulation, but little is known about possible indirect descending effects of altered mood state on pain perception.To date, human studies investigating the effects of vagus nerve stimulation on pain perception have not reliably measured psychological factors to determine their role in altered pain perception elicited by vagus nerve stimulation.Here, we present a rationale for including psychological measures in future vagus nerve stimulation studies on pain.
least a 4-fold increase in post-vaccination reciprocal titer.SPR was defined as the percentage of participants who attained reciprocal HI titers of ≥1:40.MGI was defined as the geometric mean of the within-subject ratios of the post-vaccination/pre-vaccination reciprocal HI titer.For the PPV23, anti-pneumococcal antibody concentrations were determined by 22F-inhibition enzyme-linked immunosorbent assay for the 12 serotypes shared by PPV23 and PCV13: 1, 3, 4, 5, 6B, 7F, 9V, 14, 18C, 19A, 19F, and 23F.The following parameters were derived: geometric mean concentration; proportion of seropositive participants; proportion of participants with concentration ≥0.2 µg/mL; proportion of participants with at least a 2-fold and 4-fold increase in post-vaccination concentration.Participants recorded solicited injection site and general symptoms on the day of vaccination and for the next 6 days.Spontaneously reported symptoms were recorded until 28 days after vaccination.Serious adverse events, potential immune-mediated diseases, and medically attended adverse events were recorded until the final study contact on Day 180.The co-primary objectives were to demonstrate immunologic non-inferiority of co-administration versus separate administration of the IIV4 and PPV23 in terms of: immune response measured by HI antibody titer for each of the four vaccine strains 28 days after IIV4 vaccination; immune response measured by anti-pneumococcal antibody concentrations for each of six pneumococcal serotypes 28 days after PPV23 vaccination.These six serotypes were selected because they are reported as leading causes of IPD in Europe .Non-inferiority criteria were met if the upper limit of the 95% confidence interval of the GMT or GMC ratio was ≤2.0 for: each of the four influenza vaccine strains and each of the six pneumococcal serotypes evaluated for the primary endpoint.Secondary objectives were to: describe the safety and reactogenicity of the vaccines when co-administered or administered separately; describe the immunogenicity of the IIV4 and PPV23 when co-administered or administered separately for all four influenza vaccine strains and 12 pneumococcal serotypes that are common to both the PPV23 and the PCV13.A post hoc analysis evaluated immunogenicity of the IIV4 in participants with comorbidities versus those without in a pooled co-administration/control group of participants ≥60 years of age who were considered higher-risk older adults.The primary immunogenicity analysis was based on the per-protocol cohort which included all participants who met eligibility criteria, complied with the protocol and their vaccine schedule, and had post-vaccination immunogenicity results against at least one antigen contained in the study vaccine.The safety analysis was based on the total vaccinated cohort which included all vaccinated participants for whom safety data were available.The calculations of study power and the GMT, GMC and ratios are described in the Supplement.A total of 356 and 334 participants were included in the TVC and PPC, respectively.Demographics were similar in both study groups; mean age was 68.3 years and most participants were of White Caucasian/European heritage.Most participants had received an influenza vaccination in the previous three seasons and had at least one comorbidity.Participants with no comorbidities were approximately 8 years older than participants with one or more comorbidities; in those ≥60 years of age, the difference was approximately 5 years.The study met its co-primary 
objectives by demonstrating non-inferiority of the immune response to both IIV4 and PPV23 with co-administration versus separate administration, with the upper limit of the 95% CI being ≤2.0 for the GMT and GMC ratios for all antigens evaluated.The IIV4 was immunogenic against all four vaccine strains in terms of GMT, SPR, SCR and MGI.High prevaccination seropositivity rates were observed for all vaccine strains.Postvaccination, there was a similar immune response in the co-administration and the control groups.GMTs were higher in participants 50 to <60 years of age versus those ≥60 years of age.The PPV23 was also immunogenic against all 12 vaccine serotypes that were evaluated and there were no differences in the immune response between study groups.Again, seropositivity rate was high before vaccination.Immunogenicity of the IIV4 was similar in those with one, two or at least three comorbidities.Only one participant in the co-administration group and four in the control group had three or more comorbidities.In the post hoc analysis pooling participants ≥60 years of age from the co-administration and control groups, the immunogenicity of the IIV4 was similar in those with versus without comorbidities.Overall, the co-administration and control groups had a comparable incidence of injection site and general solicited symptoms, with the exception of injection site pain.Pain occurred in more participants in the co-administration group than in the control group: 33.5% versus 15.8%, respectively, after IIV4 administration and 43.9% versus 35.6%, respectively, after PPV23 administration.The incidence of grade 3 pain was low.All other injection site symptoms occurred at a low rate, similar in both study groups.SAEs occurred in 4.0% and 6.1% of participants in the co-administration and control groups, respectively; none were considered related to vaccination.Our study assessed a population of mainly older adults, most of whom had comorbidities such as chronic heart disease, chronic respiratory disease and diabetes and were therefore at higher risk of infection or complications of influenza or pneumococcal disease.The study met its co-primary objectives by demonstrating non-inferiority of co-administration of the IIV4 and PPV23 versus separate administration, with the upper limit of the 95% CI of the GMT or GMC ratio being ≤2.0 for each of the four influenza vaccine strains and each of the pre-selected six pneumococcal serotypes evaluated as a primary endpoint.We selected pneumococcal serotypes 1, 3, 4, 7F, 14 and 19A for assessment of the primary endpoint because these are shared by both PPV23 and PCV13 and are the most prevalent serotypes
Immunogenicity was assessed by hemagglutination inhibition (HI) titers for IIV4 and 22F-inhibition ELISA for PPV23.Co-primary objectives were to demonstrate non-inferiority of co-administration versus separate administration in terms of geometric mean titer (GMT) ratio for each influenza strain in the IIV4 and geometric mean concentration (GMC) ratio for six pneumococcal serotypes (1, 3, 4, 7F, 14, 19A) in the PPV23 in the per-protocol cohort (N = 334).Results The study met its co-primary objectives, with the upper limit of the 95% confidence interval of the GMT and GMC ratios (separate administration over co-administration) being ≤2.0 for all four antigens of the IIV4 and the six pre-selected serotypes of the PPV23, respectively.In a post hoc analysis pooling participants ≥60 years of age from the co-administration and separate administration groups, IIV4 immunogenicity was similar in higher risk adults with comorbidities (diabetes; respiratory, heart, kidney, liver, or neurological diseases; morbid obesity) versus those without.Both vaccines had an acceptable safety and reactogenicity profile; pain was the most common symptom, occurring more often with co-administration than separate administration.
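To make the serological endpoints defined earlier (GMT, seroprotection rate, seroconversion rate, mean geometric increase, and the non-inferiority criterion on the GMT ratio) concrete, the sketch below computes them from a handful of hypothetical paired HI titers. The titer values, the simplified seroconversion rule (≥4-fold rise) and the log-scale normal-approximation confidence interval are illustrative assumptions, not the study's data or its statistical analysis plan.

```python
import math

def geo_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def hi_endpoints(pre, post):
    """GMT, SPR, SCR, MGI from paired reciprocal HI titers (hypothetical data)."""
    gmt = geo_mean(post)
    spr = sum(t >= 40 for t in post) / len(post)                 # titer >= 1:40
    scr = sum(a >= 4 * b for b, a in zip(pre, post)) / len(pre)  # >= 4-fold rise
    mgi = geo_mean([a / b for b, a in zip(pre, post)])           # mean geometric increase
    return gmt, spr, scr, mgi

pre  = [10, 20, 40, 10, 80, 20]      # hypothetical pre-vaccination titers
post = [80, 80, 160, 40, 160, 160]   # hypothetical post-vaccination titers
gmt, spr, scr, mgi = hi_endpoints(pre, post)
print(f"GMT={gmt:.1f}  SPR={spr:.0%}  SCR={scr:.0%}  MGI={mgi:.1f}")

def gmt_ratio_upper_ci(group_a, group_b):
    """Upper 95% CI limit of the GMT ratio (illustrative log-scale z-interval)."""
    la, lb = [math.log(t) for t in group_a], [math.log(t) for t in group_b]
    ma, mb = sum(la) / len(la), sum(lb) / len(lb)
    va = sum((x - ma) ** 2 for x in la) / (len(la) - 1)
    vb = sum((x - mb) ** 2 for x in lb) / (len(lb) - 1)
    se = math.sqrt(va / len(la) + vb / len(lb))
    return math.exp((ma - mb) + 1.96 * se)

# Non-inferiority is met if the upper CI limit of the ratio is <= 2.0
print("non-inferior:", gmt_ratio_upper_ci(post, post) <= 2.0)
```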
in Europe that are associated with invasive disease .With regard to the secondary objectives, all estimated immunogenicity parameters for all four influenza vaccine strains and all 12 pneumococcal serotypes evaluated were similar with co-administration and separate administration.We also evaluated immunogenicity of the IIV4 according to age group, and found that it was similar regardless of whether it was co-administered with the PPV23 or administered separately in participants 50 to <60 years of age and older adults ≥60 years of age.As expected, GMTs were lower in the older age group.Several studies have evaluated the immunogenicity of a TIV and either a PPV23 or a PCV13 when administered concomitantly or separately.In a study of adults with chronic respiratory disease who received TIV and PPV23, there was no difference between administration groups in titers of influenza or pneumococcal antibodies .In a 4-group study of MF59-adjuvanted TIV and PPV23 in adults ≥65 years of age, there was a similar immune response to influenza and pneumococcal antigens regardless of concomitant vaccination .In contrast to these data for PPV23, there is some evidence of a lower immune response with the PCV13 for some pneumococcal serotypes when co-administered with a TIV compared with separate administration.A study of co-administration versus separate administration of TIV and PCV13 in adults ≥65 years of age showed a similar immune response to TIV regardless of administration regimen, but a lower immune response to PCV13 when administered concomitantly compared with when administered alone .However, non-inferiority criteria were achieved for all 13 pneumococcal serotypes with the exception of 19F.Non-inferiority criteria were not achieved for influenza A/H3N2, although the authors concluded that this likely resulted from the high pre-vaccination antibody titers to A/H3N2 .In another study of TIV and PCV13 in healthy adults 50–59 years of age, the immune response to the TIV was robust and non-inferior when administered concomitantly with the PCV13 compared with when administered separately, but the immune response to the PCV13 was lower when administered concomitantly with TIV for all 13 serotypes .Overall, however, both studies demonstrated no clinically important differences in immunogenicity of the TIV or PCV13 with concomitant versus separate administration.Our study recruited participants eligible for influenza and pneumococcal vaccination under Belgian and French public health recommendations i.e. 
people with comorbidities that increase their risk of infection and those ≥65 years of age.In fact, approximately 85% of participants had at least one comorbidity.GMTs for each influenza vaccine strain were similar in participants with one comorbidity compared with two comorbidities.Only five individuals in the PPC had three or more comorbidities, so meaningful interpretation of immunogenicity in this group was not possible.In participants with one or two comorbidities, immunogenicity was similar regardless of administration schedule.To obtain a group of sufficient size for a meaningful comparison of IIV4 immunogenicity in individuals with and without comorbidities, we conducted a post hoc analysis pooling data from the co-administration and separate administration study groups in participants ≥60 years of age.The analysis showed no consistent difference in the immunogenicity of the IIV4 in people with comorbidities versus without comorbidities.It should be noted that participants ≥60 years of age with no comorbidities were approximately 4–5 years older than participants with comorbidities, which may have helped to balance out any impact of comorbidities on the robustness of the immune response.Overall, however, the data suggest that the immune response to the IIV4 is similar regardless of comorbidities, and, indeed, support the value of administering IIV4 to individuals ≥60 years of age with common chronic medical conditions such as heart disease, respiratory disease and diabetes that place them at increased risk for complications of influenza.Previous studies found a reduced immune response to seasonal influenza vaccine in individuals with comorbidities, but these studies were too small to afford adequate precision in estimating immunogenicity, lacked adequately matched controls, and in one case reported incomplete serological data .General symptoms were similar in the co-administration and separate administration study groups.Fatigue and muscle aches were the most common solicited symptoms during the 7-day post-vaccination period.Few participants experienced a serious adverse event.Pain was more common with both the IIV4 and PPV23 when co-administered than when administered separately, despite the vaccines being administered in different arms.However, grade 3 pain was rare.A previous study also reported a higher proportion of participants with pain on co-administration of IIV4 and PPV23 in different arms compared with separate administration .The study had both strengths and limitations.A key strength was that it recruited participants in an outpatient setting so both healthy people and people with comorbidities were enrolled.Because frail elderly people tend to reside in a care facility, they did not form part of the study population.An important limitation was the high level of seropositivity for anti-influenza and anti-pneumococcal antibodies before vaccination, which impacted endpoints reflecting fold-change between pre- and post-vaccination.High seropositivity was to be expected in an older population with substantial previous exposure to influenza and pneumococcal vaccinations as recommended by the French and Belgian health authorities.In conclusion, the study demonstrated immunological non-inferiority of co-administration of the IIV4 and PPV23 compared with separate administration in adults ≥50 years of age, indicating that the vaccines can be co-administered at different injection sites without reducing immunogenicity against influenza and pneumococcal illness.In our study, 
comorbidities had no impact on the immunogenicity of the IIV4.Co-administration of PPV23 at the annual influenza vaccination visit may improve convenience and result in increased vaccine uptake.
Introduction We compared co-administration versus separate administration of an inactivated quadrivalent influenza vaccine (IIV4) with a 23-valent pneumococcal polysaccharide vaccine (PPV23) in adults at high risk of complications of influenza and pneumococcal infection.Immunogenicity of the IIV4 and PPV23 was similar regardless of administration schedule.Conclusion The IIV4 and PPV23 can be co-administered without reducing antibody responses reflecting protection against influenza or pneumococcal disease.Co-administration of PPV23 at the annual influenza vaccination visit may improve uptake.Comorbidities had no impact on IIV4 immunogenicity, supporting its value in older adults with chronic medical conditions.
Autosomal dominant diseases represent a challenge for development of effective therapies.Whereas haploinsufficiency may be treated by conventional gene replacement, diseases linked to gain-of-function or toxic effect in essential proteins require the targeting of the mutated allele.The mutated allele can be either inactivated, leading to haploinsufficiency, or corrected to revert to a wild-type genotype.In the case of disease-implicated genes essential for cellular functions, allele-specific correction would be preferred to avoid deleterious effects due to an overall decrease of the protein expression.In this study, we tested the potential of genome editing to rescue the phenotypes of a dominant disease through allele-specific inactivation or correction.Allele-specific gene inactivation can be achieved through gene silencing with shRNA1 or antisense oligonucleotides, leading to haploinsufficiency.Mutation correction can be obtained at the RNA level through trans splicing,2,3 although allele-specific targeting is still challenging with this strategy.The most upstream approach would be to target the mutation at the DNA level.Genome editing using programmable nucleases has emerged as a powerful tool for targeting specific sequences.The clustered regularly interspaced short palindromic repeats-related CRISPR-associated protein 9 system is the most commonly used tool.Cas9 endonuclease is guided to a specific DNA sequence by a single-guide RNA4 and makes a double-strand break that triggers the cellular repair machinery.The main repair pathways are “error-prone” non-homologous end-joining, mainly leading to gene inactivation or the precise correction via homology-directed repair with the help of a DNA repair template.5,Monteys et al.6 assessed the allele specificity of CRISPR/Cas9 based on SNPs in cis that form a protospacer adjacent motif on the huntingtin mutated allele in patients’ fibroblasts, and Yamamoto et al.7 assessed the specificity of CRISPR/Cas9 to target a heterozygous single point mutation in the CALM2 gene in human induced pluripotent stem cells.Both studies disrupted specifically the mutated allele by NHEJ but did not report allele-specific correction.Allele-specific correction in a dominant disease was achieved after the integration of an antibiotic resistance cassette to promote the selection of the corrected allele.8,As a paradigm for testing allele-specific genome editing without integration of a selection cassette that might affect the genome structure or regulation, we focused on a dominant form of centronuclear myopathy.CNMs are rare muscle disorders belonging to the group of congenital myopathies.Several forms of the disease have been described with different severities.9,The autosomal dominant form is characterized by neonatal to adult onset, muscle weakness, and delayed motor milestones.Autosomal dominant CNM is caused by heterozygous mutations in the DNM2 gene, which encodes for the Dynamin 2 GTPase enzyme.10,11,The DNM2 R465W point mutation represents the most frequent mutation occurring in approximately one of four patients.12,A knock-in mouse model of this mutation has been generated and develops a progressive muscle weakness with reduced muscle force and histological features, including reduced fiber size and central accumulation of oxidative staining.13,In vitro, DNM2-CNM mutations enhance the GTPase activity and promote oligomerization, suggesting a gain-of-function pathomechanism.14,15,However, this gain-of-function hypothesis has not been fully validated at the 
cellular level.DNM2 is a ubiquitous GTPase involved in the fission of endocytic vesicles during clathrin-mediated endocytosis,16 cytoskeleton interaction,17–19 and autophagy regulation.20,Alterations of transferrin or EGFR uptake in cells expressing different DNM2 mutants have been reported, suggesting a defect in endocytosis.21–23,In mouse embryonic fibroblasts harboring the DNM2 R465W mutation in the homozygous state, autophagy defects were also reported.24,However, the cellular pathomechanism of the disease is still not well understood.To date, there is no specific treatment for the autosomal dominant form of CNM, and the potential of using CRISPR/Cas9 to treat dominant inherited diseases has yet to be better explored.The aim of this study was to assess the impact of the most common DNM2 R465W mutation on cellular pathways in myoblasts, providing a cellular context for investigating the disease, and to determine if genome editing can inactivate or correct the DNM2 mutation in an allele-specific manner and reverse the disease-related phenotypes.To be able to assess the cellular pathology and the efficiency of the CRISPR/Cas9 system in cells relevant for the disease, we established Dnm2R465W/+ immortalized myoblasts.Primary myoblasts were isolated from lower limb muscles of postnatal day 5 WT and Dnm2R465W/+ mice and transduced with a lentivirus expressing CDK4.After antibiotic selection of CDK4-expressing clones, cell sorting, and clone expansion, three WT and two Dnm2R465W/+ clones were established.The genotype was confirmed by PCR and Sanger sequencing.We verified that immortalization did not affect their ability to fuse into myotubes.After 7 days in differentiation medium, all clones differentiated into myotubes, as assessed by the fusion index and the expression of caveolin 3.The DNM2 mutation had no significant impact on the DNM2 protein level in these muscle cells.DNM2-related CNM is mainly caused by heterozygous single point mutations.The CGG codon in humans and the AGG codon in mice code for a conserved arginine residue at amino acid position 465.In patients and Dnm2R465W/+ KI mouse CGG and AGG codons are changed into TGG, encoding a tryptophan.To inactivate or correct specifically the mutated human and mouse alleles, we designed pan-allelic guide RNAs targeting both the mutated and WT alleles or allele-specific gRNAs targeting only the mutated allele.For the pan-allelic gRNAs, the PAM was selected precisely at the codon encoding R465 or at the R465W mutation.As the first nucleotide of the PAM is variable, these gRNAs should recognize both alleles.To avoid off-target effects and to provide allele-specific recognition of the mutated allele, we used 18 bp truncated gRNAs.25,The PAM for the allele-specific gRNAs was selected downstream of the R465W mutation, thus generating a 1 bp mismatch between the gRNA and the WT
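As a purely illustrative aside, the allele-specificity logic described above (a PAM placed on the mutated codon recognises both alleles because the first PAM base is variable, whereas a PAM chosen downstream of the mutation puts the mismatch inside the protospacer) can be sketched on toy sequences as follows. The 50-nt sequences, the sense-strand-only scan and the 18 nt truncated guide length are assumptions for illustration; this is not the real DNM2 locus or the authors' design pipeline.

```python
# Toy illustration of allele-specific vs pan-allelic guide design. The
# sequences are invented around an AGG->TGG change and are NOT the real DNM2
# locus; real designs also scan the antisense strand and check off-targets.

WT  = "GATCATTGAGCGTGGCACCGATAGGATCAACTGGCTGACCGAACTGAGCT"
MUT = "GATCATTGAGCGTGGCACCGATTGGATCAACTGGCTGACCGAACTGAGCT"
#                             ^ heterozygous point mutation (index 22)

def protospacers(seq, guide_len=18):
    """Yield (pam_index, guide, pam) for every NGG PAM on the sense strand."""
    for i in range(guide_len, len(seq) - 2):
        if seq[i + 1:i + 3] == "GG":
            yield i, seq[i - guide_len:i], seq[i:i + 3]

def targets(allele, i, guide, guide_len=18):
    """True if the guide can direct Cas9 to position i of this allele
    (perfect protospacer match and an NGG PAM; the first PAM base is free)."""
    return allele[i - guide_len:i] == guide and allele[i + 1:i + 3] == "GG"

for i, guide, pam in protospacers(MUT):
    kind = "pan-allelic" if targets(WT, i, guide) else "mutant-allele-specific"
    print(f"PAM at {i:2d} ({pam})  guide {guide}  -> {kind}")
# PAM on the mutated codon  -> pan-allelic (cuts both alleles)
# PAM downstream of the SNV -> mutant-specific (1 bp mismatch with WT)
```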
Genome editing with the CRISPR/Cas9 technology has emerged recently as a potential strategy for therapy in genetic diseases.In this study, we tested allele-specific inactivation or correction of a heterozygous mutation in the Dynamin 2 (DNM2) gene that causes the autosomal dominant form of centronuclear myopathies (CNMs), a rare muscle disorder belonging to the large group of congenital myopathies.These findings illustrate the potential of CRISPR/Cas9 to target and correct in an allele-specific manner heterozygous point mutations leading to a gain-of-function effect, and to rescue autosomal dominant CNM-related phenotypes.
Capsule Networks have shown encouraging results on benchmark computer vision datasets such as MNIST, CIFAR and smallNORB. However, they are yet to be tested on tasks where the detected entities inherently have more complex internal representations, where there are very few instances per class to learn from, and where point-wise classification is not suitable. Hence, this paper carries out experiments on face verification in both controlled and uncontrolled settings that together address these points. In doing so we introduce a new variant that can be used for pairwise learning tasks. We find that the model improves over baselines in the few-shot learning setting, suggesting that capsule networks are efficient at learning discriminative representations when given few samples. The model performs well against strong baselines on both pairwise learning datasets when trained using a contrastive loss with normalized, capsule-encoded pose features, yielding the best results in the few-shot learning setting where image pairs in the test set contain unseen subjects.
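The abstract mentions training with a contrastive loss on normalized, capsule-encoded pose features but does not spell the loss out; the sketch below shows a generic margin-based contrastive loss on L2-normalized embedding pairs as one plausible reading. The margin value, the Euclidean distance and the random toy embeddings are assumptions for illustration, and the capsule encoder itself is omitted.

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def contrastive_loss(emb_a, emb_b, same_identity, margin=1.0):
    """Margin-based contrastive loss on normalized embedding pairs.

    emb_a, emb_b   -- (N, D) embeddings of the two images in each pair
    same_identity  -- (N,) 1 if the pair shows the same subject, else 0
    """
    a, b = l2_normalize(emb_a), l2_normalize(emb_b)
    d = np.linalg.norm(a - b, axis=1)                              # pair distance
    pos = same_identity * d ** 2                                   # pull genuine pairs together
    neg = (1 - same_identity) * np.maximum(0.0, margin - d) ** 2   # push impostors apart
    return np.mean(pos + neg)

# Toy usage with random stand-ins for capsule pose embeddings
rng = np.random.default_rng(0)
emb_a = rng.normal(size=(4, 16))
emb_b = rng.normal(size=(4, 16))
labels = np.array([1, 0, 1, 0])
print(contrastive_loss(emb_a, emb_b, labels))
```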
A pairwise learned capsule network that performs well on face verification tasks given limited labeled data
studies and is more effective when combined with other measures, especially bone debridement and/or surgery. Unlike for ORN, the protocol of PENTO associated with ATB did not show good results in the healing of MRONJ. In contrast, platelet-rich plasma was also a good treatment alternative, succeeding in over 80% of cases. Low-level laser therapy, in turn, was presented as a more efficient approach when combined with ATB and bone debridement. HBO had contrasting results, with success rates varying between 25% and 90%. Regarding MRONJ, it is known that the better the oral condition of the patient to be subjected to treatment with BPs, the more favorable the prognosis. However, the patient and the attending physician are often unaware of the possible oral repercussions that this drug class can cause. Once the injury is established, the dentist should make use of the measures recommended by the AAOMS to try to resolve the disease, such as ATB, mouthwash with 0.12% chlorhexidine gluconate, pain management, bone debridement when needed and infection prevention, as well as keeping up to date on the new effective treatment alternatives that are emerging.2 Surgery is the treatment option most often adopted for MRONJ.32,43 Regardless of whether it is conservative or extended, it is usually associated with ATB.42,47,50 With a varying success rate among the cases reported in the literature, the average treatment success rates with conservative surgery and extensive surgery are 53% and 67%, respectively. The VELscope system is reported as a promising surgical tool which allows the margin between viable and necrotic bone to be identified through bone fluorescence.48 Thumbigere-Math et al.50 treated MRONJ with HBO associated with ATB and extensive surgery, resolving 25% of cases, whereas Freiberger et al.41 resolved 52% of cases associating HBO exclusively with ATB. Therapy with platelet-rich plasma associated with ATB has shown good results in patients undergoing surgical procedures, achieving a cure rate higher than 80%.38,42 Unusual but effective, ozone therapy had success rates of 60.6% and 100% in resolving 57 and 10 cases, respectively.39 Another therapy that has brought good results in combating MRONJ is LLLT. However, its action is most effective when combined with other therapeutic modalities such as surgery, platelet-rich plasma and ATB42 or associated with non-surgical debridement, ATB and PDT.49 The decision on the best approach for the management of ONJ patients, in its different modalities, should always be made by a multidisciplinary team, considering the general state of the patient and the risk/benefit ratio. Infections, trauma and decreased vascularity have a triggering role both for MRONJ and ORN, which are challenging diseases for which no single treatment is decisive on its own. Different therapeutic modalities can be employed in an associated manner, together with prophylactic and/or stabilizing measures. Furthermore, continuously updated knowledge on the part of the dental professional is essential for the management of these patients. The authors declare no conflicts of interest.
Introduction: Osteonecrosis of the jaws can result either from radiation, used in radiotherapy for treatment of malignant tumors, or medications used for bone remodeling and anti-angiogenesis such as bisphosphonates.These conditions can be associated with triggering factors such as infection, trauma and decreased vascularity.The management of patients with osteonecrosis of the jaws requires caution since there is no specific treatment that acts isolated and decidedly.However, different treatment modalities can be employed in an associated manner to control and stabilize lesions.Objective: To review the current knowledge on etiology and management of osteonecrosis of the jaws, both radio-induced and medication-related, aiming to improve knowledge of professionals seeking to improve the quality of life of their patients.Methods: Literature review in PubMed as well as manual search for relevant publications in reference list of selected articles.Articles in English ranging from 1983 to 2017, which assessed osteonecrosis of the jaws as main objective, were selected and analyzed.Results: Infections, traumas and decreased vascularity have a triggering role for osteonecrosis of the jaws.Prophylactic and/or stabilizing measures can be employed in association with therapeutic modalities to properly manage osteonecrosis of the jaws patients.Conclusion: Selecting an appropriate therapy for osteonecrosis of the jaws management based on current literature is a rational decision that can help lead to a proper treatment plan.
Children, especially those aged <5 years, are at the highest risk of suffering from serious complications from influenza infection, including acute otitis media,1 bacterial co-infections, acute respiratory infection, hospitalisation, and death .Severe outcomes of influenza are frequently associated with underlying conditions but occur even in children without risk factors .Influenza illness is caused by A and B virus subtypes, both of which can cause epidemics and lead to hospitalisation and death in all age groups .Efforts to reduce influenza B illness have been complicated since the 1980s, when two immunologically distinct lineages of B virus, Victoria and Yamagata, began co-circulating worldwide .The distribution of these two lineages varies greatly between and even within seasons and regions, resulting in frequent mismatches between the B strain in trivalent influenza vaccines and the circulating B strains .Due to uncertainty about cross-lineage protection and the potential for decreased vaccine efficacy , quadrivalent influenza vaccines containing both B lineages have been developed and, since the 2013–2014 influenza season, have been included in World Health Organization recommendations.A quadrivalent split-virion inactivated influenza vaccine is licensed for individuals aged ≥6 months.A recently completed multi-season placebo-controlled phase III trial conducted in the Northern and Southern Hemispheres demonstrated the efficacy of IIV4 in children 6–35 months of age .Overall VE to prevent laboratory-confirmed influenza was 50.98% against any A- or B-type influenza and 68.40% against influenza caused by vaccine-similar strains.The trial also showed that safety profiles were similar for IIV4, the placebo, and comparator trivalent split-virion inactivated influenza vaccines.As part of the phase III trial, data were collected on healthcare use, antibiotic use, parental absenteeism from work, and the occurrence of severe outcomes of influenza, including AOM, acute lower respiratory infection, and inpatient hospitalisation.Here, we describe the efficacy of IIV4 based on these additional endpoints.Furthermore, to add to the evidence for efficacy of IIV4 in the youngest children, we determined VE for different age subgroups.This was an analysis of data from the phase III, randomised, multi-centre, placebo-controlled trial of IIV4 in healthy children aged 6–35 months.2,The participants were randomised to receive two full doses 28 days apart of IIV4; the licensed trivalent split-virion inactivated influenza vaccine, an investigational trivalent split-virion inactivated influenza vaccine containing the World Health Organization-recommended A strains and a strain from the alternate B lineage; or a placebo.Further details of the study design and the primary efficacy, immunogenicity, and safety results are described elsewhere .The objective of the current analysis was to examine the VE of IIV4 in preventing laboratory-confirmed influenza in age subgroups; and to determine the relative risk for IIV4 vs. placebo for severe outcomes, healthcare medical visits, and parental absenteeism from work associated with laboratory-confirmed influenza within 15 days after the onset of the influenza-like illness.VE was calculated for the co-primary endpoints of the trial, i.e. 
the occurrence of influenza-like illness starting ≥14 days after the last vaccination and laboratory-confirmed as positive for any circulating influenza A or B types or vaccine-similar strains. Briefly, influenza was confirmed by reverse transcription-polymerase chain reaction or viral culture of nasal swabs, and subtypes and strains were identified by Sanger sequencing, ferret antigenicity testing, or both. Genetic sequences identified by Sanger sequencing were compared with a database of known sequences corresponding to the vaccine and major circulating strains from 2005 up to the time of testing. AOM, ALRI, and healthcare utilization were recorded during ILI-associated visits occurring within 10 days of the onset of ILI and during follow-up phone calls 15 days after the onset of ILI. AOM was defined as a visually abnormal tympanic membrane suggesting an effusion in the middle ear cavity, concomitant with at least one of the following symptoms: fever, earache, irritability, diarrhoea, vomiting, acute otorrhea not caused by external otitis, or other symptoms of respiratory infection. ALRI was defined as chest X-ray-confirmed pneumonia, bronchiolitis, bronchitis, or croup. Inpatient hospitalisation was defined as a hospital admission resulting in an overnight stay. Outpatient hospitalisation was defined as hospitalisation without an overnight stay. An outpatient visit was defined as an unscheduled ambulatory visit with a physician or other health professional. The phase III trial was approved by the independent ethics committee or institutional review board for each study site and was conducted in accordance with Good Clinical Practice and the Declaration of Helsinki. Written informed consent was provided by the parents or legal representatives of all participating children. VE in preventing laboratory-confirmed influenza caused by any A or B strain or by vaccine-similar strains was examined by age subgroup. The analysis was performed according to randomisation in the full analysis set for efficacy, defined as all randomised participants who received two doses of study vaccine and had at least one successful surveillance contact at least 14 days after the last dose. The relative risk (RR) of laboratory-confirmed influenza associated with AOM and ALRI was calculated in the per-protocol analysis set for efficacy, defined as all randomised participants without significant protocol deviations. The RR of laboratory-confirmed influenza associated with healthcare medical visits, inpatient hospitalisation, parent absenteeism, and antibiotic use was calculated in the full analysis set for efficacy. RR was calculated as 100% × (attack rate in the IIV4 group)/(attack rate in the placebo group). The 95% CIs for VE and RR were calculated by an exact method conditional on the total number of cases in both groups. The study protocol did not include statistical tests for these endpoints, so no assessment of statistical significance was made. Missing data were not replaced. Statistical analysis was performed using SAS® version 9.4. This analysis included the 5436 participants in the phase III trial who were randomised to receive
Methods: Data collected during the phase III trial were analysed to examine the vaccine efficacy (VE) of IIV4 in preventing laboratory-confirmed influenza in age subgroups and to determine the relative risk for IIV4 vs. placebo for severe outcomes, healthcare use, and parental absenteeism from work associated with laboratory-confirmed influenza.Trial registration: EudraCT no.
IIV4 or placebo, as described previously .The IIV4 and placebo groups were balanced for sex, age, and prevalence of at-risk conditions, regions, and ethnicities.Five participants in the IIV4 group and 16 in the placebo group had AOM associated with laboratory-confirmed influenza, and five participants in the IIV4 group and 23 in the placebo group had ALRI associated with laboratory-confirmed influenza.The RR of IIV4 vs. placebo was 31.28% for AOM and 21.76% for ALRI.Compared to placebo, IIV4 reduced the risk of healthcare medical visits, parent absenteeism from work, and antibiotic use associated with laboratory-confirmed influenza.Inpatient hospitalisation associated with laboratory-confirmed influenza occurred for three participants in each group, resulting in no difference in risk between IIV4 and placebo.VE against any A or B strain was 54.76% for participants aged 6–23 months and 46.91% for participants aged 24–35 months.For vaccine-similar strains, VE was 74.51% for participants aged 6–23 months and 59.78% for participants aged 24–35 months.Further exploration of the 6–23 month age group showed a VE against any A or B strain of 35.06% for participants aged 6–11 months and 63.13% for participants aged 12–23 months and a VE against vaccine-similar strains of 43.63% for participants aged 6–11 months and 80.54% for participants aged 12–23 months.A recent phase III trial conducted over four influenza seasons in the Northern and Southern Hemispheres demonstrated the efficacy and safety of two full doses of IIV4 in children 6–35 months of age in preventing laboratory-confirmed influenza .The current analysis, based on exploratory endpoints in the phase III trial, demonstrated similar efficacy of IIV4 in reducing the risk of severe outcomes of influenza in these children as well as on the burden of influenza for their parents and the healthcare system.The World Health Organization stated in 2012 that they had only moderate confidence in the efficacy of inactivated influenza vaccines in children aged 6 months to <2 years due to limited evidence .In the current study, we confirmed that IIV4 can protect children aged 6–23 months against laboratory-confirmed influenza.Efficacy was also confirmed in the subgroup of children aged 12–23 months but not in children aged 6–11 months, most likely because of insufficient numbers.Efficacy of another full-dose split-virion quadrivalent influenza vaccine in children aged 6–35 months was also demonstrated in a multinational randomised placebo-controlled trial across five influenza seasons .The VE was reported to be 50% against RT-PCR-confirmed influenza, which is similar to the overall VE in the current trial.Although age subgroups were different, they also demonstrated efficacy in children aged <2 years.Our analysis also demonstrated that IIV4 reduced antibiotic use associated with influenza.Despite current guidelines, unnecessary antibiotic use in influenza remains common and is an important cause of antibiotic drug resistance .A retrospective analysis of the US Impact National Benchmark Database from 2005–2009 found that antibiotics were prescribed for about 22% of patients with influenza, 79% of which was judged to be inappropriate because the patient had neither a secondary infection nor evidence of comorbidity .Another study in Europe showed that influenza results in antibiotic prescriptions in 7–55% of cases .This may be because both influenza and bacterial infections can cause high fever, AOM, and ALRI in young children .Thus, although influenza 
accounts for a relatively small proportion of antibiotic use, IIV4 can help reduce their inappropriate use in young children.The findings of this analysis should be widely applicable because they are based on a large study conducted over a wide geographical area in both hemispheres and over several influenza seasons.However, there are some limitations.Most importantly, the trial was not powered for the calculations included in this analysis.Indeed, insufficient numbers likely precluded efficacy from being confirmed in children aged 6–11 months.This also can explain the failure to confirm an effect on influenza-associated inpatient hospitalisation.Another limitation, shared by all influenza vaccines, is that efficacy depends on the specific strains circulating, so care should be taken when applying results to a specific region or season.The analysis showed that in children aged 6–35 months, vaccination with two full doses of IIV4 can protect against influenza and reduces the frequency of severe outcomes of influenza.IIV4 thereby helps reduce the burden of influenza in young children, their parents, and the healthcare system.These findings reinforce evidence that influenza vaccination can protect and can be used for infants and young children aged 6–35 months.This work was supported by Sanofi Pasteur.The sponsor participated in study design, the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication.
Background: A multi-season phase III trial conducted in the Northern and Southern Hemispheres demonstrated the efficacy of a quadrivalent split-virion inactivated influenza vaccine (IIV4) in children 6–35 months of age. Results: VE (95% confidence interval [CI]) to prevent laboratory-confirmed influenza due to any A or B strain was 54.76% (40.24–66.03%) for participants aged 6–23 months and 46.91% (23.57–63.53%) for participants aged 24–35 months. VE (95% CI) to prevent laboratory-confirmed influenza due to vaccine-similar strains was 74.51% (53.55–86.91%) for participants aged 6–23 months and 59.78% (19.11–81.25%) for participants aged 24–35 months. Compared to placebo, the relative risk (95% CI) with IIV4 was 31.28% (8.96–89.34%) for acute otitis media, 21.76% (6.46–58.51%) for acute lower respiratory infection, 40.80% (29.62–55.59%) for healthcare medical visits, 29.71% (11.66–67.23%) for parent absenteeism from work, and 39.20% (26.89–56.24%) for antibiotic use. Conclusion: In children aged 6–35 months, vaccination with IIV4 reduces severe outcomes of influenza as well as the associated burden for their parents and the healthcare system. In addition, vaccination with IIV4 is effective at preventing influenza in children aged 6–23 and 24–35 months. 2013-001231-51.
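For reference, the efficacy and risk measures reported above can be computed directly from group-level case counts. The sketch below follows the definitions given earlier (VE relative to placebo; RR as the ratio of attack rates expressed as a percentage); the group sizes used in the example are placeholders, and the exact conditional confidence-interval method used in the trial is not reproduced.

```python
def attack_rate(cases, n):
    """Proportion of participants with laboratory-confirmed influenza."""
    return cases / n

def vaccine_efficacy(cases_vax, n_vax, cases_placebo, n_placebo):
    """VE (%) = 100 x (1 - attack rate in vaccine group / attack rate in placebo group)."""
    return 100.0 * (1.0 - attack_rate(cases_vax, n_vax) / attack_rate(cases_placebo, n_placebo))

def relative_risk(cases_vax, n_vax, cases_placebo, n_placebo):
    """RR (%) = 100 x attack rate in vaccine group / attack rate in placebo group."""
    return 100.0 * attack_rate(cases_vax, n_vax) / attack_rate(cases_placebo, n_placebo)

# Illustrative only: 5 vs 16 AOM cases; group sizes are placeholders, not the trial's.
print(round(relative_risk(5, 2700, 16, 2700), 1))   # ~31.2%, close to the reported 31.28%
```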
In this paper, we propose a novel technique for improving the stochastic gradient descent method to train deep networks, which we term PowerSGD. The proposed PowerSGD method simply raises the stochastic gradient to a certain power during the iterations and introduces only one additional parameter, namely the power exponent. We further propose PowerSGD with momentum, which we term PowerSGDM, and provide a convergence rate analysis for both the PowerSGD and PowerSGDM methods. Experiments are conducted on popular deep learning models and benchmark datasets. Empirical results show that the proposed PowerSGD and PowerSGDM obtain faster initial training speed than adaptive gradient methods, generalization ability comparable to SGD, and improved robustness to hyper-parameter selection and vanishing gradients. PowerSGD is essentially a gradient modifier via a nonlinear transformation. As such, it is orthogonal and complementary to other techniques for accelerating gradient-based optimization.
We propose a new class of optimizers for accelerated non-convex optimization via a nonlinear gradient transformation.
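As described, the method only transforms the stochastic gradient element-wise by a power before the usual (momentum) update. A minimal PyTorch sketch is given below; preserving the sign via sign(g)·|g|^γ, the default exponent value, and the heavy-ball form of the momentum are assumptions made for illustration rather than details taken from the paper.

```python
import torch

class PowerSGDM(torch.optim.Optimizer):
    """Sketch of PowerSGD with momentum: the update uses sign(g) * |g|**gamma instead of g."""

    def __init__(self, params, lr=0.01, gamma=0.7, momentum=0.9):
        defaults = dict(lr=lr, gamma=gamma, momentum=momentum)
        super().__init__(params, defaults)

    @torch.no_grad()
    def step(self, closure=None):
        loss = None
        if closure is not None:
            with torch.enable_grad():
                loss = closure()
        for group in self.param_groups:
            lr, gamma, mu = group["lr"], group["gamma"], group["momentum"]
            for p in group["params"]:
                if p.grad is None:
                    continue
                g = p.grad
                powered = g.sign() * g.abs().pow(gamma)   # element-wise power transformation
                buf = self.state[p].setdefault("momentum_buffer", torch.zeros_like(p))
                buf.mul_(mu).add_(powered)                # heavy-ball momentum on the powered gradient
                p.add_(buf, alpha=-lr)
        return loss
```

With gamma = 1 the update reduces to plain SGD with momentum, which is a convenient sanity check for the sketch.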
Complex and unstructured problems like speech or image recognition are currently solved most effectively by deep learning algorithms running on specialized electronic circuits such as graphical processing units. The extensive computation effort on these standard von-Neumann computer architectures, however, requires massive computing resources. This leads to a power consumption of today's hardware that is orders of magnitude higher than that of the human brain when dealing with these tasks. Therefore, alternative, brain-inspired hardware solutions are being researched that can mimic and accelerate the neural network approach of machine learning techniques and operate with lower power consumption. Common approaches to build custom neuromorphic hardware are based on spiking artificial neural networks. As an alternative approach, Hopfield networks model the physical phenomenon of synchronization of oscillators, which provides associative memory capabilities. Weakly coupled oscillators present time-domain characteristics that can be used for computation: the oscillators lock in frequency while maintaining a fixed phase difference, determined by the weight of the coupling. By changing the coupling strength, this phase relation can be tuned, advancing the possibility of encoding information in the time domain. The computation capabilities of coupled oscillator systems have been widely demonstrated in the literature. We intend to follow this novel approach as a hardware implementation to perform tasks such as image recognition. Exploring time-domain encoding and information processing is of interest to build a technology that doesn't suffer from scaled power supplies, representing an advantage for the next integrated technology nodes. Using the phase-change material VO2, very compact relaxation oscillators can be fabricated. Previous works obtained stable phase relations in capacitively coupled oscillators. It has also been demonstrated that it is possible to build a network suitable for performing tasks such as image recognition using resistances as coupling elements for the oscillators, envisioning implementation of the coupling with resistive RAM. This would bring to the network the advantage of having reconfigurable weights on chip, allowing online learning of the network. Currently, the best performing oscillators operate at a maximum frequency of 9 MHz and at a scaled voltage of 1 V with a power consumption around 10 μW. These results refer to devices built on a TiO2 substrate, which, being lattice-matched to VO2, allows deposition of crystalline material. It is however technologically relevant to develop oscillators on Si, with a process that respects CMOS compatibility. In this work oscillators are built with Si-compatible technology on a SiO2 substrate, resulting in polycrystalline devices. Two oscillators are coupled with external resistive components, and modulation of the phase relations is demonstrated upon tuning of the resistive coupling. The effect of realizing these devices on Si is explored: it is shown how the variability of the devices resulting from the granular films influences the value of the coupling element that is necessary to bring the oscillators to the frequency-locking condition, and how this impacts the performance of the network. The VO2 resistors were realized on a 4″ Si wafer over 1 μm thermal SiO2. The VO2 was deposited with two different techniques, resulting in different film quality. Pulsed Laser Deposition was used for devices 1 and 2, and Atomic Layer
Deposition for devices 3 and 4, targeting a thickness of 50 nm. The ALD devices were annealed at 450 °C for 20 min under oxygen flow, with a partial oxygen pressure of 5·10−2 mbar. Fig. 1 shows an ALD-deposited VO2 film. During the annealing step, grains on the order of 50–80 nm formed in the previously amorphous layer. The film was patterned with ICP etching and contacted with Ni/Au electrodes. The VO2 stripes between the electrodes are around 250 nm long and 1 μm wide. The devices were characterized in vacuum and contacted with a 16-needle probecard. Electrical measurements during a temperature sweep were performed to determine the resistivity ratio between the insulating and metallic phases of the two films, and to estimate the width of the hysteresis. The phase transition was measured on two devices with equal dimensions for each of the two deposition techniques. The resistivity of a single device was measured in a probe station equipped with a heating chuck. Extensive resistivity measurements at the device level were taken using two-probe and four-probe techniques. An example of the resulting resistivity curves measured on patterned devices is shown in Fig. 2. For the ALD-deposited sample, the average hysteresis width is 18°, with an on–off ratio of two orders of magnitude. The PLD-deposited sample has an average hysteresis width of 14°, with an on–off ratio of a factor of 30. The resistivity values and the width of the hysteresis are comparable to the values already reported in the literature. The insulating-to-metallic phase change is not smooth but proceeds in defined, reproducible steps that we attribute to transitions of single grains switching inside the devices. The single-grain switching is only visible in scaled devices with a low number of grains between the electrodes. The high and low resistivity values of the PLD and ALD samples differ slightly from each other. Raman analysis of the material was performed for the films, revealing that the films are purely VO2, without contamination from other vanadium oxide stoichiometries. The difference in the resistivity values is attributed to the difference in density of the two films and the gaps between the grains. For the ALD devices, each device was conditioned with a forming procedure, consisting of consecutive current sweeps with increasing current range, between 1 μA and 90 μA. The measurements, executed at 300 K, investigate the electrical excitation of the phase transition via Joule heating. At a current of around 30 μA, the device undergoes
New computation schemes inspired by biological processes are arising as an alternative to standard von-Neumann architectures, providing hardware accelerators for information processing based on a neural network approach. As previously shown, such oscillating neural networks can efficiently solve complex and unstructured tasks such as image recognition. We have built nanometer-scale relaxation oscillators based on the insulator–metal transition of VO2 and demonstrate control of the phase relation between coupled oscillators by tuning the coupling strength.
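The coupling behaviour summarised above can be reproduced qualitatively with an idealised circuit model: each VO2 device is treated as a two-state hysteretic resistor in series with a load resistor and in parallel with a capacitor, and the two output nodes are joined by a coupling resistor. The sketch below is a numerical toy model only; every component value and threshold is illustrative and not fitted to the fabricated devices.

```python
import numpy as np

def simulate(Rc=200e3, T=2e-4, dt=1e-8):
    """Two VO2-like relaxation oscillators coupled through a resistor Rc (toy model).

    Each oscillator: VDD -- RL -- node V -- (VO2 || C) -- GND.
    The VO2 device is modelled as a two-state resistor with hysteretic thresholds.
    """
    VDD = 5.0
    RL = np.array([80e3, 90e3])        # slightly mismatched load resistors
    C = 100e-12
    R_ins, R_met = 300e3, 5e3          # insulating / metallic VO2 resistance
    V_imt, V_mit = 2.8, 1.2            # switching thresholds (insulator->metal, metal->insulator)

    V = np.array([0.0, 0.5])           # node voltages, small initial offset
    metallic = np.array([False, False])
    n = int(T / dt)
    out = np.empty((n, 2))

    for k in range(n):
        R = np.where(metallic, R_met, R_ins)
        I_couple = (V[::-1] - V) / Rc                     # current injected from the other node
        dV = ((VDD - V) / RL - V / R + I_couple) / C      # explicit Euler step of the node equation
        V = V + dV * dt
        metallic = np.where(V > V_imt, True, metallic)    # insulator-to-metal transition
        metallic = np.where(V < V_mit, False, metallic)   # metal-to-insulator transition
        out[k] = V
    return out

waveforms = simulate()   # sweep Rc to explore how the phase relation changes
```

Sweeping Rc in such a model shifts the steady phase relation between the two waveforms, which mirrors the experimental tuning of the coupling element.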
required to understand the dynamics that cause this behavior: the first device changes to the metallic state and the output voltage Vosc3 stays fixed at a value slightly larger than the threshold voltage VTL2 necessary to trigger the metal-to-insulator transition. The threshold is reached only when the first device also changes its phase, becoming metallic and lowering the voltage partition across the VO2 resistors. The minimum peaks of the two devices are aligned, as shown in the enlargement in Fig. 9: the metal-to-insulator transition happens in phase for the coupled oscillators. Fig. 10 shows a plot of the resistances of the oscillators over time. The data analysis was performed proceeding from the circuit relation of the configuration depicted in Fig. 3. From the measured output voltage of the oscillators, the resistance variation over time was calculated. For obtaining the high-impedance state value, additional filtering of the noise through averaging was necessary, due to the resulting small current values. The analysis of the resistance variation over time shows how the rising and falling edges of the oscillators correspond to different resistance states of the vanadium oxide devices. Moreover, it is clearly visible during the rising edge that the insulating resistance of both vanadium oxide devices is not constant but spans values between around 100 kΩ and 30 kΩ as the power inside the device increases, in accordance with the I-R characteristic shown in Fig. 2. The same happens when the devices are in the metallic state: their resistances vary depending on the instantaneous power. In this plot it is additionally visible that the low-to-high transition of the two devices is perfectly aligned, confirming the in-phase nature of the coupling. For device 4, two states of low resistivity are reported: at first, when the insulating-to-metallic phase change occurs, the resistance of the device is a few hundred Ω, but it then suddenly increases and stabilizes at around 7 kΩ. This is attributed to the phase change of different grains inside the material at high current values. Two different low-resistance states are also reported for device 3, but they are visible only in the out-of-phase configuration, upon analysis of the resistance values over time. Fig.
11 again shows the resistance variation over time of the two oscillators. In the middle graph, the resistance of device 3 presents an additional jump between a resistance of a few ohms and the 4.7 kΩ value before the metallic-to-insulating phase transition. Highly scalable electrical relaxation oscillators can be built using VO2 metal–insulator switches. If connected with electrical components, the oscillators show frequency- and phase-locking properties that can be used for computing. In this paper we investigated the properties of coupled VO2 oscillators fabricated on a Si substrate. We experimentally showed frequency- and phase-locking of VO2-on-Si resistively coupled oscillators. With experiments and circuit simulations we achieved control of the phase relation between the two oscillators using variable resistors. Oscillators were built using different VO2 films, obtained with PLD and ALD deposition on a SiO2 substrate. The devices built with both techniques were compared in their coupling capabilities. For the PLD devices, in-phase and out-of-phase coupling was obtained despite the significant variability between the devices. As the device variability increases, a stronger coupling is necessary to obtain frequency-locking. For the ALD devices the variability was greatly reduced, allowing the use of an order of magnitude weaker coupling strength to achieve frequency-locking. Both device types exhibited distorted patterns in the out-of-phase configuration, which has been linked to the relative value of the coupling resistance compared to the value of the insulating state of the VO2 devices upon phase change. Analysis of the impact of ultra-scaled devices on the waveform of the oscillators was performed. Due to the discrete number of single grains, individual switching events mask the electrical characteristics of the oscillators. Oscillators still couple in phase and out of phase upon tuning of the coupling element but can show different oscillating patterns in relation to the phase-change steps that occur across the devices. This does not affect the phase relation that it is possible to establish between two oscillators. The control of the output phase relation with the tuning of the coupling element is in fact ultimately retained in the VO2-on-Si devices, despite the grain-size effects and the variability in the device characteristics, demonstrating the potential for this technology to achieve Si compatibility and to be employed for dedicated neuromorphic computing hardware. Ultimately, improvements in the VO2 film quality are required to obtain more similar oscillators and to allow scaling of the network to larger sizes.
Systems of frequency-locked, coupled oscillators are investigated using the phase difference of the signal as the state variable rather than the voltage or current amplitude. Coupling these oscillators with an array of tunable resistors offers the perspective of realizing compact oscillator networks. In this work we show experimental coupling of two oscillators. The phase of the two oscillators could be reversibly switched between in-phase and out-of-phase oscillation upon changing the value of the coupling resistor, i.e. by tuning the coupling strength. The impact of the variability of the devices on the coupling performance is investigated across two generations of devices.
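The resistance-versus-time analysis described above (recovering the VO2 resistance from the measured oscillator output, with extra averaging in the high-impedance regions where the current is small) can be sketched as follows. The actual circuit relation of Fig. 3 is not reproduced here, so the series voltage-divider readout, the component names, and the smoothing window below are assumptions for illustration only.

```python
import numpy as np

def vo2_resistance(v_out, vdd=5.0, r_series=100e3, smooth=51):
    """Estimate the VO2 resistance over time from the measured output voltage.

    Assumes a series divider: VDD -- r_series -- measurement node -- VO2 -- GND,
    so I = (vdd - v_out) / r_series and R_vo2 = v_out / I.
    A moving average reduces noise in the high-impedance (small-current) regions.
    """
    v_out = np.asarray(v_out, dtype=float)
    kernel = np.ones(smooth) / smooth
    v_avg = np.convolve(v_out, kernel, mode="same")      # noise filtering by averaging
    current = (vdd - v_avg) / r_series
    current = np.clip(current, 1e-12, None)              # avoid dividing by ~0 A
    return v_avg / current
```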
that there are a number of key controls on fracture reactivation, many of which need further investigation. Using a Bowland Shale gouge, which was shown to have a high carbonate content, we have defined the mechanical and reactivation properties of the gouge. When comparing these results to tests on gouges of other mineralogies we see that the results differ, showing mineralogy to be a key control on fault properties. The Bowland Shale was also shown to be a relatively weak fault gouge under the pressure conditions used in this study. Two sets of tests were performed in this study: one on a fully saturated gouge and another on a less saturated gouge. It is clear from the mechanical results that decreasing water content had a positive influence on the strength of the gouge. This had consequences for the reactivation properties, as the less saturated gouge reactivated at slightly higher pore pressures. Therefore, the saturation state of a fault gouge is one of a number of controls on both mechanical strength and reactivation pressure. Results in this study show that reactivation can occur several times on the same fault. The slip and energy release associated with each reactivation is complex. For the fully saturated gouge the energy released increased after each event for the first three reactivations. However, the less saturated gouge was much more complex, making it difficult to identify a pattern. This study has shown that in order to gain a more detailed view of the potential for fault reactivation as a result of pore pressure increase during hydraulic stimulation, a knowledge of the gouge mineralogy and saturation state is necessary. In addition, estimates of fault orientation, in situ stresses, stress history, and operational pressure regimes are required. Further experimental work can look at a greater number of variables, e.g. fluid pressurisation rate, fluid composition, and liquid absorption. It is clear that even in idealised analogue faults with well-constrained boundary conditions, fault reactivation is a complex phenomenon. This research will benefit from investigations of the reactivation properties of competent shale samples. This will help bridge the gap between analogue tests, such as this study, and the field scale.
During the hydraulic stimulation of shale gas reservoirs the pore pressure on pre-existing faults/fractures can be raised sufficiently to cause reactivation/slip. There is some discrepancy in the literature over whether this interaction is beneficial to hydrocarbon extraction or not. Some state that the interaction will enhance the connectivity of fractures and also increase the Stimulated Reservoir Volume. However, other research states that natural fractures may cause leak-off of fracturing fluid away from the target zone, therefore reducing the amount of hydrocarbons extracted. Furthermore, at a larger scale there is potential for the reactivation of larger faults, which could harm well integrity or cause leakage of fracturing fluid to overlying aquifers. In order to understand fault reactivation potential during hydraulic stimulation, a series of analogue tests has been performed. These tests were conducted using a Bowland Shale gouge in the Angled Shear Rig (ASR). Firstly, the gouge was sheared until critically stressed. Water was then injected into the gouge to simulate the pore pressure increase in response to hydraulic stimulation. A number of experimental parameters were monitored to identify fracture reactivation. This study examined the effect of stress state, moisture content, and mineralogy on the fault properties. The mechanical strength of a gouge increases with stress and therefore depth. As expected, a reduction of moisture content also resulted in a small increase in mechanical strength. Results were compared with tests previously performed using the ASR apparatus; these showed that mineralogy will also affect the mechanical strength of the gouge. However, further work is required to investigate the roles of specific minerals, e.g. quartz content. During the reactivation phase of testing, all tests reactivated, releasing small amounts of energy. This indicates that under these basic conditions natural fractures and faults will reactivate during hydraulic stimulation if critically stressed. Furthermore, more variables should be investigated in the future, such as the effect of fluid injection rate and fluid type.
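The reactivation condition discussed above is commonly screened with a standard Coulomb slip criterion, in which slip occurs once the pore pressure reduces the effective normal stress enough for the resolved shear stress to reach the frictional strength. The sketch below applies that generic criterion; it is not the ASR analysis from this study, and the stress values, friction coefficient, and fault angle are placeholders.

```python
import math

def critical_pore_pressure(sigma1, sigma3, theta_deg, mu=0.6, cohesion=0.0):
    """Pore pressure at which a cohesive-frictional fault reactivates (Coulomb criterion).

    sigma1, sigma3: maximum/minimum principal total stresses (MPa, compression positive).
    theta_deg:      angle between the fault normal and sigma1.
    Slip occurs when tau >= cohesion + mu * (sigma_n - Pp), so
    Pp_crit = sigma_n - (tau - cohesion) / mu.
    """
    theta = math.radians(theta_deg)
    sigma_n = 0.5 * (sigma1 + sigma3) + 0.5 * (sigma1 - sigma3) * math.cos(2 * theta)
    tau = 0.5 * (sigma1 - sigma3) * math.sin(2 * theta)
    return sigma_n - (tau - cohesion) / mu

# Placeholder stress state: sigma1 = 30 MPa, sigma3 = 15 MPa, fault normal at 60 deg to sigma1.
print(round(critical_pore_pressure(30.0, 15.0, 60.0), 2))  # ~7.93 MPa of pore pressure for slip
```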
easily accommodate 10 kHz systems, but scaling to 100 kHz lasers will require a more careful study of laser-induced damage to the fiber. Our current system produces a fixed set of wavelengths. Continuous wavelength tuning between 1190 and 1225 nm could be possible by combining two lasers inside the fiber, as shown in Fig. 8. The “pump” consists of 1047 nm pulses producing Stokes lines at 1098 nm and 1153 nm in the usual manner. The “signal” is a continuous-wave laser that is tunable between 1190 and 1225 nm. The pulsed S2 amplifies the weak signal via SRS to produce a high-power pulsed output at the signal wavelength. Amplification over a wide wavelength range is possible due to the broad bandwidth of the Raman gain in fused silica. Only modest signal power is needed for amplification by SRS, making semiconductor lasers an attractive implementation. Although the linewidth of the pulsed output would be somewhat broader than the CW signal input, it is still a significant improvement compared to our current system. The slow wavelength selection of our current system is limited by the dielectric filter wheel. Millisecond-scale wavelength switching could be achieved with an acousto-optic tunable filter. One potential pitfall of this approach is a large optical insertion loss, since the Stokes linewidths of our current system are considerably wider than the AOTF spectral resolution. This drawback could be mitigated by the narrow-linewidth pulses produced by the injection-seeded amplifier technique depicted in Fig. 8. Our technique is scalable to higher energies. The key is to use an optical fiber that has both single-mode propagation for efficient SRS and a large mode area for a large saturation energy. The GIMF in our experiments does emulate single-mode behavior for SRS, but the mode field diameter is not dramatically larger than that of conventional single-mode fibers. Yb-doped fibers with mode field diameters exceeding 80 μm have been used to produce high-energy amplification of nanosecond pulses. This would be sufficient to produce SRS pulse energies of a few hundred μJ. SRS techniques using bulk optics, such as barium nitrate, may be more suitable for producing mJ-level pulses near 1200 nm for deep-tissue acoustic-resolution PAM. We have demonstrated a pulsed multi-wavelength laser source based on stimulated Raman scattering in a graded-index multimode fiber. The μJ-level pulses at 1215 nm have sufficient beam quality for in-vivo OR-PAM of lipid-rich tissue. Our technique is attractive for practical applications due to the simple apparatus and scalability to higher repetition rates. Future work will concentrate on extending our technique to develop a more narrowband and continuously tunable pulsed laser to possibly identify specific lipids using multispectral OR-PAM. National Science Foundation grant.
We demonstrate optical resolution photoacoustic microscopy (OR-PAM) of lipid-rich tissue using a multi-wavelength pulsed laser based on nonlinear fiber optics.1047 nm laser pulses are converted to 1098, 1153, 1215, and 1270 nm pulses via stimulated Raman scattering in a graded-index multimode fiber.Multispectral PAM of a lipid phantom is demonstrated with our low-cost and simple technique.
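The wavelength ladder quoted above follows from repeated Stokes shifting of the 1047 nm pump. A small worked sketch, assuming the ≈440 cm⁻¹ (≈13.2 THz) Raman shift of fused silica, reproduces the 1098, 1153, and 1215 nm lines to within a few nanometres; the shift value is an assumption for illustration, and because the Raman gain in silica is broad, the reported 1270 nm line need not sit exactly one further 440 cm⁻¹ step away.

```python
def stokes_cascade(pump_nm, orders=3, raman_shift_cm=440.0):
    """Wavelengths of successive Stokes orders for a given pump and Raman shift."""
    wavenumber = 1e7 / pump_nm            # convert nm to cm^-1
    lines = []
    for _ in range(orders):
        wavenumber -= raman_shift_cm      # each Stokes order is shifted down by the Raman shift
        lines.append(1e7 / wavenumber)
    return lines

print([round(w) for w in stokes_cascade(1047.0)])   # ~[1098, 1153, 1215] nm
```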
South Africa has poor health outcomes given its level of economic development.1,2,Despite being an upper-middle-income country,3 South Africa has high mortality levels resulting from a unique quadruple disease burden, described in the first National Burden of Disease study in 2000.4, "The 2009 Lancet Series on Health in South Africa2,5 ascribed the poor health status to the country's history of colonialism and apartheid, which resulted in every aspect of life being racially segregated, exploitation of the working class, high poverty and unemployment, and extreme wealth inequalities.6",Although the beginning of democracy in 1994 led to efforts to build a society with racial equality, post-apartheid macroeconomic policies have focused more on economic growth than on wealth inequality.2,6,The 2012 update of the Lancet Series7 acknowledged improved access to water, sanitation, and electricity, and increased provision of social grants6 but noted the large racial differentials in social determinants of health.The health service faces considerable challenges, including inefficiencies and inequities.5,6, "More than half of the country's health-care financing, and more than 70% of the country's doctors are employed in the private sector, serving about 20% of the population.8",The South African Government is moving towards national health insurance to provide accessible, quality health care to all.8,9,Understanding the disease burden nationally and subnationally is crucial to identify priorities and monitor changes and differentials in health status.Although improvements in the quality of vital registration data have occurred, these data are not complete and cause of death information is problematic10 with a high proportion of so-called garbage11 causes, misclassification of HIV/AIDS deaths, little information about external causes of non-natural deaths, non-medical certification of deaths by rural headmen, and poor content validity.12–16,In this, the second National Burden of Disease study, we describe the trends in mortality during a 16-year period and estimate deaths by specific causes and years of life lost to premature mortality nationally and provincially, after adjusting for these data inadequacies.Additionally, we analyse trends by apartheid-defined population groups to describe differentials in health status.Evidence before this study,The first National Burden of Disease Study for South Africa, conducted for the year 2000 and undertaken by researchers from the South African Medical Research Council, showed a unique quadruple burden of disease for the country.Before using national burden of disease methods in South Africa, policy makers in the country had access to cause of death statistics that could not be used at face value because of data deficiencies or country estimates on the basis of global models produced by WHO and IHME.To our knowledge, no other National Burden of Disease Studies have been undertaken for South Africa.Added value of this study,Our study has used local data to develop estimates that confront the data deficiencies in the vital registration data from Statistics South Africa and has highlighted the start of the reversal of several epidemics.Nonetheless, HIV/AIDS continues to be the main cause of mortality, and we report substantial mortality burden from non-communicable diseases, including increases in diabetes and renal disease.Although the burden from some forms of injuries has reduced, we report no change in mortality from infectious diseases, such as respiratory 
diseases, septicaemia, or neonatal causes.Implications of all the available evidence,Countries should continue to improve cause of death data and make use of burden of disease approaches to track population health.Variations in mortality levels and profiles reflect health inequalities and emphasise the need for health planning and resource allocation to be at subnational level.Future research should focus on methods that can provide subnational estimates including uncertainty levels.The second NBD study for South Africa has generated trends in causes of death derived from empirical country-specific data.These trends show the persistence of the quadruple disease burden due to continued high levels of HIV/AIDS and tuberculosis; other communicable diseases, perinatal conditions, maternal causes, and nutritional deficiencies; non-communicable diseases; and injuries identified by the initial 2000 study.4,This study also reveals the reversal of three epidemics with little change in communicable diseases, perinatal conditions, maternal causes, and nutritional deficiencies, and extends and strengthens the emerging trends reported in the 2012 Lancet Series update for South Africa.7,We report a marked decline in HIV/AIDS and tuberculosis mortality since 2006, which can be attributed to the intensified antiretroviral treatment rollout for adults since 2005.30,According to the National Department of Health, more than 2 million people received antiretroviral therapy in 201231 versus an estimated 47 500 in 2004.32,The rollout of the prevention of mother-to-child transmission programme since 2002 has reduced infections and hence deaths in infants.33,However, HIV/AIDS remains a major concern, accounting for almost half of all premature mortality.Treatment provision should be sustained and prevention strategies strengthened if South Africa is to move towards the Sustainable Development Goal of ending this epidemics by 2030.34,South Africa has one of the highest HIV and tuberculosis incidence rates in the world, highlighting the major challenge that the country still faces.35,36,Our study also shows that there was a gradual decline in the death rates because of injuries and, since 2003, a decline in non-communicable diseases.The latter decline is partly associated with a decrease in tobacco-related conditions, such as ischaemic heart disease, chronic obstructive pulmonary disease, and lung cancer, probably because of tobacco-control efforts.37,The substantial burden of non-communicable diseases, particularly cardiovascular diseases and diabetes, COPD and cancers, together with the ageing and growing population, emphasises the need to implement the national non-communicable disease strategic plan.38,This plan focuses attention on primary prevention and the management of the high burden of non-communicable diseases and their risk factors.Recent studies point
Background The poor health of South Africans is known to be associated with a quadruple disease burden.In the second National Burden of Disease (NBD) study, we aimed to analyse cause of death data for 1997–2012 and develop national, population group, and provincial estimates of the levels and causes of mortality.Interpretation This study shows the reversal of HIV/AIDS, non-communicable disease, and injury mortality trends in South Africa during the study period.Mortality differentials show the importance of social determinants, raise concerns about the quality of health services, and provide relevant information to policy makers for addressing inequalities.
to the changing cardiovascular disease risk profile associated with antiretroviral therapy;39 therefore integrated management is essential.The decline in overall injury death rates can be attributed largely to a decrease in deaths from interpersonal violence between 1997 and 2012.The decrease in firearm homicide25 has been attributed to the introduction of the Fire Arms Control Act of 200040 and political stabilisation post-apartheid probably contributed to the general decrease in interpersonal violence.Nonetheless, these rates remain high with a heavy toll on young males.Intimate-partner femicide is also a problem.41,Multisectoral initiatives42 are needed to address violence and other injuries, particularly traffic accidents.Unchanged mortality rates for communicable diseases, perinatal conditions, maternal causes, and nutritional deficiencies reflect poor progress in dealing with infectious conditions, such as lower respiratory infections, diarrhoeal diseases, and septicaemia despite such deaths being preventable or treatable.Although gains have been made in maternal and child mortality,43 South Africa will probably carry an unfinished agenda as it moves into the era of the SDGs.Although progress has been made in reducing the effect of HIV/AIDS in children, addressing the other causes of death becomes important to ensure a continued decrease in child mortality.Modelling the effect of interventions known to be effective from 2010 onward44 showed the need to expand their coverage, especially exclusive breast feeding and handwashing.Integration of quality maternal and neonatal care is a key requirement to further reduce neonatal mortality rates.Furthermore, improvement in living conditions is needed to address child mortality across all ages.Population group differentials reflect the legacy of apartheid and the stage of health transition of the groups.Black Africans and coloureds are faced with the quadruple burden of disease while profiles for Indians or Asians and whites are dominated by non-communicable diseases.The effect of HIV/AIDS and tuberculosis has been greatest in black Africans, exacerbating mortality differentials.In 2012, age-standardised death rates for black Africans were 2·2 times higher than for whites.Different rankings and mortality burdens were observed for provinces.Review of socioeconomic, health, and demographic indicators reveals that provincial rankings of all-cause mortality cannot be explained by any single indicator.The low age-standardised death rates for Limpopo are unexpected and difficult to explain.The differences highlight the need for subnational mortality estimates and for each province to identify their burden before planning priorities and resource allocation.The ranking of causes according to premature mortality should go some way to help provinces identify priority activities for health promotion and disease prevention.Our study encountered several challenges.Limited data resulted in us estimating the injury cause of death profile at two timepoints and extrapolating the trend over time from these.Annual variations in injury deaths have probably been attenuated.We noticed age misreporting in the deaths after 2000, suggesting that a small proportion of child deaths were misreported as old-age deaths.Unexplained changes in the number of deaths have been noted in the data since 2011 and these might account for a slightly exaggerated decline in death rates, especially in Free State and Eastern Cape.Uncertainty about our estimates has not been quantified 
in this study, nor have we undertaken a sensitivity analysis.For the study period, 18% of deaths were from under-registration, 14% from garbage causes, and 13% were ill-defined, with 17% of deaths reallocated to HIV/AIDS.Imputation of population group was necessary for a high proportion of deaths.Our estimates have a margin of error and future research should explore Bayesian regression approaches to quantify all forms of uncertainty for these estimates.Collaboration between IHME GBD team and the South Africa NBD team on mortality estimates for South Africa resulted in the number of all-cause deaths by age and sex having a less divergent age-sex pattern for deaths than the initial IHME GBD estimates for 2010.45,However the IHME GBD 2013 study estimates a substantially higher number of HIV/AIDS deaths, which follows an unsubstantiated time trend that reached a peak in 2010/2011.46,By contrast, our estimates peaked in 2006, following the strong signal from all-cause death data from vital registration,20,43 and is consistent with the THEMBISA model of the HIV epidemic and treatment programme in South Africa.24,47,Additionally, GBD deaths from injuries, hypertensive heart disease, tuberculosis, and diarrhoeal disease are lower than our estimates.Our higher estimate of cerebrovascular disease deaths arises from our decision to reallocate deaths where diabetes was reported with cerebrovascular disease and ischaemic heart disease as the immediate cause of death to cerebrovascular disease to bring it in line with international certification practices.48,The IHME GBD studies aim to use a consistent statistical approach to generate estimates for all countries.The strength of the GBD study also has the disadvantage that the multicountry modelling restricts the use of local information and insights.Despite extensive modelling work on the HIV epidemic in South Africa,47 for example, the constraints of the global model has not allowed for the incorporation of the insights from the model.Furthermore, trends from other countries might have affected estimates for South Africa.Lozano and colleagues suggest that “cause-specific mortality is arguably one of the most fundamental metrics of population health”.19,Our study shows that during the study period a major increase in mortality from HIV/AIDS occurred and this trend reversed but no change was reported in the top ten causes of death.The quadruple burden of disease still prevails.Subnational variations reflect health inequalities, and disease profiles indicate that population groups and provinces are at different health transition stages.Subnational estimates should therefore be used to guide resource allocation towards equity and to address local disease burdens.This study has shown the importance of local researchers undertaking a national burden of disease study with
Injury deaths were estimated from additional data sources.Comparison with the IHME GBD estimates for South Africa revealed substantial differences for estimated deaths from all causes, particularly HIV/AIDS and interpersonal violence.Differences between GBD estimates for South Africa and this study emphasise the need for more careful calibration of global models with local data.
a bottom-up approach.However observed differences between the South Africa NBD study and the IHME GBD study could not be resolved by IHME because of the restrictions of their global and regional modelling approaches.The South African NBD team with local and international experts revised the 2000 NBD list to reflect local cause-of-death patterns.This list differs from the Global Burden of Disease list.17,The main difference is the level of aggregation of ICD-10 codes,18 resulting in 140 causes compared with 107 causes in the GBD 1992 study17 and 235 in the Institute of Health Metrics and Evaluation GBD 2010 study.19,Another difference is the inclusion of septicaemia, even though it is not a valid underlying cause of death as defined by Lozano and colleagues.19,Causes are grouped into 24 categories.Although the GBD group reports three broad cause groups—namely, communicable diseases, maternal causes, perinatal conditions, and nutritional deficiencies; non-communicable diseases, including cardiovascular diseases and cancers; and injuries, this study reports four broad causes—namely, HIV/AIDS and tuberculosis; other communicable diseases with perinatal conditions, maternal causes, and nutritional deficiencies; non-communicable diseases; and injuries.HIV/AIDS and tuberculosis is introduced as a fourth group because of the size of the burden and the need to integrate HIV/AIDS and tuberculosis programmes.The base information was the Statistics South Africa underlying cause of death data20 from death notifications for 1997–2012 including the late registrations.Statistics South Africa manually codes the causes to the 10th version of the International Classification of Diseases and undertakes automated selection of underlying cause of death according to ICD rules.18,The data were categorised according to apartheid defined population groups.After excluding stillbirths, deaths that occurred outside the country, individuals with unknown or unspecified province information, and population groups other than those listed above, the remaining data were cleaned and adjusted for missing information, under-registration, misclassification of HIV/AIDS deaths, insufficiently reported injury deaths, and deaths attributed to ill-defined causes.The data were first assessed for completeness of registration and quality.21,Figure 1 summarises the data sources used and the data adjustments done to generate the number of deaths.Completeness of reporting of deaths in children younger than 5 years was estimated by comparing uncorrected rates with rates derived from census and survey data, constraining the trend in completeness of reporting to be monotonically increasing over time.21,Completeness was estimated for adults with death distribution methods22 and for adolescents by interpolating between the child and adult estimates.Provincial estimates were rebalanced to ensure that for each sex the sum of the deaths in the provinces, allowing for changes to the provincial boundaries during the study period, was the same as the estimate for the country as a whole.More detail about the estimation of completeness can be found in the technical report on the cleaning and validation of the data.23, "Most HIV/AIDS deaths in South Africa have been misclassified as AIDS indicator causes because of medical doctors' reluctance to report HIV on the death certificate or possibly because of not knowing the HIV status of the deceased.16",Therefore, a new method, regressing the cause-specific mortality in excess of projected non-HIV/AIDS 
mortality on antenatal HIV prevalence a number of years before the deaths, was used to estimate and reallocate the misclassified HIV/AIDS deaths.24 The Injury Mortality Survey,25 a national survey of Forensic Pathology Service mortuaries, provided an estimate of non-natural deaths in 2009 according to an ICD-compatible shortlist that was mapped to the NBD list. Completeness of registration of injury deaths in the vital statistics was assessed against the survey estimate for that year. For all other years, a scaling factor was calculated by assuming that the percentage change in non-natural deaths and the all-cause completeness was the same relative to 2009. Having estimated the annual numbers of injury deaths, the 2009 Injury Mortality Survey25 and the 2000/2001 National Injury Mortality Surveillance System26 were used to estimate trends in the external cause profiles according to five common injury categories using linear interpolation.23 A multinomial logistic regression model was applied to the 2009 Injury Mortality Survey data by age, sex, province, and population group, to smooth out sampling fluctuations in the cause fractions. The 2000/2001 National Injury Mortality Surveillance System data were apportioned by province and population group based on the 2009 Injury Mortality Survey data after adjusting for demographic changes. Further breakdown of injury categories was done with the 2009 profile.23 Garbage causes of death were proportionally redistributed by age, sex, and population group to specified causes.23 Trends in age-standardised death rates for the country's nine provinces and four population groups were calculated using mid-year population estimates generated by Dorrington27 and the 2001 WHO age standard.28 Population group analysis was not done for years before 2000 because of limited reporting in this period. Finally, we compared our findings with those generated by the GBD 2013 study for South Africa.29 GBD data were obtained from the IHME GHDx website. Ethics approval was not required for secondary analysis of Statistics South Africa data or aggregate data from the National Injury Mortality Surveillance System; primary data analysis of Injury Mortality Survey data was approved by the South African Medical Research Council's ethics committee. The funder of the study had no role in study design, data collection, data analysis, data interpretation, or writing of the report. VP-vW, DB, WM, RL, and TG had access to the database used to derive the estimates. The corresponding author had final responsibility to submit the manuscript for publication. We estimated that total deaths rose from 416 209 in 1997, peaked at 677 078 in 2006, and declined to 528 946 in 2012. In 2012, 43·4% of
Method We used underlying cause of death data from death notifications for 1997–2012 obtained from Statistics South Africa. These data were adjusted for completeness using indirect demographic techniques for adults and comparison with survey and census estimates for child mortality. A regression approach was used to estimate misclassified HIV/AIDS deaths, and so-called garbage codes were proportionally redistributed by age, sex, and population group (black African, Indian or Asian descent, white [European descent], and coloured [of mixed ancestry according to the preceding categories]). Age-standardised death rates were calculated with mid-year population estimates and the WHO age standard. Institute of Health Metrics and Evaluation Global Burden of Disease (IHME GBD) estimates for South Africa were obtained from the IHME GHDx website for comparison. Funding South African Medical Research Council's Flagships Awards Project.
the deaths were attributed to non-communicable diseases, 33·6% to HIV/AIDS and tuberculosis, 13·5% to other communicable diseases, perinatal conditions, maternal causes, and nutritional deficiencies, and 9·6% to injuries.The all-cause age-standardised death rates increased from 1215 per 100 000 population in 1997 to peak at 1670 per 100 000 population in 2006 and declined to 1232 per 100 000 population in 2012.Deaths from HIV/AIDS and tuberculosis increased rapidly between 1997 and 2006 then declined, whereas the non-communicable disease deaths increased steadily and deaths from injuries declined slightly.Similar trends were observed for broad cause age-standardised death rates, with the exception of age-standardised death rates for non-communicable diseases, which increased slightly until 2003 then decreased slightly.A substantial decrease in infant deaths particularly from HIV/AIDS and tuberculosis since 2005, a noticeable increase in HIV/AIDS and tuberculosis deaths in young adults up to 2005, and an increase in deaths from non-communicable diseases at the older ages was observed.Deaths from injuries mainly affected young adults.The top ten causes of death have not changed from 1997 to 2012, although rates and rankings have changed.HIV/AIDS remains the leading cause of death for males and females, accounting for 14·5% of deaths in 1997 and 29·1% in 2012.The age-standardised death rates for HIV/AIDS increased by 120·1%.Cerebrovascular disease remained the second leading cause of death, accounting for 7·6% of deaths in 1997 and 7·5% of deaths in 2012.Interpersonal violence moved from third in 1997 to eighth in 2012, with a 52·0% decrease in age-standardised death rates during this time.Diabetes moved from tenth in 1997 to sixth in 2012, with a 29·3% increase in age-standardised death rates.In 2012, interpersonal violence featured in the top ten causes of death for males but not for females; hypertensive heart disease featured in the top ten causes for females but not males.Among males, the age-standardised death rates for interpersonal violence decreased by 52·4% from 1997 to 2012, whereas diabetes increased by 40·7%.Among females, the age-standardised death rates for diabetes increased by 22·6%, and renal disease by 38·3%.HIV/AIDS contributed 18·8% of total years of life lost in 1997, and 35·7% in 2012.The proportion of years of life lost was higher in males than in females for both 1997 and 2012.Compared with the ranking based on the number of deaths, interpersonal violence, road injuries, tuberculosis, and diarrhoeal disease ranked higher when considering years of life lost, whereas ischaemic and hypertensive heart disease and diabetes ranked lower.The number of deaths and age-standardised death rates for 1997, 2000, 2005, 2010, and 2012 for single causes are reported in the appendix.Considerable subnational mortality differences were reported.The trend in the percentage of deaths due to the four broad cause groups within each population group for 2000–12 illustrated the disproportionate burden of HIV/AIDS, with black Africans most substantially affected.In 2012, about 80% of deaths in Indians or Asians and whites were attributed to non-communicable diseases, about 61% for coloureds, and about 37% for black Africans.Differences in the leading cause of death by population group are shown in figure 6.Tuberculosis and interpersonal violence do not feature for whites, being replaced by cancers and other chronic diseases.Other population group-specific differences were that Indians or 
Asians and whites have renal disease in the top ten causes of death, while black Africans have diarrhoeal disease in the top ten causes of death. Diabetes was not in the top ten causes for whites in 2000 or 2012, but accounted for the largest increase in age-standardised death rates for black Africans between 2000 and 2012, a 40·8% decrease for Indians or Asians, and a 12·4% decrease for coloureds. The total deaths and age-standardised death rates for 2000, 2005, 2010, and 2012 for single causes of death by population group are reported in the appendix. In terms of provinces, in 2012, Western Cape had the lowest age-standardised death rates and KwaZulu-Natal the highest, 1·7 times higher than that of Western Cape. Some provinces (Western Cape, Northern Cape, Mpumalanga, Eastern Cape, and Free State) showed a decrease in age-standardised death rates from 1997 to 2012. All provinces showed a decrease in age-standardised death rates since 2005. HIV/AIDS was the leading cause of premature mortality in all nine provinces; however, the provinces have unique profiles that reflect the various states of health transition. For example, cerebrovascular disease was the second leading cause of premature mortality in the KwaZulu-Natal and North West provinces, whereas lower respiratory infections were second for the Free State and Limpopo provinces. The total deaths and age-standardised death rates for 2000, 2005, 2010, and 2012 for single causes of death are reported in the appendix for each province. For 2010, the GBD 2013 study [29] estimated substantially more deaths for diabetes and lower respiratory infections and fewer deaths for road injuries, interpersonal violence, and cerebrovascular disease than our study. Comparison of our 2012 estimates with GBD's 2013 estimates showed that the GBD study estimated substantially more deaths for HIV/AIDS, and fewer deaths for interpersonal violence, hypertensive heart disease, tuberculosis, and cerebrovascular disease.
Findings All-cause age-standardised death rates increased rapidly from 1997, peaked in 2006, and then declined, driven by changes in HIV/AIDS. Mortality from tuberculosis, non-communicable diseases, and injuries decreased slightly. In 2012, HIV/AIDS caused the most deaths (29·1%), followed by cerebrovascular disease (7·5%) and lower respiratory infections (4·9%). All-cause age-standardised death rates were 1·7 times higher in the province with the highest death rate than in the province with the lowest death rate, 2·2 times higher in black Africans than in whites, and 1·4 times higher in males than in females.
and isotype controls IgG1, IgG2a and IgG2b, all in 2% bovine serum albumin/phosphate buffered saline.After washing with 2%BSA/PBS, cells were further incubated for 30 min with either goat anti-mouse IgG2b-RPE, goat anti-mouse IgG2a-RPE, or goat anti-mouse IgG-FITC secondary antibodies.Following washing samples were analysed on a FACScan flow cytometer with CellQuestPro software.Leukocyte populations were identified by their forward scatter versus side scatter profiles and by their strong expression of CD14, CD3 or CXCR1.Immunofluorescence staining of synovial tissue cryosections was performed with sections fixed in acetone for 10 min on ice, rinsed in PBS and incubated for 1 h with anti-human GPR15/BOB antibody.For double label experiments anti-GPR15/BOB antibody was incubated together with mouse monoclonal antibodies to either CD14, CD68, CD20, CD3 or CD138 all in PBS.Detection of anti-GPR15/BOB antibody was with Alexa 488 goat anti-mouse IgG2b.Antibodies to CD14 and CD20 were detected with Alexa 594 goat anti-mouse IgG2a and antibodies to CD68, CD3 and CD138 were detected with Alexa 594 goat anti-mouse IgG1 secondary antibody.All secondary antibodies were diluted 1:400 in PBS containing 10% human serum.Nuclear staining was performed with 4′,6-diamidino-2-phenylindole dihydrochloride for 3 min before mounting.Controls were performed using isotype-matched IgGs in place of primary antibodies, followed by respective Alexa secondary antibodies.Total RNA was extracted from frozen blocks of synovia using TRIreagent solution or from isolated monocytes/macrophages using RNAqueous kit according to the manufacturer’s instructions.The quantity recovered was determined by spectrophotometry and the integrity was assessed by agarose gel electrophoresis.Total RNA was reverse transcribed using oligo primers and MMLV reverse transcriptase at 37 °C for 1 h.The reactions were then heated to 70 °C for 5 min to inactivate the enzyme, placed on ice and 60 μl H2O was added.Appropriate dilutions of the resulting cDNA were then used for semi-quantitative PCR using specific primers for GPR15/BOB .PCR primers were run through a BLAST program to ensure gene specificity.The PCR reactions were normalised against the ribosomal RNA L27 using specific primers .The annealing temperature was 57 °C for each primer pair.GraphPad Prism Version 5.01 was used for all statistical analysis.Expression of GPR15/BOB protein was analysed on synovial cryosections by unpaired t test.Levels of receptor protein on PB leukocytes were analysed by Mann Whitney test and percentage positive PB leukocytes were analysed by unpaired t test.Expression of GPR15/BOB was examined in synovial tissue cryosections by immunofluorescence staining.The receptor was detected in all RA patients examined in both lining and sub-lining layers where expression was generally strong, localising to the cell membrane and cytoplasm.The proportion of stained cells varied between patients from a few scattered positive cells to widespread positivity.By contrast, in non-RA synovial sections weaker expression of GPR15/BOB was observed in the lining layer and was essentially negative in the sub-lining.No staining was observed using the isotype controls for all antibodies used.To identify the cell type expressing GPR15/BOB, double label immunofluorescence was carried out using antibodies to GPR15/BOB and markers for monocytes/macrophages, T cells, B cells and plasma cells; neutrophils are rarely present in synovial tissue and were not examined .GPR15/BOB was present on 
monocytes/macrophages identified by the specific marker CD68 in all RA synovia examined.GPR15/BOB co-localised with CD68 expression in the sub-lining and lining layers.Non-RA synovia exhibited co-localisation of GPR15/BOB with CD68 positive cells in the lining but not in the sub-lining which was basically GPR15/BOB negative.CD14+ cells also colocalised with GPR15/BOB and agreed with the CD68 results, suggesting macrophages were positive for the receptor.No co-localisation was observed between GPR15/BOB and CD3 or CD20 in either RA or non-RA synovia.Non-RA synovium demonstrated limited staining with CD3 and occasional staining with CD20.Co-localisation was effectively negative between GPR15/BOB and CD138 in RA synovia.Double labelling of GPR15/BOB and CD138 was not carried out on non-RA tissue as plasma cells are not present in non-infiltrated synovium.Tissue sections stained with isotype controls in place of primary antibodies were negative in each case.To quantitate the difference in expression of GPR15/BOB in RA and non-RA synovia the number of GPR15/BOB+ and DAPI+ cells were counted in the lining and sub-lining layers of both RA and non-RA tissue.The percentage of total cells that expressed GPR15/BOB was significantly higher in RA synovium in comparison to non-RA in both the lining and sub-lining layers.GPR15/BOB expression by leukocytes from RA and healthy PB was examined by flow cytometry.Leukocyte populations were identified by their FSC/SSC profiles and by their strong cell surface expression of characteristic markers: CD14 on monocytes, CD3 on T lymphocytes and CXCR1 on neutrophils.GPR15/BOB was detected on monocytes and neutrophils.A significant increase in GPR15/BOB expression as measured by mean fluorescence intensity was observed on RA PB neutrophils and an increase close to significance was observed on RA PB monocytes when compared to these cell populations from healthy donors.Furthermore a significant increase in the percentage of cells expressing GPR15/BOB was observed in RA neutrophil populations and also in RA monocyte populations.GPR15/BOB was also detected on the surface of lymphocytes but expression was comparable between RA and healthy donors.Expression of GPR15/BOB mRNA was analysed in RA and non-RA synovia by RT-PCR to confirm observations from immunofluorescence staining.GPR15/BOB mRNA was detected in all RA patients examined although the band intensity varied between individuals, being considerably stronger in patients 2 and 8.GPR15/BOB mRNA was barely detectable in non-RA synovia, being detected in only one out of seven non-RA patients examined.L27 ribosomal gene was used to normalise PCR reactions to allow comparison between samples.To confirm monocytes/macrophage expression of GPR15/BOB, RT-PCR was performed on these cells isolated
GPR15/BOB expression on peripheral blood leukocytes was analysed by flow cytometry and GPR15/BOB messenger RNA was examined in peripheral blood monocytes by RT-PCR.GPR15/BOB protein was observed in CD68+ and CD14+ macrophages in synovia, with greater expression in RA synovia.GPR15/BOB protein was expressed in all patient synovia whereas in non-RA synovia expression was low or absent.Similarly GPR15/BOB messenger RNA was detected in all RA and a minority of non-RA synovia.
from the PB of healthy donors.GPR15/BOB mRNA was detected in isolated monocytes/macrophages from these individuals.Infiltration of the synovial membrane by inflammatory leukocytes is a characteristic pathological feature in rheumatoid arthritis patients.Leukocytes including macrophages are recruited by the action of chemoattractant cytokines secreted within the synovium by both resident and infiltrated cells .In this study we demonstrated that the GPR15/BOB receptor was expressed by macrophages in synovial tissue with up-regulation in RA tissue.In circulating blood both monocytes and neutrophils expressed GPR15/BOB.Percentages of GPR15/BOB+ cells were significantly raised in RA blood and the abundance of GPR15/BOB on these cells was also greater in RA in comparison to healthy donor blood.This suggests that in RA there may be preferential recruitment of GPR15/BOB+ monocytes to the synovium or alternatively GPR15/BOB may be up-regulated on monocytes, via the actions of pro-inflammatory cytokines, once the monocytes have reached the synovium.Examination of synovial tissue mRNA expression suggested GPR15/BOB was expressed in all RA patients examined.The mRNA band intensity appeared greatest in patients 2 and 8 which may be related to their elevated disease severity and absence of DMARD or steroid therapies.However, real-time PCR would be needed to confirm differences in GPR15/BOB expression between RA patients.GPR15/BOB mRNA was barely or not detected in non-RA synovium.In addition, GPR15/BOB protein was detected at low level in the synovial lining layer and was essentially negative in the sub-lining.We have therefore shown that GPR15/BOB is up-regulated in RA synovia compared to non-RA controls.A previous initial study by our group using gene microarrays of synovial tissue showed that GPR15/BOB mRNA expression was present in RA and not detected in non-RA and the results of the present study are therefore in agreement with and extend this earlier report.Chemokine receptor expression is regulated as a result of signal transduction cascades which occur following the ligation of cell surface receptors by cytokines such as TNFα which are known to increase in the circulation in RA .The subsequent transcriptional activation of genes involved in inflammation leads to production of pro-inflammatory cytokines, chemokines and increased expression of cell surface receptors.Therefore elevated cytokines in RA may be up-regulating GPR15/BOB on monocytes/macrophages and neutrophils.In this connection, expression of CCR9 was found to increase significantly on THP-1 monocytic cells following TNFα stimulation .Cell surface expression of GPR15/BOB may be induced by extracellular signals that activate the PI3-kinase pathway and increase binding of 14-3-3 to GPR15/BOB leading to its subsequent expression on the cell-surface .The increase in expression of GPR15/BOB on monocytes in PB may be related to the increased expression of GPR15/BOB in RA synovium in that PB monocytes bearing the receptor may be recruited from the blood into the synovium in RA, possibly involving an unknown ligand to GPR15/BOB.GPR15/BOB is expressed by CD4+ T cells and although PB lymphocytes expressed GPR15/BOB in this study they were not positive in the synovium suggesting that GPR15/BOB+, CD4+ T cells were not recruited to the synovium via a GPR15/BOB ligand.However, neutrophils bearing GPR15/BOB may be recruited into the RA joint.Neutrophils are not present in the RA synovium to any appreciable extent , but they are present in synovial 
fluid and have been found to express GPR15/BOB in RA patients. Therefore increased expression of GPR15/BOB on PB neutrophils in RA may be involved in disease pathology. Interestingly, SIV envelope-ligation of GPR15/BOB on the surface of circulating neutrophils from Chinese rhesus macaques was found to induce neutrophil apoptosis during SIV infection. HIV patients also suffer from increased neutrophil apoptosis in chronic disease, and the rate of neutrophil apoptosis and associated neutropenia is related to the rate of disease progression. A role for GPR15/BOB in regulating neutrophil apoptosis in RA is currently unknown. Macrophages play a major role in the pathogenesis of RA due to their production of pro-inflammatory cytokines including TNFα and IL-1β, inflammatory chemokines including CXCL8 and CCL2, and also degradative enzymes. These factors all contribute to synovial inflammation and joint erosion. Neutrophils accumulate in the synovial fluid in RA where they become activated, releasing proteases and lysosomal enzymes leading to cartilage damage, and also pro-inflammatory cytokines and chemokines including IL-1, CXCL8 and CCL3. The GPR15/BOB receptor on monocytes/macrophages and neutrophils may therefore play a role in RA by attracting these cells into the synovial joint in response to an unknown GPR15/BOB ligand. Since macrophages are important cells in RA pathology, GPR15/BOB may provide an interesting therapeutic target in the treatment of inflammation and joint destruction in patients, and further work on the function of this receptor in RA would be of interest. A recent study by Kim et al. has found the presence of GPR15/BOB on T cells, especially FOXP3 regulatory T cells, which regulated homing of these cells to the large intestine, leading to altered inflammation. In the current study GPR15/BOB was mainly expressed by monocytes/macrophages in the RA synovium rather than T cells. Therefore there may be tissue-specific differences in the expression and role of GPR15/BOB. This work was supported by Keele University and the Wellcome Trust.
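The between-group comparisons described earlier (Mann-Whitney test for receptor levels on blood leukocytes, unpaired t test for the percentage of positive cells) can be reproduced with standard SciPy calls. The sketch below is illustrative only; all numerical values are invented placeholders, not measured data.

```python
# Sketch of the statistical comparisons described above: Mann-Whitney U test
# for receptor levels (mean fluorescence intensity, MFI) and an unpaired
# t test for the percentage of GPR15/BOB-positive cells, RA vs healthy donors.
# All numbers below are invented placeholders.
from scipy import stats

mfi_ra      = [42.1, 55.3, 61.0, 48.7, 66.2, 52.9]   # hypothetical RA neutrophil MFI
mfi_healthy = [30.4, 28.9, 35.1, 41.0, 33.2, 29.8]   # hypothetical healthy-donor MFI

pct_pos_ra      = [68.0, 74.5, 80.1, 71.2, 77.8]     # hypothetical % positive cells
pct_pos_healthy = [52.3, 49.8, 58.6, 55.1, 50.9]

u_stat, p_mfi = stats.mannwhitneyu(mfi_ra, mfi_healthy, alternative="two-sided")
t_stat, p_pct = stats.ttest_ind(pct_pos_ra, pct_pos_healthy, equal_var=True)

print(f"Mann-Whitney U on MFI: U={u_stat:.1f}, p={p_mfi:.4f}")
print(f"Unpaired t test on % positive cells: t={t_stat:.2f}, p={p_pct:.4f}")
```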
Chemokine receptors on leukocytes mediate the recruitment and accumulation of these cells within affected joints in chronic inflammatory diseases such as rheumatoid arthritis (RA).Identification of involved receptors offers potential for development of therapeutic interventions.The objective of this study was to investigate the expression of orphan receptor GPR15/BOB in the synovium of RA and non-RA patients and in peripheral blood of RA patients and healthy donors.GPR15/BOB protein and messenger RNA expression were examined in RA and non-RA synovium by immunofluorescence and reverse-transcription polymerase chain reaction (RT-PCR) respectively.GPR15/BOB protein was expressed on peripheral blood leukocytes from RA and healthy individuals with increased expression by monocytes and neutrophils in RA.GPR15/BOB messenger RNA expression was confirmed in peripheral blood monocytes.In conclusion GPR15/BOB is expressed by macrophages in synovial tissue and on monocytes and neutrophils in peripheral blood, and expression is up-regulated in RA patients compared to non-RA controls.This orphan receptor on monocytes/macrophages and neutrophils may play a role in RA pathophysiology.© 2014 The Authors.
in Table 6 for a carboxylate solution and paper mill wastewater. Table 6 shows that the desorption method is effective for the five different carboxylates studied. High methyl ester yields are obtained for the smaller carboxylates, and the yield decreases with the increase in molecular size. This result is comparable to results in other studies, which concluded that the reactivity of the carboxylic acids was controlled by steric effects, which increase with alkyl chain length. Additionally, longer reaction times increase the yield for all the methyl esters, thus indicating that the system did not achieve equilibrium after 4 h. The carboxylate solution does not contain inorganic anions, and as a result there is a higher carboxylate loading on the resin in comparison with wastewater experiments, which affects the desorption-esterification step. This is especially noticeable in the methyl lactate case, in which the loading on the resin is low. As a consequence, the amount of methyl lactate produced with the paper mill wastewater is below the detection limit. In the present study, it has been demonstrated that the recovery of carboxylates from wastewater and further desorption and esterification with CO2-expanded methanol is an option for valorization of wastewater. However, the methyl esters still have to be recovered from the methanol and carbon dioxide mixture. Development of this separation is required to assess the applicability of the overall method. The current concentration of about 4 mg total methyl esters per g methanol will have to be increased. Moreover, the integration of the recovery and purification steps into the process is an important factor that has to be studied in detail. For example, the bicarbonate solution produced after the adsorption of the carboxylates might be recycled to the anaerobic digestion for pH control. It would be interesting to study the effect of this recycle on the mixed culture fermentation and on the spectrum of carboxylates produced. Several researchers have studied the effect of addition of CO2, removal of carboxylate, and ratio of metal cation used on the amount and type of carboxylate produced. A new technique that combines anion exchange with CO2-expanded methanol was successfully used to recover and esterify carboxylates from paper mill wastewater. The effects of pH and the presence of other components in the paper mill wastewater were analyzed. During recovery, the carboxylate loading was higher at pH 5.1, and CaCO3 precipitation can be avoided by controlling the CO2 pressure. During desorption, esters were successfully produced with yields from 1.08 ± 0.04 mol methyl acetate/mol of acetate_in to 0.57 ± 0.02 mol methyl valerate/mol of valerate_in at 5 bar and 60 °C.
This paper describes a new option for integrated recovery and esterification of carboxylates produced by anaerobic digestion at a pH above the pKa. The carboxylates (acetate, propionate, butyrate, valerate and lactate) are recovered using a strong anion exchange resin in the bicarbonate form, and the resin is regenerated using a CO2-expanded alcohol technique, which allows for low chemical consumption and direct esterification. Paper mill wastewater was used to study the effect of pH and the presence of other inorganic anions and cations on the adsorption and desorption with CO2-expanded methanol. Calcium, which is present in paper mill wastewater, can cause precipitation problems, especially at high pH. Ester yields ranged from 1.08 ± 0.04 mol methyl acetate/mol of acetate_in to 0.57 ± 0.02 mol methyl valerate/mol of valerate_in.
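The molar yields quoted above are simply the ratio of methyl ester produced to carboxylate initially loaded on the resin (the "_in" amount). A minimal sketch of that arithmetic follows; the mole quantities are illustrative placeholders, not measured values.

```python
# Minimal sketch of the molar ester yield used above:
# yield = mol of methyl ester produced / mol of carboxylate initially present.
# The mole amounts below are illustrative placeholders.

def ester_yield(mol_ester_out: float, mol_carboxylate_in: float) -> float:
    return mol_ester_out / mol_carboxylate_in

# Hypothetical desorption-esterification run (mol ester out, mol carboxylate in)
examples = {
    "methyl acetate":  (0.0108, 0.0100),
    "methyl valerate": (0.0057, 0.0100),
}
for ester, (out_mol, in_mol) in examples.items():
    print(f"{ester}: {ester_yield(out_mol, in_mol):.2f} mol/mol")
```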
gamma pulses and 1% of gamma pulses were identified as alpha pulses. The pulse area is directly proportional to the deposited energy. The detector was calibrated with an unsealed 241Am α-source with 5.486 MeV. The resulting ionisation energy of the diamond detector was 12 eV per electron-hole pair. This calibration applies to all particles interacting with the diamond detector. In Fig. 4 the expected spectrum of the 6Li(n,T)4He reaction is shown. This result was obtained using a numerical simulation with Geant4. The two peaks correspond to the energy deposition of the alpha and the triton in the diamond detector. In this simulation the energy losses of the alpha and triton particles in the converter foil, in the air gap between the converter foil and the diamond, as well as in the detector electrode, are taken into account. The FWHM of the triton peak in the simulation was 98 keV. The spectrum of the alpha particles is considerably wider than the spectrum of the tritons due to their higher energy straggling in the converter foil, the air gap and the detector electrode. In Fig. 5 the spectrum measured with a CIVIDEC Cx Spectroscopic Shaping Amplifier is shown. The Cx amplifier has an excellent energy resolution due to its low noise. The output pulse of the amplifier has a FWHM of 180 ns and a counting rate up to 1 MHz can be achieved. The peak, which corresponds to the triton from the 6Li(n,T)4He reaction with a mean energy of 2.5 MeV and 103 keV FWHM, can be clearly distinguished from the background spectrum, which extends up to 2 MeV. Since the alpha energy deposited in the detector is below 2 MeV, it is impossible to distinguish the alpha peak from the gamma background in the measured spectrum. By the introduction of a pulse height threshold in the readout electronics it is only possible to discriminate between the gamma background and the triton spectrum. This amplitude cut also removes the information about the alpha particles. In Fig. 6 the spectrum measured with the Cx amplifier is compared to the spectrum which was measured with a CIVIDEC C2 Broadband Amplifier with the pulse shape analysis applied to the data. The C2 amplifier is a low-noise fast current amplifier with an analogue bandwidth of 2 GHz. After the analysis of the pulse shapes recorded with the current amplifier, the rectangular pulses corresponding to the alpha particles and the tritons of the 6Li(n,T)4He reaction were separated from the triangular pulses corresponding to the gamma background. After this background rejection by pulse shape analysis, the peaks corresponding to both charged particles produced in the reaction can be seen. The alpha peak is broader than the triton peak due to the higher energy straggling of the alpha particles, as was expected from the result of the numerical simulation. The integral of the alpha peak is 46% of the integral of the triton peak because of a noise threshold set in the readout electronics, which reduced the number of registered alpha particles. The threshold in the measurement was at 0.63 MeV. The background rejection rate below 2 MeV was 96%, i.e. 4% of the pulses were identified as corresponding to the alpha particles. The FWHM of the triton peak measured with C2 was 186 keV. Although the resolution is worse than in the spectroscopic measurement with the Cx amplifier, the C2 allows for discrimination between the alpha particles and the gamma background, which was impossible in the conventional measurement. In Fig.
7 the distribution of FWHM of the pulses recorded with the C2 amplifier, with respect to the deposited energy, is shown. The FWHM has a resolution of 0.2 ns, which corresponds to the 5 GS/s sampling rate of the readout system. Pulses with FWHM below the threshold of 7.5 ns correspond to the gamma background. Pulses with FWHM above the threshold correspond to the alpha particles and tritons. In Fig. 8 the measured alpha and triton spectra are compared to the results of the numerical simulation. The result of the measurement is in good agreement with the result of the simulation. The rapid cut-off of the measurement signal below 1 MeV is due to the noise threshold set in the readout electronics. A diamond detector was successfully used for spectroscopic measurements with thermal neutrons. In the experiment, the spectra of the alpha particles and tritons of the 6Li(n,T)4He reaction were measured using two different experimental setups. In the first method, the Cx Spectroscopic Shaping Amplifier was used and the triton peak can be clearly seen in the recorded spectrum, but the alpha peak is lost in the gamma background. In the second method, the C2 Broadband Amplifier was used in combination with the novel pulse shape analysis algorithm. The algorithm separated the rectangular pulses corresponding to the alpha particles and tritons of the neutron capture reaction from the triangular pulses corresponding to the gamma background. Both the alpha peak and the triton peak can be clearly distinguished in the measured spectrum after the background rejection, and they both closely agree with theoretical predictions. Thus, the pulse shape analysis provides a simple and precise method for gamma background rejection. This opens the way for CVD diamond detectors to be used for neutron flux monitoring/counting and neutron cross-section measurements in gamma environments. Furthermore, the pulse-shape analysis should be equally applicable to other charged particle signals provided the particles are absorbed at a shallow depth in the detector, and to other background signals that fully traverse the monitor, and vice versa.
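The discrimination described above reduces to a simple rule: compute the FWHM of each digitised pulse (0.2 ns samples) and classify pulses above the 7.5 ns threshold as alpha/triton signals and those below as gamma background, with the deposited energy obtained from the pulse area via the 12 eV per electron-hole pair calibration. The sketch below illustrates that rule on synthetic waveforms; the waveform shapes are placeholders, not recorded detector data.

```python
# Sketch of the FWHM-based pulse-shape discrimination described above:
# pulses wider than 7.5 ns (FWHM) are treated as charged-particle
# (alpha/triton) signals, narrower ones as gamma background. The deposited
# energy would follow from the pulse area using the 12 eV per electron-hole
# pair calibration; the synthetic pulses here are placeholders.
import numpy as np

DT_NS = 0.2          # 5 GS/s sampling -> 0.2 ns per sample (as in the text)
FWHM_CUT_NS = 7.5    # FWHM threshold separating background from alpha/triton

def pulse_fwhm_ns(samples: np.ndarray) -> float:
    """Full width at half maximum of a single recorded pulse, in ns."""
    half = samples.max() / 2.0
    above = np.flatnonzero(samples >= half)
    return (above[-1] - above[0] + 1) * DT_NS if above.size else 0.0

def classify(samples: np.ndarray) -> str:
    return "alpha/triton" if pulse_fwhm_ns(samples) > FWHM_CUT_NS else "gamma background"

# Synthetic examples: a ~15 ns rectangular pulse and a ~3 ns triangular pulse
rect = np.zeros(200); rect[50:125] = 1.0
tri = np.concatenate([np.linspace(0, 1, 15), np.linspace(1, 0, 15)])

for name, pulse in [("rectangular", rect), ("triangular", tri)]:
    print(f"{name}: FWHM = {pulse_fwhm_ns(pulse):.1f} ns -> {classify(pulse)}")
```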
Spectra of the alpha particles and tritons of the 6Li(n,T)4He thermal neutron capture reaction were separated from the gamma background by a new algorithm based on pulse-shape analysis. The thermal neutron capture in 6Li is already used for neutron flux monitoring, but the ability to remove gamma background allows using a CVD diamond detector for thermal neutron counting. The pulse-shape analysis can equally be applied to all cases where the charged products of an interaction are absorbed in the diamond and to other background particles that fully traverse the detector.
which is reported as 86.2 m/s in the study. It is worth noting that an experimental investigation on the MILD-oxy combustion of pulverized coal at ordinary temperatures was conducted at a 0.4 MW pilot-scale facility at HUST. MILD-oxy combustion of pulverized coal was successfully achieved without highly preheating the oxidant. The integral system of flue gas recirculation was applied in this experiment, including ash separation and water vapor condensation. This result indicates the feasibility of an industrial application for this technology. After 30 years of development, oxyfuel technology has matured; it now possesses the fundamental characteristics necessary for commercial application. Most importantly, oxyfuel combustion is suitable for use in the large number of existing coal-fired power plants in China. The development of oxyfuel combustion technology in China keeps in step with international developments, and the fundamental studies described here provide a useful and invaluable reference for the design of key equipment, operation modes, industrial system flow, and more. In order for China's coal-power-dominated energy mix to achieve greenhouse gas emission reduction targets, large-scale demonstrations must be launched as soon as possible to increase the likelihood of commercializing oxyfuel technologies. At the same time, in order to reduce the high cost of this CO2-capture technology, this novel concept and its methods must be strongly promoted and thoroughly developed.
Oxyfuel combustion with carbon capture and sequestration (CCS) is a carbon-reduction technology for use in large-scale coal-fired power plants.Significant progress has been achieved in the research and development of this technology during its scaling up from 0.4 MWth to 3 MWth and 35 MWth by the combined efforts of universities and industries in China.A prefeasibility study on a 200 MWe large-scale demonstration has progressed well, and is ready for implementation.The overall research development and demonstration (RD&D) roadmap for oxyfuel combustion in China has become a critical component of the global RD&D roadmap for oxyfuel combustion.An air combustion/oxyfuel combustion compatible design philosophy was developed during the RD&D process.In this paper, we briefly address fundamental research and technology innovation efforts regarding several technical challenges, including combustion stability, heat transfer, system operation, mineral impurities, and corrosion.To further reduce the cost of carbon capture, in addition to the large-scale deployment of oxyfuel technology, increasing interest is anticipated in the novel and next-generation oxyfuel combustion technologies that are briefly introduced here, including a new oxygen-production concept and flameless oxyfuel combustion.
et al. fabricated AuNP-containing TiO2 NF DSSCs and achieved a PCE of 7.8%, improving on their control NF PECs by 15%, which was similar to the improvement margin demonstrated in this work. However, they did not control the particle size of their AuNPs. In contrast to existing reports, we provide a comprehensive performance evaluation for precisely size-controlled sub-12 nm AuNPs in TiO2 NF photoelectrodes and determine the upper and lower size limits for incorporating small AuNPs into assembled PECs. Interestingly, the dominating role of each differently sized AuNP was clearly different. For the 5 and 8 nm AuNP based PECs, benefits were seen from increased light harvesting and improved charge transport, which resulted in increased JSC and FF and a decrease in the series resistance. On the other hand, the dominating role of the 10 and 12 nm AuNPs was less likely to be light harvesting, given the decrease in JSC. Despite this, the consistent increase in FF for all AuNP based PECs suggests a reduction in parasitic resistive losses. While the overall PCE was consistently higher for all AuNP based PECs, the highest performing cells were those that contained the 8 nm AuNPs. Therefore, the dominant role of the 8 nm AuNPs is the improvement in light harvesting from the plasmonic features, the decrease in series resistance, and the increase in electron transport and injection rate. In this paper, we have demonstrated the benefits of adding sub-12 nm AuNPs into the scattering layer of TiO2 NF photoelectrodes. The composite photoelectrodes were assembled into PECs, and the gold-loaded photoelectrodes show improvements in the current density and the fill factor, which are attributed to increased spectral absorption, decreased series resistance, and an increase in electron transport and injection rate. Compared to the control PECs, devices loaded with 8 nm AuNPs show a 20% improvement in average PCE, where the highest performing device obtained a PCE of 8%, which is among the highest reported for TiO2 NF PECs. We show that precise control over NP size is critical to performance enhancement in TiO2 NF PECs, and we believe that this knowledge will help shape future studies incorporating plasmonic NPs into renewable energy applications. Experiments were designed by T.J.M with guidance from H.Y, I.P, J.G.S, and I.P.P. All PECs were fabricated by T.J.M, F.A, M.B, and Y.L. Electron microscopy was completed by T.J.M, D.Y, and C.C. The manuscript was written by T.J.M with contributions from all authors. All authors have given approval to the final version of the manuscript.
Incorporation of gold nanoparticles (AuNPs) into titanium dioxide (TiO2) photoelectrodes has been used traditionally to increase the performance of photoelectrochemical cells (PECs) through their tailored optical properties.In contrast to larger AuNPs, previous studies have suggested that smaller AuNPs are the most catalytic or effective at increasing the photovoltaic (PV) performance of TiO2 photoelectrodes based on PECs.Despite this, AuNPs are often only compared between sizes of 12–300 nm in diameter due to the most common synthesis, the Turkevich method, being best controlled in this region.However, the optimum radius for citrate-capped AuNPs sized between 5 and 12 nm, and their influence on the PV performances has not yet been investigated.In addition to using AuNPs in the photoelectrodes, replacing traditional TiO2 NPs with one-dimensional nanofibers (NFs) is a promising strategy to enhance the PV efficiency of the PECs due their capability to provide a direct pathway for charge transport.Herein, we exploit the advantages of two different nanostructured materials, TiO2 NFs and sub-12 nm AuNPs (5, 8, 10, and 12 nm), and fabricate composite based photoelectrodes to conduct a size dependent performance evaluation.The PECs assembled with 8 nm AuNPs showed ∼20% improvement in the average power conversion efficiency compared to the control PECs without AuNPs.The highest performing PEC achieved a power conversion efficiency of 8%, which to the best of our knowledge, is among the highest reported for scattering layers based on pure anatase TiO2 NFs.On the basis of our comprehensive investigations, we attribute this enhanced device performance using 8 nm AuNPs in the TiO2 NF photoelectrodes to the improved spectral absorption, decreased series resistance, and an increase in electron transport and injection rate leading to an increase in current density and fill factor.
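The figures of merit discussed above (JSC, FF and PCE) are linked through the standard relation PCE = (JSC x VOC x FF) / Pin. The sketch below illustrates that relation only; the VOC value, the AM1.5G input power of 100 mW/cm2 and the JSC/FF numbers are assumptions chosen for illustration, not values reported in the text beyond the roughly 8% PCE and ~20% relative improvement.

```python
# Sketch of how the photovoltaic figures of merit discussed above combine:
# PCE = (Jsc * Voc * FF) / Pin. Voc and Pin are assumed values; only the
# ~8% PCE level and ~20% relative gain are taken from the text.

def power_conversion_efficiency(jsc_ma_cm2, voc_v, ff, p_in_mw_cm2=100.0):
    """Return PCE in percent from Jsc (mA/cm^2), Voc (V) and fill factor (0-1)."""
    p_out = jsc_ma_cm2 * voc_v * ff          # output power density, mW/cm^2
    return 100.0 * p_out / p_in_mw_cm2

# Hypothetical control cell vs. 8 nm AuNP-loaded cell
control = power_conversion_efficiency(jsc_ma_cm2=14.0, voc_v=0.72, ff=0.66)
au_8nm  = power_conversion_efficiency(jsc_ma_cm2=15.8, voc_v=0.72, ff=0.70)
print(f"control: {control:.1f}% | 8 nm AuNP: {au_8nm:.1f}% "
      f"(+{100 * (au_8nm / control - 1):.0f}% relative)")
```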
In recent decades, the frequency and intensity of climate-related natural hazards have both increased. Extreme flooding and flood-related events are leading this trend, and the United Kingdom has been particularly affected by these phenomena. These events have resulted in severe social and economic costs all over the world. Damages to labour and capital productivity after a disaster create knock-on effects that exacerbate the initial losses of the flooded assets, disturbing not only the impacted economic sectors but also other sectors that are indirectly affected through economic mechanisms. This sequence of events can be observed in the 2007 summer floods that occurred in England, which caused a major civil emergency nationwide. Thirteen people were killed and approximately 7000 had to be rescued from flooded areas; 55,000 properties were flooded and over half a million people experienced shortages of water and electricity. The most affected region was Yorkshire and the Humber, which accounted for 65.5% of total national direct damage. Approximately 1800 homes were flooded and more than 4000 people were affected. Additionally, more than 64 businesses, schools and public buildings were flooded, and infrastructure services such as roads and electricity substations suffered significant disruptions as well. Traditional assessments of economic losses resulting from disasters of this type consider only direct damages to the physical infrastructure. Nevertheless, it has been well documented that knock-on effects are triggered by these direct damages and that they constitute a considerable share of the total socioeconomic burden of the disaster. Therefore, accurate flood risk management requires more than proper assessments of losses from capital and labour productivity disruptions; it must also consider the ripple effects of the recovery process, which are dispersed through sectoral and regional interdependencies. Knock-on effects can arise in two main ways. On the one hand, damages to capital such as roads and offices will interrupt transportation and further disrupt economic activities, while damages to labour, including injuries and death, can be perceived as losses of labour productivity that ultimately prevent economic functioning. During an economic recovery, both capital and labour should be restored. On the other hand, production loss in a single sector, as a result of either capital or labour productivity losses, affects both customer and supplier industries, namely the 'downstream' and 'upstream' sectors. This indicates that an initial economic loss in a single sector can eventually spill over into the entire economic system and even into other previously unaffected regions through sectoral and regional interdependencies. Flood risk management requires, first, accurate estimates of losses from both capital and labour productive constraints after a flood. Second, to estimate a flood's indirect effects on the economy, it is essential to consider the ripple effects resulting from sectoral and regional interdependencies. Flood risk management can also reduce vulnerability and increase the resilience of affected regions in the future. Third, all accumulated production losses that occur prior to the full recovery of the economy, as well as the costs of capital and labour restoration during the flood's aftermath, should be taken into consideration. This paper introduces the new concept of flood footprint to describe an accounting framework that measures the total economic impact that is directly and
indirectly caused to the productive system, triggered by the flooding damages to the productive factors, infrastructure and residential capital, on the flooded region and on wider economic systems and social networks. This framework can not only capture the economic costs derived from capital and labour productivity losses but also account for the post-disaster recovery process. Here, we define the productivity loss, from capital or labour, as the reduction in the equilibrium production level under pre-disaster conditions due to constraints on the availability of any of the productive factors, which in the case of the Leontief production functions are capital and labour. This type of production function is a particular case of constant elasticity of substitution production functions, where the level of production is determined as a function of the productive factors. In the case of the Leontief production functions, or perfect complements, it is assumed that the proportion of productive factors is fixed; in other words, the technology is fixed and there is no possibility of substitution between the productive factors. Owing to the above, a constraint on the availability of any of the productive factors will have a proportional effect on the level of production. For instance, a 10% reduction in the availability of the labour force, due to transport disruptions, illness, displacement or other factors after a flood, would represent a 10% decrease in the level of production. Additionally, as the flood footprint framework is developed based on an Input-Output model, it is also able to measure the knock-on effects resulting from sectoral and regional interdependencies. The concept of flood footprint will therefore improve upon existing flood risk assessment and better assist professionals working on disaster risk assessment, preparation and adaptation. This paper constitutes the first empirical application of the flood footprint framework to a real past event. It evaluates the total economic cost to the region of Yorkshire and the Humber caused by the 2007 summer floods in the UK, and a sensitivity analysis is carried out to test the robustness of the results. This paper is structured as follows. The next section reviews selected literature on disaster impact analysis. Section 3 describes the methodology and rationale of the flood footprint model. Section 4 presents the data gathering and codification methods used to analyse total economic losses in Y&H resulting from the floods in 2007. Section 5 presents the main results of the flood footprint assessment. Finally, conclusions are discussed in section 6. The impact assessment of natural
These disasters represent high costs and functional disruptions to societies and economies. To obtain an accurate assessment of total flooding costs, this paper introduces the flood footprint concept as a novel accounting framework that measures the total economic impact that is directly and indirectly caused to the productive system, triggered by the flooding damages to the productive factors, infrastructure and residential capital. The assessment framework accounts for the damages in the flooded region as well as in wider economic systems and social networks. The results suggest that the total economic burden of the floods was approximately 4% of the region's GVA (£2.7 billion), of which over half comes from knock-on effects during the 14 months that the economy of Yorkshire and the Humber took to recover.
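The input-output (Leontief) logic described above can be made concrete in a few lines: pre-disaster output solves x = (I - A)^-1 f, and with fixed-proportion (perfect-complement) production each sector's post-flood capacity is capped by its most constrained productive factor. The sketch below is a minimal two-sector illustration; the technical coefficients, final demand and damage fractions are invented, not the study's data, and it is not the full flood footprint model.

```python
# Minimal sketch of the Leontief input-output logic behind the flood footprint:
# pre-disaster output solves x = (I - A)^-1 f; after the flood, each sector's
# production capacity is scaled by its most constrained productive factor
# (fixed-proportions production: a 10% labour shortfall -> 10% output shortfall).
# The 2-sector coefficients, final demand and damage fractions are placeholders.
import numpy as np

A = np.array([[0.20, 0.10],     # technical coefficient matrix (placeholder)
              [0.30, 0.25]])
f = np.array([100.0, 150.0])    # final demand (placeholder, e.g. £ million)

x_pre = np.linalg.solve(np.eye(2) - A, f)   # pre-disaster equilibrium output

# Post-flood availability of capital and labour, per sector (placeholders)
capital_avail = np.array([0.80, 0.95])
labour_avail  = np.array([0.90, 0.85])

# Output is limited by the scarcest factor in each sector
x_capacity = x_pre * np.minimum(capital_avail, labour_avail)

print("pre-disaster output     :", x_pre.round(1))
print("constrained output      :", x_capacity.round(1))
print("first-round output loss :", (x_pre - x_capacity).round(1))
```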
sector groups. For example, Manufacturing is shown to be the most affected sector, with a share of indirect loss 60% higher than direct loss, and the total damages in this group account for 23% of the total flood footprint. The utilities sector suffers major direct damages, as infrastructure damages are allocated to this sector. The Financial & Professional sector is the most indirectly affected, with 21% of total indirect damages, while just 9% of total direct damages are concentrated in this group. At a more disaggregated level, Fig. 4 depicts the ten most affected sectors for direct and indirect economic losses, respectively. The major direct damage is concentrated in those sectors forming the Utilities Sector group. The most affected sector is Water, Sewerage & Waste, accounting for 35% of direct economic loss in the Utilities Sector group and 12% of the total direct damage. Regarding indirect damages, the IT services sector, from the Information & Communication sector group, was the most damaged, accounting for 86% of this group's losses and 11% of the total indirect damages. Finally, it is noteworthy that two sectors appear in both categories: the IT Services and Health sectors. This indicates they are among the most vulnerable sectors in the region. The flood footprint in these sectors accounts for 13% of the total flood footprint. Uncertainty in the model mainly comes from the lack of data on labour and final demand variables, and from some assumptions applied to calibrate the corresponding parameters. To test the robustness of the results, a sensitivity analysis is performed on labour and final demand parameters. The sensitivity analysis comprises the upward and downward variation of the parameters by 30% in intervals of 5%. For labour, the varied parameters comprise the proportion of labour not available for travelling, and the proportion and time of labour delayed by transport constraints. The results of the sensitivity analysis, as presented in Fig. 6, show that variations in labour parameters have a less-than-proportional effect on indirect costs and the total production capacity, and these effects decrease over time. Other variables are not affected by variations in labour parameters. The standard deviation of the total variation of labour productive capacity is about £483 million, which causes a standard deviation of £297 million in total production capacity, and a standard deviation of £168 million in indirect damages. For final demand, the varied parameter is the decreased proportion of consumption of non-basic products. The results of the sensitivity analysis, as presented in Fig.
7, show that variations in final demand parameters have a less-than-proportional effect on indirect costs and the total production capacity, and these effects decrease over time. Other variables are not affected by variations in final demand parameters. The standard deviation of the total variation of total production required by final demand is about £96 million, which causes a standard deviation of £93 million in total production capacity, and a standard deviation of £54 million in indirect damages. The increasing frequency and intensity of weather-related disasters require more accurate and comprehensive information on damages. This will support better risk management and adaptation policies to achieve economic sustainability in the affected cities in the upcoming years. For instance, the 2007 summer floods caused a national emergency in England, and Yorkshire and the Humber was the most affected region. This paper is the first study to apply the flood footprint framework to a real past event, the 2007 summer floods in the Yorkshire and the Humber region. This analysis supports the important lesson that losses from a disaster are exacerbated by economic mechanisms, that knock-on effects constitute a substantial proportion of total costs, and that some of the most affected sectors can be those that are not directly damaged. For this case study, the proportion of indirect damages accounts for over half of the total flood footprint. The sensitivity analysis demonstrates the stability of the model and the robustness of the results. This research provides quantitative evidence for policy stakeholders that any direct damage may incur significant indirect impact along the economic supply chain. Climate change adaptation policy should start to consider minimising indirect impacts, especially in those sectors hidden in the supply chain that are vulnerable to labour loss, such as the services sectors. Not considering the indirect effects would mislead actions in flood risk management and would lead to an inefficient use of resources. There are, however, some caveats that must be noted. The current study is subject to some degree of uncertainty. First, data scarcity is the main source of uncertainty, making the use of strong assumptions unavoidable in certain cases. Engineering flood modelling and GIS techniques have been rapidly evolving in recent years, providing new sources of information with great precision and constructing the so-called damage functions, although this progress has demanded substantial computing, time and monetary resources. The implementation of these techniques in future research would considerably improve the accuracy of the analysis. Second, although the model effectively accounts for knock-on effects in the affected regional economy, global economic interconnectedness requires us to move the analysis towards a multi-regional approach if we are to make an exhaustive impact assessment. Finally, additional research on labour and consumption recovery would greatly improve the analysis, as these are areas that have attracted less attention from researchers.
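The sensitivity analysis described above varies each parameter from -30% to +30% of its baseline in 5% steps and reports the spread of the resulting indirect damages. The sketch below shows that loop schematically; the `indirect_damage` function is a stand-in placeholder, not the actual flood-footprint model, and the baseline parameter values are invented.

```python
# Schematic sketch of the sensitivity analysis described above: each parameter
# is varied by -30% .. +30% of its baseline in 5% steps and the standard
# deviation of the resulting indirect damage estimate is reported.
# `indirect_damage` is a placeholder response surface, NOT the real model.
import numpy as np

def indirect_damage(labour_share_unavailable: float, demand_drop: float) -> float:
    """Placeholder stand-in for the flood-footprint model (£ million)."""
    return 1400.0 * labour_share_unavailable + 900.0 * demand_drop

baseline = {"labour_share_unavailable": 0.15, "demand_drop": 0.10}  # placeholders
perturbations = np.arange(-0.30, 0.30 + 1e-9, 0.05)   # -30% .. +30% in 5% steps

for name, base_value in baseline.items():
    results = []
    for p in perturbations:
        params = dict(baseline)
        params[name] = base_value * (1.0 + p)
        results.append(indirect_damage(**params))
    print(f"{name}: std of indirect damages = £{np.std(results):.0f} million")
```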
International headlines over the last few years have been dominated by extreme weather events, and floods have been amongst the most frequent and devastating. The consequent breakdown of the economic equilibrium exacerbates the losses from the initial physical damages and generates indirect costs that largely amplify the burden of the total damage. Neglecting indirect damages leads to misleading conclusions regarding the real dimensions of the costs and prevents accurate decision-making in flood risk management. The framework was applied to the 2007 summer floods in the UK to determine the total economic impact in the region of Yorkshire and the Humber. This paper is the first to apply the conceptual framework of the flood footprint to a real past event, thereby highlighting the economic interdependence among industrial sectors. Through such interrelationships, the economic impacts of a flooding event spill over into the entire economic system, and some of the most affected sectors can be those that are not directly damaged. Neglecting the impact of indirect damages would underestimate the total social costs of flooding events and mislead the corresponding actions for risk management and adaptation.
the following categories are also included: primary energy demand, volumetric water consumption and water footprint.PED has been quantified through GaBi.The WC accounts for blue and green water, based on data from the Water Footprint Network.However, only blue water is included in the WF, taking into account water stress in different regions.The WF has been estimated using the CCaLC software tool.The environmental impacts are discussed first at the product level, followed in section 3.2 by a sectoral analysis.As can be seen in Fig. 3, the chocolate-coated biscuits are the worst option for 13 out of 18 categories, while the low fat/sugar biscuits are the best option for 14 impacts.Thus, the latter are not only healthier but also more environmentally sustainable than the other biscuits considered here.These results are discussed in more detail below for each of the impacts, with an emphasis on the contribution analysis to help identify improvement opportunities.Further information on the contribution of different life cycle stages to the impacts can be found in Tables S1 and S2 in the Supporting Information.Primary energy demand: Chocolate-coated biscuits have the highest primary energy demand of all biscuit types, followed by the two cream varieties with approximately 14% lower consumption.Low fat/sugar biscuits require the least amount of primary energy, estimated at 16.9 MJ/kg.Raw materials and biscuit manufacturing are the highest consumers.Global warming potential: As shown in Fig. 3b, the highest GWP is found for chocolate-coated biscuits and the lowest for low fat/sugar biscuits.The raw materials and manufacturing are again the main hotspots, contributing 41%–61% and 24%–38%, respectively.Within manufacturing, biscuit baking is the most influential process, causing 10%–19% of the impact.In the case of chocolate-coated biscuits, milk powder is the most significant contributor.Ozone depletion: The values for ozone depletion range between 71 and 98 μg CFC-11 eq./kg, with low fat/sugar biscuits being the best and chocolate-coated biscuits the worst alternative.Raw materials account for 42%–57% of the total, followed by transport with 29%–35%.The third most impacting stage is manufacturing.Wheat cultivation is the most significant individual activity in the life cycle, followed by manufacturing.The vast majority of ozone depletion is caused by emissions of halogenated organic compounds released along the life cycle, mostly associated with fossil fuel energy chains.Fossil fuel depletion: For this category, crackers have the lowest impact and chocolate-coated biscuits the highest.The main source is manufacturing, contributing between 35% and 47%.Within this stage, baking accounts for 16% of the total impact of vanilla cream biscuits, rising to 27% for semi-sweet biscuits.Milk powder is also significant for the latter, with a contribution of 16% to the total.Freshwater eutrophication: Similarly to the previous impact categories, the chocolate-coated biscuits have the highest eutrophication, while the impact of all other types lies between 0.36 and 0.42 g P eq./kg.Raw materials account for more than 66% of the total, with wheat contributing most of that, followed by manufacturing.Marine eutrophication: This impact ranges from 1.5 to 4.6 g N eq./kg for low fat/sugar and chocolate-coated biscuits, respectively.The vast majority is attributable to raw materials, which contribute from 90% in crackers to 95% in chocolate biscuits.This is largely due to wheat cultivation, palm/kernel oil, sugar and 
cocoa powder with contributions of 12%–43% each. Human toxicity: Crackers have the highest human toxicity potential, followed by the low fat/sugar biscuits. Again, raw materials are the main hotspot with a share of 88%–91%. The most relevant ingredients are wheat, palm oil and sugar. However, the contribution of the latter for the crackers and low fat/sugar biscuits is negligible. Terrestrial ecotoxicity: At 46.7 g 1,4-DCB eq./kg, chocolate-coated biscuits have an impact more than three times higher than the low fat/sugar option. Most of this is due to raw materials across the biscuit types, of which palm oil contributes 27%–81% to the total. Freshwater ecotoxicity: For this impact, chocolate cream biscuits are the worst option with 67 g 1,4-DCB eq./kg, while the values for other biscuits range from 41 to 64 g/kg. Raw materials and transport are the most relevant stages, accounting together for more than 94% of the total. Within the raw materials stage, the most significant activities are the production of palm oil and sugar. For the chocolate and cream-containing biscuits, palm kernel oil is also significant. Marine ecotoxicity: As indicated in Fig. 3j, chocolate cream biscuits are the worst option in this category with 44.2 g 1,4-DCB eq./kg. The low fat/sugar variety is again the best option, with over 2.5 times lower impact. Raw materials are the major hotspot contributing 65–85%, followed by transport and manufacturing. Significant activities are palm oil production, contributing 14%–43% to the total, and wheat cultivation with 5%–20%. For the chocolate biscuits, palm kernel oil and cocoa powder production are also relevant. Terrestrial acidification: Chocolate-coated biscuits have a four times higher acidification than the low fat/sugar biscuits; see Fig. 3k. Raw materials cause most of the impact, largely due to wheat and palm oil. Milk powder is a hotspot for chocolate and vanilla-containing biscuits. Lastly, palm kernel oil is important in biscuits that contain chocolate. Urban land occupation: ULO ranges from 143 cm2a/kg for vanilla cream biscuits to 174 cm2a/kg for chocolate-coated biscuits. As for the other impacts, raw materials are the main hotspot, followed by transport. The latter is mainly due to the transport infrastructure. Agricultural land occupation: Following the previous trend, chocolate biscuits have the highest ALO and low fat/sugar the lowest, tailed closely by crackers and semi-sweet biscuits, all with an occupation of around 1 m2a/kg. As expected, most of this is due to the production of raw materials, particularly wheat cultivation. Natural land transformation: This impact follows
The most significant life cycle stage for all types is the raw materials production, causing 41%–61% of the total impacts, with flour, sugar and palm oil being the key hotspots.These results can help guide manufacturers in mitigating the hotspots in the supply chain and consumers in selecting environmentally more sustainable biscuits.
In contrast to the per-kg impacts, the annual impacts are dominated by crackers, low fat/sugar biscuits and semi-sweet biscuits due to their high sales volumes. Crackers contribute 7%–17% to the total impacts, while low fat/sugar biscuits make up 10%–24% and semi-sweet biscuits 7%–16%. The chocolate-coated biscuits, despite having the highest impacts per kg, contribute only 3%–11%. The contribution of both types of cream biscuit together is <6%. The annual energy consumption and GWP of biscuit supply chains can be contextualised using values for these two categories for the whole UK food sector. The annual energy consumed by the latter is estimated at 126 TWh/yr. Biscuits require 33.5 PJ/yr, hence contributing 7.4% to the total. Therefore, energy consumption is important for the biscuits sector and should be targeted for improvements. The contribution to the GWP, however, is much less significant: 2.57 Mt CO2 eq. or 0.48% of the 550 Mt for the food sector as a whole. However, it should be noted that the total national values refer to previous years, and therefore the results for the sectoral contributions to energy demand and GWP should be interpreted cautiously. As no national data exist for the other impact categories, a similar comparison cannot be carried out for them. Several improvement opportunities can be considered based on the hotspot analyses in section 3.1 as well as the insights drawn from the sensitivity analyses. Firstly, the raw materials stage is the major hotspot with an average contribution of 67% across all impact categories and biscuit types. Within this stage, agricultural activities are a key contributor, especially wheat cultivation, as wheat-derived flour forms the basis of the biscuit product formulation. As a result, wheat cultivation contributes on average 28% to the life cycle impacts across all categories and biscuit types. Some of this contribution can be linked to fertiliser use: for instance, wheat accounts for 38%–68% of the total freshwater eutrophication. Therefore, improvements should include more effective use and more efficient production of fertilisers to reduce the impacts. However, it should be borne in mind that changes in the use of fertilisers might have drawbacks for the yield and subsequent effects on other impacts, such as increasing agricultural land use. Consequently, optimisation of yields and agricultural efficiency should be considered carefully. Additional measures could include more sustainable use of energy in wheat cultivation, including reduction of fuel usage, which would lead to a reduction in GWP. Such improvements would also be reflected in the majority of the impact categories as wheat is a hotspot for most of them. In addition to wheat cultivation, similar mitigation measures can be applied to sugar and palm oil. Mitigation could be reflected in the majority of the impact categories as sugar and palm oil are crucial hotspots for the toxicity and eutrophication-related categories as well as for agricultural land occupation, photochemical oxidants and terrestrial acidification. A replacement of palm oil with other fat sources might result in trade-offs, as revealed in the sensitivity analysis in which substitution with rapeseed oil led to improvements in five impact categories but deterioration in another five. Therefore, the cultivation and production of palm oil, rather than its replacement, should be targeted for improvements. Mitigation of sugar impacts can be achieved by reducing fertiliser use and optimising sugar manufacturing. After the raw
materials, manufacturing is the second most significant stage, with an average contribution of 21% to the impacts. It is particularly relevant for impacts related to energy consumption, such as primary energy demand and fossil fuel depletion. Therefore, improvements should target reduction of energy demand via measures such as real-time monitoring and heat integration, as part of an energy management strategy. An investigation carried out within this study considered two scenarios with energy reductions of 15% and 25%. In the latter case, notable reductions were found in primary energy demand, GWP, fossil fuel depletion and natural land transformation. Even for the less ambitious scenario of 15% energy reduction, noteworthy improvements can be achieved for chocolate biscuits if land use change is included. Land use change thus significantly influences the GWP of chocolate-coated biscuits, despite their containing only 1.35% cocoa. Some impacts are also influenced by the loss of raw materials in the manufacturing process and by allocation assumptions, but this effect is small to moderate. The uncertainty analysis suggests that the results are robust with respect to the influencing parameters. The annual impacts of biscuits based on their yearly consumption within the UK have also been estimated. The results suggest that the energy consumption of the biscuit subsector accounts for 7.4% of the total for the UK food sector, while its contribution to the sectoral GWP is relatively minor at 0.48%. The annual impacts are dominated by crackers, low fat/sugar biscuits and semi-sweet biscuits due to their high sales volumes. The chocolate-coated biscuits contribute 3%–11% to the total, while the combined contribution of both types of cream biscuit is <6%. The outputs of this work will be of interest to producers by assisting in the targeted reduction of environmental impacts, benchmarking against other products and monitoring future improvements. The results can also assist consumers in tailoring their food choices to minimise impacts on the environment.
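To make the sectoral comparison above easy to verify, the short sketch below reproduces the quoted arithmetic in Python; the only assumption beyond the figures given in the text is the standard unit conversion 1 TWh = 3.6 PJ.

```python
# Reproducing the sectoral contribution figures quoted above.
# Input values (126 TWh/yr, 33.5 PJ/yr, 2.57 Mt, 550 Mt) are taken from the text.

UK_FOOD_ENERGY_TWH = 126.0      # UK food-sector energy use (TWh/yr)
BISCUIT_ENERGY_PJ = 33.5        # biscuit supply-chain energy use (PJ/yr)
UK_FOOD_GWP_MT = 550.0          # UK food-sector GHG emissions (Mt CO2 eq./yr)
BISCUIT_GWP_MT = 2.57           # biscuit supply-chain GWP (Mt CO2 eq./yr)

uk_food_energy_pj = UK_FOOD_ENERGY_TWH * 3.6          # 1 TWh = 3.6 PJ -> 453.6 PJ/yr

energy_share = BISCUIT_ENERGY_PJ / uk_food_energy_pj  # ~0.074
gwp_share = BISCUIT_GWP_MT / UK_FOOD_GWP_MT           # ~0.0047

print(f"Energy share of UK food sector: {energy_share:.1%}")  # ~7.4%
print(f"GWP share of UK food sector:    {gwp_share:.2%}")     # ~0.47%, rounded to 0.48%/0.5% in the text
```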
Therefore, this paper sets out to evaluate the life cycle environmental sustainability of the following widely consumed types of biscuit, both at the product and sectoral levels: crackers, low fat/sugar, semi-sweet, chocolate-coated and sandwich biscuits with chocolate or vanilla cream. The results obtained through life cycle assessment demonstrate that, in addition to being healthier, low fat/sugar biscuits have the lowest impacts across most of the 18 categories considered. Replacing palm oil with rapeseed oil would improve five impacts but worsen another five, including a 34% increase in agricultural land occupation and marine eutrophication. Therefore, the cultivation and production of palm oil, rather than its replacement, should be targeted for improvements. Reducing energy consumption by 25% in manufacturing would reduce primary energy demand by 8%–12%, fossil fuel depletion by 9%–12% and global warming potential by 6%–9%. The analysis at a sectoral level in the UK, the leading consumer of biscuits in Europe, reveals that biscuits contribute 7.4% of the primary energy demand and 0.5% of the greenhouse gas emissions of the whole UK food sector.
State-of-the-art deep neural networks typically have tens of millions of parameters, which might not fit into the upper levels of the memory hierarchy, thus increasing inference time and energy consumption significantly and prohibiting their use on edge devices such as mobile phones. The compression of DNN models has therefore recently become an active area of research, with pruning emerging as one of the most successful strategies. A very natural approach is to prune connections of DNNs via ℓ1 regularization, but recent empirical investigations have suggested that this does not work well in the context of DNN compression. In this work, we revisit this simple strategy and analyze it rigorously, to show that: (i) any stationary point of an ℓ1-regularized layerwise-pruning objective has its number of non-zero elements bounded by the number of penalized prediction logits, regardless of the strength of the regularization; and (ii) successful pruning relies strongly on an accurate optimization solver, with a trade-off between compression speed and distortion of prediction accuracy controlled by the strength of the regularization. Our theoretical results thus suggest that ℓ1 pruning can be successful provided an accurate optimization solver is used. We corroborate this in our experiments, where we show that simple ℓ1 regularization with an Adamax-L1 solver gives pruning ratios competitive with the state-of-the-art.
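The following is a minimal PyTorch sketch of the strategy described above: layerwise pruning by fitting a sparse weight matrix that reproduces a dense layer's outputs under an ℓ1 penalty, optimised with Adamax. It is not the authors' implementation; the layer sizes, penalty strength, step count and zero-threshold are illustrative assumptions.

```python
# Minimal sketch of layerwise l1-regularized pruning with an Adamax solver.
# Hyperparameters and layer shapes below are illustrative, not from the paper.
import torch

def prune_layer_l1(X, Y, lam=1e-3, steps=3000, lr=2e-3, zero_tol=1e-4):
    """Fit a sparse W so that X @ W approximates the dense layer's outputs Y,
    minimising MSE + lam * ||W||_1, then hard-threshold near-zero weights."""
    W = torch.zeros(X.shape[1], Y.shape[1], requires_grad=True)
    opt = torch.optim.Adamax([W], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((X @ W - Y) ** 2) + lam * W.abs().sum()
        loss.backward()
        opt.step()
    with torch.no_grad():
        W[W.abs() < zero_tol] = 0.0       # prune connections driven to ~zero
    return W.detach()

# Toy usage: X holds input activations, Y the dense layer's outputs.
torch.manual_seed(0)
X = torch.randn(1024, 256)
W_dense = torch.randn(256, 128) * (torch.rand(256, 128) < 0.1).float()
Y = X @ W_dense
W_sparse = prune_layer_l1(X, Y)
print("non-zero weights kept:", int((W_sparse != 0).sum()), "of", W_sparse.numel())
```

The accuracy of the solver matters here: with too few optimisation steps the soft penalty leaves many small but non-zero weights behind, which is consistent with the solver-dependence and speed/accuracy trade-off described above.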
We revisit the simple idea of pruning connections of DNNs through ℓ1 regularization, achieving state-of-the-art results on multiple datasets with theoretical guarantees.
Italy, which relies entirely on imports to cover its domestic consumption. The analysis of the pulp and paper sub-sector clearly shows that any discussion of the energy and carbon intensity of a country in relation to the efficiency of the technologies used in the economy should start from an analysis of the mix of economic activities carried out in the different sectors and the selective externalization of the most energy-intensive economic activities by means of import/export of products. The mix of domestic production and the openness of the industrial sector are closely related and should be analyzed simultaneously. Moreover, in a globalized economy, neither of these two factors is directly affected by local consumption patterns! This is an important point to consider in the evaluation of policies regarding the reduction of energy and carbon intensity. A better understanding of how energy use is related to the functioning and the size of the economy and to the use of other production factors is paramount for the evaluation of energy policies. Consider the following questions: Is the EU 20% energy efficiency target by 2020 achievable? What has to be changed in the actual pattern of energy use in the industrial sector to achieve this goal? What would be the cost of achieving this target? We firmly believe that the available quantitative analyses of the energy intensity of the economy do not provide the information required to answer these questions, and hence that at present energy policies are made on the basis of wishful thinking. Ours is an attempt to characterize the energy performance of the industrial sector across hierarchical levels of organization by exploring the complex set of relations between energy consumption, requirement of human activity and value added generation. Our analysis characterizes the quantitative and qualitative energy metabolic characteristics of the various sub-sectors and sub-sub-sectors of the industrial sector, with the economic job productivity flagging the expected pattern of externalization. A key feature of our approach is the end-uses data array composed of extensive and intensive variables. The use of this data array facilitates the extension of the analysis to include additional resources and sink-side impacts. All the same, the analysis carried out at the level of the industrial sub-sector still leaves out important aspects, as the end-uses data array at this level may refer to end-uses that are still qualitatively very different. Indeed, it would be important to move further down to a still lower level of analysis, that of production processes carried out at the level of sub-sub-sectors, in order to describe the end-uses in terms of technical coefficients that refer to homogeneous typologies of processes. In this way, the level of analysis can reach a point at which a bridge is established between bottom-up and top-down information. The proposed method of accounting would then become a powerful complement, offering the biophysical perspective, to the aggregate production function in neoclassical economics. The information provided by production functions described in macroeconomic analysis could be scaled down, tracking the biophysical roots of the economic process across levels. This integration would avoid some of the problems associated with excessive reliance on neoclassical economic tools. Unfortunately, the inclusion of lower levels of analysis beyond the industrial sub-sector is currently still problematic as it requires a better
definition of the categories of accounting by the various statistical offices. Statistical offices should make a joint effort to offer energy balances, trade, and labor data using a uniform classification of all economic activities, so that assessments of the consumption of energy carriers, hours of human activity, and monetary indicators match with each other at all levels of analysis, thus avoiding the comparison of apples with oranges in the same category of accounting.
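As an illustration of the kind of end-uses data array discussed above, the sketch below builds a small table of extensive variables (energy carriers, hours of human activity, value added) for hypothetical sub-sectors and derives intensive variables from them (energy throughput and economic job productivity per hour of human activity). All numbers and sub-sector entries are invented for demonstration.

```python
# Illustrative end-use data array: extensive variables per sub-sector and the
# intensive variables derived from them. Values are made up for demonstration.
import pandas as pd

extensive = pd.DataFrame(
    {"electricity_PJ":   [45.0, 12.0, 30.0],
     "heat_PJ":          [80.0,  5.0, 22.0],
     "fuels_PJ":         [25.0,  8.0, 40.0],
     "labour_Mh":        [300.0, 150.0, 420.0],  # million hours of human activity
     "value_added_Geur": [18.0,  9.0, 26.0]},    # billion euro of value added
    index=["pulp_and_paper", "textiles", "chemicals"])

intensive = pd.DataFrame(index=extensive.index)
carriers = ["electricity_PJ", "heat_PJ", "fuels_PJ"]
# Energy metabolic rate: MJ of energy carriers used per hour of human activity.
intensive["EMR_MJ_per_h"] = (extensive[carriers].sum(axis=1) * 1e9
                             / (extensive["labour_Mh"] * 1e6))
# Economic job productivity: euro of value added per hour of human activity.
intensive["EJP_eur_per_h"] = (extensive["value_added_Geur"] * 1e9
                              / (extensive["labour_Mh"] * 1e6))

print(intensive.round(1))
```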
Within the context of the controversial use of the concept of energy intensity to assess national energy performance, this paper proposes an innovative accounting framework: the energy end-use matrix. This tool integrates quantitative assessments of energy use by the various constituent compartments of socio-economic systems. More specifically, it identifies, moving across levels of analysis, which compartments (or sub-compartments) are using which types of energy carriers for which types of end-use. This analysis is integrated with an assessment of labor requirements and the associated flows of value added. The end-use matrix thus integrates in a coherent way quantitative assessments across different dimensions and hierarchical scales and facilitates the development of integrated sets of indicators. Challenges to improving the usefulness of biophysical analysis of the efficiency of the industrial sector are identified and discussed. Increasing the discriminatory power of quantitative analysis through better data standardization by statistical offices is the major challenge.
for crude samples. Mg2+ ions have no effect on the assay in the final concentration range of 0.5–0.1 M, and salt concentrations of up to 3 M NaCl will not affect the assay negatively. 3. Confocal microscopy preparation and imaging. For visible light and high numerical aperture objectives, a pixel size of ∼0.1–0.2 μm is recommended. The pixel size of the CLSM system was optimized to 0.116 μm. In order to process fast dynamic scans, sequential raster scanning was used and the scan speed was optimized at 0.9 μs/pixel to reduce delays between acquisitions. The other parameters were optimized and set up accordingly using the LSM 510 software. These parameters were as follows: amplifier offset 0.1; amplifier gain 1; power of the 405 nm laser 0.5 mW; pinhole 0.58 Airy equivalent; optical slice 0.6 μm; frame size 512 μm × 512 μm; interval 0.1 μm. The emission maxima of HO, MA and autofluorescence are at 460 nm, 560 nm and 725 nm, respectively. Therefore, the dyes were discriminated from each other and from autofluorescence using coated filters. Images were collected in different tracks. MA was set at track 1 with a bandpass filter at 505–570 nm, and HO was set at track 2 with a bandpass filter at 420–480 nm. Both HO and MA were set with the mirror reflection. Autofluorescence of Synechocystis sp. PCC 6803 was excited by the Argon/2 laser at 488 nm in a different channel with a longpass filter at 650 nm. The emitted light then hits the secondary dichroic, which reflects light of wavelengths lower than 545 nm and transmits light longer than 650 nm. E. coli K-12 MG1655 maintained a similar number of cells in the mixed community during the first two phases. The total number of E. coli K-12 MG1655 cells drastically increased in phase 3. However, the analysis of the inferential comparative genomic copy number shows that cells were dividing more actively in phases 1 and 2 than in phase 3. Possibly due to the lack of nutrients, most of the cells in phase 3 were relatively slow dividers, which indicates a temporal shift in the population composition in the near future. During phase 3 the cell number of cyanobacteria was dramatically reduced, but the genomic copy number was higher than that of the standard, indicating faster growth. These results suggest an adaptive role of cyanobacteria, as previously shown by others. Cell death is observed when environmental stress exceeds cell tolerance, and in some situations it has been categorized as programmed cell death. Thus, an adaptive role has been suggested for cyanobacterial PCD, whereby the death of some individuals optimizes the probability of population persistence. In this assay, the autofluorescence of Synechocystis sp. strain PCC 6803 was concurrently recorded to verify the accuracy of SAMI in differentiating species. The autofluorescence image overlapped the outline of Synechocystis sp. strain PCC 6803 in the mixed culture, hence distinguishing them from E.
coli K-12 MG1655. The autofluorescence overlay images validated the accuracy of SAMI. The resulting 3D images from the SAMI analyses are shown. The autofluorescence property of Synechocystis sp. strain PCC 6803 was utilized to validate SAMI in this particular example; however, autofluorescence is not a universal validation method applicable to any mixture of microorganisms. According to the statistical analysis, the sampling size was sufficient to represent the population; therefore the population size was taken as equal to the sampling size and the population standard deviation as equal to the sample standard deviation. A standardized variable of 1.5 was used in setting up the relative standing of each population to obtain the optimal confidence interval as well as the confidence level between the species. From the statistical equations and the Z score table, the confidence level was 93.3% at a standardized variable of 1.5. The confidence interval of each species with each dye was calculated. An accuracy of 93% was obtained in a case study of a two-species mixed culture. The genomic copy number of a slow-growing pure culture can be analyzed by a real-time PCR method, and this agrees well with fluorescence-activated cell sorting analysis or radioactive labeling genome analysis.
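The quoted 93.3% confidence level at a standardized variable of 1.5 corresponds to the cumulative probability of a standard normal distribution at Z = 1.5, which the short snippet below reproduces (the two-sided value is shown for comparison); availability of scipy is assumed.

```python
# Confidence level associated with a standardized variable (Z score) of 1.5.
from scipy.stats import norm

z = 1.5
one_sided = norm.cdf(z)                  # P(Z < 1.5)   ~ 0.9332
two_sided = norm.cdf(z) - norm.cdf(-z)   # P(|Z| < 1.5) ~ 0.8664

print(f"one-sided: {one_sided:.1%}, two-sided: {two_sided:.1%}")
```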
Most molecular fingerprinting techniques, including denaturing gradient gel electrophoresis (DGGE) [1], comparative genomic hybridization (CGH) [2] and real-time polymerase chain reaction (RT-PCR) [3], destroy community structure and/or cellular integrity, and therefore lose the information on the spatial locus and the in situ genomic copy number of the cells. An alternative technique, fluorescence in situ hybridization (FISH), does not require sample disintegration but needs specific markers to be developed and does not provide information related to genomic copy number. An application was performed with a mixture of Synechocystis sp. PCC 6803 and E. coli K-12 MG1655. The intrinsic property of their genomes, reflected by the average fluorescence intensity (AFI), distinguished them in 3D, and their growth rates were inferred by comparing the total genomic fluorescence binding area (GFA) with that of pure culture standards. An accuracy of 93% in differentiating the species was achieved. SAMI does not require sample disintegration and preserves the community spatial structure. It measures the 3D locus of cells within the mixture and can differentiate them according to the properties of their genomes. It allows assessment of the growth rate of the cells within the mixture by comparing their genomic copy number with that of pure culture standards.
A balance between the adaptiveness of the model, i.e., how finely split the subgroups are, and the size of the subgroups, i.e., how robust and precise the prediction models are, must be found. In this case, optimization for this was not performed, but a good balance was achieved by the sheer number of samples and the design of the enzyme-raw material combinations that were chosen. This two-level approach may also be further developed to handle unspecified hydrolysate samples. For this, an unsupervised classification system is needed to group samples with similarities. An example of this has been presented by Perez-Guaita et al. It is important to note that full cross-validation was used for all regression models in the present study. As the sample size of the different local regression models in the hierarchical approach varied from 11 to 132 samples, the cross-validation approach was the only validation that allowed an appropriate comparison of the modeling results. Segmented cross-validation, leaving out sets of replicates corresponding to each material-enzyme-time combination, was tested, but these results were closely comparable to the ones presented here. In future work, when larger subgroups are present, it will be essential to also employ proper test-set validation of the local regression models. The results of this study clearly illustrate the potential of using FTIR for quantifying protein sizes in a range of different protein hydrolysates. The study also provides a feasible solution for building a generic calibration for protein sizes in hydrolysates. The approach of hierarchical modeling is also expected to be a potential solution in other FTIR applications where the aim is to quantify a generic component in different raw materials. As the use of dry-film FTIR for automated high-throughput analysis, including automated sample handling and robotics, is gaining increasing attention, a commercial system for protein size estimation in the enzymatic protein hydrolysis industry can be expected once the proper technical developments are made. A tool for protein size estimation would potentially also find applications in a range of different fields, including reaction kinetics, in vitro protein digestion, protein production by fermentation, and characterization of protein and peptide compositions of dairy products. In the present study, we have shown that the Mw of protein hydrolysates can be predicted with high accuracy using FTIR spectroscopy. The best result was obtained using a hierarchical PLSR approach in which FTIR spectra of the protein hydrolysates were classified according to raw material type and enzyme prior to local modeling. This shows that prediction of protein sizes in protein hydrolysates can be achieved for a range of different raw materials using a single mathematical model. The results therefore demonstrate the potential of using FTIR for monitoring protein sizes during enzymatic protein hydrolysis in industrial settings, while also paving the way for measurements of protein sizes in other applications. Kenneth Aase Kristoffersen: Writing - original draft, Data curation, Conceptualization, Visualization, Investigation, Formal analysis. Kristian Hovde Liland: Writing - original draft, Software, Methodology, Conceptualization, Visualization, Investigation. Ulrike Böcker: Data curation, Conceptualization, Writing - review & editing. Sileshi Gizachew Wubshet: Conceptualization, Writing - review & editing. Diana Lindberg: Writing - review & editing. Svein Jarle Horn: Writing - review & editing. Nils Kristian Afseth: Writing -
original draft, Writing - review & editing, Conceptualization, Investigation, Methodology.
In the presented study, Fourier-transform infrared (FTIR) spectroscopy is used to predict the average molecular weight of protein hydrolysates produced from protein-rich by-products from the food industry using commercial enzymes. Enzymatic protein hydrolysis is a well-established method for production of protein-rich formulations, recognized for its potential to valorize food-processing by-products. The monitoring of such processes is still a significant challenge, as the existing classical analytical methods are not easily applicable to industrial setups. In this study, we report a generic FTIR-based approach for monitoring the average molecular weights of proteins during enzymatic hydrolysis of by-products from the food industry. A total of 885 hydrolysate samples from enzymatic protein hydrolysis reactions of poultry and fish by-products using different enzymes were studied. FTIR spectra acquired from dry-films of the hydrolysates were used to build partial least squares regression (PLSR) models. The most accurate predictions were obtained using a hierarchical PLSR approach involving supervised classification of the FTIR spectra according to raw material quality and the enzyme used in the hydrolysis process, and subsequent local regression models tuned to specific enzyme-raw material combinations. The results clearly underline the potential of using FTIR for monitoring protein sizes during enzymatic protein hydrolysis in industrial settings, while also paving the way for measurements of protein sizes in other applications.
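A minimal sketch of the hierarchical PLSR approach described above is given below: spectra are grouped by an assumed raw material/enzyme label, a local PLS regression model is cross-validated and fitted per group, and new samples are routed to the appropriate local model. The data, group labels and number of PLS components are synthetic stand-ins, not the authors' pipeline.

```python
# Hierarchical PLSR sketch: one local PLS model per raw material/enzyme group.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict, LeaveOneOut

rng = np.random.default_rng(0)
n, p = 120, 300                          # samples x wavenumbers (synthetic)
X = rng.normal(size=(n, p))              # dry-film FTIR spectra (stand-in)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=n)   # average Mw (stand-in)
groups = rng.choice(["poultry_enzA", "poultry_enzB", "fish_enzA"], size=n)

local_models = {}
for g in np.unique(groups):
    idx = groups == g
    pls = PLSRegression(n_components=5)
    # Full (leave-one-out) cross-validation, as used for the local models above.
    y_cv = cross_val_predict(pls, X[idx], y[idx], cv=LeaveOneOut()).ravel()
    rmsecv = np.sqrt(np.mean((y_cv - y[idx]) ** 2))
    local_models[g] = pls.fit(X[idx], y[idx])
    print(f"{g}: n={idx.sum()}, RMSECV={rmsecv:.2f}")

def predict_mw(spectrum, group):
    """Route a new spectrum to its local model; the supervised classification
    step is assumed to have already produced `group`."""
    return local_models[group].predict(spectrum.reshape(1, -1)).item()
```

In the study itself the grouping step was a supervised classification of the FTIR spectra; here the group labels are simply assumed to be known.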
The study site at Xai-Xai, Mozambique, is a 1.7 km long lagoon defined by a reef running parallel to the beach. The reefs defining such lagoons are typically several hundred metres wide, but outside Xai-Xai the reef is only about 20 m wide and very regularly shaped, defining a straight line, see Fig. 1. The main opening of the reef is found at its east-northeastern end. The main currents inside the reef have been observed to be uni-directional towards the east-northeast, independent of wind direction. Here we present measurements of water level and the state of the waves inside and outside the reef, and of currents inside the reef using both Lagrangian and Eulerian current meters. The experiments were conducted during spring tide, and the tidal range was 2.2 m with a dominating semi-diurnal period. Significant wave heights outside the reef were measured in the range 1.1–4.4 m, with dominant periods typical of swell. The current minimum was observed around low tide, but the current maximum occurred during flooding and ebbing, see Fig. 6. Such sub-tidal current variability is typical for tropical lagoons, see Kraines et al. and Taebi et al. Using the observed state of the waves and the water level outside the reef as forcing functions, we have formulated a model for the alongshore flow and water level inside the lagoon. The model is quite simple and can easily be run on ordinary desktop computers, using a modest amount of geophysical data as input. Still, our model was able to reconstruct the observed time variability of the currents and water level quite well, including the characteristic dip in the current speed at high tide. This local minimum in speed at high tide is usually explained as a reduction in the radiation stress gradient, as less wave breaking occurs over the reef when the water level is high, e.g., Kraines et al. At Xai-Xai, where the reef is only 20 m wide, we explain this current minimum by the increased area available for the outgoing water flux at high tide, which reduces the speed necessary to maintain the volume flux. We have found that flushing of the lagoon is very efficient, with a renewal time of less than one hour. However, if waves are ignored and only the tidal component is considered, the flushing time is 16 h; hence the cross-reef volume flux is dominated by the wave-induced transport. A two-year simulation has been run using forcing from a tidal model and wave climate from the ERA-Interim reanalysis. The mean sea level differences become unrealistically high for very strong wave forcing, possibly related to non-linear wave effects. The timing of the extreme current speeds in the two-year period has been compared with the frequent drowning casualties occurring at this beach, but no relation was found.
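The flushing-time argument above can be illustrated with a back-of-the-envelope calculation: residence time is the lagoon volume divided by the volume flux entering over the reef. The lagoon dimensions follow the roughly 1.7 km by 0.2 km figures reported for Xai-Xai, while the mean depth and the cross-reef transports are purely illustrative assumptions chosen to reproduce the order of magnitude of the quoted renewal times.

```python
# Back-of-the-envelope residence time: lagoon volume / cross-reef volume flux.
LENGTH_M = 1700.0
WIDTH_M = 200.0
MEAN_DEPTH_M = 1.5                        # assumed mean depth at mid tide
VOLUME_M3 = LENGTH_M * WIDTH_M * MEAN_DEPTH_M

# Assumed wave-driven transport over the reef crest per metre of reef (m^2/s),
# applied along the full reef length; tide-only forcing assumed much weaker.
wave_flux_m3_s = 0.1 * LENGTH_M           # 170 m^3/s (illustrative)
tide_only_flux_m3_s = wave_flux_m3_s / 16.0

for label, q in [("waves + tide", wave_flux_m3_s), ("tide only", tide_only_flux_m3_s)]:
    print(f"{label}: residence time ~ {VOLUME_M3 / q / 3600:.1f} h")
```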
Alongshore flow strongly driven by tides and waves is studied in the context of a one-dimensional numerical model. Observations from field surveys performed in a semi-enclosed lagoon (1.7 km × 0.2 km) outside Xai-Xai, Mozambique, are used to validate the model results. The model is able to capture most of the observed temporal variability of the current, but sea surface height tends to be overestimated at high tide, especially during high wave events. Inside the lagoon we observed a mainly uni-directional alongshore current, with speeds up to 1 m s-1. The current varies primarily with the tide, being close to zero near low tide, generally increasing during flood and decreasing during ebb. The observations revealed a local minimum in the alongshore flow at high tide, which the model was successful in reproducing. Residence times in the lagoon were calculated to be less than one hour, with wave forcing dominating the flushing. A high number of drowning casualties have occurred at this beach, but no connection was found between them and strong current events in a simulation covering the period 2011–2012.
greater evidence compared to at least half of the other tracts examined, reducing the potential for these significant artefactual terminations. Within the current study, the meta-analyses used in the functional analyses were obtained using the automated meta-analysis software package Neurosynth, which is freely available online. A full description of the meta-analysis method used by Neurosynth and its limitations can be found in Yarkoni et al., but we outline the most salient limitations here. As the procedure used to obtain a Neurosynth meta-analysis is automated, it relies on words or phrases that appear in abstracts as proxies for cognitive functions. While this approach is convenient, it limits the user's ability to define more fine-grained and detailed subcomponents or sub-domains for a given cognitive function. For example, the term “semantic” may be used in the context of lexical semantics, visual semantics etc., and it is important to take this into consideration when interpreting the results of any ‘semantic’ meta-analysis. However, comparisons between Neurosynth meta-analyses of broad terms and more formal meta-analyses of the same cognitive function have been found to produce similar results. An additional consideration when using Neurosynth is the fact that the analyses are limited to a database that is accessible to the program. While this limits the scope of the analyses, it does not allow for any user ‘cherry-picking’, as is possible in a less constrained meta-analysis. Finally, it is important to note that the Neurosynth approach utilises an FDR rather than an FWE correction for practical computational reasons; hence one must be aware that the maps may contain some locations that are false positives. The tract terminations in the temporal lobe have hitherto not been comprehensively explored. This study is the first attempt to understand the termination structure of the white matter tracts of the temporal lobe and to explore the functional information that they may be responsible for carrying, using non-invasive in vivo MR imaging.
Despite an upsurge of interest in connectional neuroanatomy, the terminations of the main fibre tracts in the human brain are yet to be mapped. This information is essential given that neurological, neuroanatomical and computational accounts expect neural functions to be strongly shaped by the pattern of white-matter connections. This paper uses a probabilistic tractography approach to identify the main cortical areas that contribute to the major temporal lobe tracts. In order to associate the tract terminations to known functional domains of the temporal lobe, eight automated meta-analyses were performed using the Neurosynth database. Overlaps between the functional regions highlighted by the meta-analyses and the termination maps were identified in order to investigate the functional importance of the tracts of the temporal lobe. The termination maps are made available in the Supplementary Materials of this article for use by researchers in the field.
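One simple way to quantify the overlaps described above is a voxelwise Dice coefficient between a binarised termination map and a thresholded meta-analysis map. The sketch below shows this using nibabel; the file names are hypothetical placeholders and the Dice summary is our choice of overlap measure, not necessarily the exact metric used in the study.

```python
# Voxelwise overlap between a tract-termination map and a meta-analysis map.
import numpy as np
import nibabel as nib

def dice_overlap(termination_nii, metaanalysis_nii, meta_threshold=0.0):
    """Dice coefficient between the binarised termination map and the voxels of
    the meta-analysis map exceeding `meta_threshold` (both in the same space)."""
    term = nib.load(termination_nii).get_fdata() > 0
    meta = nib.load(metaanalysis_nii).get_fdata() > meta_threshold
    inter = np.logical_and(term, meta).sum()
    return 2.0 * inter / (term.sum() + meta.sum())

# Example usage with hypothetical file names:
# print(dice_overlap("ILF_terminations.nii.gz", "neurosynth_semantic_FDR.nii.gz"))
```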
using the software iCODEhop. The entire specimen or an appendage was used for isolation of DNA. DNA was isolated using the GenElute Mammalian Genomic DNA Miniprep Kit following the protocol for DNA isolation from tissues, »Mammalian Tissue Preparation«. One specimen was fixed in formalin; therefore, for successful amplification of its DNA we followed the protocol for DNA isolation from formalin-fixed samples. The PCR amplifications were conducted in a 15-μL reaction mixture as in Ref. PCR cycling protocols followed the conditions in subsection 1.2. PCR products were purified using Exonuclease I and shrimp alkaline phosphatase as in Ref. Each fragment was sequenced in both directions using the PCR amplification primers by Macrogen Europe. Chromatograms were assembled and sequences were edited manually using Geneious R8.1.6 and 11.1.2. Alignments of nucleotide sequences for each marker were performed using the plug-in software ClustalW implemented in Geneious R8.1.6. The alignments were translated into amino acids and checked for stop codons and inconsistencies. All the new sequences were submitted to the GenBank repository. The best substitution model for each marker was selected based on the Akaike information criterion using SMS – Smart Model Selection on the web server http://www.atgc-montpellier.fr/phyml-sms/. Unilocus phylogenetic trees were estimated by Bayesian analysis using MrBayes 3.2.2 on the CIPRES Science Gateway v 3.3. Two simultaneous runs with four chains each were run for three to four million generations until both runs reached convergence. Runs were sampled every 1000th generation. The first 25% of the sampled trees were discarded as burn-in and the consensus tree of each marker was constructed by the 50% majority rule. The trees were visualised in the FigTree v.1.4.3 software.
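The model-selection step mentioned above (choosing a substitution model by the Akaike information criterion, as SMS does) boils down to AIC = 2k - 2 ln L evaluated per candidate model. The sketch below applies this formula with invented log-likelihood values and approximate parameter counts, purely for illustration.

```python
# Substitution-model selection by Akaike information criterion: AIC = 2k - 2 lnL.
def aic(log_likelihood, n_params):
    return 2 * n_params - 2 * log_likelihood

candidates = {          # model: (maximised lnL, free substitution parameters); invented values
    "JC69":  (-5210.4, 0),
    "HKY85": (-5105.7, 4),
    "GTR+G": (-5042.1, 9),
}

scores = {model: aic(lnl, k) for model, (lnl, k) in candidates.items()}
best = min(scores, key=scores.get)   # lowest AIC wins
print({m: round(s, 1) for m, s in scores.items()}, "-> best model:", best)
```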
The data presented here include the selection of 5 successfully amplified protein-coding markers for inferring phylogenetic relationships within the amphipod crustacean family Niphargidae. These markers have been efficiently amplified from niphargid samples for the first time and provide the framework for a robust phylogenetic assessment of the family Niphargidae. They are useful for phylogenetic purposes among other amphipod genera as well. In detail, the data consist of two parts: 1. Information regarding the markers, specific oligonucleotide primer pairs and conditions for the PCR reaction that enable successful amplification of specific nucleotide fragments. Two pairs of novel oligonucleotide primers were constructed, which enable partial sequence amplification of two housekeeping genes: arginine kinase (ArgKin) and glyceraldehyde phosphate dehydrogenase (GAPDH), respectively. Additionally, 3 existing combinations of oligonucleotide primer pairs for the protein-coding loci glutamyl-prolyl tRNA synthetase (EPRS), opsin (OP) and phosphoenolpyruvate carboxykinase (PEPCK) were proven suitable to amplify specific nucleotide fragments from the selected amphipod specimens. 2. Information on novel nucleotide sequences from amphipod taxa of the family Niphargidae and related outgroup taxa. Unilocus phylogenetic trees were constructed using Bayesian analysis and show the relationships among the selected taxa. Altogether, 299 new nucleotide sequences from 92 specimens of the family Niphargidae and related outgroup amphipod taxa are deposited in the GenBank (NCBI) repository and available for further use in phylogenetic analyses.
sub-visible particles as endogenous components. As part of a test package for investigational products, ESZ offers considerable advantages as a method for particle count and size distribution assessment. Considering the importance of BCG IT for clinical use, such as intravesical immunotherapy in superficial bladder cancer patients, and the fact that its production process was developed approximately 20 years ago, the development of new techniques that could support potential modernization of the manufacturing process of BCG IT would have great clinical importance. ESZ method development, optimization, and qualification are described, with specific focus on characterization of Immucyst®, a BCG IT product. Method development study design. Aspects to consider in the development study: the purpose and scope of the analytical procedure, product type, experimental design, and data analysis including the use of statistical methods. Aspects of ESZ method development and optimization are discussed, including rationale, reporting values, desirable performance, and characteristics. ESZ method development is presented to characterize the visible and sub-visible populations of particulates present in the BCG IT reconstituted lyophilized product. Successful completion of the development study provides scientific evidence that the method is suitable for characterization of the BCG IT reconstituted lyophilized product. Further qualification can also provide guidance and useful information for the eventual method validation, where required. The authors are employees of Sanofi Pasteur. The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, stock ownership or options, or royalties. No writing assistance was utilized in the production of this manuscript.
Bacille Calmette-Guérin, BCG, is a live attenuated bovine tubercle bacillus used for the treatment of non-muscle-invasive bladder cancer. In this study, an electrical sensing zone (ESZ) method was developed to measure the particle count and size of the BCG immunotherapeutic (BCG IT), or ImmuCyst®, product using a Coulter Counter Multisizer 4® instrument. The focus of this study was to establish a baseline for the reconstituted lyophilized BCG IT product using visible and sub-visible particle concentration and size distribution as reportable values. The ESZ method was used to assess manufacturing process consistency using 20 production-scale lots of BCG IT product. The results demonstrated that ESZ can be used to accumulate product and process knowledge of BCG IT.
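As an illustration of how the reportable values mentioned above might be assembled, the sketch below computes a per-lot particle concentration and size-distribution percentiles from raw ESZ diameters, then summarises a baseline across 20 lots. The diameters, analysed volume and choice of percentiles are synthetic assumptions, not ImmuCyst® data.

```python
# Per-lot reportable values (concentration, D10/D50/D90) and a cross-lot baseline.
import numpy as np

rng = np.random.default_rng(1)
lots = {f"lot_{i:02d}": rng.lognormal(mean=0.5, sigma=0.4, size=5000)  # diameters, um
        for i in range(1, 21)}
analysed_volume_ml = 0.05            # assumed analysed sample volume per lot

per_lot = {}
for lot, diam in lots.items():
    per_lot[lot] = {
        "conc_per_ml": diam.size / analysed_volume_ml,
        "D10_um": np.percentile(diam, 10),
        "D50_um": np.percentile(diam, 50),
        "D90_um": np.percentile(diam, 90),
    }

d50 = np.array([v["D50_um"] for v in per_lot.values()])
print(f"baseline D50: {d50.mean():.2f} +/- {d50.std(ddof=1):.2f} um over {len(per_lot)} lots")
```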
harvests of the same manufactured batches, due to its enrichment through the purification process. HMWI percentages in drug substances measured by CE western were plotted against those for the cell harvest samples of the same batches measured by the same method in Fig. 4A. The data are fitted by linear regression, as shown below:

Y1 = 0.770 × X1 + 18.5

A positive relationship can also be established between the HMWI percentages in drug substances measured by reducing SDS-PAGE and the values obtained by CE western for the same materials:

Y2 = 0.806 × X2 − 8.83

As Y1 = X2, the two equations can be combined as:

Y2 = 0.806 × (0.770 × X1 + 18.5) − 8.83, or Y2 = 0.62 × X1 + 6.1

Therefore, the HMWI percentage in the purified drug substance of a production batch, as measured by SDS-PAGE, can be predicted from the HMWI percentage in the cell harvest of the same batch measured by CE western. The proposed model is in good agreement with the historical SDS-PAGE data for eight batches, with an RSD ≤ 6.6% for %HMWI between measured and expected values. The graphical representation of the calculated CE western data and those measured by SDS-PAGE is shown in Fig. 4C. This indicates that the approach could be successfully applied to predict the product isoform distribution of future manufactured drug substance batches. Recently, the use of mass spectrometry based methods in support of advanced process control efforts as part of a control strategy for biopharmaceutical manufacturing has also been reported. The titer and isoform distribution of a biopharmaceutical protein have been successfully determined by automated CE western blot in cell culture harvest samples. The quantitative measurement of total biopharmaceutical titer in such early-stage intermediates is required to provide the level of cellular expression yield prior to the purification process. The relative ratio of HMWI, as an indicator of manufacturing consistency, is another important product quality attribute measured by CE western blot in this work. This approach allows for the assessment of the isoform distribution of the recombinant protein directly after harvesting the cell culture material. The critical information is therefore obtained much earlier than with the traditional SDS-PAGE analysis used at batch release. The early readout could be helpful for timely decisions on adaptive downstream measures to minimize potential batch failure due to cell culture variability. The authors declare no financial or commercial conflict of interest.
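The prediction chain reconstructed above can be written as a two-step function: map the harvest %HMWI measured by CE western (X1) to the drug-substance value on the CE western scale (Y1), then onto the SDS-PAGE scale (Y2). The coefficients are those quoted in the text; the example inputs are arbitrary.

```python
# Predict drug-substance %HMWI (SDS-PAGE scale) from harvest %HMWI (CE western).
def predict_ds_hmwi_sds_page(harvest_hmwi_ce_western):
    y1 = 0.770 * harvest_hmwi_ce_western + 18.5   # drug substance, CE western scale
    y2 = 0.806 * y1 - 8.83                        # drug substance, SDS-PAGE scale
    return y2                                     # equivalently 0.62 * X1 + 6.1

for x1 in (10.0, 20.0, 30.0):                     # arbitrary example harvest values
    print(f"harvest %HMWI (CE western) = {x1:.0f} -> "
          f"predicted DS %HMWI (SDS-PAGE) = {predict_ds_hmwi_sds_page(x1):.1f}")
```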
An effective control strategy is critical to ensure the safety, purity and potency of biopharmaceuticals. Appropriate analytical tools are needed to realize such goals by providing information on product quality at an early stage, helping understanding and control of the manufacturing process. In this work, a fully automated, multi-capillary instrument is utilized for size-based separation and western blot analysis to provide an early readout on product quality in order to enable a more consistent manufacturing process. This approach aims at measuring two important qualities of a biopharmaceutical protein, titer and isoform distribution, in cell culture harvest samples. The acquired data for isoform distribution can then be used to predict the corresponding values of the final drug substance, and potentially provide information for remedy through timely adjustment of the downstream purification process, should the expected values fall outside the accepted range.
As described above, mice lacking functional PI3Kδ can show increased IgE-mediated eosinophilia and inflammation, but the extent to which this might translate to humans is unclear. The recent discovery of activating mutations in the p110δ subunit in a group of primary immune deficiency patients both illustrates the importance of this isoform in the immune system and indicates some major gaps in our understanding. These patients suffer recurring infections of the respiratory tract. They possess dominant mutations in p110δ that are analogous to oncogenic mutations in the kinase domain of p110α. These mutations result in enhanced binding of p110δ to the membrane and enhanced activity. In agreement, PIP3 levels in primary T cells from these patients were found to be significantly elevated. These patients exhibit distorted B and T cell differentiation and modestly reduced and skewed antibody responses, but a coherent explanation of their phenotype is still some distance away. Whatever the mechanism, however, these patients are clearly candidates for treatment with PI3Kδ-selective inhibitors, such as those currently in development for the treatment of B-cell lymphomas. A huge amount of work still remains to be done to reconcile information gained from studying PI3K signalling in simple in vitro models of immune cell function with complex in vivo models of inflammation. This is particularly so in humans, where the capacity for experimentation is necessarily more limited, though recent developments in high-throughput sequencing promise to yield further insights from linking polymorphisms and mutations in PI3K pathway components to inflammatory disease. However, we already know that for most of the in vivo processes involved, the effects of selective Class I PI3K isoform inhibition are usually partial. Specifically, we know that inhibition of PI3Kγ can blunt recruitment and activation of innate immune cells, but not completely; inhibition of PI3Kδ prevents a normal antibody response, but some antibodies are still made; and inhibition of PI3Kδ and β can inhibit antibody-dependent activation of neutrophils and macrophages, but bacterial uptake and killing are relatively unscathed. Thus, the robust and redundant processes that underlie the inflammatory response may allow an opportunity to inhibit Class I PI3K-dependent processes to a level where significant alleviation of the pathology is possible but sufficient capacity in the immune system still remains. Moreover, the tissue-selective expression of PI3Kγ and δ in leukocytes offers the opportunity to inhibit Class I PI3Ks in these cells without necessarily incurring widespread toxicity and organ damage. Thus far, initial studies with mouse models of chronic inflammation appear to support this. Further, the development of isoform-selective PI3K inhibitors by academic and commercial laboratories has proceeded at pace, driven largely by the promise of inhibiting cancer cell growth. Several potential drugs are now in clinical trials, and the results from these studies, particularly the development of the PI3Kδ inhibitor idelalisib, suggest that ATP-site inhibitors do indeed have the potential to become effective drugs, with little 'off-target' toxicity. The key question then becomes: which singly- or multiply-selective PI3K inhibitors are likely to prove most useful to treat which chronic inflammatory conditions? Arguments can be made in favour of δ, γ, γ/δ or β/δ, but in the end there is sufficient uncertainty in extrapolation from mouse models to human disease that a
significant effort to trial various combinations in the best pre-clinical and clinical settings available seems unavoidable.
PI3Ks regulate several key events in the inflammatory response to damage and infection. There are four Class I PI3K isoforms (PI3Kα, β, γ, δ), three Class II PI3K isoforms (PI3KC2α, C2β, C2γ) and a single Class III PI3K. The four Class I isoforms synthesise the phospholipid 'PIP3'. PIP3 is a 'second messenger' used by many different cell surface receptors to control cell movement, growth, survival and differentiation. These four isoforms have overlapping functions, but each is adapted to receive efficient stimulation by particular receptor sub-types. PI3Kγ is highly expressed in leukocytes and plays a particularly important role in chemokine-mediated recruitment and activation of innate immune cells at sites of inflammation. PI3Kδ is also highly expressed in leukocytes and plays a key role in antigen receptor- and cytokine-mediated B and T cell development, differentiation and function. Class III PI3K synthesises the phospholipid PI3P, which regulates endosome-lysosome trafficking and the induction of autophagy, pathways involved in pathogen killing, antigen processing and immune cell survival. Much less is known about the function of Class II PI3Ks, but emerging evidence indicates they can synthesise PI3P and PI(3,4)P2 and are involved in the regulation of endocytosis. The creation of genetically modified mice with altered PI3K signalling, together with the development of isoform-selective, small-molecule PI3K inhibitors, has allowed the evaluation of the individual roles of Class I PI3K isoforms in several mouse models of chronic inflammation. Selective inhibition of PI3Kδ, γ or β has each been shown to reduce the severity of inflammation in one or more models of autoimmune disease, respiratory disease or allergic inflammation, with dual γ/δ or β/δ inhibition generally proving more effective. The inhibition of Class I PI3Ks may therefore offer a therapeutic opportunity to treat non-resolving inflammatory pathologies in humans. This article is part of a Special Issue entitled Phosphoinositides.