| query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "A view-based composition mechanism suited to specifying cooperating environments in terms of groups of previously specified tools is presented. It is argued that this form of composition is more suited to database-centered environments than conventional channel-based composition techniques for the reuse of tools in tool configurations. The composition model, related ideas, limitations and possible extensions to the idea are discussed. An example from the domain of software engineering, where such a mechanism can prove useful, is also presented.<<ETX>>",
"corpus_id": 1349384,
"title": "Integrated tool support in object-based environments"
} | {
"abstract": "A persistent problem in software engineering is how to put complex software systems together out of smaller subsystems, the problem of software composition. The emergence of software architectures and architectural styles has introduced a higher level of abstraction at which we can create and compose software systems. We examine the problem of providing formal semantics to the composition of different architectural styles within software systems, i.e. the problem of composing heterogeneous architectures. We describe a model of pure styles, and a model of their composition. Our model of pure styles is highlighted by a uniform representation for describing many different styles. An architectural style space of major conceptual features is introduced which allows new styles to be rapidly incorporated into the model, including commercial-off-the-shelf packages which embody a specific style(s). We show a disciplined approach to the process of architectural composition , and show how architecture mismatches can be generated during composition. Finally, we describe a prototype tool which is built on top of the models. In the following sections, we present a high-level introduction to the field of software architectures and style composition, and a concise description of the problem which we examined for the dissertation. We also summarize the steps in our approach towards solving the problem. A persistent problem in computer science is how to put software systems together out of smaller subsystems, the problem of software composition. There are many levels of granularity at which this problem can be tackled. For example, some of the earliest software engineers dealt with systems at the machine language or assembly language level of granularity. Succeeding engineers addressed the composition problem at a coarser granularity using higher level programming languages. 
The emergence of software architectures and architectural styles has introduced a still coarser level of granularity (and higher level of abstraction) at which we can create and compose software systems. One of the earliest discussions of software composition at the architec",
"corpus_id": 59692519,
"title": "Composing heterogeneous software architectures"
} | {
"abstract": "We present the PathCrawler prototype tool for the automatic generation of test-cases satisfying the rigorous all-paths criterion, with a user-defined limit on the number of loop iterations in the covered paths. The prototype treats C code and we illustrate the test-case generation process on a representative example of a C function containing data-structures of variable dimensions, loops with variable numbers of iterations and many infeasible paths. PathCrawler is based on a novel combination of code instrumentation and constraint solving which makes it both efficient and open to extension. It suffers neither from the approximations and complexity of static analysis, nor from the number of executions demanded by the use of heuristic algorithms in function minimisation and the possibility that they fail to find a solution. We believe that it demonstrates the feasibility of rigorous and systematic testing of sequential programs coded in imperative languages.",
"corpus_id": 7401680,
"score": 1,
"title": "PathCrawler: Automatic Generation of Path Tests by Combining Static and Dynamic Analysis"
} |
{
"abstract": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.",
"corpus_id": 16475533,
"title": "Evolving deep unsupervised convolutional networks for vision-based reinforcement learning"
} | {
"abstract": "In this extended abstract we introduce the Arcade Learning Environment (ALE): both a challenge problem and a platform and methodology for evaluating the development of general, domain-independent AI technology. ALE provides an interface to hundreds of Atari 2600 game environments, each one different, interesting, and designed to be a challenge for human players. ALE presents significant research challenges for reinforcement learning, model learning, model-based planning, imitation learning, transfer learning, and intrinsic motivation. Most importantly, it provides a rigorous testbed for evaluating and comparing approaches to these problems. We illustrate the promise of ALE by presenting a benchmark set of domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning. In doing so, we also propose an evaluation methodology made possible by ALE, reporting empirical results on over 55 different games. We conclude with a brief update on the latest ALE developments. All of the software, including the benchmark agents, is publicly available.",
"corpus_id": 1552061,
"title": "The Arcade Learning Environment: An Evaluation Platform for General Agents"
} | {
"abstract": "Almost all organizations and sectors are currently faced with the problem of insider threats to vital computer assets. Internal incidents can cause more than just financial losses, the costs can also include loss of clients and damage to an organization's reputation. Substantial academic research investigating internal threats has been conducted. This paper examines a number of theoretical models drawn from academic literature to identify a set of factors that are thought to be behavior factors associated with insider threats. These factors are then critiqued using empirical evidence from reported incidents, resulting in insights into areas where the theoretical perspectives of academic literature are both supported and unsupported by actual case evidence. The paper concludes with recommendations for future research directions for academic researchers.",
"corpus_id": 2929011,
"score": -1,
"title": "Insider Threat Behavior Factors: A Comparison of Theory with Reported Incidents"
} |
{
"abstract": "In this article we present recent developments in (3+2) cycloadditions with special emphasis on 1,3-dipolar reactions involving azomethine ylides and alkenes possessing electron withdrawing groups. It is found that there is not a general mechanism for these reactions since both concerted aromatic [(π)4(s)+(π)2(s)] mechanisms and stepwise processes involving zwitterionic intermediates can be found. These computational models can be extended to analyse the role of chiral catalysts in these reactions in order to understand the nature of the catalytic cycle and the origins of chiral induction.",
"corpus_id": 8008118,
"title": "Stereocontrolled (3+2) cycloadditions between azomethine ylides and dipolarophiles: a fruitful interplay between theory and experiment."
} | {
"abstract": "A wide range of new dipoles and catalysts have been used in 1,3-dipolar cycloadditions of N-metalated azomethine ylides onto C60 yielding a full stereodivergent synthesis of pyrrolidino[60]fullerenes with complete diastereoselectivities and very high enantioselectivities. The use of less-explored chiral α-iminoamides as starting 1,3-dipoles leads to an interesting double asymmetric induction resulting in a matching/mismatching effect depending upon the absolute configuration of the stereocenter in the starting α-iminoamide. An enantioselective process was also found in the retrocycloaddition reaction as revealed by mass spectrometry analysis on quasi-enantiomeric pyrrolidino[60]fullerenes. Theoretical DFT calculations are in very good agreement with the experimental data. On the basis of this agreement, a plausible reaction mechanism is proposed.",
"corpus_id": 646319,
"title": "Stereodivergent Synthesis of Chiral Fullerenes by [3 + 2] Cycloadditions to C60"
} | {
"abstract": "This is a book by the American psychoanalyst who specializes in the treatment of children with emotional disturbances and who is known for his work with autistic children. The author also wrote \"A Good Enough Parent\", \"The Informed Heart\",\"The Uses of Enchantment\" and \"Surviving the Holocaust\".",
"corpus_id": 141664394,
"score": 0,
"title": "Freud And Man's Soul"
} |
{
"abstract": "REVIEW QUESTION / OBJECTIVE What are the effects of cross-gender hormone treatment on the quality of life of transgender persons? INCLUSION CRITERIA Types of participants This review will consider studies whose intervention group(s) include transgender women, (male-to-female transgender individuals) and transmen (female-to-male transgender individuals), as well as people who do not identify with the gender binary on cross-gender hormones. There will be no age limitations on the study participants. Types of intervention(s) and comparators The review will include studies that evaluate cross-gender hormone treatment (or administration). Any study examining the effects of cross gender hormones will be considered, regardless of the length of time of the hormone treatment, variety of cross-gender hormones used, and level of the dosage. The control group(s) will include individuals who identity as transgender who do not use hormones. Types of outcomes The outcome is quality of life (all domains including but not limited to social distress, anxiety and depression) measured using any psychometric tool. These tools include specific quality of life tools such as the SF-36, WHOQOL-BREF, SWLS, SHS and SQUALA. Other tools that investigate depression, anxiety, mood and self-esteem that are equivalent to the various domains of quality of life will also be considered. These tools include the Social Self-Esteem Inventory, the Beck Depression Inventory, the Minnesota Multiphasic Personality Inventory, the Global Assessment of Functioning, the Social Anxiety and Depression Scale, the Spielberger’s Trait Anxiety and the State-Trait Personality Inventory.",
"corpus_id": 1734478,
"title": "The effects of cross‐gender hormones on the quality of life of transgender individuals: a systematic review protocol"
} | {
"abstract": "OBJECTIVE\nThe objective of the review was to evaluate the effectiveness of cross-sex hormone use in improving quality of life and the related measures of depression and anxiety in the transgender population versus no use of cross-sex hormones.\n\n\nINTRODUCTION\nTransgender medicine as a specialty is still in its infancy and is beginning to attract more primary care providers. The use of hormones to aid in gender transition is expected to provide benefit with regard to quality of life, but there have been few high-quality studies. Two previous systematic reviews were found. One review included studies where participants had gender-affirming surgery, and the other review considered only prospective studies. Both reviews found a benefit with the use of hormones, despite the lack of high-quality studies. To describe outcomes specifically associated with hormone therapy, this review focused on patients who had not yet had surgical interventions, with an aim to inform primary care providers who are considering providing gender transition related-care in their office or clinic.\n\n\nINCLUSION CRITERIA\nStudies were considered that included participants who were trans women, trans men or who did not identify with the gender binary and were using cross-sex hormones. This review only considered studies where the hormone use was under medical supervision. Studies that included participants who already had any form of gender-affirming surgery among those who used hormones were excluded, as were studies that did not use a validated tool to measure quality of life, depression or anxiety.\n\n\nMETHODS\nA comprehensive database search of PubMed, CINAHL, Embase and PsycINFO was conducted in August and September of 2017. The search for unpublished studies and grey literature included Google, the New York Academy of Medicine and the World Professional Association for Transgender Health (WPATH) Conference Proceedings. No date limits were used in any part of the search. 
Study selection, critical appraisal and data extraction were conducted by two independent reviewers using the Joanna Briggs Institute protocols, standardized critical appraisal and data extraction tools.\n\n\nRESULTS\nSeven observational studies met the inclusion criteria for this review. The total number of transgender participants in all the included studies was 552. Population sizes in the studies ranged from 14 to 163. In general, the certainty of the findings was low to very low due to issues with imprecision and indirectness. The use of cross-sex hormones was associated with improved quality of life, depression and anxiety scores, although no causation can be inferred.\n\n\nCONCLUSIONS\nTransgender participants who were prescribed cross-sex hormones had statistically significant scores demonstrating improvement on the validated scales that measured quality of life, anxiety and depression when compared to transgender people who had enrolled in a sex-reassignment clinic but had not yet begun taking cross-sex hormones. However, because the certainty of this evidence was very low to low, recommendations for hormone use to improve quality of life, depression and anxiety could not be made. High-quality research on this issue is needed, as is the development of a quality-of-life tool specific to the transgender population.",
"corpus_id": 133606402,
"title": "The effect of cross-sex hormones on the quality of life, depression and anxiety of transgender individuals: a quantitative systematic review."
} | {
"abstract": "Prolactinomas can be induced in rats by large doses of estrogens. Whether prolactinomas can be induced in humans by estrogens, however, is not known. This report describes the development of a prolactinoma in a man with previously normal plasma PRL levels after the administration of pharmacological doses of estrogen. The patient, a 26-yr-old male to female transsexual, took cyproterone acetate (100 mg/day, orally) and ethinyl estradiol (100 micrograms/day, orally) for 10 months and (surrepititiously) estradiol-17-undecanoate (100 mg, twice weekly, im) for about 6 of the 10 months. Plasma PRL levels rose from 0.05 to 5.20 U/L within 10 months (normal, 0.05-0.30 U/L). A computed tomographic scan showed a pituitary mass with suprasellar extension. After all estrogen therapy was discontinued, his plasma estradiol levels gradually declined from 2.8 to 0.77 nmol/L (normal, 0.04-0.12 nmol/L), but PRL levels rose further to 6.2 U/L. Bromocriptine treatment (2.5 mg twice daily) then was given. Plasma PRL fell gradually to 0.43 U/L and a computed tomographic scan after 5 months showed reduction in tumor size. The patient then discontinued bromocriptine treatment. Four months later his plasma estradiol level was normal, while plasma PRL had risen to 4.6 U/L, indicating autonomous PRL secretion. We conclude that 1) estrogen in pharmacological doses can induce prolactinomas in man; and 2) subjects treated with high doses of estrogen must, therefore, be surveyed for the development of such tumors.",
"corpus_id": 25799093,
"score": 2,
"title": "Estrogen-induced prolactinoma in a man."
} |
{
"abstract": "~olwnerisation, determined whether the emulsion would form Laboratory results show that by optimising the polymer type and stabiliser system, emulsions can be produced which, when diluted with water and mixed into sands or soils with a high clay content, can produce thick aggregates with high load bearing and water holdout characteristics. Unconfined compressive strength (UCS) and water uptake results on cores before and after soaking in water are given for a wide range of soil types and levels of polymer. The minimum requirement of 0,75 MPa for a C4 pavement is exceeded with only 0,25% of emulsion on soil. Practical results with surface applications only and incorporation to depth on several roads, a parking lot and the entrance to a sugar mill are reported. Introduction In southern Africa, finance for maintenance and construction of roads is scarce. The CSIR (Jones, 1996) have indicated that for gravel roads in townships a surface only treatment costing less than E l m 2 and remaining completely effective for one, but preferably two, years is required. For roads carrying heavier loads, the traditional method of importing aggregate is becoming prohibitive because of the scarcity of suitable material and the high cost of transporting aggregates over long distances. Polymer (plastic) emulsions are used extensively to form thin layers of less than 3 mm, in paints and waterproofing screeds on walls and roofs. Their excellent resistance to embrittlement, UV light and acid rain, and their good adhesive properties, made them suitable for blending at low levels into bitumen, tar and cement to upgrade these materials. This paper describes how polymer emulsions were modified to give thick consolidated layers with high load bearing strengths and good resistance to wear by water and traffic. 
Also discussed are the best techniques of mixing, drying and compacting the soils to give optimum results in laboratory tests and the performances of several polymer treated roads after extended periods of trafficking. Procedures Experiment 1 In 1983 the Transvaal Provincial Administration (Zadzick, 1983) carried out a practical road trial in which a diluted polymer emulsion was applied at the low rate of 0,06 L/m2 to the surface of a recently completed gravel road. The origin of this emulsion was from earlier laboratory work which showed that the type and level of stabiliser that was used during polymerisation determined whether the emulsion would form thick load bearing slabs, or thin surface skins with underlying loose sand, when poured onto the surface of sand and allowed to dry (Bishop, 1978). Those polymer emulsions stabilised with the lowest levels of high surface tension stabilisers produced the strongest sand aggregates. For the emulsion used in Experiment 1 the polymerisation was carried out with the lowest level of polyvinyl alcohol stabiliser necessary to prevent the polyvinyl acetate emulsion from coagulating on storage. The minimum film forming temperature (mft) of the polymer was reduced to 12°C with an external plasticiser to ensure film formation in most climatic conditions in southern Africa. This emulsion at 58% solids content was diluted 50 times with water in a spray tanker and then applied evenly over the surface of a road near Pretoria, at the rate of 0,06 L in 3 L/m2 of water. The gravel of the road contained clay, with a plastic index (PI) of approximately 10, and the application was made in May, at the start of the dry winter months. The condition of the road was compared regularly with two adjacent sections which had received no aggregating agent. The results after four months of trafficking are given in Table 1.
Experiment 2 To improve the resistance to water of the PVAc homopolymer a number of colloid stabilised copolymer emulsions were formulated with the following monomer combinations: vinyl acetate-acrylic, styrene-acrylic, veova-acrylic and acrylic-acrylic. These emulsions were compared to cement, hydrated lime and a bitumen emulsion as soil bonding agents in a clay containing soil (PI=16). The unconfined compressive strength (UCS) was the preferred Civil Engineering test, as it compares load bearing performances in both the dry state and after submersion in water for four hours. These results are also applicable for other load bearing applications such as earth bricks. For an aggregate to conform to the requirements for the base of a C4 pavement it must achieve a UCS of 0,75 MPa. For the true potential of the polymer emulsions to be realised in laboratory tests it was found that the following mixing and drying procedures had to be closely followed. After determining the optimum moisture content (OMC) of the particular soil, the polymer emulsion at 1% active on aggregate, was pre-blended with 33% more water than was needed to achieve OMC. This blend was then mixed thoroughly with a fresh sample of soil. The damp mixture was then spread out in a layer ±50 mm thick and left in a shaded area (±23°C and 55% relative humidity) for 48 hours (Proc S Afr Sug Technol Ass (1998) 72: Stabilisation of earth roads with water-based polymer emulsions, RT Bishop, BA McAlpin & D Jones). It was then compacted into the moulds. The moulds were split and the free standing cores were dried at ambient temperatures for 24 hours, and then at 60°C in a forced draft oven for 72 hours. Finally, the dry soil cores were weighed before one was crushed dry and the other duplicate was submerged in water for four hours. After removing and dabbing off the excess water the core was re-weighed and crushed immediately. The extent to which water had penetrated into the core was visually assessed.
The results from other experiments using cores bound with 4% cement, hydrated lime and a bitumen emulsion, dried in the ways recommended, are included for comparison. The results are given in Table 2. Experiment 3 Using the same test procedure as in Experiment 2, decreasing levels of the best styrene-acrylic copolymer (B) in Table 2 were mixed into a high clay containing soil (PI=31) and a sand (PI=2) to determine the levels at which it ceased to have any visual bonding effects. The criteria for assessment after the four hour soak were the ability of the cores to (i) be removed and have a UCS measurement conducted on them, (ii) prevent the ingress of water, (iii) remain dimensionally stable without swelling or collapsing and (iv) be sufficiently bound so as to prevent clay particles from being permanently suspended in the supernatant liquid after physically stirring the collapsed cores for 20 revolutions with a palette-knife. Cores bound with 4% cement and hydrated lime were included for comparison. The results are given in Table 3. Experiment 4 In 1989, a road was built in a red sand (PI=3) using the styrene-acrylic copolymer (A) at Sodwana Bay, KwaZulu-Natal. The area was 100 X 10 m and the copolymer was incorporated to 150 mm. To ensure drying in a reasonable time in the very humid environment, dry cement at 1% (assuming that 1 m2 X 0,15 m of sand weighs 270 kg) was premixed into the 150 mm layer. The styrene-acrylic emulsion with a solids content of 50% was diluted 1:1 with water and was applied by a water tanker evenly over the test area at a rate of 2,4 L/m2. A grader thoroughly mixed the 150 mm layer before the road was shaped and compacted with a pneumatic roller. The dry copolymer content on sand was 0,22%. After four months of trafficking the load bearing strengths were measured in the 0 to 150 mm and the 150 to 300 mm layers (Table 4). The road was also visually assessed on an annual basis (Brotherton, 1997). Table 1.
Condition of road four months after treatment in Experiment 1. Table 2. Unconfined Compressive Strengths and water uptakes in Experiment 2 (means of duplicates) at 1% dry polymer on soil.",
"corpus_id": 14228732,
"title": "STABlLlSATlON OF EARTH ROADS WITH WATER-BASED POLYMER EMULSIONS"
} | {
"abstract": "Rapid stabilization of weak soils is one of the important and current topics in geotechnical researches such as military application and stabilization of landslides. Deep mixing is an improvement method applied in the form of creating mixed columns which involves in-situ mixing of soil and lime or Portland cement with special equipment. The aim of this study was to evaluate the feasibility of utilizing polymers as a binder for rapid stabilization of sandy soils with deep mixing method. For this purpose, a series of unconfined compression tests were conducted on three dierent sandy soils improved with polyester. In the experiments, polyester was used at three dierent ratios of 10%, 20% and 30% and samples cured for 3 hour, 1, 3, 7 and 28 days. The laboratory test results of 3 hours samples showed that soils mixed with adequate polyester could reach a similar strength range of 28 days cured soils improved with cement or lime which was reported in the literature. The unconfined compressive strength increased with the increasing polyester ratio, effective diameter, and relative density and curing period, whereas, the changes on unconfined compressive strength were insignificant with the increase of freeze-thaw cycles. The overall evaluation of results has revealed that polyester is a good promise and a potential candidate for rapid deep mixing applications.",
"corpus_id": 44764733,
"title": "Rapid Stabilization of Sands with Deep Mixing Method Using Polyester"
} | {
"abstract": "ABSTRACT The bglA gene of Escherichia coli encodes phospho-β-glucosidase A capable of hydrolyzing the plant-derived aromatic β-glucoside arbutin. We report that the sequential accumulation of mutations in bglA can confer the ability to hydrolyze the related aromatic β-glucosides esculin and salicin in two steps. In the first step, esculin hydrolysis is achieved through the acquisition of a four-nucleotide insertion within the promoter of the bglA gene, resulting in enhanced steady-state levels of the bglA transcript. In the second step, hydrolysis of salicin is achieved through the acquisition of a point mutation within the bglA structural gene close to the active site without the loss of the original catabolic activity against arbutin. These studies underscore the ability of microorganisms to evolve additional metabolic capabilities by mutational modification of preexisting genetic systems under selection pressure, thereby expanding their repertoire of utilizable substrates.",
"corpus_id": 5071613,
"score": 0,
"title": "Evolution of Aromatic β-Glucoside Utilization by Successive Mutational Steps in Escherichia coli"
} |
{
"abstract": "RECENT lake sediments can be dated using 210Pb and fall-out 137Cs. Pennington et al.1, and Robbins and Edgington2 have set out the assumptions used in calculating dates and estimating accumulation rates from the declining concentration of unsupported 210Pb in the near-surface sediments of Blelham Tarn and Lake Michigan respectively. In both cases, as in other papers3,4, an essential assumption is a constant initial concentration (c.i.c.) of unsupported 210Pb per unit dry weight in the sediment at each depth, whether or not any variations may have occurred in the rate of accumulation. This assumption requires that in undisturbed cores, unsupported 210Pb concentrations should always decline monotonically with depth. Figure 1 shows unsupported 210Pb concentrations in cores from Lough Erne, Northern Ireland and Lake Ipea, Papua New Guinea. The profiles are ‘kinked’ and show at one or more points, a marked increase in unsupported 210Pb concentration with depth. The levels at which this occurs in the cores range from 6 to 30cm. The increases cannot, therefore, be the result of the anomalously low surface concentrations noted elsewhere4. Associated biological, chemical and geophysical studies show that the profiles have not been significantly disturbed by physical or biological mixing. These profiles are not consistent with the c.i.c. deposition model and are regarded as evidence for the dilution of unsupported 210Pb by accelerated sediment accumulation. If such dilution has taken place without leading to a kink in 210Pb concentrations, the assumption of c.i.c. will lead to underestimation of the true age of the sediment below the onset of acceleration. In the case of the ‘kinked’ profiles, dates are not calculable using the c.i.c. deposition model alone. We have adopted an alternative approach to calculating 210Pb dates using as our main assumption a constant rate of supply (c.r.s.) 
of unsupported 210Pb to the sediment per unit time and deriving dates from the integrated activity of the radionuclide. Previous authors have referred to the possibility of calculating dates using this assumption5,6 but have not given a full account of the method or evaluated the results of its application. Details of the methods used here are set out elsewhere7. This paper compares and briefly evaluates the two alternative models as applied to the sediments of Lough Erne and Lakes Ipea and Egari. The c.r.s. based dates obtained have been compared with those derived either from c.i.c. based 210Pb dates or, in the case of the ‘kinked’ profiles, from a combination of 137Cs dating8 and c.i.c. based calculations. Figure 2 plots the resulting age against depth curves from Lough Erne, Fig. 3, some of the results from Lakes Ipea and Egari.",
"corpus_id": 4217296,
"title": "Alternative 210Pb dating: results from the New Guinea Highlands and Lough Erne"
} | {
"abstract": "USE of 210Pb dating is increasing rapidly and applications include studies of accelerated eutrophication in major lakes1, salt-marsh accretion2, the recent history of heavy metal pollution3 and accelerating soil erosion resulting from subsistence agriculture4. As dating models have increased in variety and complexity, it is important to compare models against precise and unambiguous independently derived time scales. In each area of application of 210Pb dating, the inferences drawn from the calculated age–depth curves and the estimates of changing flux rates are often highly dependent on the 210Pb dating model used. In this report 210Pb-derived estimates of lake sediment age and dry-mass sedimentation rates are compared with ages and rates calculated directly by counting annual laminations. The results support a model of 210Pb dating which assumes a constant net rate of supply (c.p.s.) of unsupported 210Pb to the sediment despite fluctuations in dry mass sedimentation rates. Our findings underline the need for empirical evaluation of alternative 210Pb dating models in the widest possible range of contexts. They also cast doubt on some published studies in which strongly ‘kinked’ profiles of unsupported 210Pb concentration have been interpreted within the framework of conventional constant initial concentration (c.i.c.) assumptions.",
"corpus_id": 4316917,
"title": "210Pb dating of annually laminated lake sediments from Finland"
} | {
"abstract": "High-resolution video showed freely swimming Diaptomus sicilis attacking and capturing inert 50 µm polystyrene beads that were outside the influence of the copepod feeding current. The beads were frequently more than half a body length away and were attacked after the 'bow wake' of the moving copepod displaced the bead away from the copepod. To investigate the hypothesis that deformation of streamlines around the copepod and its first antennae stimulated the attack response, a finite element numerical model was constructed. The model described the fluid interactions between a large object approaching a smaller object in a laminar flow at Reynolds number 5, which is charac- teristic of the fluid regime experienced by foraging copepods. The model revealed that fluid velocity fluctuations and streamline deformations arose in the region between the two objects as separation distance between the objects decreased. The video observations and the model results support the hypotheses that chemoreception is not required for the detection and capture of large phytoplankton cells (Vanderploeg et al., in Hughes,R.N. (ed.), Behavioral Mechanisms of Food Selection. NATO ASI Series G20, 1990; DeMott and Watson, J. Plankton Res., 13, 1203-1222, 1991), and that swimming behavior plays an integral role in prey detection.",
"corpus_id": 54750492,
"score": 1,
"title": "Perception of inert particles by calanoid copepods: behavioral observations and a numerical model"
} |
{
"abstract": "Abstract The Amazon basin is likely to be increasingly affected by environmental changes: higher temperatures, changes in precipitation, CO2 fertilization and habitat fragmentation. To examine the important ecological and biogeochemical consequences of these changes, we are developing an international network, RAINFOR, which aims to monitor forest biomass and dynamics across Amazonia in a co-ordinated fashion in order to understand their relationship to soil and climate. The network will focus on sample plots established by independent researchers, some providing data extending back several decades. We will also conduct rapid transect studies of poorly monitored regions. Field expeditions analysed local soil and plant properties in the first phase (2001–2002). Initial results suggest that the network has the potential to reveal much information on the continental-scale relations between forest and environment. The network will also serve as a forum for discussion between researchers, with the aim of standardising sampling techniques and methodologies that will enable Amazonian forests to be monitored in a coherent manner in the coming decades. Abbreviation: PSP = Permanent sample plot.",
"corpus_id": 4847485,
"title": "An international network to monitor the structure, composition and dynamics of Amazonian forests (RAINFOR)"
} | {
"abstract": "Tropical forests are global centres of biodiversity and carbon storage. Many tropical countries aspire to protect forest to fulfil biodiversity and climate mitigation policy targets, but the conservation strategies needed to achieve these two functions depend critically on the tropical forest tree diversity-carbon storage relationship. Assessing this relationship is challenging due to the scarcity of inventories where carbon stocks in aboveground biomass and species identifications have been simultaneously and robustly quantified. Here, we compile a unique pan-tropical dataset of 360 plots located in structurally intact old-growth closed-canopy forest, surveyed using standardised methods, allowing a multi-scale evaluation of diversity-carbon relationships in tropical forests. Diversity-carbon relationships among all plots at 1 ha scale across the tropics are absent, and within continents are either weak (Asia) or absent (Amazonia, Africa). A weak positive relationship is detectable within 1 ha plots, indicating that diversity effects in tropical forests may be scale dependent. The absence of clear diversity-carbon relationships at scales relevant to conservation planning means that carbon-centred conservation strategies will inevitably miss many high diversity ecosystems. As tropical forests can have any combination of tree diversity and carbon stocks both require explicit consideration when optimising policies to manage tropical carbon and biodiversity.",
"corpus_id": 270039,
"title": "Diversity and carbon storage across the tropical forest biome"
} | {
"abstract": "The appearance of Daphnia (Ctenodaphnia) magna Straus, 1820 in the pelagial of Sevan Lake caused significant changes in the communities of planktonic algae, bacteria, and heterotrophic nanoflagellates. Phytoplankton and nanoflagellates were the most affected by the direct impact of D. magna, and the number and biomass of bacteria increased due to the reduction in the trophic pressure of the protists. It was also facilitated by the increased supply of phosphorus as a result of the activity of the cladocerans, as well as the decrease in the number and biomass of phytoplankton, which competes with heterotrophic bacteria for the nutrients.",
"corpus_id": 254291292,
"score": 1,
"title": "The Plankton Community of Sevan Lake (Armenia) after Invasion of Daphnia (Ctenodaphnia) magna Straus, 1820"
} |
{
"abstract": "The way developers edit day-to-day code tend to be repetitive and often use existing code elements. Many researchers tried to automate this tedious task of code changes by learning from specific change templates and applied to limited scope. The advancement of Neural Machine Translation (NMT) and the availability of the vast open source software evolutionary data open up a new possibility of automatically learning those templates from the wild. However, unlike natural languages, for which NMT techniques were originally designed, source code and the changes have certain properties. For instance, compared to natural language source code vocabulary can be virtually infinite. Further, any good change in code should not break its syntactic structure. Thus, deploying state-of-the-art NMT models without domain adaptation may poorly serve the purpose. To this end, in this work, we propose a novel Tree2Tree Neural Machine Translation system to model source code changes and learn code change patterns from the wild. We realize our model with a change suggestion engine: CODIT. We train the model with more than 30k real-world changes and evaluate it with 6k patches. Our evaluation shows the effectiveness of CODIT in learning and suggesting abstract change templates. CODIT also shows promise in suggesting concrete patches and generating bug fixes.",
"corpus_id": 52901814,
"title": "Tree2Tree Neural Translation Model for Learning Source Code Changes"
} | {
"abstract": "The ability to generate natural language sequences from source code snippets has a variety of applications such as code summarization, documentation, and retrieval. Sequence-to-sequence (seq2seq) models, adopted from neural machine translation (NMT), have achieved state-of-the-art performance on these tasks by treating source code as a sequence of tokens. We present ${\\rm {\\scriptsize CODE2SEQ}}$: an alternative approach that leverages the syntactic structure of programming languages to better encode source code. Our model represents a code snippet as the set of compositional paths in its abstract syntax tree (AST) and uses attention to select the relevant paths while decoding. We demonstrate the effectiveness of our approach for two tasks, two programming languages, and four datasets of up to $16$M examples. Our model significantly outperforms previous models that were specifically designed for programming languages, as well as state-of-the-art NMT models.",
"corpus_id": 51926976,
"title": "code2seq: Generating Sequences from Structured Representations of Code"
} | {
"abstract": "The multilingual Paraphrase Database (PPDB) is a freely available automatically created resource of paraphrases in multiple languages. In statistical machine translation, paraphrases can be used to provide translation for out-of-vocabulary (OOV) phrases. In this paper, we show that a graph propagation approach that uses PPDB paraphrases can be used to improve overall translation quality. We provide an extensive comparison with previous work and show that our PPDB-based method improves the BLEU score by up to 1.79 percent points. We show that our approach improves on the state of the art in three different settings: when faced with limited amount of parallel training data; a domain shift between training and test data; and handling a morphologically complex source language. Our PPDB-based method outperforms the use of distributional profiles from monolingual source data.",
"corpus_id": 14908501,
"score": -1,
"title": "Improving Statistical Machine Translation with a Multilingual Paraphrase Database"
} |
{
"abstract": "The role of land values in the dairy industry of an urban-influenced region is investigated by estimating a dairy herd equation based on pooled cross-section and time-series data from counties in New Jersey, Pennsylvania, and New York. The use of cross-terms between hypothesized causal variables and a dummy variable capturing the effect of location allowed the estimation of the differences across states in the effects of milk, feed, and land prices. Results confirm the important role of rising land values in the decline of the dairy industry in the tri-state area, and suggest greater vulnerability of dairy enterprises in urban-influenced areas to rising adverse economic forces. The adverse effects of declining milk prices and higher land values are greatest in New Jersey. The results support the notion that programs such as price support, farmland preservation, farmland assessment, and right-to-farm may have to be maintained in order to retain dairy farms at the urban fringe, where land values are rising rapidly.",
"corpus_id": 152978965,
"title": "Land Values, Market Forces, and Declining Dairy Herd Size: Evidence from an Urban-Influenced Region"
} | {
"abstract": "An economically relevant aggregate production function and an aggregate profit function are used to decompose milk supply response into technology and price effects. Decomposing milk supply response in this way provides insights into dairy industry efficiency and its impact on the effectiveness of price support programs as well as the extent to which aggregate milk production will expand despite prices motivating supply curtailment. An empirical analysis of supply response decomposition for the state of Washington is presented. Expected market price effects were obtained but were overwhelmed by technology effects resulting in milk output expansion when price signals motivated supply reduction.",
"corpus_id": 153823753,
"title": "Decomposition of Milk Supply Response into Technology and Price-Induced Effects"
} | {
"abstract": "Hierarchical foreground and background analysis (HFBA) was used to discriminate soil properties from two valleys in the Santa Monica Mountains Recreation Area, California. The analysis was organized in two levels. First, spectral data from laboratory measured soil samples were used to train a vector in AVIRIS data for classifying the soils between valleys. The prediction of organic matter and iron contents is performed at a second level of resolution. Results showed that, in the laboratory, soils could be classified at a high level of accuracy. When applied to the image, the spatial predictions of organic matter and iron content were consistent for the first level of classification. The ranges of predicted organic matter and iron contents developed at the second level of classification were also consistent with the magnitude and distribution of field samples. The presence of vegetation and the steep terrain affect adversely the ability to resolve these soil properties.",
"corpus_id": 54678289,
"score": 1,
"title": "Remote sensing of soils in the Santa Monica mountains : II. Hierarchical foreground and background analysis"
} |
{
"abstract": "ABSTRACT 1. The aim of this study was to establish how different moulting methods and body weight losses influenced post-moult performance and USDA egg weight distribution. 2. Data on 5 laying flocks (#34–38) of the North Carolina Layer Performance and Management Test were used in this meta-analysis. 3. The moulting methods were non-fasted moulted (NF), short feed restricted (SF), 13-d feed restricted (FR), non-anorexic moult programme (NA), non-anorexic moult programme with low sodium (NALS) as well as non-moulting programme as control treatment. The percentages of targeted body weight loss during the moulting period were 20, 24, 25 and 30% of body weight at the end of the first egg production cycle. 4. Post-moult egg production and egg mass were influenced by all moulting methods. Maximum increase in post-moult egg production rate and egg mass occurred with FR and NF programmes, respectively, at 30% of body weight loss, compared to non-moulted hens. Non-fasting methods reduced mortality rate more effectively than fasting methods. 5. Moulting resulted in increases in percentage of grade A and decreases in percentage of grade B eggs. Non-fasting methods increased percentage of grade A eggs more effectively than fasting methods. Percentage of cracked eggs decreased in moulted rather than non-moulted hens and the lowest rate was associated with the NA programme. 6. Post-moult egg weight was not significantly influenced by moulting methods. However, percentage of body weight reduction affected egg weight. The optimum increment in egg weight was achieved by 24% body weight reduction. 7. Overall, non-fasting methods resulted in similar egg production compared with fasting methods. 
Considering post-moult mortality and USDA egg weight distribution, non-fasting methods, especially NF and NA programmes, performed much better than fasting methods, indicating that non-fasting moulting methods, which are better for animal welfare, are effective alternatives to fasting methods.",
"corpus_id": 3402017,
"title": "An appraisal of moulting on post-moult egg production and egg weight distribution in white layer hens; meta-analysis"
} | {
"abstract": "A low-sodium diet (.08% Na) was used to force molt 409 hens. Another 421 hens were force molted by conventional water and feed restrictions. Birds were 68 weeks of age at that time, and two strains were about equally represented in each treatment group. These strains were: A, DeKalb; and B, Hisex. The low-sodium diet and water were provided ad libitum to the first group for the entire period of molt (42 days). The second group received no water for 3 days, and no feed for 4 days, after which incremental amounts of whole oats were provided as the only feed until Day 18. After Day 18, incremental amounts of a regular laying mash and decremental amounts of whole oats were provided through Day 26, when laying mash only was given ad libitum. Both treatment groups were in the same building and received the same lighting program during the force-molt period. This program consisted of: Days 1 to 3, no light; Days 4 to 18, 8 hr/day; Days 19 to 25, 9 hr/day; Day 26, 10 hr; then, light was increased 30 min weekly until 14 hr daily was attained. The low-sodium group ceased laying after 28 to 31 days, lost 8.7% of their premolt body weight, reduced feed consumption by 35%, and decreased egg production from 62.3% (in the 28 days preceding the molt) to 19% in the 42 days of molt; their mortality rate was 3.4%.(ABSTRACT TRUNCATED AT 250 WORDS)",
"corpus_id": 3654254,
"title": "A comparison of the effect of two force molting methods on performance of two commercial strains of laying hens."
} | {
"abstract": "Egg type hens were recycled by the use of low sodium diet treatments compared to a conventional forced-molt procedure and an unrecycled control. Use of a low sodium diet containing .02 to .06% sodium for 6 weeks with reduction in daily photoperiod resulted in improvements in egg production, egg specific gravity, and albumen thickness similar to those of a forced-molt group in three separate experiments. Egg production was increased 11 to 13%, egg specific gravity was increased by .002 to .004, and albumen thickness was increased by 2 to 8 Haugh units over the 32-week posttreatment period for both treatments. Hens fed the low sodium diet for 3.5 or 4 weeks did not respond as favorably as hens fed this diet for 6 weeks. Eight weeks on the low sodium diet did not further improve performance. Results comparable to the forced-molt procedure were achieved with a decline in egg production at .03 to .07% sodium in the diet, a decline in feed intake at .03 to .07% sodium, a loss in body weight at .03 to .10% sodium, and an increase in molt score at .03 to .11% sodium during the experimental period. During the posttreatment period, results comparable to the forced-molt procedure were obtained for egg production increase at .03 to .08% sodium, for egg specific gravity increase at .03 to .12% sodium, and for egg albumen thickness increase at .03 to .12% dietary sodium. Mortality was unchanged.",
"corpus_id": 3602895,
"score": -1,
"title": "Effectiveness of low sodium diets for recycling of egg production type hens."
} |
{
"abstract": "Gibberellins, a class of plant hormones, consist of more than 120 members. Only a few of them are recognized by a receptor that remains unknown. The haptenic mouse monoclonal antibody, 4-B8(8)/E9, was generated against gibberellin A(4) (GA(4)) to recognize biologically active GA selectivity, and we attempted to confirm the binding properties between the antibody and GA(4). We carried out an X-ray crystallographic analysis of the 4-B8(8)/E9 Fab fragment complexed with GA(4) at a 2.8 A resolution by using the molecular replacement method. The crystal structure of the Fab fragment showed the typical immunoglobulin fold of the beta-barrel structure which is the common motif of all antibodies. A small hapten-combining site was made up of three heavy chain CDR loops. On the other hand, CDRs of the light chain did not interact directly with GA(4). The C/D rings of the GA(4) molecule were in van der Waals contact mainly with the aromatic side chain of Tyr100AH and Phe100BH of CDR-H3. The 3 beta-hydroxyl and 6 beta-carboxyl groups were, respectively, hydrogen-bonded to the main chain of Ala33H and to the Thr53H heavy chain.",
"corpus_id": 9192867,
"title": "Crystal structure of the liganded anti-gibberellin A(4) antibody 4-B8(8)/E9 Fab fragment."
} | {
"abstract": "To develop a new immunological detection system of gibberellins (GAs), a class of phytohormones, peptides that interact with an antibody against GA4 in a GA4-dependent manner, were screened from phage display random peptide libraries. The biopanning procedure yielded peptides designated as anti-metatype peptides (AM-peps), which showed specific binding to the complex of the antibody and its ligand GA4; that is, the antibody could not be replaced with the other anti-GA4 antibody, and GA4 could not be replaced with GA1, another ligand of the antibody. Together with computational analyses such as analysis of structural propensity of the AM-peps and docking simulation of the AM-peps and the 8/E9-GA4 complex, it was suggested that AM-peps formed a helix in their central region and interacted with a part of the 8/E9-GA4 complex located in close proximity to the GA4 molecule. Based on the property of AM-peps to make a ternary complex with antibody and its ligand, a noncompetitive enzyme-linked immunosorbent assay (ELISA) system corresponding to sandwich ELISA was developed to detect GA4. GA4 as low as 30 pg, which could not be achieved by conventional competitive ELISA, could be detected by the new system, demonstrating the feasibility of this system.",
"corpus_id": 29309585,
"title": "Anti-metatype peptides, a molecular tool with high sensitivity and specificity to monitor small ligands."
} | {
"abstract": "The role of six indigenous macrophytes (Cypreus grandis, C. dubis, Kyllinga erectus, Phragmites mauritianus, Typha domingensis and T. capensis) was investigated for nitrogen removal in horizontal subsurface flow constructed wet- lands at the University of Dar es Salaam in Tanzania receiving waste stabilization ponds effluent. Seven horizontal sub- surface flow constructed wetlands were fed with the same source of domestic wastewater, where six of them were planted with a monoculture macrophytic species while the seventh was not planted and it acted as a control cell. On alternatedays' basis for twenty eight weeks both the influent and effluent water samples from each cell were collected and sent to the laboratory for ammonia-N, nitrate-N and Total Kjeldahl-N analysis. Nitrogen bioaccumulation and plant biomasses were analyzed during the transplanting time, after ten weeks and after flowering. Temperature, pH and plant heights were de- termined in situ. Results show that overall nitrogen removal was through denitrification where K.erectus performed better (75.59%) than the rest.Since P.mauritianus(74.37% )established well and had the longest growing period after harvest useso therefore it was selected as the best macrophyte. More research needs to be done prior to making a final decision on the use of any of these macrophytes for nitrogen removal depending on the weather and soils of the specific area.",
"corpus_id": 17274162,
"score": 0,
"title": "Potential Macrophytes for Nitrogen Removal from Domestic Wastewater in Horizontal Subsurface Flow Constructed Wetlands in Tanzania"
} |
{
"abstract": "Although physiological tremor has been extensively studied within a single limb, tremor relationships between limbs are not well understood. Early investigations proposed that tremor in each limb is driven by CNS oscillators operating in parallel. However, recent evidence suggests that tremor in both limbs arises from shared neural inputs and is more likely to be observed under perturbed conditions. In the present study, postural tremor about the elbow joint and elbow flexor EMG activity were examined on both sides of the body in response to unilateral loading and fatiguing muscle contractions. Applying loads of 0.5, 1.0, 1.5, and 3.0 kg to a single limb increased tremor and muscle activity in the loaded limb but did not affect the unloaded limb, indicating that manipulating the inertial characteristics of a limb does not evoke bilateral tremor responses. In contrast, maximal-effort unilateral isometric contractions resulted in increased tremor and muscle activity in both the active limb and the nonactive limb without any changes in between-limb tremor or muscle coupling. When unilateral contractions were repeated intermittently, to the extent that maximum torque generation about the elbow joint declined by 50%, different tremor profiles were observed in each limb. Specifically, unilateral fatigue altered coupling between limbs and generated a bilateral response such that tremor and brachioradialis EMG decreased for the fatigued limb and increased in the contralateral nonfatigued limb. Our results demonstrate that activity in the nonactive limb may be due to a \"spillover\" effect rather than directly coupled neural output to both arms and that between-limb coupling for tremor and muscle activity is only altered under considerably perturbed conditions, such as fatigue-inducing contractions.",
"corpus_id": 341132,
"title": "Bilateral tremor responses to unilateral loading and fatiguing muscle contractions."
} | {
"abstract": "OBJECTIVE\nThis study compared reflex responsiveness of the first dorsal interosseus muscle during two tasks that employ different strategies to stabilize the finger while exerting the same net muscle torque.\n\n\nMETHODS\nHealthy human subjects performed two motor tasks that involved either pushing up against a rigid restraint to exert a constant isometric force equal to 20% of maximum or maintaining a constant angle at the metacarpophalangeal joint while supporting an equivalent inertial load. Each task consisted of six 40-s contractions during which electrical and mechanical stimuli were delivered.\n\n\nRESULTS\nThe amplitude of short and long latency reflex responses to mechanical stretch did not differ significantly between tasks. In contrast, reflexes evoked by electrical stimulation were significantly greater when supporting the inertial load.\n\n\nCONCLUSIONS\nAgonist motor neurons exhibited heightened reflex responsiveness to synaptic input from heteronymous afferents when controlling the position of an inertial load. Task differences in the reflex response to electrical stimulation were not reflected in the response to mechanical perturbation, indicating a difference in the efficacy of the pathways that mediate these effects.\n\n\nSIGNIFICANCE\nResults from this study suggest that modulation of spinal reflex pathways may contribute to differences in the control of force and position during isometric contractions of the first dorsal interosseus muscle.",
"corpus_id": 2829803,
"title": "Reflex responsiveness of a human hand muscle when controlling isometric force and joint position"
} | {
"abstract": "The authors of this book are two well recognized experts in physiology of the nervous system. Therefore, as expected, their cooperation in writing this book has led to a extraordinary product that I enjoyed reading and expect other will do so as well. The book deals with how the neural machinery of the spinal cord modulates the output from higher centers, contributing in this way to the organization of movement. The intricacy of spinal cord interneuronal connections has been partially elucidated thanks to the study of specific spinal cord reflex functions, namely recurrent inhibition, reciprocal Ia inhibition, homonymous (or, better, non-reciprocal group I) inhibition, group II afferent inhibition and presynaptic inhibition. For most of these tests, the soleus H reflex is the probe used for the assessment of the function under study. Consequently, the physiology of the H reflex occupies a substantial part of the initial chapter. Other meaningful tools for the study of spinal cord functions, such as peristimulus time histogram techniques, modulation of ongoing EMG activity, excitatory convergence of spatially separated inputs and transcranial magnetic stimulation, complete this important chapter. Special relevance is given to the principles underlying each of these techniques and their fundamental methodological aspects. Chapter 2 is devoted to the physiology of segmental reflex activation of alpha motoneurons and their post-activation depression, while Chapter 3 is devoted to the physiology of the fusimotor drive. Both chapters end with a relatively brief account of the type of abnormalities that could be found in various disease groups. Chapters 4–8 deal with the techniques available for the study of the main circuitries that are the core of spinal cord functions in humans. Chapters 9 and 10 present the role of spinal cord mechanisms in the organization of responses to cutaneous and descending inputs to the alpha motoneurons. 
These mechanisms underlie clinically important responses such as the toe-extensor reflex or the withdrawal reflex, and neurophysiologically important observations such as the collision between afferent and efferent inputs to cervical and lumbar propriospinal systems, which may subserve some forms of motor learning. Chapter 11 is, to me, the most interesting chapter in this book. Here the authors describe the spinal cord mechanisms involved in different human motor tasks, from posture to purposeful movement. It is really beautiful to see how each piece of information gathered in past years fits into the whole scheme of human motor control, as well as how some of the erroneous concepts have been clarified with the addition of new data. In Chapter 12, the authors report a relatively brief account of the pathophysiological aspects involving the spinal cord in spasticity and parkinsonian rigidity. In some instances, certain information is presented in different parts of the book and with a different approach. This, rather than being redundant, facilitates the finding of the desired information following different lines of reasoning. As expected, most literature references are those related to the first description of the techniques and the additions that have been implemented. Most of the work in this area dates from 1970 to 1980, with only roughly 10% of references more recent than the year 2000 in Chapters 1–10. Figures are clear and the legends give full explanation of the contents. In most instances, figures are reprinted from the original contributions reported in various journals. This is drastically changed in Chapters 11 and 12, where the text contains the most novel aspects and all figures are new. Certainly, this book exhales good physiology from all its pages and clinical neurophysiologists should enjoy reading it even if they may not be using the fine techniques described here in their clinical practice. 
The question arises at this point as to how much it is worth for clinical neurophysiologists to spend the time and effort needed to learn the skills to study with proficiency the physiological mechanisms of the human spinal cord circuitry. This obviously does not only require a detailed reading of this book, but also sitting beside the machine and trying the techniques on oneself or on cooperative collaborators. To gain new knowledge in the field, much technical care and recording accuracy are needed, which demands free research time, relatively large and quiet space and equipment slightly more sophisticated than usual. Unfortunately, practice of clinical neurophysiology generally does not satisfy any of the three demands referred to above. Reaching the necessary level of proficiency to do research and, therefore, advance in the field of spinal cord physiology is the privilege of only a few. Will there ever be the necessary support for the relatively long and dedicated schooling and coaching",
"corpus_id": 53151122,
"score": 2,
"title": "The circuitry of the human spinal cord: Its role in motor control and movement disorders \n Pierrot-Deseilligny E, Burke D, editors. Hardback. Cambridge University Press; 2005. 642 p. [ISBN: 13978052182581].\n"
} |
{
"abstract": "While non‐eosinophilic asthmatics are usually considered poorly responsive to inhaled corticosteroids (ICSs), studies assessing a step‐down of ICS in this specific population are currently lacking.",
"corpus_id": 3706998,
"title": "Step‐down of inhaled corticosteroids in non‐eosinophilic asthma: A prospective trial in real life"
} | {
"abstract": "In case-controlled studies, peanut allergy has been associated with high household peanut consumption. Case-control studies are though very vulnerable to unrecognized confounding. So, Brough et al have looked at whether post-natal environmental peanut exposure is associated with later allergic sensitization to peanut in a population-based cohort. Maternal bed dust was collected post-natally in the BAMSE cohort. There was a significant association between peanut exposure and sensitization at age 4 years (odds ratio 1.41, 95% confidence interval: 1.05-1.90, P = .02) and 8 years (2.11, 1.383.22, P = .001) compared to sex and parental atopy-matched controls (Figure 1). Interestingly, an association was not seen when the whole BAMSE cohort was assessed. Having pointed out the problem of bias with case-controlled studies, this is not an issue in the nested case-control design used by the authors as information about the cases and controls is collected prospectively before the outcome is assessed. So why is there a difference between the case-control analysis and the cohort one? I think the comparator is the key thing here. In the case-control analysis, the comparator was children with a family history of atopy, whereas in the cohort analysis, they were children with and without such a family history. If high environmental exposure to peanut only results in peanut sensitization in genetically predisposed children, many of children with high exposure in the case-control analysis will have developed peanut sensitization as all had a family history of atopy. This contrasts with the cohort analysis where many of those with high peanut exposure will not have been genetically predisposed and so will not have developed peanut sensitization. So in this second analysis, high exposure will not seem to be associated with the development of sensitization. 
I would be interested in seeing what happens when the cohort analysis is repeated after splitting all the participants into those with and without a family history of atopy? My prediction is that a relationship is seen but only in those with a family history. Children with asthma are vulnerable to rhinovirus-induced exacerbations of their disease. Looi et al have investigated the mechanism underlying this observation within a bronchial epithelial model. Bronchial brushing samples were obtained from children with and without asthma undergoing a general anaesthetic. Epithelial cells were grown ex vivo into a confluent, differentiated air-liquid interface (ALI) culture. Tight junction integrity was disrupted with",
"corpus_id": 13801606,
"title": "A complicated relationship between peanut environmental exposure and the development of allergic sensitization to peanuts"
} | {
"abstract": "We consider the general form of the stochastic approximation algorithm$X_{n + 1} = X_n + a_n h(X_n ,\\xi _n )$, where h is not necessarily additive in $\\xi _n $. Such algorithms occur frequently in applications to adaptive control and identification problems, where $\\{ \\xi _n \\} $ is usually obtained from measurements of the input and output, and is almost always complicated enough that the more classical assumptions on the noise fail to hold. Let $a_n = {A / {(n + 1)^\\alpha }}$, $0 < \\alpha \\leqq 1$, and let $X_n \\to \\theta $ w.p. 1. Define $U_n = (n + 1)^{{\\alpha / 2}} (X_n - \\theta )$. Then, loosely speaking, it is shown that the sequence of suitable continuous parameter interpolations of the sequence of “tails” of $\\{ U _n \\} $ converges weakly to a Gaussian diffusion. From this we can get the asymptotic variance of $U _n $ as well as other information. The assumptions on $\\{ \\xi _n \\} $ and $h( \\cdot , \\cdot )$ are quite reasonable from the point of view of applications.",
"corpus_id": 119843387,
"score": 0,
"title": "RATES OF CONVERGENCE FOR STOCHASTIC APPROXIMATION TYPE ALGORITHMS"
} |
{
"abstract": "Banana Xanthomonas wilt (BXW) disease threatens banana production and food security throughout East Africa. Natural resistance is lacking among common cultivars. Genetically modified (GM) bananas resistant to BXW disease were developed by inserting the hypersensitive response-assisting protein (Hrap) or/and the plant ferredoxin-like protein (Pflp) gene(s) from sweet pepper (Capsicum annuum). Several of these GM banana events showed 100% resistance to BXW disease under field conditions in Uganda. The current study evaluated the potential allergenicity and toxicity of the expressed proteins HRAP and PFLP based on evaluation of published information on the history of safe use of the natural source of the proteins as well as established bioinformatics sequence comparison methods to known allergens (www.AllergenOnline.org and NCBI Protein) and toxins (NCBI Protein). The results did not identify potential risks of allergy and toxicity to either HRAP or PFLP proteins expressed in the GM bananas that might suggest potential health risks to humans. We recognize that additional tests including stability of these proteins in pepsin assay, nutrient analysis and possibly an acute rodent toxicity assay may be required by national regulatory authorities.",
"corpus_id": 3564117,
"title": "Bioinformatics analysis to assess potential risks of allergenicity and toxicity of HRAP and PFLP proteins in genetically modified bananas resistant to Xanthomonas wilt disease."
} | {
"abstract": "Capsicum fruits are widely consumed as a component of the human diet. Capsaicin is the principle substance responsible for their hot, pungent taste. Heterocyclic amines (HCAs) are formed during cooking of meats and are mutagenic/carcinogenic compounds. In this study, we looked at whether capsaicin showed anti-mutagenic effects toward HCA-induced mutagenesis in Salmonella typhimurium TA98 when incubated with 0.5 mg liver S9 protein from rat, hamster and human. The HCAs used were Trp-P-2, Glu-P-1 and PhIP. Capsaicin, at non-toxic amounts of 0.25 and 0.5 micromole/plate, expressed a dose-dependent inhibition of the mutagenicity of Glu-P-1 and PhIP when they are metabolically activated by rat, hamster and human liver S9 and of Trp-P-2 when activated by rat and hamster liver S9. In contrast, capsaicin enhanced the mutagenicity of Trp-P-2 in TA98 when incubated with human liver S9. The lack of consistency in the anti-mutagenic action of capsaicin toward HCAs is puzzling and currently unresolved.",
"corpus_id": 9784810,
"title": "In vitro antimutagenicity of capsaicin toward heterocyclic amines in Salmonella typhimurium strain TA98."
} | {
"abstract": "At concentrations of 25, 50, and 100 microM, capsaicin, which is the major component in various aspects of Capsicum hot peppers, decreased the binding of aflatoxin (AFB1) to calf thymus DNA by 19%, 44%, and 71%, respectively, in incubations with rat liver S9. At concentrations of 50 and 100 microM, capsaicin decreased the formation of AFB-DNA adducts (AFB1-N7-Gua) by 53% and 75% as determined by high-pressure liquid chromatography (HPLC). HPLC analysis of organo-soluble fractions showed that these effects correlated with a concentration-dependent decrease in S9-mediated metabolism of AFB1 by capsaicin. Capsaicin also altered the formation of water-soluble conjugates of AFB1. This was indicated by a decrease in radioactivity in water-soluble fractions and in glutathione conjugates of AFB1 analyzed by HPLC. These results suggest that capsaicin inhibited the biotransformation of AFB1 by modifying Phase I hepatic enzyme activity.",
"corpus_id": 41279972,
"score": 2,
"title": "Effects of capsaicin on rat liver S9-mediated metabolism and DNA binding of aflatoxin."
} |
{
"abstract": "Measurements of functional residual capacity (FRC) by helium gas dilution and peak expiration flow rate (PEFR) were made in 63 young asthmatic children aged 2 and 7 years before and after bronchodilator therapy. All 63 children tolerated two measurements of FRC, but only 33 children were able to perform the peak flow maneuver. Bronchodilator therapy was associated with significant change in FRC in the majority (80%) of children; in some, however, this change was an increase rather than a decrease. The change in FRC was significantly correlated with both prebronchodilator FRC and the change in PEFR. An increase in FRC following bronchodilator therapy was more common in children with severe and symptomatic asthma. We suggest that changes in FRC may be used in asthmatic children to demonstrate bronchodilator responsiveness, particularly in those too young to perform other respiratory function tests. Pediatr Pulmonol. 1989; 7:8–11.",
"corpus_id": 2063325,
"title": "Changes in functional residual capacity in response to bronchodilator therapy among young asthmatic children"
} | {
"abstract": "Ten preterm infants with recurrent respiratory symptoms (median gestational age 30 weeks) were entered into a non-randomised placebo controlled trial of bronchodilator treatment at 12.5 months of age. The infants had coughed or wheezed, or both, on at least four days a week for the past month. The infants received either placebo or 500 micrograms terbutaline from an inhaler using a coffee cup as a spacer device. Each treatment was maintained for two weeks, first placebo then active drug. The symptom score was reduced by 65% during the active treatment period compared with the placebo period and this was associated with a 32% improvement in lung function, reflected in an increase in functional residual capacity. We conclude that inhaled bronchodilator treatment given with a simple spacer device is useful for preterm infants with recurrent respiratory symptoms in the first two years of life.",
"corpus_id": 20835618,
"title": "Effective bronchodilator treatment by a simple spacer device for wheezy premature infants."
} | {
"abstract": "We thank Dr Bell and her colleagues for their comments. We were unaware of their paper when we submitted ours. There may well be differences in plasma amino acids depending on the type of preterm formula used, just as there will be differences if one formula is fed at different volumes. The purpose of our paper was not to compare formulasonly two infants were fed on formulas other than SMA Low Birthweight-but to examine the effects of a relatively high protein intake (in comparison with, for example, banked breast milk) in verv low birthweight infants. The infants in our study were appreciably smaller than those studied by Bell and her colleagues. but even so no potentially hazardous amino acid concentrations were detected.",
"corpus_id": 43440040,
"score": 2,
"title": "Nebuhaler in young asthmatic children."
} |
{
"abstract": "The multiphonon emission capture mechanism by neutral centers, in the presence of an electric field below 1 MV/cm, has been numerically simulated by the Monte Carlo method. Based on common models for the initial and final states, a simple expression of the process probability has been calculated considering both nonpolar and polar electron–phonon coupling. The validity range of this expression is assumed for a carrier energy Ek<ET, where ET is the impurity level depth. In order to check the probability rate, this mechanism was included in the framework of a previous numerical procedure as one more mechanism for calculating the capture cross section as an electric‐field function. This theoretical framework is given for both polar and nonpolar semiconductors. The Pt and Au acceptor levels in Si have been analyzed with the probability expression, particularized for the case of nonpolar coupling, by fitting the available experimental data of capture cross sections with the numerical results. In both cases, th...",
"corpus_id": 7773552,
"title": "Monte Carlo simulation of multiphonon capture mechanism by deep neutral impurities in Si in the presence of an electric field"
} | {
"abstract": "We present an original Monte Carlo procedure to account for generation‐recombination noise through impurity centers in semiconductors. Numerical calculations are specialized to the case of holes in Si at 77 K. Results are found to compare favorably with available experiments.",
"corpus_id": 119887909,
"title": "Monte Carlo algorithm for generation‐recombination noise in semiconductors"
} | {
"abstract": "In this paper we first discuss the effective-mass concept in the theory of impurity states in semiconductors. We show that one cannot be justified in replacing m by the free-electron mass mo for either shallow or deep levels. Instead, we show that rigorous calculations may be interpreted in terms of an effective mass which is either smaller than m* or negative, but not equal to mo. Finally, we report results of new calculations of binding energies using model potentials and compare with our previous results obtained with first-principles pseudopotentials.",
"corpus_id": 100541658,
"score": 2,
"title": "On the Theory of Impurity States in Semiconductors"
} |
{
"abstract": "BACKGROUND\nA single insulin injection was shown to improve microcirculatory blood flow. Our aim was to examine the effects of 4weeks of insulin therapy by three randomly assigned insulin analog regimens (Detemir, Aspart, and their combination) on cutaneous blood flow (CBF) and microcirculatory endothelial function as an add-on to metformin in type 2 diabetic patients poorly controlled on oral antidiabetic treatment.\n\n\nMETHODS\nFourty-two type 2 diabetic patients with no history of cardiovascular disease in secondary failure to oral antidiabetic agents had CBF measurements before and after acetylcholine (Ach) iontophoretic administration. CBF measurements were performed at fasting and after a standardized breakfast during the post-prandial period. Before randomization (Visit 1, V1) during the tests, participants took only metformin. The same tests were repeated after 4weeks of insulin treatment (Visit 2, V2).\n\n\nRESULTS\nThirty-four patients had good quality recordings for both visits. During V1, CBF and CBF response to Ach increased in the post-prandial period. After 4weeks of insulin treatment, metabolic parameters improved. Compared to V1, CBF at fasting did not increase at V2 but there was an improvement in endothelial function at fasting after Ach iontophoresis, without difference across insulin regimens. Oxidative stress markers were not modified, and E-selectin and vascular cell adhesion molecule 1 levels decreased after insulin treatment, without differences between insulin groups.\n\n\nCONCLUSIONS\nA strategy of improving glycemic control for 4weeks with insulin analogs improves microcirculatory endothelial reactivity and reduces endothelial biomarkers at fasting, whatever the insulin regimen used. Insulin therapy associated to metformin is able to improve fasting microvascular endothelial function even before complete metabolic control.",
"corpus_id": 1192696,
"title": "Effects of insulin analogs as an add-on to metformin on cutaneous microcirculation in type 2 diabetic patients."
} | {
"abstract": "AIMS\nDespite considerable experience with insulin lispro, few blinded comparisons with soluble insulin are available. This study compared insulin lispro with human soluble insulin in patients with Type 1 diabetes mellitus on multiple injection therapy who inject shortly before meals.\n\n\nMETHODS\nGlucose control, frequency of hypoglycaemia and patient preference were examined in the course of a prospective, randomized, double-blind, crossover comparison, with a 6-week run-in period and 12 weeks on each therapy. Ninety-three patients took part, all on multiple daily doses of insulin, with soluble insulin before meals and NPH (isophane) insulin at night. The main outcome measures were self-monitored blood glucose profiles, glycated haemoglobin, frequency of hypoglycaemic episodes, patient satisfaction and well-being and patient preference.\n\n\nRESULTS\nBlood glucose levels were significantly lower after breakfast and lunch, but higher before breakfast, lunch and supper, in patients taking insulin lispro. Levels of HbA(1c) were 7.4 +/- 1.1% on Humulin S and 7.5 +/- 1.1% on insulin lispro (P = 0.807). The overall frequency of symptomatic hypoglycaemia did not differ, but patients on insulin lispro were less likely to experience hypoglycaemia between midnight and 6 a.m., and more likely to experience episodes from 6 a.m. to midday. Questionnaires completed by 84/87 patients at the end of the study showed that 43 (51%) were able to identify each insulin correctly, nine (11%) were incorrect, and 32 (38%) were unable to tell the insulins apart. No significant preference emerged: 35 (42%) opted for insulin lispro, 24 (29%) opted for Humulin S, while the remainder had no clear preference.\n\n\nCONCLUSIONS\nSubstitution of insulin lispro for soluble insulin in a multiple injection regimen improved post-prandial glucose control at the expense of an increase in fasting and pre-prandial glucose levels. 
Patients who already injected shortly before meals expressed no clear preference for the fast-acting analogue, and did not improve their overall control as a result of using it. Nocturnal hypoglycaemia was, however, less frequent on insulin lispro, and may emerge as a robust indication for its use.",
"corpus_id": 19940058,
"title": "A randomized, controlled trial comparing insulin lispro with human soluble insulin in patients with Type 1 diabetes on intensified insulin therapy. The UK Trial Group."
} | {
"abstract": "BackgroundExperimental approaches to limit the spinal cord injury and to promote neurite outgrowth and improved function from a spinal cord injury have exploded in recent decades. Due to the cavitation resulting after a spinal cord injury, newer important treatment strategies have consisted of implanting scaffolds with or without cellular transplants. There are various scaffolds, as well as various different cellular transplants including stem cells at different levels of differentiation, Schwann cells and peripheral nerve implants, that have been reviewed. Also, attention has been given to different re-implantation techniques in avulsion injuries.MethodsUsing standard search engines, this literature is reviewed.ConclusionCellular and paracellular transplantation for application to spinal cord injury offers promising results for those patients with spinal cord pathology.",
"corpus_id": 6225072,
"score": 1,
"title": "Cellular and paracellular transplants for spinal cord injury: a review of the literature"
} |
{
"abstract": "Nurses collect, communicate and store patient information needed for care through verbal, handwritten and electronic information sources. However, the specific categories of nurses' information needs for the care of hospitalized patients remain unknown. The purpose of this study was to identify the categories of nurses' information needs and develop an observational tool to measure the information needs through available information sources. We analyzed qualitative data from interview transcripts and conducted direct observations of nurses to identify a total of 17 categories of nurses' information needs when caring for hospitalized patients. Once identified, we developed an observational tool to quantitatively measure the category of information need, whether the information need was collected and communicated, and through which information source. Future studies will be able to measure the gathering and communication of information needs through direct observation.",
"corpus_id": 2851207,
"title": "Development of an Observational Tool to Measure Nurses' Information Needs"
} | {
"abstract": "Information seeking by nurses at the beginning of a work shift is related to planning interventions and other patient activities. Subjects were observed for one hour following morning shift report. The most frequent type of information sought was medication schedules and other information related to medications. On average, nurses spent one-quarter of the first hour after shift report looking for and retrieving information. Nursing information, such as assessments and nursing summaries, required more time to retrieve than other types of information. Findings are compared to earlier research about nurses' information seeking.",
"corpus_id": 8789321,
"title": "Information seeking by nurses during beginning-of-shift activities."
} | {
"abstract": "Loss‐of‐function mutations in human adenomatous polyposis coli (APC) lead to multiple colonic adenomatous polyps eventually resulting in colonic carcinoma. Similarly, heterozygous mice carrying defective APC (apcMin/+) suffer from intestinal tumours. The animals further suffer from anaemia, which in theory could result from accelerated eryptosis, a suicidal erythrocyte death triggered by enhanced cytosolic Ca2+ activity and characterized by cell membrane scrambling and cell shrinkage. To explore, whether APC‐deficiency enhances eryptosis, we estimated cell membrane scrambling from annexin V binding, cell size from forward scatter and cytosolic ATP utilizing luciferin–luciferase in isolated erythrocytes from apcMin/+ mice and wild‐type mice (apc+/+). Clearance of circulating erythrocytes was estimated by carboxyfluorescein‐diacetate‐succinimidyl‐ester labelling. As a result, apcMin/+ mice were anaemic despite reticulocytosis. Cytosolic ATP was significantly lower and annexin V binding significantly higher in apcMin/+ erythrocytes than in apc+/+ erythrocytes. Glucose depletion enhanced annexin V binding, an effect significantly more pronounced in apcMin/+ erythrocytes than in apc+/+ erythrocytes. Extracellular Ca2+ removal or inhibition of Ca2+ entry with amiloride (1 mM) blunted the increase but did not abrogate the genotype differences of annexin V binding following glucose depletion. Stimulation of Ca2+‐entry by treatment with Ca2+‐ionophore ionomycin (10 μM) increased annexin V binding, an effect again significantly more pronounced in apcMin/+ erythrocytes than in apc+/+ erythrocytes. Following retrieval and injection into the circulation of the same mice, apcMin/+ erythrocytes were more rapidly cleared from circulating blood than apc+/+ erythrocytes. Most labelled erythrocytes were trapped in the spleen, which was significantly enlarged in apcMin/+ mice. 
The observations point to accelerated eryptosis and subsequent clearance of apcMin/+ erythrocytes, which contributes to or even accounts for the enhanced erythrocyte turnover, anaemia and splenomegaly in those mice.",
"corpus_id": 18141805,
"score": 1,
"title": "Enhanced suicidal erythrocyte death in mice carrying a loss-of-function mutation of the adenomatous polyposis coli gene"
} |
{
"abstract": "Sir, We thank de Wolf and Schutzer-Weissmann for their interesting letter ‘Ventrain and driving pressure’ and their technical information about the design and function of Ventrain . The intention of our study was to investigate the impact of oxygen sources on the performance of Ventrain as ‘pressure-compensated flowmeter or flow-regulators’ are not always available in clinical emergency situations. The findings of this study are interesting and important for the user. Of course the values presented by de Wolf and Schutzer-Weissmann are correct and there is a relationship between the set flow at the oxygen source and inspiratory tidal volumes (VTi) assuming the user operates Ventrain with a pressure-compensated oxygen source, which again is not always given in clinical emergency situations. And even if a high pressure-compensated flowmeter is used, oxygen flows set at the flow rotameter can dramatically differ from the flow at the catheter tip (FACT). This for example was overserved by our research team when operating Ventrain with the auxiliary O2 flow control of the GE Aisys CS (General Electric Company, Fairfield, Connecticut, USA). This high-pressure (241 kPa/35 PSI) oxygen flowmeter has a scale from 0 to 10 l/min (see Fig. 1). Connecting Ventrain to this oxygen outlet and increasing the flow to the maximum opening of the valve will show a flow of 8.5 l/min at the auxiliary O2 flow rotameter. But measuring the FACT will reveal a flow of 16 l/min. During emergency ventilation, the user will not notice this major difference. All measurements regarding this flowmeter were performed with a calibrated pressure and gas flow analyzer (VT-plus Analyzer, Bio-Tek, VT, USA) and are shown in Table 1, revealing only one correlation: Increasing the driving pressures (DP) of the oxygen source results in correlative increase of the FACT. Therefore, using a pressure-flow control gauge with a scale showing the resulting flows (Fig. 
2) between the oxygen source and Ventrain as suggested in the present study would allow flow control with the lowest DP necessary and the certainty of the FACT even in other than pressure-compensated flowmeters or flow-regulators. In addition stating that 64 ml difference between VTi and VTe is without clinical dramatic is incorrect. A respiratory rate of 10/min already means an air trapping of 640 ml/min.",
"corpus_id": 436618,
"title": "“…we still never know the flow at the catheter tip”"
} | {
"abstract": "BACKGROUND\nTranstracheal access and subsequent jet ventilation are among the last options in a 'cannot intubate-cannot oxygenate' scenario. These interventions may lead to hypercapnia, barotrauma, and haemodynamic failure in the event of an obstructed upper airway. The aim of the present study was to evaluate the efficacy and the haemodynamic effects of the Ventrain, a manually operated ventilation device that provides expiratory ventilation assistance. Transtracheal ventilation was carried out with the Ventrain in different airway scenarios in live pigs, and its performance was compared with a conventional jet ventilator.\n\n\nMETHODS\nPigs with open, partly obstructed, or completely closed upper airways were transtracheally ventilated either with the Ventrain or by conventional jet ventilation. Airway pressures, haemodynamic parameters, and blood gases obtained in the different settings were compared.\n\n\nRESULTS\nMean (SD) alveolar minute ventilation as reflected by arterial partial pressure of CO2 was superior with the Ventrain in partly obstructed airways after 6 min in comparison with traditional manual jet ventilation [4.7 (0.19) compared with 7.1 (0.37) kPa], and this was also the case in all simulated airway conditions. At the same time, peak airway pressures were significantly lower and haemodynamic parameters were altered to a lesser extent with the Ventrain.\n\n\nCONCLUSIONS\nThe results of this study suggest that the Ventrain device can ensure sufficient oxygenation and ventilation through a small-bore transtracheal catheter when the airway is open, partly obstructed, or completely closed. Minute ventilation and avoidance of high airway pressures were superior in comparison with traditional hand-triggered jet ventilation, particularly in the event of complete upper airway obstruction.",
"corpus_id": 3819488,
"title": "Transtracheal ventilation with a novel ejector-based device (Ventrain) in open, partly obstructed, or totally closed upper airways in pigs."
} | {
"abstract": "Abstract Newly designed fluorine-doped magnetic carbon (F-MC) was synthesized in situ though a facile one-step pyrolysis-carbonization method. Poly(vinylidene fluoride) (PVDF) served as the precursor for both carbon and fluorine. 2.5% F content with core-shell structure was obtained over F-MC, which was used as a adsorbent for the Cr(VI) removal. To our best knowledge, this is the first time to report that the fluorine doped material was applied for the Cr(VI) removal, demonstrating very high removal capacity (1423.4 mg g−1), higher than most reported adsorbents. The unexpected performance of F-MC can be attributed to the configuration of F dopants on the surface. The observed pseudo-second-order kinetic study indicated the dominance of chemical adsorption for this process. High stability of F-MC after 5 recycling test for the Cr(VI) removal was also observed, indicating that F-MC could be used as an excellent adsorbent for the toxic heavy metal removal from the wastewater.",
"corpus_id": 13801521,
"score": 1,
"title": "Poly(vinylidene fluoride) derived fluorine-doped magnetic carbon nanoadsorbents for enhanced chromium removal"
} |
{
"abstract": "BackgroundThe purposes of the study were the long-term evaluation of silicone implants with three-dimensional (3D) anal endosonography and its correlation with anal incontinence.MethodsFifteen patients were injected with silicone because of anal incontinence and co-existing internal anal sphincter disruption (n = 8) or thinning (n = 7). The evaluation was performed with the Wexner score and 3D anal endosonographies.ResultsForty-four implants were performed. The endosonography at 3 months detected that all the implants were properly located. At 24 months, it detected 37/44 implants of initially injected and 33/37 were properly located. Four of 37 implants had moved and 7/44 were neither in the anus nor in the rectum. A total of 8/15 patients had their implants correctly placed. Globally, silicone implants significantly improved fecal continence.ConclusionsThe silicone implants might have moved or even be lost. The continence deterioration suffered by most patients after the first year of the injection has no relation with the localization and number of implants that the patients have.",
"corpus_id": 2073960,
"title": "Evaluation by three-dimensional anal endosonography of injectable silicone biomaterial (PTQ™) implants to treat fecal incontinence: long-term localization and relation with the deterioration of the continence"
} | {
"abstract": "The treatment of faecal incontinence secondary to internal anal sphincter dysfunction is unsatisfactory. The aim of the study was to evaluate the efficacy of anal glutaraldehyde cross‐linked (GAX) collagen injections in patients with a surgically incorrectable disorder.",
"corpus_id": 23942857,
"title": "Glutaraldehyde cross‐linked collagen in the treatment of faecal incontinence"
} | {
"abstract": "We treated 47 patients with transitional cell bladder carcinoma invading the lamina propria (stage T1) from 1984 to 1986 with complete transurethral resection followed by one to three courses of endovesical BCG instillation and followed them for 14-64 months with cystoscopic and endoscopic tests and bladder biopsy. Complete response was achieved in 64%, and 36% had recurrences (recurrence rate per 100 month/patient, 2.2); 21% progressed to muscle invasion. Duration of treatment, tumor size or type (solid versus papillary), and presence of carcinoma in situ bore no relation to the final result. A history of previous T1 bladder tumor appeared associated with a higher risk of progression, although not statistically significantly. The results were compared with those obtained by transurethral resection alone in a similar group of 50 patients treated from 1982 to 1984 and followed for 12 to 100 months. Of these 90%, had recurrence, and 34% progressed to muscle invasion, with a recurrence rate per 100 month/patient of, 9.2. In light of the limits of a non-randomized historical comparison, it appears that endovesical BCG therapy favorably alters the recurrence pattern of T1 bladder cancer.",
"corpus_id": 25422087,
"score": 2,
"title": "Bladder tumors invading the lamina propria (stage T1): influence of endovesical BCG therapy on recurrence and progression."
} |
{
"abstract": "BackgroundIn nuptial gift-giving species, benefits of acquiring a mate may select for male deception by donation of worthless gifts. We investigated the effect of worthless gifts on mating success in the spider Pisaura mirabilis. Males usually offer an insect prey wrapped in silk; however, worthless gifts containing inedible items are reported. We tested male mating success in the following experimental groups: protein enriched fly gift (PG), regular fly gift (FG), worthless gift (WG), or no gift (NG).ResultsMales that offered worthless gifts acquired similar mating success as males offering nutritional gifts, while males with no gift experienced reduced mating success. The results suggest that strong selection on the nuptial gift-giving trait facilitates male deception by donation of worthless gifts. Females terminated matings faster when males offered worthless donations; this demonstrate a cost of deception for the males as shorter matings lead to reduced sperm transfer and thus give the deceiving males a disadvantage in sperm competition.ConclusionWe propose that the gift wrapping trait allows males to exploit female foraging preference by disguising the gift content thus deceiving females into mating without acquiring direct benefits. Female preference for a genuine prey gift combined with control over mating duration, however, counteracts the male deception.",
"corpus_id": 2916084,
"title": "Worthless donations: male deception and female counter play in a nuptial gift-giving spider"
} | {
"abstract": "BackgroundPolyandry is commonly maintained by direct benefits in gift-giving species, so females may remate as an adaptive foraging strategy. However, the assumption of a direct benefit fades in mating systems where male gift-giving behaviour has evolved from offering nutritive to worthless (non-nutritive) items. In the spider Paratrechalea ornata, 70% of gifts in nature are worthless. We therefore predicted female receptivity to be independent of hunger in this species. We exposed poorly-fed and well-fed females to multiple males offering nutritive gifts and well-fed females to males offering worthless gifts.ResultsThough the treatments strongly affected fecundity, females of all groups had similar number of matings. This confirms that female receptivity is independent of their nutritional state, i.e. polyandry does not prevail as a foraging strategy.ConclusionsIn the spider Pisaura mirabilis, in which the majority (62%) of gifts in nature are nutritive, female receptivity depends on hunger. We therefore propose that the dependence of female receptivity on hunger state may have evolved in species with predominantly nutritive gifts but is absent in species with predominantly worthless gifts.",
"corpus_id": 8421595,
"title": "Females of a gift-giving spider do not trade sex for food gifts: a consequence of male deception?"
} | {
"abstract": "Understanding enzyme-substrate interactions is critical in designing strategies for bioconversion of lignocellulosic biomass. In this study we monitored molecular events, in situ and in real time, including the adsorption and desorption of cellulolytic enzymes on lignins and cellulose, by using quartz crystal microgravimetry and surface plasmon resonance. The effect of a nonionic surface active molecule was also elucidated. Three lignin substrates relevant to the sugar platform in biorefinery efforts were considered, namely, hardwood autohydrolysis cellulolytic (HWAH), hardwood native cellulolytic (MPCEL), and nonwood native cellulolytic (WSCEL) lignin. In addition, Kraft lignins derived from softwoods (SWK) and hardwoods (HWK) were used as references. The results indicated a high affinity between the lignins with both, monocomponent and multicomponent enzymes. More importantly, the addition of nonionic surfactants at concentrations above their critical micelle concentration reduced remarkably (by over 90%) the nonproductive interactions between the cellulolytic enzymes and the lignins. This effect was hypothesized to be a consequence of the balance of hydrophobic and hydrogen bonding interactions. Moreover, the reduction of surface roughness and increased wettability of lignin surfaces upon surfactant treatment contributed to a lower affinity with the enzymes. Conformational changes of cellulases were observed upon their adsorption on lignin carrying preadsorbed surfactant. Weak electrostatic interactions were determined in aqueous media at pH between 4.8 and 5.5 for the native cellulolytic lignins (MPCEL and WSCEL), whereby a ∼20% reduction in the enzyme affinity was observed. This was mainly explained by electrostatic interactions (osmotic pressure effects) between charged lignins and cellulases. Noteworthy, adsorption of nonionic surfactants onto cellulose, in the form cellulose nanofibrils, did not affect its hydrolytic conversion. 
Overall, our results highlight the benefit of nonionic surfactant pretreatment to reduce nonproductive enzyme binding while maintaining the reactivity of the cellulosic substrate.",
"corpus_id": 3681561,
"score": 1,
"title": "Interactions between Cellulolytic Enzymes with Native, Autohydrolysis, and Technical Lignins and the Effect of a Polysorbate Amphiphile in Reducing Nonproductive Binding."
} |
{
"abstract": "In ad hoc networks, an incentive problem remains for relaying frames from other nodes. By utilizing social network systems such as Twitter and Facebook, it is possible to estimate the degree of intimacy of friendship links, not only for direct friends but also for indirect friends such as friends-of-friends. If we establish ad hoc network links according to these direct/indirect friendships instead of other incentive mechanisms, the ad hoc network gains great potential to connect all nodes efficiently and to cover wide areas with short-range communication such as IEEE 802 wireless LANs. This paper discusses the social-network-based (real) ad hoc network concept and evaluates its connection probability performance.",
"corpus_id": 1590843,
"title": "Architecture and characteristics of social network based ad hoc networking"
} | {
"abstract": "With the popularity of mobile devices and the development of the wireless technologies, humans can be connected ubiquitously. Because of the mobility of the devices, it is a hard task to maintain the end-to-end path between source node and destination node. Researchers have introduced analysis of nodes’ social behavior to solve the problem of data dissemination in the networks, which leads to the emersion of the Mobile Social Networks (MSNs). They increase the performance of data forwarding by the social relationships and interactions among nodes. Many schemes and algorithms have been proposed to enhance data forwarding performance and provide humanized service by introducing social features and digging social properties. In this paper, first, we investigate the architectures and evolutionary process of the MSNs. Then, the social features of nodes existing in the MSNs and main social properties of MSNs are described. In term of the state-of-the-art works, data forwarding strategies in the MSNs are divided into four categories, including only encounter history-based strategies, social-based strategies, incentive mechanisms, and forwarding methods that take social selfishness into consideration of this paper. Finally, the major issues and challenges are discussed.",
"corpus_id": 53787092,
"title": "A Survey of Mobile Social Networks: Applications, Social Characteristics, and Challenges"
} | {
"abstract": "In a service-oriented online social network consisting of service providers and consumers, a service consumer can search trustworthy service providers via the social network. This requires the evaluation of the trustworthiness of a service provider along a certain social trust path from the service consumer to the service provider. However, there are usually many social trust paths between participants in social networks. Thus, a challenging problem is which social trust path is the optimal one that can yield the most trustworthy evaluation result. In this paper, we first present a novel complex social network structure and a new concept, Quality of Trust (QoT). We then model the optimal social trust path selection with multiple end-to-end QoT constraints as a Multi-Constrained Optimal Path (MCOP) selection problem which is NP-Complete. For solving this challenging problem, we propose an efficient heuristic algorithm, H_OSTP. The results of our experiments conducted on a large real dataset of online social networks illustrate that our proposed algorithm significantly outperforms existing approaches.",
"corpus_id": 14281060,
"score": 2,
"title": "A Heuristic Algorithm for Trust-Oriented Service Provider Selection in Complex Social Networks"
} |
{
"abstract": "The paper of Spratt et al [1], which appears in this month’s issue of European Urology, dealing with the extremely timely topic of the ideal extension of the irradiation field in the case of prophylactic irradiation of the pelvic lymphnodal area—the so-called whole-pelvis radiotherapy (WPRT)—in the setting of radical radiation treatment of clinically localized prostate carcinoma, seems clearly",
"corpus_id": 894323,
"title": "Whole-pelvis Radiotherapy in the Radiation Treatment of Intermediate- and High-risk Prostate Cancer: How to Improve the Therapeutic Ratio of a Potentially Effective but still Unsatisfactory Treatment?"
} | {
"abstract": "© Translational Andrology and Urology. All rights reserved. This present editorial comment accompanies the article by Sandler et al. (1). This article is a valuable contribution to the ongoing discussions on whether and in whom to treat the pelvis of men with prostate cancer getting radiation treatments. A different, less frequent question which seems much more important to us is: why does this discussion continue despite the fact that there are two randomized trials showing no advantage for WPRT but retrospective studies, such as the one by Sandler et al. show a benefit? What could better illustrate the controversy surrounding whole pelvic external beam radiotherapy (WPRT) than the statement made by Avkshto et al. (2) in their very recent publication that “the role of lymph node radiation in modern dose-escalated radiation therapy is a controversial topic”. It is noteworthy that their article was not about whether or not to treat the pelvis. One trial is the Radiation Therapy Oncology Group NRG/ RTOG 9413, last updated in 2018 with a median follow-up of 8.8 years and 14.8 years for surviving patients (3). Patients with localised prostate cancer with an estimated risk of lymph node involvement >15% were treated with 4 months of androgen deprivation therapy (ADT) and 70 Gy to the prostate with or without WPRT. The second trial is the French GETUG-01 study, where not all included patients had high risk cancers and therefore not all patients received ADT. Surprisingly, in a post hoc subgroup analysis, only patients with a low risk of lymph node metastasis and not receiving ADT had a lower rate of event-free survival with WPRT (4). A more complete critique of NRG/RTOG 9413 is mentioned below. In comparison, Sandler et al. analyzed patients with Gleason 9 and 10 diseases only, treated with ADT of various length and either external beam radiotherapy (EBRT) alone, or in combination with brachytherapy (BT). Of the 1,170 patients, 53% received WPRT. 
After a reasonably long median follow-up of 5.1 years for EBRT and 6.3 years for BT patients, they found that there was an advantage for WPRT, especially when combined with BT. However, this was not statistically significant as there was a hazard ratio of 0.7 for both biochemical recurrence-free survival (P=0.07) and prostate cancer-specific survival (P=0.06). The fact that the distant metastasis-free survival, a clinically meaningful endpoint, was not influenced by WPRT (HR 0.9, P=0.7), points towards a lack of a clinical benefit for WPRT. Although these results are not statistically significant, the Kaplan-Meier figure shows that the WPRT plus BT group (n=318) fared the best with a prostate cancer-specific survival rate at 5 years of an astonishing 98%. The editorial by Chen accompanying the article, titled “Randomized Trials and the Goldilocks Problem”, mentioned several important problems (5). However, one has first to define what the Goldilocks problem is. For less literarily educated people like us who had to google it, the Goldilocks problem comes by analogy to the children’s story “Goldilocks and the Three Bears”. It refers to a solution that is just right, not too much and not too little, and sufficiently “close enough” that one can stop testing. We believe that the article of Sandler et al. doesn’t put WPRT in the “just right” category. In a similarly interesting article, published within a month of Sandler’s article, Tharmalingam et al. (6) studied 812 patients enrolled in a prospective multicenter cohort study in the UK. All patients received a combination of EBRT with high-dose-rate BT and treatment of the pelvis was left to each institution’s discretion. In a subset",
"corpus_id": 225962607,
"title": "Is pelvic prophylactic radiotherapy in prostate cancer just right?"
} | {
"abstract": "To clarify the effect of aging on the mineral status of female mice, mineral concentrations in their tissues were determined. Five 2-mo-old, five 6-mo-old, and five 10-mo-old female B10BR mice were fed a commercial diet. Iron, zinc, copper, calcium, magnesium, sodium, and potassium concentrations in the blood, liver, kidney, heart, brain, lung, and spleen of the mice were determined using a flame atomic absorption spectrophotometer. Iron concentrations in the liver, kidney, heart, brain, and spleen increased with age. Significant differences were detected between mice 2 and 6 mo of age and between mice 2 and 10 mo of age. Zinc concentrations in the heart and lung decreased significantly with age. Zinc concentrations in the heart and lung of 10-mo-old mice were significantly lower than those of 2-mo-old mice. It is noteworthy that the copper concentration in the brain of 10-mo-old mice was markedly higher compared with that of younger mice. Calcium accumulation was apparent in the kidney of mice at 10 mo.",
"corpus_id": 19254832,
"score": 1,
"title": "The effect of aging on the mineral status of female mice"
} |
{
"abstract": "This paper presents a hybrid method based on the transmission line modeling method (TLM) aiming to represent the soil ionization effect for grounding systems simulation. This natural phenomenon can be better represented by taking into account the variation of the conductive components present in the TLM circuit and considering the residual resistivity remaining in the soil. The proposed analytical formulation is developed with a focus on the computational implementation of the method. The model is validated by comparing synthetized test results with measured data and other numerical models (residual resistivity, TLM, and analytical model). High precision together with an easy to implement formulation indicates that the methodology presents potential for real-life applications.",
"corpus_id": 3492104,
"title": "An Improved Soil Ionization Representation to Numerical Simulation of Impulsive Grounding Systems"
} | {
"abstract": "DC bias was observed in transformers of the ac system surrounding the Jiuquan–Hunan ±800 kV ultra-high voltage direct current (UHVDC) transmission lines in China, which were operating in a simulation of the unipolar-earth ground mode. Since conventional dc bias protection methods showed certain disadvantages in this case, including large demands on time and calculation resources, this paper proposed an RC combined method to control the dc bias in this ac system. By determining appropriate capacities of the resistor and capacitor based on the requirements of insulation and zero-sequence current, the method is able to stabilize a system under a single-phase ground fault with a dc suppression device installed at the neutral point. In addition, after analyzing the regularity of the dc distribution in the ac system with different substation connections, we also proposed an implementation strategy for dc suppression in the power grid. The protection method was further tested in a numerical dc power grid model and verified by field experiments. The results showed that in the most serious case of the model system, when a current of 5000 A flows into the ground, the dc level at the neutral point of the substation under protection by the proposed method is well controlled. This indicates that the proposed method is applicable to the dc bias protection of a dc–ac system operating at the highest voltage yet in service.",
"corpus_id": 96431373,
"title": "Resistor-Capacitor Combined DC Bias Protection of AC Power Grid of Jiuquan-Hunan ±800 kV Transmission Lines"
} | {
"abstract": "A simple method of controlling the Brushless Doubly-Fed Machine (BDFM) is presented. The controller comprises two Proportional-Integral (PI) modules and requires only the rotor speed feedback. The machine model and the control system are developed in MATLAB. Both simulation and experimental results are presented. The performance of the system is presented in the motoring and generating operations. The experimental tests included in this paper were carried out on a 180 frame size BDFM with a nested-loop rotor.",
"corpus_id": 33055954,
"score": 1,
"title": "Stable Operation of the Brushless Doubly-Fed Machine (BDFM)"
} |
{
"abstract": "Hierarchies have long been used for organization, summarization, and access to information. In this paper we define summarization in terms of a probabilistic language model and use the definition to explore a new technique for automatically generating topic hierarchies by applying a graph-theoretic algorithm, which is an approximation of the Dominating Set Problem. The algorithm efficiently chooses terms according to a language model. We compare the new technique to previous methods proposed for constructing topic hierarchies including subsumption and lexical hierarchies, as well as the top TF.IDF terms. Our results show that the new technique consistently performs as well as or better than these other techniques. They also show the usefulness of hierarchies compared with a list of terms.",
"corpus_id": 3793247,
"title": "Finding topic words for hierarchical summarization"
} | {
"abstract": "Most of the existing multi-document summarization methods decompose the documents into sentences and work directly in the sentence space using a term-sentence matrix. However, the knowledge on the document side, i.e. the topics embedded in the documents, can help the context understanding and guide the sentence selection in the summarization procedure. In this paper, we propose a new Bayesian sentence-based topic model for summarization by making use of both the term-document and term-sentence associations. An efficient variational Bayesian algorithm is derived for model parameter estimation. Experimental results on benchmark data sets show the effectiveness of the proposed model for the multi-document summarization task.",
"corpus_id": 189209,
"title": "Multi-Document Summarization using Sentence-based Topic Models"
} | {
"abstract": "Motivation: Alternative splicing (AS) is a regulated process that directs the generation of different transcripts from single genes. A computational model that can accurately predict splicing patterns based on genomic features and cellular context is highly desirable, both in understanding this widespread phenomenon, and in exploring the effects of genetic variations on AS. Methods: Using a deep neural network, we developed a model inferred from mouse RNA-Seq data that can predict splicing patterns in individual tissues and differences in splicing patterns across tissues. Our architecture uses hidden variables that jointly represent features in genomic sequences and tissue types when making predictions. A graphics processing unit was used to greatly reduce the training time of our models with millions of parameters. Results: We show that the deep architecture surpasses the performance of the previous Bayesian method for predicting AS patterns. With the proper optimization procedure and selection of hyperparameters, we demonstrate that deep architectures can be beneficial, even with a moderately sparse dataset. An analysis of what the model has learned in terms of the genomic features is presented. Contact: frey@psi.toronto.edu Supplementary information: Supplementary data are available at Bioinformatics online.",
"corpus_id": 1326074,
"score": -1,
"title": "Deep learning of the tissue-regulated splicing code"
} |
{
"abstract": "Optical music recognition is a challenging field similar in many ways to optical text recognition. It brings, however, many challenges that traditional pipeline-based recognition systems struggle with. The end-to-end approach has proven to be superior in the domain of handwritten text recognition. We tried to apply this approach to the field of OMR. Specifically, we focused on handwritten music recognition. To resolve the lack of training data, we developed an engraving system for handwritten music called Mashcima. This engraving system is successful at mimicking the style of the CVC-MUSCIMA dataset. We evaluated our model on a portion of the CVC-MUSCIMA dataset and the approach seems to be promising.",
"corpus_id": 253557377,
"title": "Optical Music Recognition using Deep Neural Networks"
} | {
"abstract": "Optical Music Recognition (OMR) has long been without an adequate dataset and ground truth for evaluating OMR systems, which has been a major problem for establishing a state of the art in the field. Furthermore, machine learning methods require training data. We analyze how the OMR processing pipeline can be expressed in terms of gradually more complex ground truth, and based on this analysis, we design the MUSCIMA++ dataset of handwritten music notation that addresses musical symbol recognition and notation reconstruction. The MUSCIMA++ dataset version 0.9 consists of 140 pages of handwritten music, with 91255 manually annotated notation symbols and 82261 explicitly marked relationships between symbol pairs. The dataset allows training and evaluating models for symbol classification, symbol localization, and notation graph assembly, both in isolation and jointly. Open-source tools are provided for manipulating the dataset, visualizing the data and further annotation, and the dataset itself is made available under an open license.",
"corpus_id": 6644531,
"title": "In Search of a Dataset for Handwritten Optical Music Recognition: Introducing MUSCIMA++"
} | {
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, backpropagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms.",
"corpus_id": 1915014,
"score": -1,
"title": "Long Short-Term Memory"
} |
{
"abstract": "This short report discusses a case of solitary colonic polypoid ganglioneuroma associated with melanosis coli in a woman with no systemic manifestations. To our knowledge this is the first ganglioneuroma reported in the literature in association with melanosis coli. The nature and significance of this event remain unclear, although the association may be coincidental, owing to the laxative intake. Further investigation is necessary to clarify this point. The interest of this case lies, moreover, in the rarity of this entity and its endoscopic and histologic resemblance to the sessile polyps frequent in clinical practice.",
"corpus_id": 2880210,
"title": "Solitary colonic polypoid ganglioneuroma"
} | {
"abstract": "BackgroundColorectal polyps of mesenchymal origin represent a small percentage of gastrointestinal (GI) lesions. Nevertheless, they are encountered with increasing frequency since the widespread adoption of colonoscopy screening.Case presentationWe report a case of a small colonic polyp that presented as intramucosal diffuse spindle cell proliferation with a benign cytological appearance, strong and diffuse immunoreactivity for S-100 protein, and pure Schwann cell phenotype. Careful morphological, immunohistochemical and clinical evaluation emphasize the differences from other stromal colonic lesions and distinguish it from schwannoma, a circumscribed benign nerve sheath tumor that rarely arises in the GI tract.ConclusionAs recently proposed, this lesion was finally described as mucosal Schwann cell hamartoma.",
"corpus_id": 9230207,
"title": "Schwann cell hamartoma: case report"
} | {
"abstract": "Cowden's disease is a rare genetic disease characterized by tumor development in many organs. In most, tumors are benign, appear in the third to fourth decade (1) and are most commonly seen in the face, the thyroid, and the gastrointestinal tract. Breast cancer, seen in nearly 50% of female patients with the disease, represents the major prognostic risk factor (2). Cowden's disease itself has been considered a marker for breast cancer (3). The current case report is of a 51-year-old woman whose illness is characteristic of Cowden's disease. The specific pattern of gastrointestinal involvement seen in this patient, however, has not been reported. The patient has extensive glycogenic acanthosis of the esophagus, submucosal fibrosis of the stomach, gastric heterotopia in the duodenum, and ganglioneuromatosis of the colon.",
"corpus_id": 30837653,
"score": 2,
"title": "Ganglioneuromatosis of the colon and extensive glygogenic acanthosis in Cowden's disease"
} |
{
"abstract": "OBJECTIVE\nThe purpose of this study was to determine whether the bispectral index scale (BIS) would provide added benefit to established methods of monitoring conscious sedation with midazolam (M group) or midazolam supplemented with ketamine (MK group).\n\n\nSTUDY DESIGN\nBIS was prospectively and blindly examined in 22 patients receiving outpatient oral surgery with conscious sedation supplemented with local anesthesia.\n\n\nRESULTS\nThe average midazolam dose in the midazolam group over the treatment period was 0.01 mg/kg/h, and the average midazolam plus ketamine dose was 0.01 and 0.05 mg/kg/h, respectively. Mean BIS values throughout the sedation study period were 90 for the midazolam group and 94 for the midazolam plus ketamine group. The addition of ketamine did not lower BIS. BIS values did not alter significantly over time except for an expected transient drop after the midazolam bolus induction.\n\n\nCONCLUSION\nBIS levels remained close to baseline levels, suggesting that BIS would not provide any additional benefit to currently established methods of monitoring patient consciousness during conscious sedation for oral surgery.",
"corpus_id": 1008410,
"title": "BIS monitoring during midazolam and midazolam-ketamine conscious intravenous sedation for oral surgery."
} | {
"abstract": "To investigate the influence of ketamine on the bispectral index (BIS), the spectral edge frequency 90 (SEF 90) and the relative power in four frequency bands (beta, alpha, theta, sigma), we studied 13 patients (ASA I-II) undergoing elective surgery. In the first study (n = 7), we administered ketamine (1.0 mg.kg-1, bolus, i.v.) during propofol anesthesia. Thirty minutes after the administration, BIS, SEF 90 and relative beta power increased significantly. In the second study (n = 6), bolus administration of ketamine (0.5 mg.kg-1 i.v.) followed by continuous infusion was started during propofol anesthesia. The infusion rate of ketamine was 0.5 mg.kg-1.h-1 for 30 minutes and was then increased to 1.0 mg.kg-1.h-1. BIS, SEF 90 and relative beta power increased significantly after ketamine administration, but the parameters did not change in a dose-related manner. We conclude that further investigation is necessary before electroencephalographic parameters can be used as an indicator of anesthesia depth during propofol/ketamine anesthesia.",
"corpus_id": 32742061,
"title": "[The influence of ketamine on the bispectral index, the spectral edge frequency 90 and the frequency bands power during propofol anesthesia]."
} | {
"abstract": "Resectability of hepatocellular carcinoma in patients with chronic liver disease is dramatically limited by the need to preserve sufficient remnant liver in order to avoid postoperative liver insufficiency. Preoperative treatments aimed at downsizing the tumor and promoting hypertrophy of the future remnant liver may improve resectability and reduce operative morbidity. Here we report the case of a patient with a large hepatocellular carcinoma arising from chronic liver disease. Preoperative treatment, including tumor downsizing with transarterial radioembolization and induction of future remnant liver hypertrophy with right portal vein embolization, resulted in a 53% reduction in tumor volume and compensatory hypertrophy in the contralateral liver. The patient subsequently underwent extended right hepatectomy with no postoperative signs of liver decompensation. Pathological examination demonstrated a margin-free resection and major tumor response. This new therapeutic sequence, combining efficient tumor targeting and subsequent portal vein embolization, could improve the feasibility and safety of major liver resection for hepatocellular carcinoma in patients with liver injury.",
"corpus_id": 1460332,
"score": 1,
"title": "Radioembolisation and portal vein embolization before resection of large hepatocellular carcinoma."
} |
{
"abstract": "There are errors in the proof of uniqueness of arithmetic subgroups of the smallest covolume. In this note we correct the proof, obtain certain results which were stated as a conjecture, and we give several remarks on further developments. Mathematics Subject Classification (2000): 11E57 (primary); 22E40 (secondary). 1.1. Let us recall some notation and basic notions. Following [1] we will assume that n is even and n ≥ 4. The group of orientation preserving isometries of hyperbolic n-space is isomorphic to SO(1, n)o, the connected component of the identity of the special orthogonal group of signature (1, n), which can be identified with SO0(1, n), the subgroup of SO(1, n) preserving the upper half space. This group is not Zariski closed in SLn+1, thus in order to construct arithmetically defined subgroups of SO(1, n)o we consider arithmetic subgroups of the orthogonal group SO(1, n) or, more precisely, of groups G = SO(f) where f is an admissible quadratic form defined over a totally real number field k (see [1, Section 2.1]). We have an exact sequence of k-isogenies: 1 → C → G̃ →φ G → 1, (1.1) where G̃ ≅ Spin(f) is the simply connected cover of G and C ≅ μ2 is the center of G̃. This induces an exact sequence in Galois cohomology (see [5, Section 2.2.3]): G̃(k) →φ G(k) →δ H1(k, C) → H1(k, G̃). (1.2) The main idea of this note is that by using (1.2) certain questions about arithmetic subgroups of G can be reduced to questions about the Galois cohomology group H1(k, C). A coherent collection of parahoric subgroups P = (Pv)v∈Vf of G (Vf = Vf(k) denotes the set of finite places of the field k) defines a principal arithmetic subgroup Received October 16, 2006; accepted in revised form March 12, 2007.",
"corpus_id": 2405789,
"title": "Addendum to: On Volumes of Arithmetic Quotients of SO(1,n)"
} | {
"abstract": "In this paper, we determine the volumes of the arithmetic hyperbolic n-manifolds that arise as the orbit space of a principal congruence subgroup, of prime level, of the group of integral, positive, Lorentzian (n + 1) × (n + 1) matrices in terms of Bernoulli numbers, the Riemann zeta function evaluated at positive odd integers, powers of π, and a Dirichlet L-function evaluated at positive even integers. We begin with some notation.",
"corpus_id": 117907668,
"title": "Volumes of integral congruence hyperbolic manifolds."
} | {
"abstract": "• The Royal College of Obstetricians and Gynaecologists supports the Department of Health recommendation to increase consultant presence outside current working hours.\n• Resident, 24-hour consultant cover exerts considerable pressure on staff.\n• Whether permanent consultant presence improves standards of care is unclear.\n\nLearning objectives:\n• To explore the relationship between evidence in support of a permanent consultant presence and pregnancy outcomes.\n• To appreciate the advantages, disadvantages and practical considerations of 24-hour resident consultant cover.\n\nEthical issues:\n• Is it possible to balance improvements in the standard of maternity care with potential detrimental effects on NHS staff?\n\nPlease cite this article as: Edmonds S, Allenby K. Experiences of a 24-hour resident consultant service. The Obstetrician & Gynaecologist 2008;10:107–111.",
"corpus_id": 72028411,
"score": 0,
"title": "Experiences of a 24-hour resident consultant service"
} |
{
"abstract": "The issue of regime complexity in global environmental governance is widely recognized. The academic debate on regime fragmentation has itself however been rather fragmented, with discussions circling around different concepts, including inter-organizational relations, polycentric governance, integrated management, landscape governance, environmental policy integration, coordination, mainstreaming, coherence, policy mixes, governance architectures and systems, regime complexes, institutional interaction, metagovernance and the nexus approach. Moreover, the topic of relationships between different policies is also discussed among practitioners, where the call for synergies is increasingly heard. This article brings together these discussions under the common heading of integrative environmental governance (IEG). The article provides a literature review, and argues for an IEG perspective in which the relationships between governance instruments take center stage.",
"corpus_id": 153233390,
"title": "Integrative environmental governance: enhancing governance in the era of synergies"
} | {
"abstract": "This Editorial introduces a special issue that illustrates a trend toward integrated landscape approaches. Whereas two papers echo older “win–win” strategies based on the trade of non-timber forest products, ten papers reflect a shift from a product to landscape perspective. However, they differ from integrated landscape approaches in that they emanate from sectorial approaches driven primarily by aims such as forest restoration, sustainable commodity sourcing, natural resource management, or carbon emission reduction. The potential of such initiatives for integrated landscape governance and achieving landscape-level outcomes has hitherto been largely unaddressed in the literature on integrated landscape approaches. This special issue addresses this gap, with a focus on actor constellations and institutional arrangements emerging in the transition from sectorial to integrated approaches. This editorial discusses the trends arising from the papers, including the need for a commonly shared concern and sense of urgency; inclusive stakeholder engagement; accommodating and coordinating polycentric governance in landscapes beset with institutional fragmentation and jurisdictional mismatches; alignment with locally embedded initiatives and governance structures; and a framework to assess and monitor the performance of integrated multi-stakeholder approaches. We conclude that, despite a growing tendency toward integrated approaches at the landscape level, inherent landscape complexity renders persistent and significant challenges such as balancing multiple objectives, equitable inclusion of all relevant stakeholders, dealing with power and gender asymmetries, adaptive management based on participatory outcome monitoring, and moving beyond existing administrative, jurisdictional, and sectorial silos. Multi-stakeholder platforms and bridging organizations and individuals are seen as key in overcoming such challenges.",
"corpus_id": 44120538,
"title": "From Synergy to Complexity: The Trend Toward Integrated Value Chain and Landscape Governance"
} | {
"abstract": "This paper endorses the claim that in transitional societies the line between business and crime is elusive because weak states foster a legal order that is formalistic, bureaucratic, politically biased and prone to corruption. Three elements are discussed in the paper. First, the paper briefly analyses the legal and institutional state of affairs in transitional Western Balkans societies that condones organised crime. Secondly, it investigates the phenomenology of (corruptive) criminal behaviour, explaining how it managed to integrate into society in the context of a weak state. Thirdly, the paper investigates why state policies and measures fail to fight crime as a business. In this part, the paper contests the traditional definition of a criminal organisation as a hierarchical one, claiming that mafias are not organisations in a traditional sense. The paper emphasizes that the fight against organised crime is not just a fight against individuals or individual criminal behaviour, but a fight to increase the efficiency of the government. The paper furthermore asserts that a better understanding of the organised crime problem can be achieved only if it is fought and addressed from the point of view of good governance. Namely, the precondition for fighting organised crime or parallel activity is increased efficiency and capacity of governmental institutions. This paper adds to the academic and practical understanding of organised crime in post-transitional settings. Apart from being an instructive and up-to-date source of information for the regional setting it deals with, the paper sheds new light on the understanding of organised crime.",
"corpus_id": 158854408,
"score": 1,
"title": "Crime as a Business, Business as a Crime"
} |
{
"abstract": "This paper evaluates the evidence on return predictability from an economic perspective: it asks whether investors would have been able and willing to exploit dividend price signals in order to allocate capital efficiently. We use a simple model that incorporates a time varying investment opportunity set into a mean-variance portfolio maximization framework, and derive the optimal capital allocation weights for: (i) a naive strategy based on average realized returns; and (ii) a class of strategies that condition on dividend-price signals. While our data supports in-sample evidence of return predictability, the out-of-sample returns of the naive strategy are higher than those of all conditional portfolio specifications based on a certainty equivalent metric and portfolio turnover. The degree of underperformance is most dramatic in the last three decades: an investor who had used dividend-price ratios as signals for capital allocation in the period 1990-2012 would have consistently generated lower returns than by following a naive strategy. These results suggest that dividend-price predictability offers no economic value to investors.",
"corpus_id": 73639452,
"title": "The economic value of predictability in portfolio management"
} | {
"abstract": "Theoretical and empirical studies document a negative relation between stock returns and individual skewness. In these studies, individual skewness has been defined with predictive models, industry groups and even with options' skewness. However, measures of skewness computed only from stock returns, such as historical skewness, do not confirm this negative relation. We propose a model-free measure of individual skewness directly obtained from high-frequency intraday prices, which we call realized skewness. We test whether realized skewness predicts future stock returns by sorting stocks every week according to realized skewness, forming five portfolios and analyzing subsequent weekly returns. We find a negative relation between realized skewness and stock returns in the cross section. A trading strategy that buys stocks in the lowest realized skewness quintile and sells stocks in the highest realized skewness quintile generates an average raw return of 38 basis points per week with a t-statistic of 9.15. This result is robust to different market periods, portfolio weightings, firm characteristics proxies and is not explained by the Fama-French-Carhart factors.",
"corpus_id": 11083509,
"title": "Skewness from High-Frequency Data Predicts the Cross-Section of Stock Returns"
} | {
"abstract": "In the oil and gas industry, there is a growing demand for the application of big data analytics and artificial intelligence (AI) technologies to optimize operations and reduce cost. In this study, we work on productivity prediction, which is an important and challenging task for operators. Unlike previous studies, where full-field or single-well analysis was conducted, we focus on more active operational units, drilling space units (DSUs). Moreover, significant information is extracted from geology reports, which are saved as scanned PDF files, and from well logs. Using the extracted information, a long short-term memory (LSTM) model that can take the spatial-temporal changes of DSUs into consideration is employed to predict the DSUs' productivity. After rigorous validation, it is found that the accuracy of the LSTM model can reach more than 60%, which is 10% higher than a multilayer perceptron (MLP) model proposed in previous research.",
"corpus_id": 25828709,
"score": -1,
"title": "Long short-term memory model for predicting productivity of drilling space units"
} |
{
"abstract": "A high-density map of the region of canine Chromosome 5 (CFA5) surrounding the evolutionary breakpoint between human Chromosomes 1p32 and 17p11 was constructed by integrating a radiation hybrid map including 41 microsatellites, 10 BACs, and 59 genes and a linkage map including 18 markers. A collection of canine genomic survey sequences providing 1.5× coverage was used to identify dog orthologs of human genes, proving instrumental in the development of this map. Of particular interest is the canine BHD gene, within which we have previously described a single nucleotide polymorphism associated with Hereditary Multifocal Renal Cystadenocarcinoma and Nodular Dermatofibrosis (RCND) in German Shepherd dogs. The corresponding region of the human genome is particularly gene rich, containing genes involved in development, metabolism, and cancer that are likely to be of interest in future mapping studies. This current mapping effort on CFA5 expands the degree to which initial findings of linkage in canine families can be followed by successful positional cloning efforts and increases the value of the human genome sequence for defining candidate genes. Moreover, this study demonstrates the utility of genomic survey sequences when combined with accurate genome maps for rapid mapping of disease susceptibility loci.",
"corpus_id": 1028172,
"title": "A high-resolution comparative map of canine Chromosome 5q14.3–q33 constructed utilizing the 1.5× canine genome sequence"
} | {
"abstract": "Jaime F. Modiano*, Matthew Breen* Department of Veterinary Clinical Sciences, College of Veterinary Medicine and Masonic Cancer Center, University of Minnesota, Minneapolis/St. Paul, MN Department of Molecular Biomedical Sciences, College of Veterinary Medicine, and Center for Comparative Medicine and Translational Research, North Carolina State University, Raleigh, NC *Correspondence: Dr. Jaime F. Modiano, Department of Veterinary Clinical Sciences, College of Veterinary Medicine, University of Minnesota, 1365 Gortner Ave., St Paul, MN 55108, USA; Tel: 612-625-7436; fax: 612-624-0751; e-mail: modiano@umn.edu Dr. Matthew Breen, Department of Molecular Biomedical Sciences, College of Veterinary Medicine, North Carolina State University, 4700 Hillsborough Street, Raleigh, NC 27606, USA; Tel: 919-513-1467; Fax: 919-513-7301; e-mail: Matthew_Breen@ncsu.edu",
"corpus_id": 54144483,
"title": "Shared pathogenesis of human and canine tumors - an inextricable link between cancer and evolution"
} | {
"abstract": "Several features related to waterfowl carcasses were studied at Eyebrow Lake, Saskatchewan, Canada, during a botulism epizootic in the summer of 1989. Dummy carcasses, constructed by stretching duck skins over wooden forms, were used to assess the reaction of waterbirds to carcasses. There was no significant difference in the number of American coots, ducks, grebes, or total birds present when dummy carcasses were or were not present. Only one of 42 freshly-dead bird carcasses marked and observed twice each day was removed by a scavenger prior to the development of large maggots. Maggots developed in all carcasses and were visible externally a mean of 3.9 days after placement of the carcasses. The effectiveness of carcass collection and disposal operations was tested by marking carcasses on the day prior to two scheduled cleanups. Only 32% of marked carcasses were recovered. Large carcasses and carcasses on or near islands were recovered at a higher frequency than were small carcasses and carcasses not near islands, respectively.",
"corpus_id": 44734812,
"score": 1,
"title": "OBSERVATIONS ON WATERFOWL CARCASSES DURING A BOTULISM EPIZOOTIC"
} |
{
"abstract": "Magnetic levitation has shown its potential in many engineering fields with promising future applications. This paper deals with the asymptotic tracking problem of desired reference position trajectories in an active mechanical suspension system using magnetic levitation foundations. A differential flatness-based output feedback controller is proposed for accomplishing this control objective using only position measurements. The electromagnetic circuit dynamics is considered for design of the control voltage to regulate the position of the mechanical system in accordance with the specified motion planning. A robust observer is also presented for real-time estimation of the unavailable signals of acceleration and velocity. The electric current is algebraically reconstructed through the estimated signals. The efficient performance of the proposed observer-control scheme is verified by computer simulation.",
"corpus_id": 4816996,
"title": "Application of Magnetic Levitation in Active Mechanical Suspension Systems"
} | {
"abstract": "A Generalized Proportional Integral Output Feedback Controller for the Robust Perturbation Rejection in a Mechanical System. Hebertt Sira-Ramírez, Francisco Beltrán-Carbajal and Andrés Blanco-Ortega. Abstract—In this article, a Generalized Proportional Integral (GPI) controller is proposed for the efficient rejection of a completely unknown perturbation input in a controlled mass system attached to an uncertain mass-spring-damper mechanical system. We propose a classical compensation network form of the GPI controller, including a sufficient number of extra integrations, which results in a robust perturbation rejection scheme for a trajectory tracking task on the controlled mass subject to the unknown perturbation input. Aside from encouraging simulations, the proposed controller is implemented and tested in a laboratory experimental set-up and its robust performance is clearly assessed by using exactly the same controller in three completely different topological situations. The experiments are repeated including infinite-dimensional perturbations arising from the effects of several added un-modeled flexible appendages carrying unknown masses.",
"corpus_id": 18432612,
"title": "A Generalized Proportional Integral Output Feedback Controller for the Robust Perturbation Rejection in a Mechanical System"
} | {
"abstract": "Abstract This is a simulation study on controlling a Kaibel distillation column with model predictive control (MPC). A Kaibel distillation column has several advantages compared with conventional binary column setups. The Kaibel column separates a feed stream into four product streams using only a single column shell. The distillation process is a multivariable process, which leads to a multivariable control problem. The objective for optimal operation of the column is chosen to be minimization of the total impurity flow. An off-line optimization on a mathematical model leads to temperature setpoints to be used by a controller. An MPC generally obtains a lower total impurity flow than conventional decentralized control when the distillation column is exposed to disturbances. It also counteracts process interactions better than decentralized control.",
"corpus_id": 17393928,
"score": 1,
"title": "Model Predictive Control of a Kaibel Distillation Column"
} |
{
"abstract": "A swarm of bees buzzing “Let it be” by the Beatles or the wind gently howling the romantic “Gute Nacht” by Schubert ‐ these are examples of audio mosaics as we want to create them. Given a target and a source recording, the goal of audio mosaicing is to generate a mosaic recording that conveys musical aspects (like melody and rhythm) of the target, using sound components taken from the source. In this work, we propose a novel approach for automatically generating audio mosaics with the objective to preserve the source’s timbre in the mosaic. Inspired by algorithms for non-negative matrix factorization (NMF), our idea is to use update rules to learn an activation matrix that, when multiplied with the spectrogram of the source recording, resembles the spectrogram of the target recording. However, when applying the original NMF procedure, the resulting mosaic does not adequately reflect the source’s timbre. As our main technical contribution, we propose an extended set of update rules for the iterative learning procedure that supports the development of sparse diagonal structures in the activation matrix. We show how these structures better retain the source’s timbral characteristics in the resulting mosaic.",
"corpus_id": 1069277,
"title": "Let it Bee - Towards NMF-Inspired Audio Mosaicing"
} | {
"abstract": "In this paper, the authors describe how they use an electric bass as a subtle, expressive and intuitive interface to browse the rich sample bank available to most laptop owners. This is achieved by audio mosaicing of the live bass performance audio, through corpus-based concatenative synthesis (CBCS) techniques, allowing a mapping of the multi-dimensional expressivity of the performance onto foreign audio material, thus recycling the virtuosity acquired on the electric instrument with a trivial learning curve. This design hypothesis is contextualised and assessed within the Sandbox#n series of bass+laptop meta-instruments, and the authors describe technical means of the implementation through the use of the open-source CataRT CBCS system adapted for live mosaicing. They also discuss their encouraging early results and provide a list of further explorations to be made with that rich new interface.",
"corpus_id": 14046649,
"title": "Surfing the Waves: Live Audio Mosaicing of an Electric Bass Performance as a Corpus Browsing Interface"
} | {
"abstract": "After eight years of practice on the first hyper-flute prototype (a flute extended with sensors), this article presents a retrospective of its instrumental practice and the new developments planned from both technological and musical perspectives. Design, performance skills, and mapping strategies are discussed, as well as interactive composition and improvisation.",
"corpus_id": 12353648,
"score": 2,
"title": "Eight Years of Practice on the Hyper-Flute: Technological and Musical Perspectives"
} |
{
"abstract": "This article documents recent improvements to the acoustic control system of the Thermal Acoustic Fatigue Apparatus (TAFA), a progressive wave tube test facility at the NASA Langley Research Center, Hampton, VA. A brief summary of past acoustic performance is first given to serve as a basis of comparison with the new performance data using a multiple-input, closed-loop, narrow-band controller. Performance data in the form of test section acoustic power spectral densities and coherence are presented for a variety of input spectra including uniform, band-limited random and an expendable launch vehicle payload bay environment.",
"corpus_id": 2600731,
"title": "Closed-Loop Control for Sonic Fatigue Testing Systems"
} | {
"abstract": "One important topic in the aeronautic and aerospace industries is the reproduction of random pressure field, with prescribed spatial correlation characteristics, in laboratory conditions. In particular, the random-wall pressure fluctuations induced by a Turbulent Boundary Layer (TBL) excitation are a major concern for cabin noise problem, as this excitation has been identified as the dominant contribution in cruise conditions. As in-flight measurements require costly and time-consuming measurement campaigns, the laboratory reproduction has attracted considerable attention in recent years. Some work has already been carried out for the laboratory simulation of the excitation pressure field for several random fields. It has been found that TBL reproduction is very demanding in terms of number of loudspeakers per correlation length, and it should require a dense and non-uniform arrangement of acoustic sources due to the different spanwise and streamwise correlation lengths involved. The present study addresses the problem of directly simulating the vibroacoustic response of an aircraft skin panel using a near-field array of suitably driven loudspeakers. It is compared with the use of an array of shakers and piezoelectric actuators. It is shown how the wavenumber filtering capabilities of the panel reduces the number of sources required, thus dramatically enlarging the frequency range over which the TBL vibro-acoustic response is reproduced with accuracy. Direct reconstruction of the TBL-induced panel response is found to be feasible over the hydrodynamic coincidence frequency range using a limited number of actuators driven by optimal signals. It is shown that piezoelectric actuators, which have more practical implementation than shakers, provide a more effective reproduction of the TBL response than near-field loudspeakers.",
"corpus_id": 20060632,
"title": "The reproduction of the response of an aircraft panel to turbulent boundary layer excitations in laboratory conditions"
} | {
"abstract": "The Patient Protection and Affordable Care Act (ACA) sets in motion a wide range of programs that substantially affected the health system in the United States and signify a moderate but important regulatory shift in the role of the federal government in public health. This article briefly addresses two interesting policy paradoxes about the ACA. First, while the legislation primarily addresses health care financing and insurance and establishes only a few initiatives directly targeting public health, the ACA nevertheless has the potential to produce extensive public health benefits across the United States population by improving access to health care and services and reducing cost. Essentially, the ACA does not take the explicit form of a public health law but instead strives to advance public health indirectly through its effects. Second, while the ACA does not establish a right to health - or even a right to health insurance - in the United States, it does set in motion a number of significant structural and normative changes to United States law that comport with the attainment of the right to health. Most significantly, key provisions of the bill are designed to improve availability, accessibility, acceptability, and quality of conditions necessary for health, and to prompt the government to respect, protect, and fulfill these conditions. These developments mean that, to a degree, the United States essentially has undertaken the same types of legal and policy steps that a country would be required to take to uphold the right to health without actually recognizing the right to health in any formal or legally binding way. Despite these dual paradoxes and the upside potential for public health improvements resulting from the ACA, the public health impact of the law remains uncertain and will be decided by numerous subsequent regulatory and implementation decisions. 
The ACA authorizes multiple federal agencies to engage in rulemaking, a process that will largely dictate the systemic and health impacts that will become its legacy. This reality opens up ample opportunity to bolster public health aspects and interpretations of the law, and to simultaneously augment the corresponding components of the right to health.",
"corpus_id": 25062020,
"score": 0,
"title": "The Patient Protection and Affordable Care Act, Public Health, and the Elusive Target of Human Rights"
} |
{
"abstract": "Graphene oxide (GO) was functionalized covalently with pH-sensitive poly(2-(diethylamino) ethyl methacrylate) (PDEA) by surface-initiated in situ atom transfer radical polymerization. The structure of the PDEA-grafted GO (GO-PDEA) were examined by Fourier-transform infrared spectroscopy, proton nuclear magnetic resonance spectroscopy, X-ray photoelectron spectroscopy, thermogravimetric analysis and atomic force microscopy. The grafted PDEA endowed the GO sheets with good solubility and stability in physiological solutions. Simple physisorption by π-π stacking and hydrophobic interactions on GO-PDEA can be used to load camptothecin (CPT), a widely used water-insoluble cancer drug. The loaded CPT was released only at the lower (acidic) pH normally found in a tumor environment but not in basic and neutral pH. GO-PDEA did not show practical toxicity to N2a cancer cells but the GO-PDEA-CPT complex exhibited high potency in killing N2a cancer cells in vitro. These results suggest that the GO-PDEA nanocargo carrier might be a promising material for site-specific anticancer drug delivery and controlled release.",
"corpus_id": 325057,
"title": "pH-sensitive nanocargo based on smart polymer functionalized graphene oxide for site-specific drug delivery."
} | {
"abstract": "Nowadays cancer remains one of the main causes of death in the world. Current diagnostic techniques need to be improved to provide earlier diagnosis and treatment. Traditional therapy approaches to cancer are limited by lack of specificity and systemic toxicity. In this scenario nanomaterials could be good allies to give more specific cancer treatment, effectively reducing undesired side effects while providing accurate diagnosis and successful therapy. In this context, thanks to their unique physical and chemical properties, graphene, graphene oxide (GO) and reduced graphene (rGO) have recently attracted tremendous interest in biomedicine, including cancer therapy. Herein we analyzed all studies presented in the literature related to fighting cancer using graphene and graphene-based conjugates. In this context, we aimed to provide a full picture of the state of the art, offering new inputs for future strategies in cancer theranostics using graphene. We found an impressively increasing interest in the material for cancer therapy and/or diagnosis. The majority of the works (73%) have been carried out on drug and gene delivery applications, followed by photothermal therapy (32%), imaging (31%) and photodynamic therapy (10%). 27% of the studies focused on theranostic applications. Some of the works discussed here contribute to the growth of the theranostic field, covering the use of imaging (i.e. ultrasonography, positron emission tomography, and fluorescent imaging) combined with one or more therapeutic modalities. We found that the use of graphene in cancer theranostics is still in an early but rapidly growing stage of investigation. Any technology based on nanomaterials can significantly enhance its potential to become a real revolution in medicine if it combines diagnosis and therapy at the same time. 
We performed a comprehensive summary of the latest progress in the use of graphene to fight cancer and highlighted the future challenges and possible innovative theranostic applications.",
"corpus_id": 18655264,
"title": "Graphene as Cancer Theranostic Tool: Progress and Future Challenges"
} | {
"abstract": "Wood is the most important natural and endlessly renewable source of energy and therefore has a major future role as an environmentally cost-effective alternative to burning fossil fuels. The major role of wood is not only the provision of energy but also the provision of energy-sufficient material for our buildings and many other products. In addition, developing wood cells represent one of the most important sinks for excess atmospheric CO2, thereby reducing one of the major contributors to global warming.",
"corpus_id": 655166,
"score": 1,
"title": "Update on Wood Formation in Trees"
} |
{
"abstract": "Abstract Toward the derivation of an effective theory for Polyakov loops in lattice QCD, we examine Polyakov loop correlation functions using the multi-level algorithm which was recently developed by Luscher and Weisz.",
"corpus_id": 1687420,
"title": "Effective potential for Polyakov loops in lattice QCD"
} | {
"abstract": "In non-abelian gauge theories without matter fields, expectation values of large Wilson loops and loop correlation functions are difficult to compute through numerical simulation, because the signal-to-noise ratio is very rapidly decaying for increasing loop sizes. Using a multilevel scheme that exploits the locality of the theory, we show that the statistical errors in such calculations can be exponentially reduced. We explicitly demonstrate this in the SU(3) theory, for the case of the Polyakov loop correlation function, where the efficiency of the simulation is improved by many orders of magnitude when the area bounded by the loops exceeds 1 fm².",
"corpus_id": 15059053,
"title": "Locality and exponential error reduction in numerical lattice gauge theory"
} | {
"abstract": "A general introduction to the topological mechanism responsible for the absolute confinement of quarks inside hadronic bound states is given, including the effects of a finite instanton angle. We then propose a calculational technique for computing these states and their properties, where instead of topology we rely on a perturbative mechanism. It assumes that, even before the topological mechanism can come into effect, there is already a strong inclination of quarks to be confined. In particular, the planar limit of large N QCD should exhibit this mechanism. By renormalizing the infrared divergence of one-loop diagrams, one may already realize a confining potential. In practice, our procedure will require gauge-fixing in advance, but it would be more elegant if, at an intermediate level, the theory with infrared counter terms included could be written as a gauge-invariant effective model. Models of the desired kind are described. They are not renormalizable, but they are local, gauge- and Lorentz-invariant.",
"corpus_id": 121696723,
"score": 2,
"title": "Confinement of quarks"
} |
{
"abstract": "Abstract Weedy species provide excellent opportunities to examine the process of successful colonization of novel environments. Despite the influence of the sexual system on a variety of processes from reproduction to genetic structure, how the sexual system of species influences weediness has received only limited consideration. We examined the hypothesis that weedy plants have an increased likelihood of being self‐compatible compared with nonweedy plants; this hypothesis is derived from Baker's law, which states that species that can reproduce uniparentally are more likely to successfully establish in a new habitat where mates are lacking. We combined a database of the weed (weedy/nonweedy) and introduction status (introduced/native) of plant species found in the USA with a database of plant sexual systems and determined whether native and introduced weeds varied in their sexual systems compared with native and introduced nonweeds. We found that introduced weeds are overrepresented by species with both male and female functions present within a single flower (hermaphrodites) whereas weeds native to the USA are overrepresented by species with male and female flowers present on a single plant (monoecious species). Overall, our results show that Baker's law is supported at the level of the sexual system, thus providing further evidence that uniparental reproduction is an important component of being either a native or introduced weed.",
"corpus_id": 745751,
"title": "Not all weeds are created equal: A database approach uncovers differences in the sexual system of native and introduced weeds"
} | {
"abstract": "The ability of weeds to evolve is key to their success, and the relationship between weeds and humans is marked by co-evolution going back to the agricultural revolution, with weeds evolving to counter human management actions. In recent years, climate change has emerged as yet another selection pressure imposed on weeds by humans, and weeds are likewise very capable of adapting to this latest stress of human origin. This review summarizes 10 ways this adaptation occurs: (1) general-purpose genotypes, (2) life history strategies, (3) ability to evolve rapidly, (4) epigenetic capacity, (5) hybridization, (6) herbicide resistance, (7) herbicide tolerance, (8) cropping systems vulnerability, (9) co-evolution of weeds with human management, and (10) the ability of weeds to ride the climate storm humans have generated. As pioneer species ecologically, these 10 ways enable weeds to adapt to the numerous impacts of climate change, including warming temperatures, elevated CO2, frequent droughts and extreme weather events. We conclude that although these 10 ways present formidable challenges for weed management, the novelty arising from weed evolution could be used creatively to prospect for genetic material to be used in crop improvement, and to develop a more holistic means of managing agroecosystems.",
"corpus_id": 234041132,
"title": "Ten Ways That Weed Evolution Defies Human Management Efforts Amidst a Changing Climate"
} | {
"abstract": "Atropa acuminata Royle Ex Lindl (Atropa acuminata) is under tremendous threat of extinction in its natural habitat. However, the antimicrobial, antileishmanial and anticancer effects of the plant’s extracts have not been reported yet. In the current study, an attempt has been made to evaluate the pharmacological potential of this plant’s extracts against microbes, Leishmania and cancer. The roots, stems and leaves of Atropa acuminata were ground; then, seven different solvents were used alone and in different ratios to prepare crude extracts, which were screened for pharmacological effects. The aqueous, methanolic and ethanolic extracts of all parts carried a broad spectrum of antibacterial activities, while no significant activity was observed with combined solvents. Three types of cytotoxicity assays were performed, i.e., haemolytic, brine shrimp and protein kinase assays. The aqueous extracts of all the parts showed significant haemolytic activity, while the n-hexane extracts of roots showed significant activity against brine shrimp. The acetone extracts strongly inhibited protein kinase, while the methanolic extracts of roots and stems exhibited significant cytotoxic activity. The anti-leishmanial assays revealed that the methanolic extracts of leaves and roots showed significant activity. These findings suggest that this plant could be a potential source of natural-product-based drugs.",
"corpus_id": 49907785,
"score": 1,
"title": "In vitro biological screening of a critically endangered medicinal plant, Atropa acuminata Royle Ex Lindl of north western Himalaya"
} |
{
"abstract": "The origin of electron trapping and negative charging of hydroxylated silica surfaces is predicted based on accurate quantum-mechanical calculations. The calculated electron affinities of the two dominant neutral paramagnetic defects, the nonbridging oxygen center, ≡Si-O*, and the silicon dangling bond, ≡Si*, demonstrate that both defects are deep electron traps and can form the corresponding negatively charged defects. We predict the structure and optical absorption energies of these diamagnetic defects.",
"corpus_id": 7808824,
"title": "Electron trapping at point defects on hydroxylated silica surfaces."
} | {
"abstract": "We report time-resolved studies using femtosecond laser pulses, accompanied by model calculations, that illuminate the difference in the dynamics of ultrashort pulsed laser ablation of different materials. Dielectrics are strongly charged at the surface on the femtosecond time scale and undergo an impulsive Coulomb explosion. This is not seen from metals and semiconductors where the surface charge is effectively quenched.",
"corpus_id": 6354497,
"title": "Surface charging and impulsive ion ejection during ultrashort pulsed laser ablation."
} | {
"abstract": "Abstract This paper describes the derivation of an empirical interatomic potential for the interaction of hydroxide ions with metal oxides. The model is based on the Born model of solids and its major features are firstly that the OH interaction is principally described by a Morse potential, derived originally by Saul et al. using ab initio Hartree-Fock methods, secondly that the parameters describing the short-range interaction of the hydroxyl oxygen with cations follows the approach suggested by Schroder et al. which ensured that the cation-anion equilibrium bond distances were maintained on modifying the anion charge and thirdly that electronic polarizability on the hydroxyl oxygen ion is included. The utility of this approach is described by applying this model to three systems: hydrogen defects in α-quartz; at α-quartz and sodalite surfaces; in modelling non-silicate hydroxide crystal structures, that is Mg(OH)2 and Al(OH)3",
"corpus_id": 97956155,
"score": 2,
"title": "Atomistic simulation of hydroxide ions in inorganic solids"
} |
{
"abstract": "Web applications are now highly useful and powerful, with usage in most fields such as finance, e-commerce, healthcare and more, so they must be well secured. Web applications may contain vulnerabilities, which are exploited by attackers to steal the user's credentials. The Cross Site Scripting (XSS) attack is a critical vulnerability that affects web application security. An XSS attack is an injection of malicious script code into the web application by the attacker, either on the client side within the user's browser or on the server side within the database; this malicious script is written in JavaScript code and injected within untrusted input data on the web application. This study discusses the XSS attack, its taxonomy, and its incidence. In addition, the paper presents the XSS mechanisms used to detect and prevent XSS attacks.",
"corpus_id": 22002683,
"title": "A comparative analysis of Cross Site Scripting (XSS) detecting and defensive techniques"
} | {
"abstract": "Web applications typically interact with a back-end database to retrieve persistent data and then present the data to the user as dynamically generated output, such as HTML web pages. However, this interaction is commonly done through a low-level API by dynamically constructing query strings within a general-purpose programming language, such as Java. This low-level interaction is ad hoc because it does not take into account the structure of the output language. Accordingly, user inputs are treated as isolated lexical entities which, if not properly sanitized, can cause the web application to generate unintended output. This is called a command injection attack, which poses a serious threat to web application security. This paper presents the first formal definition of command injection attacks in the context of web applications, and gives a sound and complete algorithm for preventing them based on context-free grammars and compiler parsing techniques. Our key observation is that, for an attack to succeed, the input that gets propagated into the database query or the output document must change the intended syntactic structure of the query or document. Our definition and algorithm are general and apply to many forms of command injection attacks. We validate our approach with SqlCheckS , an implementation for the setting of SQL command injection attacks. We evaluated SqlCheckS on real-world web applications with systematically compiled real-world attack data as input. SqlCheckS produced no false positives or false negatives, incurred low runtime overhead, and applied straightforwardly to web applications written in different languages.",
"corpus_id": 7953465,
"title": "The essence of command injection attacks in web applications"
} | {
"abstract": "This study examines country-level factors that influence Internet banking diffusion.Direct and mediating effects of economic and technological factors are modeled.National culture is used to explain diffusion levels across different country groups.PLS and cluster analysis provide support for mediating and moderating relationships.Findings yield implications on how to promote Internet banking for policy makers. In the last decade, Internet banking technology has made remarkable progress. However, there is a huge disparity across different nations all over the world in the diffusion of Internet banking services. This leads to the research question of this study: why do different countries exhibit different levels of Internet banking adoption? Previous studies provide limited insight as they were mostly conducted at the individual user level with single-country samples. At the country level, this study proposes an Internet banking diffusion model that examines the impact of economic, technological and cultural factors on Internet banking diffusion. The hypothesized relationships in the research model were statistically tested with secondary data collected from a sample of 33 European countries. The results indicate that the effects of socio-economic and technology-related factors on Internet banking diffusion are fully mediated by Internet access. Furthermore, the findings suggest that national culture is an important moderator as it makes differences in Internet banking diffusion as well as Internet access across different country groups. The country-level analysis contributes to the advancement of Internet banking theory and practice, and provides some useful insights to researchers, practitioners and policy makers on how to enhance Internet banking diffusion.",
"corpus_id": 207711563,
"score": -1,
"title": "Internet banking diffusion: A country-level analysis"
} |
{
"abstract": "A noninvasive method for estimating the mean capillary pressure Pcap and the pre- and postcapillary resistance ratio Rv/Ra in human fingers is described. Volume change in a finger segment was detected with a transmittance-type infra-red photoelectric plethysmograph during a gradual and linear increase in occluding cuff pressure. There was an inflection point in the volume curve which would be produced by the difference in compliance between the arterial and venous vascular beds in the segment. This transitional point was assumed to represent the complete compression of the venous vascular bed at that cuff pressure level. Thus Pcap was defined as the cuff pressure corresponding to the inflection point. Rv/Ra was calculated from Pcap, the venous pressure Pv and the mean arterial pressure Pam. The latter two pressures, Pv and Pam, were also indirectly and simultaneously measured by the compression pressure of another cuff and by our new type of volume oscillation method, respectively. The values of Pcap and Rv/Ra were in good agreement with those reported by other investigators.",
"corpus_id": 373829,
"title": "Noninvasive method for estimating the mean capillary pressure and pre- and postcapillary resistance ratio in human fingers"
} | {
"abstract": "The purpose of this study was to compare the venous occlusion method for measuring capillary pressure with the stop-flow isovolumetric method in the cat small intestine. Venous occlusion pressures were determined from the inflection point of the venous pressure tracing after sudden occlusion of the venous outflow cannula. Venous occlusion pressure was highly correlated (r = 0.98, P less than 0.01) with stop-flow capillary pressure. This finding indicates that the major sites of fluid filtration and vascular capacitance reside at the same segment of the intestinal microcirculation. The venous occlusion method is a relatively simple technique for measuring whole-organ capillary pressure that is not constrained by the technical difficulties associated with volumetric/gravimetric techniques.",
"corpus_id": 20638105,
"title": "A new method for estimating intestinal capillary pressure."
} | {
"abstract": "Studies were made on the hemodynamics of the isolated autoperfused dog intestine preparation. An attempt was made to explain the mechanism involved in the observed increase in venous resistance seen as the arterial pressure in the segment was reduced. This rise in venous resistance was reduced or abolished by sympatholytic agents (phentolamine and Dibenzyline) and by local anesthetics (tetracaine). When areas of intestine were surgically denervated and then subjected to hemodynamic studies 10–19 days later the venous resistance response was nearly or completely absent in all cases. The response was not significantly affected by the infusion of hexamethonium or dichloroisoproterenol or by changes in hematocrit ratio. It was concluded that the rise in venous resistance seen as arterial pressure is reduced is mediated by a local sympathetic axon reflex with the receptor on the arterial side of the circulation and the effector located on the venous side. The decrease in arterial resistance seen as arterial pr...",
"corpus_id": 6492872,
"score": 2,
"title": "Evidence for local arteriovenous reflex in intestine."
} |
{
"abstract": "Ontology-based data access (OBDA) is widely accepted as an important ingredient of the new generation of information systems. In the OBDA paradigm, potentially incomplete relational data is enriched by means of ontologies, representing intensional knowledge of the application domain. We consider the problem of conjunctive query answering in OBDA. Certain ontology languages have been identified as FO-rewritable (e.g., DL-Lite and sticky-join sets of TGDs), which means that the ontology can be incorporated into the user's query, thus reducing OBDA to standard relational query evaluation. However, all known query rewriting techniques produce queries that are exponentially large in the size of the user's query, which can be a serious issue for standard relational database engines. In this paper, we present a polynomial query rewriting for conjunctive queries under unary inclusion dependencies. On the other hand, we show that binary inclusion dependencies do not admit polynomial query rewriting algorithms.",
"corpus_id": 626462,
"title": "Polynomial Conjunctive Query Rewriting under Unary Inclusion Dependencies"
} | {
"abstract": "Grounded conjunctive query answering over OWL-DL ontologies is intractable in the worst case, but we present novel techniques which allow for efficient querying of large expressive knowledge bases in secondary storage. In particular, we show that we can effectively answer grounded conjunctive queries without building a full completion forest for a large Abox (unlike state of the art tableau reasoners). Instead we rely on the completion forest of a dramatically reduced summary of the Abox. We demonstrate the effectiveness of this approach in Aboxes with up to 45 million assertions.",
"corpus_id": 9088289,
"title": "Scalable Grounded Conjunctive Query Evaluation over Large and Expressive Knowledge Bases"
} | {
"abstract": "Background Obesity and asthma have increased in westernised countries. Maternal obesity may increase childhood asthma risk. If this relation is causal, it may be mediated through factors associated with maternal adiposity, such as fetal development, pregnancy complications or infant adiposity. We investigated the relationships of maternal body mass index (BMI) and fat mass with childhood wheeze, and examined the influences of infant weight gain and childhood obesity. Methods Maternal prepregnancy BMI and estimated fat mass (from skinfold thicknesses) were related to asthma, wheeze and atopy in 940 children. Transient or persistent/late wheeze was classified using questionnaire data collected at ages 6, 12, 24 and 36 months and 6 years. At 6 years, skin-prick testing was conducted and exhaled nitric oxide and spirometry measured. Infant adiposity gain was calculated from skinfold thickness at birth and 6 months. Results Greater maternal BMI and fat mass were associated with increased childhood wheeze (relative risk (RR) 1.08 per 5 kg/m2, p=0.006; RR 1.09 per 10 kg, p=0.003); these reflected associations with transient wheeze (RR 1.11, p=0.003; RR 1.13, p=0.002, respectively), but not with persistent wheeze or asthma. Infant adiposity gain was associated with persistent wheeze, but not significantly. Adjusting for infant adiposity gain or BMI at 3 or 6 years did not reduce the association between maternal adiposity and transient wheeze. Maternal adiposity was not associated with offspring atopy, exhaled nitric oxide, or spirometry. Discussion Greater maternal adiposity is associated with transient wheeze but not asthma or atopy, suggesting effects upon airway structure/function but not allergic predisposition.",
"corpus_id": 5990469,
"score": 0,
"title": "The relationship between maternal adiposity and infant weight gain, and childhood wheeze and atopy"
} |
{
"abstract": "We prove that if the outer billiard map around a plane oval is algebraically integrable in a certain non-degenerate sense then the oval is an ellipse. In this note, an outer billiard table is a compact convex domain in the plane bounded by an oval (closed smooth strictly convex curve) C. Pick a point x outside of C. There are two tangent lines from x to C; choose one of them, say, the right one from the viewpoint of x, and reflect x in the tangency point. One obtains a new point, y, and the transformation T : x 7→ y is the outer (a.k.a. dual) billiard map. We refer to [3, 4, 5] for surveys of outer billiards. If C is an ellipse then the map T possesses a 1-parameter family of invariant curves, the homothetic ellipses; these invariant curves foliate the exterior of C. Conjecturally, if an outer neighborhood of an oval C is foliated by the invariant curves of the outer billiard map then C is an ellipse – this is an outer version of the famous Birkhoff conjecture concerning the conventional, inner billiards. In this note we show that ellipses are rigid in the much more restrictive sense of algebraically integrable outer billiards; see [2] for the case of inner billiards.",
"corpus_id": 3119770,
"title": "ON ALGEBRAICALLY INTEGRABLE OUTER BILLIARDS"
} | {
"abstract": "In this paper, we give a short survey of recent results on the algebraic version of the Birkhoff conjecture for integrable billiards on surfaces of constant curvature. We also discuss integrable magnetic billiards. As a new application of the algebraic technique, we study the existence of polynomial integrals for the two-sided magnetic billiards introduced by Kozlov and Polikarpov. This article is part of the theme issue ‘Finite dimensional integrable systems: new trends and methods’.",
"corpus_id": 52284487,
"title": "A survey on polynomial in momenta integrals for billiard problems"
} | {
"abstract": "The paper analyzes some fundamental properties of the solution semiflow of nonsymmetric cooperative standard (S) cellular neural networks (CNNs) with a typical three-segment piecewise-linear (pwl) neuron activation. Two relevant subclasses of SCNNs, corresponding to one-dimensional circular SCNNs with two-sided or single-sided positive interconnections between nearest neighboring neurons only, are considered. For these subclasses it is shown that the associated solution semiflow satisfies the fundamental properties of the CONVERGENCE CRITERION, the NONORDERING OF LIMIT SETS and the LIMIT SET DICHOTOMY, and that this is true although the semiflow is not eventually strongly monotone. As a consequence such CNNs are almost convergent, i.e., almost all solutions converge toward an equilibrium point as time tends to infinity. To the authors' knowledge the paper is the first rigorous investigation on the geometry of limit sets and convergence properties of cooperative SCNNs with a pwl neuron activation. All available convergence results in the literature indeed concern a modified cooperative CNN model where the original pwl activation of the SCNN model is replaced by a continuously differentiable strictly increasing sigmoid function. The main results in the paper are established by conducting a deep analysis of the properties of the omega-limit sets of the solution semiflow defined by the considered subclasses of SCNNs. In doing so the paper exploits and extends some mathematical tools for monotone systems in order that they can be applied to pwl vector fields that govern the dynamics of SCNNs. By using some transformations and referring to specific examples it is also shown that the treatment in the paper can be extended to other subclasses of SCNNs.",
"corpus_id": 11376985,
"score": 1,
"title": "Limit Set Dichotomy and Convergence of semiflows Defined by Cooperative Standard CNNs"
} |
{
"abstract": "Petal senescence is a type of programmed cell death (PCD) that is tightly regulated by multiple genes. We recently reported that a putative membrane protein, InPSR26, regulates progression of PCD during petal senescence in Japanese morning glory (Ipomoea nil). Reduced InPSR26 expression in transgenic plants (PSR26r lines) resulted in accelerated petal senescence with hastened development of PCD symptoms, and transcript levels of autophagy-related genes were reduced in the petals. Autophagy visualized by monodansylcadaverine staining indicated reduced autophagic activity in the PSR26r plants. The results from our recent studies suggest that InPSR26 acts to delay the progression of PCD during petal senescence, possibly through regulation of the autophagic process. In this addendum, we discuss the role of autophagy in petal senescence as it relates to these findings.",
"corpus_id": 2596904,
"title": "Autophagy regulates progression of programmed cell death during petal senescence in Japanese morning glory"
} | {
"abstract": "Different strategies of petal senescence and some important events associated with it have been discussed. On the basis of sensitivity to ethylene and associated symptoms of senescence, petal senescence has been classified into five different classes; besides changes in membrane permeability, autophagy and involvement of VPEs (vacuolar processing enzymes), degradation of nucleic acids, protein turnover and remobilization of essential nutrients during petal senescence have been discussed. The nucleus appears to play a central role in administering the execution of the events associated with petal senescence. Protein turnover appears to be an important factor governing petal senescence in both ethylene-sensitive and ethylene-insensitive flower systems, with the loss of membrane integrity, vacuolar autophagy and remobilization of essential nutrients being its important consequences. Autophagy seems to be the main process responsible for cell dismantling and remobilization of macromolecules, besides the final disintegration of the nucleus. A large number of senescence-associated genes have been found to be differentially expressed during petal senescence. On the basis of the available literature, a schematic model representing some important events associated with petal senescence has been constructed. The review recommends that more elaborate work is required at the cellular and organelle level to understand the ethylene-independent pathway and its execution in both ethylene-sensitive and ethylene-insensitive flower systems. It also recommends that ethylene sensitivity should not generally be assigned to plants at the family level on the basis of the response of a few species in a family.",
"corpus_id": 35887616,
"title": "Flower Senescence-Strategies and Some Associated Events"
} | {
"abstract": "Monitoring proteins in real time and in homogeneous solution has always been a difficult task. We have applied a fluorophore-labeled molecular probe based on a high-affinity platelet-derived growth factor (PDGF) aptamer for the ultrasensitive detection of PDGF in homogeneous solutions. The aptamer is labeled with fluorescein to specifically bind with the PDGF protein. Fluorescence anisotropy is used for the real-time monitoring of the binding between the aptamer and the protein. When the labeled aptamer is bound with its target protein, the rotational motion of the fluorophore attached to the complex becomes much slower because of an increased molecular weight after binding, resulting in a significant fluorescence anisotropy change. Using the anisotropy change, we are able to detect the binding events between the aptamer and the protein in real time and in homogeneous solutions (detection without separation). This assay is highly selective and ultrasensitive. It can detect PDGF in the subnanomolar range. The new method for protein detection is simple and inherits all of the advantages of molecular aptamers. Efficient oncoprotein detection using aptamer-based fluorescence anisotropy measurement will find wide applications in protein monitoring, in cancer diagnosis as well as other studies in which protein analysis is important.",
"corpus_id": 25325632,
"score": 0,
"title": "Molecular aptamer for real-time oncoprotein platelet-derived growth factor monitoring by fluorescence anisotropy."
} |
{
"abstract": "Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, very often they miss the best local solutions in the complex solution space. We propose a hybrid meta-heuristic learning approach combining evolutionary learning and local search methods (using 1st- and 2nd-order error information) to improve the learning and achieve faster convergence than a direct evolutionary approach. The proposed technique is tested on three different chaotic time series and the test results are compared with some popular neuro-fuzzy systems and a cutting angle method of global optimization. Empirical results reveal that the proposed technique is efficient in spite of the computational complexity.",
"corpus_id": 948,
"title": "Optimization of evolutionary neural networks using hybrid learning algorithms"
} | {
"abstract": "In this paper, we apply the Beta Basis Function Neural Network (BBFNN) trained with cuckoo search (CS) for time series predictions. The cuckoo search algorithm optimizes the network parameters. In order to evaluate the effectiveness of the proposed method, we have carried out some experiments on four data sets: Mackey Glass, Lorenz attractor, Henon map and Box-Jenkins. We give also simulation examples to compare the effectiveness of the model with the other known methods in the literature. The results show that the CS-BBFNN model produces a better generalization performance.",
"corpus_id": 5401562,
"title": "Designing of Beta Basis Function Neural Network for optimization using cuckoo search (CS)"
} | {
"abstract": "First-order difference equations arise in many contexts in the biological, economic and social sciences. Such equations, even though simple and deterministic, can exhibit a surprising array of dynamical behaviour, from stable points, to a bifurcating hierarchy of stable cycles, to apparently random fluctuations. There are consequently many fascinating problems, some concerned with delicate mathematical aspects of the fine structure of the trajectories, and some concerned with the practical implications and applications. This is an interpretive review of them.",
"corpus_id": 2243371,
"score": 2,
"title": "Simple mathematical models with very complicated dynamics"
} |
{
"abstract": "Many biomaterials are being developed to be used for cartilage substitution and hemiarthroplasty implants. The lubrication property is a key feature of the artificial cartilage. The frictional behavior of human articular cartilage, stainless steel and polyvinyl alcohol (PVA) hydrogel were investigated under cartilage-on-PVA hydrogel contact, cartilage-on-cartilage contact and cartilage-on-stainless steel contact using pin-on-plate method. Tests under static load, cyclic load and 1 min load change were used to evaluate friction variations in reciprocating motion. The results showed that the lubrication property of cartilage-on-PVA hydrogel contact and cartilage-on-stainless steel contact were restored in both 1 min load change and cyclic load tests. The friction coefficient of PVA hydrogel decreased from 0.178 to 0.076 in 60 min, which was almost one-third of the value under static load in continuous sliding tests. In each test, the friction coefficient of cartilage-on-cartilage contact maintained far lower value than other contacts. It is indicated that a key feature of artificial cartilage is the biphasic lubrication properties.",
"corpus_id": 9766679,
"title": "Influence of dynamic load on friction behavior of human articular cartilage, stainless steel and polyvinyl alcohol hydrogel as artificial cartilage"
} | {
"abstract": "Natural cartilage surfaces are macroscopically curved, multi-porous viscoelastic biological materials with extremely high water content, but whether the curved surface configuration plays an important role in the contact and frictional properties of natural cartilage is not yet completely understood. In the current study, cartilage samples were taken from 18–24-month-old bovine femora. Contact characteristics and frictional properties at two cartilage configurations were investigated using the UMT-2 testing rig, and the five-point sliding average method was adopted to analyze the test data. The results indicated that the surface displacement was strongly associated with the plate cartilage surface and appeared to be representative of the cartilage surface configuration. The summit of the surface load lagged behind that of the surface displacement under the same conditions. The coefficient of friction showed obviously different variation with time at the two cartilage surface configurations because the two surface displacements had different amplitudes and opposite directions as a function of sliding length. Therefore, surface configuration played the main role in the variables of contact displacement, contact load and coefficient of friction, through the direction and magnitude of the surface displacement, while applied load and sliding velocity had a secondary role.",
"corpus_id": 3799015,
"title": "Investigation of contact characteristics and frictional properties of natural articular cartilage at two different surface configurations"
} | {
"abstract": "Several studies have investigated the neural basis of effortful emotion regulation (ER) but the neural basis of automatic ER has been less comprehensively explored. The present study investigated the neural basis of automatic ER supported by ‘implementation intentions’. 40 healthy participants underwent fMRI while viewing emotion-eliciting images and used either a previously-taught effortful ER strategy, in the form of a goal intention (e.g., try to take a detached perspective), or a more automatic ER strategy, in the form of an implementation intention (e.g., “If I see something disgusting, then I will think these are just pixels on the screen!”), to regulate their emotional response. Whereas goal intention ER strategies were associated with activation of brain areas previously reported to be involved in effortful ER (including dorsolateral prefrontal cortex), ER strategies based on an implementation intention strategy were associated with activation of right inferior frontal gyrus and ventro-parietal cortex, which may reflect the attentional control processes automatically captured by the cue for action contained within the implementation intention. Goal intentions were also associated with less effective modulation of left amygdala, supporting the increased efficacy of ER under implementation intention instructions, which showed coupling of orbitofrontal cortex and amygdala. The findings support previous behavioural studies in suggesting that forming an implementation intention enables people to enact goal-directed responses with less effort and more efficiency.",
"corpus_id": 10844416,
"score": 0,
"title": "The Neural Correlates of Emotion Regulation by Implementation Intentions"
} |
{
"abstract": "Introduction. Constipation is a common adverse drug reaction. Objective. Study associations between drugs and constipation in nursing home residents. Design. Cross-sectional study. Material and Methods. Nursing home residents above 60 years of age were included. Demographics, diet, physical activity, activity of daily living, nutritional status, use of drugs, and diseases were recorded. Constipation was defined as functional constipation or constipation-predominant IBS according to the Rome III criteria and/or regular use of laxatives. Drugs were classified according to the Anatomical-Therapeutic-Chemical Classification System (ATC), and anticholinergic effect was noted. Results. In all, 79 men and 188 women with a mean age of 85.4 (SD 7.1) years were included. The prevalence of constipation was 71.5%. Use of drugs in general, including polypharmacy, was not associated with constipation. Reduced activity of daily living (OR = 0.71, 95% CI : 0.60–0.84, P < 0.001), other antidepressants (N06AX) (OR 3.08, 95% CI : 1.09–8.68, P = 0.03), and benzodiazepine derivatives (N05BA) (OR = 2.80, 95% CI : 1.12–7.04, P = 0.03) were significantly associated with constipation; drugs with markedly anticholinergic effect (OR = 3.7, 95% CI : 0.78–17.53, P = 0.10), natural opium alkaloid (N02AA) (OR = 5.01, 95% CI : 0.95–25.94, P = 0.06), and propionic acid derivatives (M01AE) (OR = 7.00, 95% CI : 0.75–65.08, P = 0.09) showed a trend. Conclusion. In elderly with constipation, focus should be on specific groups of drugs and nonpharmacological factors, not on drugs in general.",
"corpus_id": 333941,
"title": "Drugs and Constipation in Elderly in Nursing Homes: What Is the Relation?"
} | {
"abstract": "AIMS AND OBJECTIVES\nTo develop and examine the effectiveness of individualised intervention to reduce constipation among older adults in nursing homes.\n\n\nBACKGROUND\nIn long-term care facilities, approximately 60-80% of the residents have symptoms of constipation. Constipation may lead to haemorrhoids, faecal impaction, ulcers, intestinal bleeding and can also lead to a decrease in quality of life. Although a high prevalence of constipation in older adults can be seen, there is a lack of empirical evidence for delivering interventions based on individual risk factors of constipation. Many factors cause constipation but the risk factors are different for each individual.\n\n\nDESIGN\nA prospective, randomised control trial conducted in northern Taiwan.\n\n\nMETHODS\nNursing home residents (n = 43) were randomly assigned to either the control group or the experimental group. The control group received no extra care from the researcher while the experimental group received an individualised intervention and an eight-week follow-up. Participants were assessed using the Bristol Stool Form Scale, the Patient Assessment of Constipation Symptoms, types and dosages of laxative, and bowel sound observations. Data were taken at baseline, four weeks as well as eight weeks after the intervention.\n\n\nRESULTS\nThe participants in the experimental group had a significantly higher increase in the frequency of defecation (group effect, p = 0·029) and in bowel sounds (interaction effect, p = 0·010) compared to those in the control group. 
However, the two groups did not differ significantly in symptoms and the severity of the constipation symptoms, Bristol Stool Form and use of laxatives.\n\n\nCONCLUSIONS\nThe results of this trial suggest that the individualised intervention may be appropriate for decreasing constipation among nursing home residents and encourage further study and confirmation.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nUsing individualised intervention to enhance the self-care ability related to constipation among older adults is recommended.",
"corpus_id": 9100096,
"title": "Effectiveness of individualised intervention on older residents with constipation in nursing home: a randomised controlled trial."
} | {
"abstract": "Video abstract Video",
"corpus_id": 976932,
"score": 1,
"title": "Autologous cord blood harvesting in North Eastern Italy: ethical questions and emerging hopes for curing diabetes and celiac disease"
} |
{
"abstract": "INTRODUCTION\nBreast cancer is the most prevalent cancer in women, with slightly more than ten percent developing the disease in Western countries. Mammography screening is a well established method to detect breast cancer.\n\n\nAIMS\nThe aim of the position statement is to review critically the advantages and shortcomings of population based mammography screening.\n\n\nMATERIALS AND METHODS\nLiterature review and consensus of expert opinion.\n\n\nRESULTS AND CONCLUSION\nMammography screening programmes vary worldwide. Thus there are differences in the age at which screening is started and stopped and in the screening interval. Furthermore differences in screening quality (such as equipment, technique, resolution, single or double reading, recall rates) result in a sensitivity varying from 70% to 94% between studies. Reporting results of screening is subject to different types of bias such as overdiagnosis. Thus because of the limitations of population-based mammography screening programmes an algorithm for individualized screening is proposed.",
"corpus_id": 2848229,
"title": "EMAS position statement: individualized breast cancer screening versus population-based mammography screening programmes."
} | {
"abstract": "BACKGROUND\nLifetime risks are often used in communications on cancer to the general public. The most-cited estimate for breast cancer risk (1 in 8 women), however, appears to be outdated. Here we describe the breast cancer burden in the Netherlands over time by means of lifetime and age-conditional risks. The aim is to identify changes in absolute risk of primary breast cancer diagnosis and death.\n\n\nMETHODS\nData on breast cancer incidence, mortality and size of the female population were retrieved from the Netherlands Cancer Registry and Statistics Netherlands. Lifetime and age-conditional risks were calculated for 1990, 2000 and 2010 using the life-table method (DevCan software).\n\n\nRESULTS\nThe lifetime risk of developing breast cancer (ductal carcinoma in situ and invasive) in 1990, 2000 and 2010 was estimated at 10.8 (1 in 9.3 women), 13.5 (1 in 7.4) and 15.2% (1 in 6.6), respectively. Most women were still diagnosed after the age of 50, with the highest risk between 60 and 70 years in 2010. The lifetime risk of breast cancer death was 3.8% (1 in 27) in 2010, which is lower than in 1990 (4.5%; 1 in 22) and 2000 (4.2%; 1 in 24).\n\n\nCONCLUSION\nBreast cancer risk has increased to 1 in 6.6 women being diagnosed during their lifetime (invasive cancer only: 1 in 7.4), whereas risk of breast cancer death has decreased from 1 in 22 to 1 in 27 women. To keep cancer management and prevention up-to-date, it remains important to closely monitor the ever-changing breast cancer burden.",
"corpus_id": 6028153,
"title": "Breast cancer diagnosis and death in the Netherlands: a changing burden."
} | {
"abstract": "The effect of potassium on the migration of vascular smooth muscle cells was analyzed in media made with extracellular potassium concentrations of 3, 4, 5, and 6 mmol/L. The migration of cultured porcine coronary artery cells was stimulated with platelet-derived growth factor (PDGF)-BB. In the first study, cells were exposed to PDGF-BB at concentrations of 0, 10, or 20 ng/mL for 5 hours with the use of a Boyden chamber. Cells were quiescent overnight in 0.5% fetal bovine serum in Dulbecco's modified Eagle's medium with an extracellular potassium concentration of 4 mmol/L. With increasing potassium concentration, migration was significantly inhibited (P<0. 02, 2-way ANOVA). In the cells exposed to 10 ng/mL PDGF-BB, migration ranged from 500+/-86% to 294+/-44% (value in wells with 0 ng/mL PDGF-BB and 4 mmol/L potassium concentration=100%) in medium containing 3 to 6 mmol/L extracellular potassium concentration (P<0. 03). Long-term potassium exposure was investigated in cells grown in 5% serum in Dulbecco's modified Eagle's medium with an extracellular potassium concentration of 3, 4, 5, or 6 mmol/L for 3 to 4 weeks. Migration was assessed with 0 or 20 ng/mL PDGF-BB. Migration was significantly inhibited by the elevation of extracellular potassium concentration (P<0.01, 2-way ANOVA). With 20 ng/mL PDGF-BB, the migration rates ranged from 152+/-11% in medium with 3 mmol/L potassium to 69+/-5% in 6 mmol/L potassium (P<0.01). Increases in extracellular potassium concentration within the physiological range significantly and directly inhibit vascular smooth muscle cell migration.",
"corpus_id": 7040222,
"score": 1,
"title": "Inhibition of vascular smooth muscle cell migration by elevation of extracellular potassium concentration."
} |
{
"abstract": "The very limited capacity of short-term or working memory is one of the most prominent features of human cognition. Most studies have stressed delimiting the upper bounds of this memory in memorization tasks rather than the performance of everyday tasks. We designed a series of experiments to test the use of short-term memory in the course of a natural hand-eye task where subjects have the freedom to choose their own task parameters. In this case subjects choose not to operate at the maximum capacity of short-term memory but instead seek to minimize its use. In particular, reducing the instantaneous memory required to perform the task can be done by serializing the task with eye movements. These eye movements allow subjects to postpone the gathering of task-relevant information until just before it is required. The reluctance to use short-term memory can be explained if such memory is expensive to use with respect to the cost of the serializing strategy.",
"corpus_id": 28350278,
"title": "Memory Representations in Natural Tasks"
} | {
"abstract": "The process of fixation identification—separating and labeling fixations and saccades in eye-tracking protocols—is an essential part of eye-movement data analysis and can have a dramatic impact on higher-level analyses. However, algorithms for performing fixation identification are often described informally and rarely compared in a meaningful way. In this paper we propose a taxonomy of fixation identification algorithms that classifies algorithms in terms of how they utilize spatial and temporal information in eye-tracking protocols. Using this taxonomy, we describe five algorithms that are representative of different classes in the taxonomy and are based on commonly employed techniques. We then evaluate and compare these algorithms with respect to a number of qualitative characteristics. The results of these comparisons offer interesting implications for the use of the various algorithms in future work.",
"corpus_id": 10730254,
"title": "Identifying fixations and saccades in eye-tracking protocols"
} | {
"abstract": "The aim of this paper is to explain basic kinematic concepts applied to humanoid robots. The structure of a Bioloid with three degrees of freedom (DoFs) for the arms and six DoFs for the legs is considered. The solution of direct and inverse kinematics (IK) is presented for both arms and legs. Our main contribution is to solve in a detailed manner the legs IK because this is the basis for planning walking motions.",
"corpus_id": 16606789,
"score": -1,
"title": "Explicit Analytic Solution for Inverse Kinematics of Bioloid Humanoid Robot"
} |
{
"abstract": "The commercial ly available alloy \" E v e r d u r 10t0\" (containing about 3.48 wt % Silicon, t .14 wt % Manganese, rest Copper) has a face-centred-cubic s t ruc ture and is known to fault profusely on deformat ion at room tempera tu re El, 2]. In the p resen t invest igat ion an alloy of very near ly identical composit ion was east locally and examined for faul t ing by filing at l iquid ni t rogen tempera ture . Filings of the alloy prepared at l iquid ni t rogen t empera tu re were quickly compac ted into a d i f f ractometer sample (this required about an hour) and",
"corpus_id": 1503713,
"title": "Stacking faults in a face-centred-cubic Copper-Silicon-manganese alloy"
} | {
"abstract": "Abstract In a f.c.c. metal, deformation stacking faults on the (111) planes produce peak shifts, and twin faults produce peak asymmetries. In addition, both kinds of faults contribute to the particle size broadening. Samples of o.f.h.c. copper were filed under liquid nitrogen and measured at −160°C. Under these conditions pure copper shows a probability for deformation faulting comparable to that of 80-20 brass filed at room temperature, and a probability for twin faulting. The faults anneal out rapidly at room temperature. Samples of a brass were filed under liquid nitrogen and measured at either −160°C or at room temperature. The probabilty of faulting increases with increasing Zn content and is appreciably greater if filed at liquid-nitrogen temperature than if filed at room temperature.",
"corpus_id": 135683083,
"title": "Stacking faults by low-temperature cold work in copper and alpha brass☆"
} | {
"abstract": "Root canal disinfection is of utmost importance in the success of the treatment, thus, a novel method for achieving root canal disinfection by electromagnetic waves, creating a synergistic reaction via electric and thermal energy, was created. To study electromagnetic stimulation (EMS) for the disinfection of root canal in vitro, single rooted teeth were instrumented with a 45.05 Wave One Gold reciprocating file. Specimens were sterilized and inoculated with Enterococcus faecalis ATCC 29,212, which grew for 15 days to form an established biofilm. Samples were treated with 6% sodium hypochlorite (NaOCl), 1.5% NaOCl 1.5% NaOCl with EMS, 0.9% saline with EMS or 0.9% saline. After treatments, the colony forming units (CFU) was determined. Data was analyzed by Wilcoxon Rank Sums Test (α = 0.05). One sample per group was scored and split for confocal laser scanning microscopy imaging. There was a significant effect with the use of NaOCl with or without EMS versus 0.9% saline with or without EMS (p = 0.012 and 0.003, respectively). CFUs were lower when using 0.9% saline with EMS versus 0.9% saline alone (p = 0.002). Confocal imaging confirmed CFU findings. EMS with saline has an antibiofilm effect against E. faecalis and can potentially be applied for endodontic disinfection.",
"corpus_id": 199651682,
"score": 1,
"title": "Use of electromagnetic stimulation on an Enterococcus faecalis biofilm on root canal treated teeth in vitro"
} |
{
"abstract": "An efficient graph based image segmentation algorithm exploiting a novel and fast turbo pixel extraction method is introduced. The images are modeled as weighted graphs whose nodes correspond to super pixels; and normalized cuts are utilized to obtain final segmentation. Utilizing super pixels provides an efficient and compact representation; the graph complexity decreases by hundreds in terms of node number. Connected K-means with convexity constraint is the key tool for the proposed super pixel extraction. Once the pixels are grouped into super pixels, iterative bi-partitioning of the weighted graph, as introduced in normalized cuts, is performed to obtain segmentation map. Supported by various experiments, the proposed two stage segmentation scheme can be considered to be one of the most efficient graph based segmentation algorithms providing high quality results.",
"corpus_id": 3346085,
"title": "Efficient graph-based image segmentation via speeded-up turbo pixels"
} | {
"abstract": "We present a novel image superpixel segmentation approach using the proposed lazy random walk (LRW) algorithm in this paper. Our method begins with initializing the seed positions and runs the LRW algorithm on the input image to obtain the probabilities of each pixel. Then, the boundaries of initial superpixels are obtained according to the probabilities and the commute time. The initial superpixels are iteratively optimized by the new energy function, which is defined on the commute time and the texture measurement. Our LRW algorithm with self-loops has the merits of segmenting the weak boundaries and complicated texture regions very well by the new global probability maps and the commute time strategy. The performance of superpixel is improved by relocating the center positions of superpixels and dividing the large superpixels into small ones with the proposed optimization algorithm. The experimental results have demonstrated that our method achieves better performance than previous superpixel approaches.",
"corpus_id": 10549410,
"title": "Lazy Random Walks for Superpixel Segmentation"
} | {
"abstract": "High-resolution satellite images contain a huge amount of information. Shadows in such images generate real problems in classifying and extracting the required information. Although signals recorded in shadow area are weak, it is still possible to recover them. Significant work is already done in shadow detection direction but, classifying shadow pixels from vegetation pixels correctly is still an issue as dark vegetation areas are still misclassified as shadow in some cases. In this letter, a new image index is developed for shadow detection employing multiple bands. Shadow pixels are classified from the index histogram by an automatic threshold identification procedure. The whole approach is applied on different study areas and high accuracies are achieved (average of 97%). The linear correlation method is then applied to compensate the classified shadow pixels. Two standard approaches of shadow detection are then applied to the same study areas to validate the proposed approach. The results show that the proposed approach achieves the best results. It also gives robust shadow detection results in classifying shadow from vegetation pixels comparable to the other two considered standard approaches.",
"corpus_id": 37006640,
"score": -1,
"title": "Accurate Shadow Detection From High-Resolution Satellite Images"
} |
{
"abstract": "BackgroundThis study was designed to analyze a group of non-operated patients admitted to our surgical ward for incidence and type of documented complication. We classified and categorised these complications according to the definition of the Association of Surgeons of the Netherlands (ASN). Our main interest was to identify adverse events for non-operated patients that are caused by medical management and thus preventable.MethodsComplications were prospectively collected in our registry, which is part of an electronic medical patient file, and in retrospective analysed. All non-operated patients admitted to our surgical ward between January 2003 and January 2006 have been analysed for type and incidence of complications.ResultsWe recorded 437 complications in 364 (8%) of 4602 non-operated patients and we categorised 196 (45%) of these events in the Hospital - Provider group. In this last category 161 (82%) events were related to medical management and appeared to be preventable. Numerous different types of complications were recorded (n = 69) among the 437 events. Of all the complications, 75 (17%) were found to be a negative effect/failure of therapy.ConclusionThe incidence of complications in non-operated patients at our surgical ward was 8%, with a great variety in types of events documented. Almost half of all complications (45%) were recorded in the Hospital-Provider category and appeared to be preventable, which needs further investigation.",
"corpus_id": 387853,
"title": "Incidence and type of complications in non-operated patients at a surgical ward"
} | {
"abstract": "Introduction: Adverse events in hospitals constitute a serious problem with grave consequences. Many studies have been conducted to gain an insight into this problem, but a general overview of the data is lacking. We performed a systematic review of the literature on in-hospital adverse events. Methods: A formal search of Embase, Cochrane and Medline was performed. Studies were reviewed independently for methodology, inclusion and exclusion criteria and endpoints. Primary endpoints were incidence of in-hospital adverse events and percentage of preventability. Secondary endpoints were adverse event outcome and subdivision by provider of care, location and type of event. Results: Eight studies including a total of 74 485 patient records were selected. The median overall incidence of in-hospital adverse events was 9.2%, with a median percentage of preventability of 43.5%. More than half (56.3%) of patients experienced no or minor disability, whereas 7.4% of events were lethal. Operation- (39.6%) and medication-related (15.1%) events constituted the majority. We present a summary of evidence-based interventions aimed at these categories of events. Conclusions: Adverse events during hospital admission affect nearly one out of 10 patients. A substantial part of these events are preventable. Since a large proportion of the in-hospital events are operation- or drug-related, interventions aimed at preventing these events have the potential to make a substantial difference.",
"corpus_id": 3207031,
"title": "The incidence and nature of in-hospital adverse events: a systematic review"
} | {
"abstract": "A concept of biohythane production by combining biohydrogen and biomethane together via two-stage anaerobic fermentation (TSAF) has been recently proposed and considered as a promising approach for sustainable hythane generation from waste biomass. The advantage of biohythane over traditional biogas are more environmentally benign, higher energy recovery and shorter fermentation time. However, many of current efforts to convert waste biomass into biohythane are still at the bench scale. The system bioprocess study and scale up for industrial application are indispensable. This paper outlines the general approach of biohythane by comparing with other biological processes. The technical challenges are highlighted towards scale up of biohythane system, including functionalization of biohydrogen-producing reactor, energy efficiency, and bioprocess engineering of TSAF.",
"corpus_id": 4524182,
"score": 1,
"title": "Bioprocess engineering for biohythane production from low-grade waste biomass: technical challenges towards scale up."
} |
{
"abstract": "Public safety is a matter of national security and people’s livelihoods. In recent years, intelligent video-surveillance systems have become important active-protection systems. A surveillance system that provides early detection and threat assessment could protect people from crowd-related disasters and ensure public safety. Image processing is commonly used to extract features, e.g., people, from a surveillance video. However, little research has been conducted on the relationship between foreground detection and feature extraction. Most current video-surveillance research has been developed for restricted environments, in which the extracted features are limited by having information from a single foreground; they do not effectively represent the diversity of crowd behavior. This paper presents a general framework based on extracting ensemble features from the foreground of a surveillance video to analyze a crowd. The proposed method can flexibly integrate different foreground-detection technologies to adapt to various monitored environments. Furthermore, the extractable representative features depend on the heterogeneous foreground data. Finally, a classification algorithm is applied to these features to automatically model crowd behavior and distinguish an abnormal event from normal patterns. The experimental results demonstrate that the proposed method’s performance is both comparable to that of state-of-the-art methods and satisfies the requirements of real-time applications.",
"corpus_id": 206444708,
"title": "Extracting foreground ensemble features to detect abnormal crowd behavior in intelligent video-surveillance systems"
} | {
"abstract": "Speedy abnormal event detection meets the growing demand to process an enormous number of surveillance videos. Based on inherent redundancy of video structures, we propose an efficient sparse combination learning framework. It achieves decent performance in the detection phase without compromising result quality. The short running time is guaranteed because the new method effectively turns the original complicated problem to one in which only a few costless small-scale least square optimization steps are involved. Our method reaches high detection rates on benchmark datasets at a speed of 140-150 frames per second on average when computing on an ordinary desktop PC using MATLAB.",
"corpus_id": 6070091,
"title": "Abnormal Event Detection at 150 FPS in MATLAB"
} | {
"abstract": "This paper proposes new low-dimensional image features that enable images to be very efficiently matched. Image matching is one of the key technologies for many vision-based applications, including template matching, block motion estimation, video compression, stereo vision, image/video near-duplicate detection, similarity join for image/video database, and so on. Normalized cross correlation (NCC) is one of widely used method for image matching with preferable characteristics such as robustness to intensity offsets and contrast changes, but it is computationally expensive. The proposed features, derived by the method of Lagrange multipliers, can provide upper-bounds of NCC as a simple dot product between two low-dimensional feature vectors. By using the proposed features, NCC-based image matching can be effectively accelerated. The matching performance with the proposed features is demonstrated using an image database obtained from actual broadcast videos. The new features are shown to outperform other methods: multilevel successive elimination algorithm (MSEA), discrete cosine transform (DCT) coefficients, and histograms, achieving very high precision while only slightly sacrificing recall.",
"corpus_id": 14031272,
"score": -1,
"title": "Simple low-dimensional features approximating NCC-based image matching"
} |
{
"abstract": "This paper deals with improvements to the contrast source inversion method which is widely used in microwave tomography. First, the method is reviewed and weaknesses of both the criterion form and the optimization strategy are underlined. Then, two new algorithms are proposed. Both of them are based on the same criterion, similar but more robust than the one used in contrast source inversion. The first technique keeps the main characteristics of the contrast source inversion optimization scheme but is based on a better exploitation of the conjugate gradient algorithm. The second technique is based on a preconditioned conjugate gradient algorithm and performs simultaneous updates of sets of unknowns that are normally processed sequentially. Both techniques are shown to be more efficient than original contrast source inversion.",
"corpus_id": 2053451,
"title": "On Algorithms Based on Joint Estimation of Currents and Contrast in Microwave Tomography"
} | {
"abstract": "A computational approach based on an innovative stochastic algorithm, namely, the particle swarm optimizer (PSO), is proposed for the solution of the inverse-scattering problem arising in microwave-imaging applications. The original inverse-scattering problem is reformulated in a global nonlinear optimization one by defining a suitable cost function, which is minimized through a customized PSO. In such a framework, this paper is aimed at assessing the effectiveness of the proposed approach in locating, shaping, and reconstructing the dielectric parameters of unknown two-dimensional scatterers. Such an analysis is carried out by comparing the performance of the PSO-based approach with other state-of-the-art methods (deterministic, as well as stochastic) in terms of retrieval accuracy, as well as from a computational point-of-view. Moreover, an integrated strategy (based on the combination of the PSO and the iterative multiscaling method) is proposed and analyzed to fully exploit complementary advantages of nonlinear optimization techniques and multiresolution approaches. Selected numerical experiments concerning dielectric scatterers different in shape, dimension, and dielectric profile, are performed starting from synthetic, as well as experimental inverse-scattering data.",
"corpus_id": 17437127,
"title": "Computational approach based on a particle swarm optimizer for microwave imaging of two-dimensional dielectric scatterers"
} | {
"abstract": "This paper deals with numerical processing techniques and practical applications of active microwave imaging. Different wavefront processing are presented, from an immediate use of measured projections to more complex procedures. Both spectral approaches to diffraction tomography and spatial iterative methods for generalized imaging are considered using multi‐incidence of multifrequency techniques for 3D and/or 2D objects. The technology of the so‐called microwave camera is presented for the fast recording of the scattered field with arrays of probes involving one‐ or two‐dimensional sensors at a single frequency or in a broad‐frequency band. Three different systems are depicted: a single‐frequency linear sensor devoted to industrial applications (on‐line transverse control of conveyed products), a single‐frequency planar microwave camera for biomedical applications and research, and a broad‐frequency linear microwave camera for civil engineering applications (detection of the rebars in reinforced concrete strctures). Microwave images obtained experimentally with the three systems are presented on configurations of practical interest for each field of application.",
"corpus_id": 29000853,
"score": 2,
"title": "Microwave tomography: From theory to practical imaging systems"
} |
{
"abstract": "Acquired resistance to drugs commonly used for lymphoma treatment poses a significant barrier to improving lymphoma patient survival. Previous work with a lymphoma tissue culture model indicates that selection for resistance to oxidative stress confers resistance to chemotherapy-induced apoptosis. This suggests that adaptation to chronic oxidative stress can contribute to chemoresistance seen in lymphoma patients. Oxidative stress-resistant WEHI7.2 cell variants in a lymphoma tissue culture model exhibit a range of apoptosis sensitivities. We exploited this phenotype to test for mitochondrial changes affecting sensitivity to apoptosis in cells made resistant to oxidative stress. We identified impaired release of cytochrome c, and the intermembrane proteins adenylate kinase 2 and Smac/DIABLO, indicating inhibition of the pathway leading to permeabilization of the outer mitochondrial membrane. Blunting of a glucocorticoid-induced signal and intrinsic mitochondrial resistance to cytochrome c release contributed to both points of resistance. The level of Bcl-2 family members or a difference in Bim induction were not contributing factors. The extent of cardiolipin oxidation following dexamethasone treatment, however, did correlate with apoptosis resistance. The differences found in the variants were all proportionate to the degree of resistance to glucocorticoid treatment. We conclude that tolerance to oxidative stress leads to mitochondrial changes that confer resistance to apoptosis.",
"corpus_id": 187070,
"title": "Mitochondrial Adaptations to Oxidative Stress Confer Resistance to Apoptosis in Lymphoma Cells"
} | {
"abstract": "Extensive research has been done in the search for innovative treatments against colon adenocarcinomas; however, the incidence rate of patients remains a major cause of cancer-related deaths in Malaysia. Natural bioactive compounds such as curcumin have been substantially studied as an alternative to anticancer drug therapies and have been surmised as a potent agent but, nevertheless, remain deficient due to its poor cellular uptake. Therefore, efforts now have shifted toward mimicking curcumin to synthesize novel compounds sharing similar effects. A synthetic analog, (Z)-3-hydroxy-1-(2-hydroxyphenyl)-3-phenylprop-2-ene-1-one (DK1), was recently synthesized and reported to confer improved bioavailability and selectivity toward human breast cancer cells. This study, therefore, aims to assess the anticancer mechanism of DK1 in relation to the induction of in vitro cell death in selected human colon cancer cell lines. Using the3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide(MTT) assay, the cytotoxicity of DK1 towards HT29 and SW620 cell lines were investigated. Acridine orange/propidium iodide (AO/PI) dual-staining assay and flow cytometry analyses (cell cycle analysis, Annexin/V-FITC and JC-1 assays) were incorporated to determine the mode of cell death. To further determine the mechanism of cell death, quantitative real-time polymerase chain reaction (qRT-PCR) and proteome profiling were conducted. Results from this study suggest that DK1 induced changes in cell morphology, leading to a decrease in cell viability and subsequent induction of apoptosis. DK1 treatment inhibited cell viability and proliferation 48 h post treatment with IC50 values of 7.5 ± 1.6 µM for HT29 cells and 14.5 ± 4.3 µM for SW620 cells, causing cell cycle arrest with increased accumulation of cell populations at the sub-G0/G1phaseof 74% and 23%, respectively. 
Flow cytometry analyses showed that DK1 treatment in cancer cells induced apoptosis, as indicated by DNA fragmentation and depolarization of the mitochondrial membrane. qRT-PCR results show significant upregulation in the expression of caspase-9 in both HT29 and SW620 cell lines, further supporting that cell death induction by DK1 proceeds via an intrinsic pathway. These outcomes therefore demonstrate DK1 as a potential anticancer agent for colon adenocarcinoma due to its apoptosis-inducing attributes.",
"corpus_id": 4789100,
"title": "DK1 Induces Apoptosis via Mitochondria-Dependent Signaling Pathway in Human Colon Carcinoma Cell Lines In Vitro"
} | {
"abstract": "The pro-apoptotic protein Bak is converted from a latent to an active form by damage-induced signals. This process involves an early exposure of an occluded N-terminal epitope of Bak in intact cells. Here we report a subsequent damage-induced change in Bak, detected using an antibody to the central BH-1 domain. Bak co-immunoprecipitated with Bc1-xL both in undamaged cells and early after damage, when the N-terminal epitope was exposed but the BH-1 epitope remained occluded. A subsequent decrease in binding of Bak to Bc1-xL correlated with exposure of an epitope in the Bak BH-1 domain. Overexpression of Bc1-xL did not affect the kinetics of exposure of the Bak N-terminal epitope but delayed exposure of the BH-1 domain. Cytochrome c release from mitochondria facilitates the activation of apoptotic caspases. The majority of cells with exposed Bak BH-1 domains contained cytosolic cytochrome c. However, a small proportion of cells exhibited exposed Bak BH-1 domains that co-localized with mitochondrial cytochrome c. The data are consistent with a two-step model for the activation of Bak by drug-induced damage signals where dissociation of Bc1-xL from the BH-1 domain of Bak occurs immediately prior to or concomitantly with cytochrome c release.",
"corpus_id": 6311970,
"score": 2,
"title": "Cellular damage signals promote sequential changes at the N-terminus and BH-1 domain of the pro-apoptotic protein Bak"
} |
{
"abstract": "To analyze quantitative trait loci (QTLs) affecting flooding tolerance and other physiological and morphological traits in Echinochloa crus-galli, a restriction fragment length polymorphism (RFLP) map was constructed using 55 plants of the F2 population (E. crus-galli var. praticola × E. crus-galli var. formosensis). One hundred forty-one loci formed 41 linkage groups. The total map size was 1,468 cM and the average size of linkage groups was 35.8 cM. The average distance between markers was 14.7 cM and the range was 0–37.2 cM. Early comparisons to the genetic maps of other taxa suggest appreciable synteny with buffelgrass (Pennisetum spp.) and sorghum (Sorghum spp.). One hundred ninty-one F2 plants were used to analyze QTLs of flooding tolerance, plant morphology, heading date, number of leaves, and plant height. For flooding tolerance, two QTLs were detected and one was mapped on linkage group 24. Other traits, including plant morphology, heading date, number of leaves, and plant height were highly correlated. Three genomic regions accounted for most of the mapped QTLs, each explaining 2–4 of the significant marker-trait associations. The high observed correlation between the traits appears to result from QTLs with a large contribution to the phenotypic variance at the same or nearby locations.",
"corpus_id": 3034646,
"title": "Construction of a comparative RFLP map of Echinochloa crus-galli toward QTL analysis of flooding tolerance"
} | {
"abstract": "Molecular-oxygen deficiency leads to altered cellular metabolism and can dramatically reduce crop productivity. Plants that survive or succumb to transient submergence differ in the timing and duration of carbohydrate consumption and anaerobic metabolism. The increased production of alcohol dehydrogenase, which is required for anaerobic fermentation, paradoxically involves the formation of a reactive oxygen species. Activation of a Rho of plant (Rop) G-protein results in an increase in hydrogen peroxide that correlates with elevation of alcohol dehydrogenase expression. Tolerance of oxygen deficiency requires both activation and inactivation of the G-protein by negative-feedback regulation. We propose that the magnitude and the duration of the signaling can provide tolerance of oxygen deficiency through management of carbohydrate consumption and avoidance of oxidative stress.",
"corpus_id": 45500246,
"title": "Plant responses to hypoxia--is survival a balancing act?"
} | {
"abstract": "MnO2/graphene nanosheets (MnO2/GNs) hybrid composite can be prepared by spray-drying technique with either graphene oxide (GO) or hydrazine-reduced graphene as precursor. The characterization results of the as-synthesized composites indicate that MnO2 and GNs are uniformly distributed and intertangled forming porous microspheres in a diameter of 2–4 μm. This special secondary structure is beneficial for an intimate contact between MnO2 and GNs accounting for improved conductivity. In addition, the high surface area and abundant porosity enables aqueous electrolyte to penetrate deeply inside the hybrid microspheres, and facilitating the Na+ insertion/release process, which increases the utilization of active component resulting in enhanced supercapacitance. MnO2/graphene hybrid microspheres exhibit a good cycling performance as the intertangled graphene buffers the volume change of MnO2 during charge–discharge cycles.",
"corpus_id": 94422262,
"score": 0,
"title": "A facile fabrication of MnO2/graphene hybrid microspheres with a porous secondary structure for high performance supercapacitors"
} |
{
"abstract": "In general, due to the interactions among subsystems, it is difficult to design an decentralized controller for nonlinear interconnected systems. In this study, the model of nonlinear interconnected systems is studied via decentralized fuzzy control method with time delay and polytopic uncertainty. First, the nonlinear interconnected system is represented by an equivalent Takagi and Sugeno type fuzzy model. And the represented model can be rewritten as Parameterized Linear Matrix Inequalities (PLMIs), that is, LMIs whose coefficients are functions of a parameter confined to a compact set. We show that the resulting fuzzy controller guarantees the asymptotic stability and disturbance attenuation of the closed-loop system in spite of controller gain variations within a resulted polytopic region by example and simulations.",
"corpus_id": 2909622,
"title": "Robust and nonfragile H∞ decentralized fuzzy model control method for nonlinear interconnected system with time delay"
} | {
"abstract": "In this paper, we propose an integral sliding mode control with the bound estimation of the uncertain nonlinear parameters for the robot dynamics. For the bound estimation, we assume that the upper bound of the uncertain nonlinearities is represented as a Fredholm integral equation of the first kind. We also provide a sufficient condition for the existence of such a representation. The construction of an adaptation law is only dependent on the sliding surface function. Using the estimated bound, the integral sliding mode control is constructed in such a way that the prescribed sliding surface will attract every system's trajectory and the trajectory will remain within a small boundary layer of the sliding surface for all subsequent time.",
"corpus_id": 12623573,
"title": "Integral sliding mode control with adaptive boundary of nonlinearities for robot manipulators"
} | {
"abstract": "Abstract : Methods are well known for determining testing times to minimize the mean cost of testing plus mean cost of an undetected failure (linear in the mean time between failure and detection), when testing does not degrade a good system. Here, we introduce a model in which the ith test may either cause a failure, with probability beta, or increase the remaining failure rate to lambda(i) lambda(i-7) without changing the form of the conditional lifetime distribution. Algorithms are given for finding the best testing times in cases of uniform and exponential failure time distributions. Optimization over a single cycle is considered first, and then the case with component renewals is solved using the mean loss per unit time criterion. (Author)",
"corpus_id": 107375944,
"score": 1,
"title": "Optimal Inspection Schedules for Failure Detection when Tests Hasten Failures."
} |
{
"abstract": "Cryptosporidium spp. and microsporidia are opportunistic parasites affecting a wide range of hosts in which they can be potentially life threatening in immunocompromised individuals. Diagnosis usually relies on the identification of the stained Cryptosporidium oocyst or microsporidial spores, but these methods lack sensitivity and require highly trained technicians to perform and interpret the results. Molecular diagnosis offers an alternative with both superior sensitivity and specificity as compared to microscopy. Although replacing microscopy with nucleic acid based methods is hampered by the higher costs, in particular in developing countries, multiplexing the detection of more than one parasite in a single test has been found to be very effective and would decrease the cost of the test without the need for new equipment, as it would be the case for quantitative PCR. The method shown in this report for the simultaneous detection of Cryptosporidium spp., Enterocytozoon bieneusi and Encephalitozoon intestinalis by multiplex nested PCR, has proved to have several advantages versus microscopy such as higher sensitivity and specificity, low subjectivity and a minimal need for specialist's training to interpret the results. The present multiplex assay can fill an important gap to identify other possible causative agents of several diarrheal diseases which until present remain undiagnosed and can improve the epidemiology of the disease with a more reliable detection method.",
"corpus_id": 1403714,
"title": "A novel nested multiplex PCR for the simultaneous detection and differentiation of Cryptosporidium spp., Enterocytozoon bieneusi and Encephalitozoon intestinalis."
} | {
"abstract": "Cryptosporidium parvum represents a considerable health risk to humans and animals because the parasite has a low infectious dose and is usually present at low numbers in environmental samples, which makes detection problematic. The purpose of this study was to evaluate Cryspovirus as a target for sensitive detection of C. parvum in clinical samples. Semi-quantitative RT-PCR (sqRT-PCR) and quantitative RT-PCR (qRT-PCR) directed to Cryspovirus sequences could detect less than 5 Cryptosporidium oocysts in RNA extracted from C. parvum-containing calf feces. Of interest was that a similar level of sensitivity was observed using RNA present in DNA extracts of the same C. parvum fecal samples. There was a strong correlation between both the sqRT-PCR and qRT-PCR product and number of C. parvum oocysts. Analysis of DNA extracted from a similar number of oocysts using PCR targeting the Cryptosporidium SSU rDNA gene sequence found that nested PCR was necessary to obtain a detectable PCR signal. The availability of DNA allowed for Cryptosporidium genotyping based on SSU rDNA sequencing as well as C. parvum subtyping through GP60 sequencing. By using DNA that contains viral RNA, the assay avoids two separate extractions — one for RNA and one for DNA. This two-step assay, first to detect Cryptosporidium by Cryspovirus-specific RT-PCR followed by nested SSU rDNA PCR for Cryptosporidium genotyping may represent an important tool for identifying the parasite in clinical samples.",
"corpus_id": 86352897,
"title": "RT-PCR specific for Cryspovirus is a highly sensitive method for detecting Cryptosporidium parvum oocysts"
} | {
"abstract": "Purpose of reviewMicrosporidiosis is an emerging and opportunistic infection associated with a wide range of clinical syndromes in humans. This review highlights the research on microsporidiosis in humans during the previous 2 years. Recent findingsThe reduced and compact microsporidian genome has generated much interest for better understanding the evolution of these parasites, and comparative molecular phylogenetic studies continue to support a relationship between the microsporidia and fungi. Through increased awareness and improved diagnostics, microsporidiosis has been identified in a broader range of human populations that, in addition to persons with HIV infection, includes travelers, children, organ transplant recipients, and the elderly. SummaryEffective commercial therapies for Enterocytozoon bieneusi, the most common microsporidian species identified in humans, are still lacking, making the need to develop tissue culture and small animal models increasingly urgent. Environmental transport modeling and disinfection strategies are being addressed for improving water safety. Questions still exist about whether microsporidia infections remain persistent in asymptomatic immune-competent individuals, reactivate during conditions of immune compromise, or may be transmitted to others at risk, such as during pregnancy or through organ donation. Reliable serological diagnostic methods are needed to supplement polymerase chain reaction or histochemistry when spore shedding may be sporadic.",
"corpus_id": 25984369,
"score": 2,
"title": "Microsporidiosis: current status"
} |
{
"abstract": "The technology of automatic document summarization is maturing and may provide a solution to the information overload problem. Nowadays, document summarization plays an important role in information retrieval. With a large volume of documents, presenting the user with a summary of each document greatly facilitates the task of finding the desired documents. Document summarization is a process of automatically creating a compressed version of a given document that provides useful information to users, and multi-document summarization is to produce a summary delivering the majority of information content from a set of documents about an explicit or implicit main topic. The lexical cohesion structure of the text can be exploited to determine the importance of a sentence/phrase. Lexical chains are useful tools to analyze the lexical cohesion structure in a text .In this paper we consider the effect of the use of lexical cohesion features in Summarization, And presenting a algorithm base on the knowledge base. Ours algorithm at first find the correct sense of any word, Then constructs the lexical chains, remove Lexical chains that less score than other, detects topics roughly from lexical chains, segments the text with respect to the topics and selects the most important sentences. The experimental results on an open benchmark datasets from DUC01 and DUC02 show that our proposed approach can improve the performance compared to sate-of-the-art summarization approaches.",
"corpus_id": 1156692,
"title": "Automated Text Summarization Base on Lexicales Chain and graph Using of WordNet and Wikipedia Knowledge Base"
} | {
"abstract": "The use of text summaries in information-seeking research has focused on query-based summaries. Extracting content that resembles the query alone, however, ignores the greater context of the document. Such context may be central to the purpose and meaning of the document. We developed a generic, a query-based, and a hybrid summarizer, each with differing amounts of document context. The generic summarizer used a blend of discourse information and information obtained through traditional surface-level analysis. The query-based summarizer used only query-term information, and the hybrid summarizer used some discourse information along with query-term information. The validity of the generic summarizer was shown through an intrinsic evaluation using a well-established corpus of human-generated summaries. All three summarizers were then compared in an information-seeking experiment involving 297 subjects. Results from the information-seeking experiment showed that the generic summaries outperformed all others in the browse tasks, while the query-based and hybrid summaries outperformed the generic summary in the search tasks. Thus, the document context of generic summaries helped users browse, while such context was not helpful in search tasks. Such results are interesting given that generic summaries have not been studied in search tasks and the that majority of Internet search engines rely solely on query-based summaries.",
"corpus_id": 1743303,
"title": "Summary in context: Searching versus browsing"
} | {
"abstract": "Balancing robot is a robot that relies on two wheels in the process of movement. Basically, to be able to remain standing balanced, the control requires an angle value to be used as tilt set-point. That angle value is a balance point of the robot itself which is the robot's center of gravity. Generally, to find the correct balance point, requires manual measurement or through trial and error, depends on the robot's mechanical design. However, when the robot is at balance state and its balance point changes because of the mechanical moving parts or bringing a payload, the robot will move towards the heaviest side and then fall. In this research, a cascade PID control system is developed for balancing robot to keep it balanced without changing the set-point even if the balance point changes. Two parameter is used as feedback for error variable, angle and distance error. When the robot is about to fall, distance taken from the starting position will be calculated and used to correct angle error so that the robot will still balance without changing the set-point but manipulating the control's error value. Based on the research that has been done, payload that can be brought by the robot is up to 350 grams.",
"corpus_id": 25761537,
"score": -1,
"title": "Tilt set-point correction system for balancing robot using PID controller"
} |
{
"abstract": "While resource quality and predator‐derived chemical cues can each have profound effects on zooplankton populations and their function in ecosystems, the strength and direction of their interactive effects remain unclear. We conducted laboratory experiments to evaluate how stoichiometric food quality (i.e., algal carbon [C] : phosphorus [P] ratios) affects responses of the zooplankter, Daphnia pulicaria, to predator‐derived chemical cues. We compared growth rates, body P content, metabolic rates, life‐history shifts, and survival of differentially P‐nourished Daphnia in the presence and absence of chemical cues derived from fish predators. We found effects of predator cues and/or stoichiometric food quality on all measured traits of Daphnia. Exposure to fish cues led to reduced growth and increased metabolic rates but had little effect on the body %P content of Daphnia. Elevated algal C : P ratios reduced growth and body %P and increased mass‐specific respiration rates. While most of the effects of predator cues and algal C : P ratios of Daphnia were non‐interactive, reduced survival and relatedly reduced population growth rates that resulted from P‐poor food were amplified in the presence of predator‐derived cues. Our results demonstrate that stoichiometric food quality interacts with antipredator responses of Daphnia, but these effects are largely trait dependent and appear connected to animal life‐history evolution. Given the ubiquity of predators and P‐poor food in lake ecosystems, our results highlight the importance of the interactive responses of animals to predator cues and poor nutrition.",
"corpus_id": 91202347,
"title": "Fear and food: Effects of predator‐derived chemical cues and stoichiometric food quality on Daphnia"
} | {
"abstract": "A key challenge for ecologists is to predict how single and joint effects of global warming and predation risk translate from the individual level up to ecosystem functions. Recently, stoichiometric theory linked these levels through changes in body stoichiometry, predicting that both higher temperatures and predation risk induce shifts in energy storage (increases in C-rich carbohydrates and reductions in N-rich proteins) and body stoichiometry (increases in C : N and C : P). This promising theory, however, is rarely tested and assumes that prey will divert energy away from reproduction under predation risk, while under size-selective predation, prey instead increase fecundity. We exposed the water flea Daphnia magna to 4 °C warming and fish predation risk to test whether C-rich carbohydrates increase and N-rich proteins decrease, and as a result, C : N and C : P increase under warming and predation risk. Unexpectedly, warming decreased body C : N, which was driven by reductions in C-rich fat and sugar contents while the protein content did not change. This reflected a trade-off where the accelerated intrinsic growth rate under warming occurred at the cost of a reduced energy storage. Warming reduced C : N less and only increased C : P and N : P in the fish-period Daphnia. These evolved stoichiometric responses to warming were largely driven by stronger warming-induced reductions in P than in C and N and could be explained by the better ability to deal with warming in the fish-period Daphnia. In contrast to theory predictions, body C : N decreased under predation risk due to a strong increase in the N-rich protein content that offsets the increase in C-rich fat content. The higher investment in fecundity (more N-rich eggs) under predation risk contributed to this stronger increase in protein content. Similarly, the lower body C : N of pre-fish Daphnia also matched their higher fecundity. 
Warming and predation risk independently shaped body stoichiometry, largely by changing levels of energy storage molecules. Our results highlight that two widespread patterns, the trade-off between rapid development and energy storage and the increased investment in reproduction under size-selective predation, cause predictable deviations from current ecological stoichiometry theory.",
"corpus_id": 3472215,
"title": "Energy storage and fecundity explain deviations from ecological stoichiometry predictions under global warming and size-selective predation."
} | {
"abstract": "By using an elemental‐stoichiometry approach to zooplankton‐phytoplankton interactions, we compare elemental composition and aspects of nutrient deficiency across a variety of marine and freshwater ecosystems. During 1992 and 1993 we sampled a total of 31 lakes (in northern Wisconsin and Michigan and the Experimental Lakes Area of northern Ontario) and 21 marine stations (at seven estuarine, coastal, and open‐ocean sites in the Atlantic and Pacific) for elemental composition of zooplankton, seston, and dissolved components. Relative degree of nutrient deficiency was assessed by phytoplankton dark uptake of ammonia and phosphate, as well as growth response of phytoplankton to N and P addition. Marine and freshwater systems differed greatly in N and P concentrations, N:P stoichiometry, and the distribution of N and P within dissolved, seston, and zooplankton pools. Particularly notable was the high proportion of N and, especially, P that was incorporated in the particulate fraction (seston + zooplankton) of lakes compared to marine sites. In freshwater systems, Daphnia spp., which have low body N: P, dominated zooplankton communities when seston C:P and N:P were also low, and calanoids that tend to have high body N:P dominated when seston C: P and N: P was high. This relationship between zooplankton community composition and seston elemental stoichiometry supports arguments for the importance of food quality constraints on zooplankton growth in freshwater systems. Such patterns were not seen in marine systems.",
"corpus_id": 5757483,
"score": -1,
"title": "Ecological stoichiometry of N and P in pelagic ecosystems: Comparison of lakes and oceans with emphasis on the zooplankton‐phytoplankton interaction"
} |
{
"abstract": "Introduction The aim of this study was to examine health-related quality of life (HRQoL) as measured by EQ-5D and to investigate the influence of chronic conditions and other risk factors on HRQoL based on a distributed sample located in Shaanxi Province, China. Methods A multi-stage stratified cluster sampling method was performed to select subjects. EQ-5D was employed to measure the HRQoL. The likelihood that individuals with selected chronic diseases would report any problem in the EQ-5D dimensions was calculated and tested relative to that of each of the two reference groups. Multivariable linear regression models were used to investigate factors associated with EQ VAS. Results The most frequently reported problems involved pain/discomfort (8.8%) and anxiety/depression (7.6%). Nearly half of the respondents who reported problems in any of the five dimensions were chronic patients. Higher EQ VAS scores were associated with the male gender, higher level of education, employment, younger age, an urban area of residence, access to free medical service and higher levels of physical activity. Except for anemia, all the selected chronic diseases were indicative of a negative EQ VAS score. The three leading risk factors were cerebrovascular disease, cancer and mental disease. Increases in age, number of chronic conditions and frequency of physical activity were found to have a gradient effect. Conclusion The results of the present work add to the volume of knowledge regarding population health status in this area, apart from the known health status using mortality and morbidity data. Medical, policy, social and individual attention should be given to the management of chronic diseases and improvement of HRQoL. Longitudinal studies must be performed to monitor changes in HRQoL and to permit evaluation of the outcomes of chronic disease intervention programs.",
"corpus_id": 463533,
"title": "Health-Related Quality of Life as Measured with EQ-5D among Populations with and without Specific Chronic Conditions: A Population-Based Survey in Shaanxi Province, China"
} | {
"abstract": "BackgroundThe aims of this study were: (1) to compare the discriminative ability of a disease-specific instrument, the St. George's Respiratory Questionnaire (SGRQ) to generic instruments (i.e., EQ-5D and SF-36); and (2), to evaluate the strength of associations among clinical and health-related quality of life (HRQL) measures in chronic obstructive pulmonary disease (COPD).MethodsWe analyzed data collected from 120 COPD patients in a Veterans Affairs hospital. Patients self-completed two generic HRQL measures (EQ-5D and SF-36) and the disease-specific SGRQ. The ability of the summary scores of these HRQL measures to discriminate COPD disease severity based on Global Obstructive Lung Disease (GOLD) stage was assessed using relative efficiency ratios (REs). Strength of correlation was used to further evaluate associations between clinical and HRQL measures.ResultsMean total scores for PCS-36, EQ-VAS and SGRQ were significantly lower for the more severe stages of COPD (p < 0.05). Using SGRQ total score as reference, the summary scores of the generic measures (PCS-36, MCS-36, EQ index, and EQ-VAS) all had REs of <1. SGRQ exhibited a stronger correlation with clinical measures than the generic summary scores. For instance, SGRQ was moderately correlated with FEV1 (r = 0.43), while generic summary scores had trivial levels of correlation with FEV1 (r < 0.2).ConclusionsThe SGRQ demonstrated greater ability to discriminate among different levels of severity stages of COPD than generic measures of health, suggestive that SGRQ may provide COPD studies with greater statistical power than EQ-5D and SF-36 summary scores to capture meaningful differences in clinical severity.",
"corpus_id": 5780359,
"title": "Comparison of health-related quality of life measures in chronic obstructive pulmonary disease"
} | {
"abstract": "ABSTRACT Purpose: To estimate the prevalence of positive anxiety and depression screening in patients with ocular inflammatory disease (OID). The predictors associated with anxiety and depressive symptoms were investigated. Methods: A cross-sectional study was conducted. The Thai Hospital Anxiety and Depression Scale (HADS), a sociodemographic questionnaire, and the Thai Visual Functioning Questionnaire 28 were administered to all participants. Associations were estimated using the Cox regression. Results: Of the 86 participants, 12.8% and 8.1% screened positive for anxiety and depression, respectively. Predictors of an increase in both HADS-Anxiety and HADS-Depression scores comprised poor understanding of OIDs [adjusted relative probability (aRP) = 1.56; p = 0.021 and 1.59; p = 0.012, respectively], and low overall composite score (aRP = 1.45; p = 0.022 and 1.6; p = 0.002, respectively). Conclusions: Approximately one-tenth of our patients screened positive for anxiety and depression. Patients with poor understanding of their OID and poor self-reported visual function were at an increased risk.",
"corpus_id": 49681475,
"score": 1,
"title": "Anxiety and Depression among Patients with Uveitis and Ocular Inflammatory Disease at a Tertiary Center in Southern Thailand: Vision-Related Quality of Life, Sociodemographics, and Clinical Characteristics Associated"
} |
{
"abstract": "Since the beginning of the 1980s, when Mandelbrot observed that earthquakes occur on ‘fractal’ self-similar sets, many studies have investigated the dynamical mechanisms that lead to self-similarities in the earthquake process. Interpreting seismicity as a self-similar process is undoubtedly convenient to bypass the physical complexities related to the actual process. Self-similar processes are indeed invariant under suitable scaling of space and time. In this study, we show that long-range dependence is an inherent feature of the seismic process, and is universal. Examination of series of cumulative seismic moment both in Italy and worldwide through Hurst’s rescaled range analysis shows that seismicity is a memory process with a Hurst exponent H ≈ 0.87. We observe that H is substantially space- and time-invariant, except in cases of catalog incompleteness. This has implications for earthquake forecasting. Hence, we have developed a probability model for earthquake occurrence that allows for long-range dependence in the seismic process. Unlike the Poisson model, dependent events are allowed. This model can be easily transferred to other disciplines that deal with self-similar processes.",
"corpus_id": 4459604,
"title": "Long-range dependence in earthquake-moment release and implications for earthquake occurrence probability"
} | {
"abstract": "Abstract In the present work we investigated changes in the extent of regularity (randomness) of seismic process in fixed time span windows. A comparison with a set of randomized catalogues was accomplished basing on spatial, temporal and energetic characteristics of a seismic process. Increments of cumulative times, increments of cumulative distances and increments of cumulative seismic energies have been calculated from the southern California earthquake catalogue, 1980 to 2020. The multivariate Mahalanobis' distance calculation, combined with the surrogate data testing procedure, was chosen as the analysis method. An analysis of variability in the extent of regularity of a seismic process has been accomplished for different completeness magnitude thresholds and sliding windows of different time spans. Analysing the features of temporal, spatial and energetic variability, in periods of supposedly aftershock activity, we found that the original seismic process is significantly different from a random process. Such periods, containing windows with nonrandom seismicity, are always ended by a series of windows in which a seismic process is indistinguishable from a random one. This was shown at different magnitude thresholds, comparing the original catalogue with the set of randomized catalogues. It was also found that at small magnitude thresholds (M2.6 and M3.0), the fixed time span windows with a seismic process significantly different from the random process might have occurred also prior to four large (M > 7.0) earthquakes in the considered catalogue. The amount of the released seismic energy in these windows is essentially smaller than in the ones after strong earthquakes of smaller magnitudes. Relying on our results we suggest that causes of regularity in the seismic process prior and after large earthquakes are probably different. 
At larger-magnitude thresholds, the total number of fixed time windows with a seismic process different from the random one gradually decreases. Moreover, at M3.8 and M4.2 thresholds there are practically no windows with regular seismic process prior to four large catalogue earthquakes. In the periods between strong earthquakes, mostly in the periods of relatively small earthquakes generation, the percentage of windows in which the seismic process is indistinguishable from the random one essentially increases with increasing magnitude threshold.",
"corpus_id": 238667234,
"title": "Changes in the dynamics of seismic process observed in the fixed time windows; case study for southern California 1980–2020"
} | {
"abstract": "After a brief overview of classical techniques used to explore cardiac rhythm variability, we show how the DFA method can help diagnose heart failure.",
"corpus_id": 123082558,
"score": 2,
"title": "Nonlinear analysis of cardiac rhythm fluctuations using DFA method"
} |
{
"abstract": "In this brief, the analysis problem of the mode and delay-dependent adaptive exponential synchronization in pth moment is considered for stochastic delayed neural networks with Markovian switching. By utilizing a new nonnegative function and the M-matrix approach, several sufficient conditions to ensure the mode and delay-dependent adaptive exponential synchronization in pth moment for stochastic delayed neural networks are derived. Via the adaptive feedback control techniques, some suitable parameters update laws are found. To illustrate the effectiveness of the M-matrix-based synchronization conditions derived in this brief, a numerical example is provided finally.",
"corpus_id": 279449,
"title": "Mode and Delay-Dependent Adaptive Exponential Synchronization in $p$th Moment for Stochastic Delayed Neural Networks With Markovian Switching"
} | {
"abstract": "This paper studies the guaranteed cost control problem for a class of uncertain stochastic nonlinear systems with multiple time delays represented by the Takagi-Sugeno fuzzy model with uncertain parameters. By constructing a new stochastic Lyapunov-Krasovskii functional, sufficient conditions for delay-dependent guaranteed cost control are obtained which do not require system transformation or relaxation matrices. Conditions for the existence of an optimal guaranteed cost controller are presented in the linear matrix inequality format. Simulation examples are provided to demonstrate the effectiveness of the proposed approach in this paper.",
"corpus_id": 10894408,
"title": "Delay-Dependent Guaranteed Cost Control for Uncertain Stochastic Fuzzy Systems With Multiple Time Delays"
} | {
"abstract": "Summary: Let (X, Y) be an ℝ^d × ℝ-valued random vector and let r(t) = E(Y | X = t) be the regression function of Y on X that has to be estimated from a sample (X_i, Y_i), i = 1, ..., n. We establish conditions ensuring that an estimate of the form r_n(t) = Σ_{i=1}^n Y_i Φ_{ni}(t, X_i) / Σ_{i=1}^n Φ_{ni}(t, X_i), where Φ_{ni}(t, x) is a sequence of Borel measurable functions on ℝ^d × ℝ^d, is uniformly strongly consistent with a certain rate of convergence. Applying this result we obtain rates of strong uniform consistency of the regressogram, kernel estimates, k_n-nearest neighbor estimates and estimates based on orthogonal series.",
"corpus_id": 122567693,
"score": 1,
"title": "Strong uniform consistency of nonparametric regression function estimates"
} |
{
"abstract": "We present a photometric stereo-based system for retrieving the RGB albedo and the fine-scale details of an opaque surface. In order to limit specularities, the system uses a controllable diffuse illumination, which is calibrated using a dedicated procedure. In addition, we rather handle RAW, non-demosaiced RGB images, which both avoids uncontrolled operations on the sensor data and simplifies the estimation of the albedo in each color channel and of the normals. We finally show on real-world examples the potential of photometric stereo for the 3D-reconstruction of very thin structures from a wide variety of surfaces.",
"corpus_id": 742042,
"title": "Microgeometry capture and RGB albedo estimation by photometric stereo without demosaicing"
} | {
"abstract": "3D movies have become very popular in recent years. But there are vertical disparities between left and right views of 3D frames due to the lack of accuracy of the mechanical alignment during the shooting. In order to improve accuracy of reconstruction, a three-dimensional reconstruction technology based on multi-view photometric stereo fusion algorithm in movies special-effect production is presented in this paper. The original normal is firstly replaced with the surface normal in the average normal, and the reconstructed normal is optimized so as to reduce the deviation of the original surface normal. And then, a reference-plane-based approach is applied to estimate the principal optical axis of each light source as well as its principal radiant energy. For each surface point on the target, the direction and intensity of its incident light ray can be precisely determined by the calibration parameters and the quasi-point light model. Finally, 3D reconstruction of the surface with a quasi-point light source is also implemented in two steps. By estimating the mean value of depth in the iterative process, the surface depth is projected into the physical coordinates. Qualitative and quantitative experimental results show that higher accuracy surface normal as well as better 3D reconstruction quality can be obtained by the proposed approach in comparison with conventional reconstruction methods.",
"corpus_id": 199408164,
"title": "Three-dimensional reconstruction based on multi-view photometric stereo fusion technology in movies special-effect"
} | {
"abstract": "Neural networks have been proposed to classify remotely sensed and ancillary GIS data. In this paper, the backpropagation algorithm is critically evaluated, using as an example, the mapping of a eucalypt forest on the far south coast of New South Wales, Australia. A GIS database was combined with Landsat thematic mapper data, and 190 plots were field sampled in order to train the neural network model and to evaluate the resulting classifications. The results show that the neural network did not accurately classify GIS and remotely sensed data at the forest type level (Anderson Level III), though conventional classifiers also perform poorly with this type of problem. Previous studies using neural networks have classified more general (e.g., Anderson Level I, II) land-cover types at a higher accuracy than those obtained here, but mapped land cover into more general themes. Given the poor classification results and the difficulties associated with the setting up of suitable parameters for the neural-network (backpropagation) algorithm, it is concluded that the neural-network approach does not offer significant advantages over conventional classification schemes for mapping eucalypt forests from Landsat TM and ancillary GIS data at the Anderson Level III forest type level.",
"corpus_id": 18165633,
"score": 1,
"title": "Performance of a neural network: mapping forests using GIS and remotely sensed data"
} |
{
"abstract": "Accurate characterization of prostate cancer is crucial for treatment planning and patient management. Non-invasive SPECT imaging using a radiolabeled monoclonal antibody, 111In-labeled capromab pendetide, offers advantage over existing means for prostate cancer diagnosis and staging. However, there are difficulties associated with the interpretation of these SPECT images. In this study, we developed a 3D surface-volume hybrid rendering method that utilizes multi-modality image data to facilitate diagnosis of prostate cancer. SPECT and CT or MRI (or both) images were aligned either manually or automatically. 3D hybrid rendering was implemented to blend prostate tumor distribution from SPECT in pelvis with anatomic structures from CT/MRI. Feature extraction technique was also implemented within the hybrid rendering for tumor uptake enhancement. Autoradiographic imaging and histological evaluation were performed to correlate with the in-vivo SPECT images. Warping registration of histological sections was carried out to compensate the deformation of histology slices during fixation to help the alignment between histology and in-vivo images. Overall, the rendered volumetric evaluation of prostate cancer has the potential to greatly increase the confidence in the reading of radiolabeled monoclonal antibody scans, especially in patients where there is a high suspicion of prostate tumor metastasis.",
"corpus_id": 2370128,
"title": "Multimodal and three-dimensional imaging of prostate cancer."
} | {
"abstract": "We present a novel methodological framework for leveraging multiple image sources, including different modalities, acquisition protocols or image features, in the registration of more than two images via information theoretic data fusion. The technique, referred to as multi-attribute combined mutual information (MACMI), adopts a multivariate application of mutual information (MI) to allow several coregistered images to be represented as a single high dimensional multi-attribute image. Our approach improves scenarios involving registration of multiple images as it, (1) utilizes all aligned images obtained in earlier registration steps, (2) improves alignment accuracy compared with pairwise approaches that only consider two images (and hence a fraction of the available data) at a time, and (3) avoids complex optimization problems often associated with fully-groupwise methods. For example, if two coregistered volumes such as T2-weighted and PD-weighted MRI are to be aligned with PET, it is intuitively better to use information from both MR protocols instead of choosing one for registration with PET. In the automated elastic registration of 20 corresponding multiprotocol (T1, T2, PD) synthetic MRI images of the brain with known misalignment of PD MRI, MACMI showed significant improvement in terms of deformation field error over conventional MI-based pairwise registration (p ≪ 0.05). For a total of 108 corresponding whole-mount histology (WMH), T2 MRI, and DCE (T1) MRI images obtained from 17 prostate specimens with cancer, elastic registration of WMH to both MRI protocols simultaneously was performed via MACMI. Improved alignment in terms of prostate overlap and cancer localization was observed using MACMI, compared to pairwise registration of WMH to the individual T2 and DCE MR protocols.",
"corpus_id": 2160400,
"title": "Multi-attribute combined mutual information (MACMI): An image registration framework for leveraging multiple data channels"
} | {
"abstract": "It is known that the selective injurious effect of cadmium on the testis can be prevented by zinc, cysteine or selenium. Studies conducted in CD-1 mice were initiated to determine whether any of these treatments offered protection by preventing cadmium from reaching the testis in doses sufficient to cause injury. The question of how protection might be offered by this diversity of chemicals formed the basis for the investigations. Using cadmium chloride labelled with cadmium-109, it was shown that none of the protective agents decreased the amount of cadmium reaching the testis. With cysteine the amount of cadmium reaching the testis was actually enhanced. In the presence of selenium there was a 150–250% increase (p<.005) in cadmium uptake by the testis throughout the course of the experiment. And yet of all the protectors known, selenium is the most potent, completely preventing cadmium damage in a dosage ratio of 2:1. Comparable studies in which selenium rather than cadmium was labelled (selenium-75) demonstrated that in the presence of cadmium selenium levels were augmented. Results from the experiments indicated that the cadmium reaching the testis is somehow inactivated. It is suggested that the protective agents exert their action at the vascular level.",
"corpus_id": 45830993,
"score": 1,
"title": "Mechanisms of zinc, cysteine and selenium protection against cadmium induced vascular injury to mouse testis."
} |
{
"abstract": "Current progress on the design and R&D of the Chinese helium-cooled solid breeder test blanket module (CN HCSB TBM) is presented. The updated design on structural, neutronics, thermal-hydraulics and safety analysis has been completed. In order to accommodate the HCSB TBM ancillary system, the design and necessary R&D of the corresponding sub-systems are being developed. Current status on the development of functional materials, structural material and the helium test loop is also presented. The Chinese low-activation ferritic/martensitic steel CLF-1, which is the structural material for the HCSB TBM, is being manufactured by industry. The neutron multiplier Be and tritium breeder Li4SiO4 pebbles are being prepared in laboratory scale.",
"corpus_id": 162177644,
"title": "Progress on Design and R&D of CN Solid Breeder TBM FTP/3-5Ra"
} | {
"abstract": "In the European Fusion Programme of 1999 preparatory work (Preparation of a Power Plant Conceptual Study Availability, PPA) has been carried out for a fusion power plant study that is planned to start in 2000. This study will focus on the commercial attractiveness of a fusion plant, particularly achievable power level, net efficiency and availability. Part of the activity at the Forschungszentrum Karlsruhe has been the further development of the Helium Cooled Pebble Bed (HCPB) blanket for DEMO as “Improved HCPB” (Subtask PPA 2.3). The modified concept allows for the height of breeder pebble beds to be reduced and thus for larger power densities to be accommodated. Also, mono-disperse Beryllium pebble beds can be used. The net electric efficiency of the blanket was raised by almost 7 points to about 37% due to increased coolant temperature gain, reduction of pressure losses in the blanket and enhanced energy conversion in the proposed steam process. The good availability of the DEMO-HCPB that was shown in earlier studies is expected to carry over to the I-HCPB.",
"corpus_id": 55010624,
"title": "Improved Helium Cooled Pebble Bed Blanket"
} | {
"abstract": "Abstract Retention and desorption behaviors of helium in oxidized and non-oxidized V–4Cr–4Ti alloy samples were investigated after helium ion irradiation at room temperature using thermal desorption spectroscopy. The ion energy and fluence were 5 keV and (0.5–10) × 10²¹ He/m², respectively. An oxidized layer with a thickness of 100 nm was prepared by thermal oxidation. The surface density of blisters produced by helium ion irradiation in the oxidized sample was lower than that in the non-oxidized one. The helium desorption behavior depended significantly on the fluence. In the lower fluence regime, the retained helium desorbed mainly at around 1300 K in both samples. As fluence increased, several desorption peaks appeared in the low temperature region in both samples. However, the peak temperatures were different. The amount of helium retained in the oxidized sample was lower than that in the non-oxidized sample.",
"corpus_id": 94236765,
"score": 2,
"title": "Retention and desorption behavior of helium in oxidized V-4Cr-4Ti alloy"
} |
{
"abstract": "Active vision techniques use programmable light sources, such as projectors, whose intensities can be controlled over space and time. We present a broad framework for fast active vision using Digital Light Processing (DLP) projectors. The digital micromirror array (DMD) in a DLP projector is capable of switching mirrors \"on\" and \"off\" at high speeds (10⁶/s). An off-the-shelf DLP projector, however, effectively operates at much lower rates (30–60 Hz) by emitting smaller intensities that are integrated over time by a sensor (eye or camera) to produce the desired brightness value. Our key idea is to exploit this \"temporal dithering\" of illumination, as observed by a high-speed camera. The dithering encodes each brightness value uniquely and may be used in conjunction with virtually any active vision technique. We apply our approach to five well-known problems: (a) structured light-based range finding, (b) photometric stereo, (c) illumination de-multiplexing, (d) high frequency preserving motion-blur and (e) separation of direct and global scene components, achieving significant speedups in performance. In all our methods, the projector receives a single image as input whereas the camera acquires a sequence of frames.",
"corpus_id": 6816528,
"title": "Temporal dithering of illumination for fast active vision"
} | {
"abstract": "We consider the problem of shape recovery for real world scenes, where a variety of global illumination (interreflections, subsurface scattering, etc.) and illumination defocus effects are present. These effects introduce systematic and often significant errors in the recovered shape. We introduce a structured light technique called Micro Phase Shifting, which overcomes these problems. The key idea is to project sinusoidal patterns with frequencies limited to a narrow, high-frequency band. These patterns produce a set of images over which global illumination and defocus effects remain constant for each point in the scene. This enables high quality reconstructions of scenes which have traditionally been considered hard, using only a small number of images. We also derive theoretical lower bounds on the number of input images needed for phase shifting and show that Micro PS achieves the bound.",
"corpus_id": 14927216,
"title": "Micro Phase Shifting"
} | {
"abstract": "Since the initial comparison of Seitz et al. [48], the accuracy of dense multiview stereovision methods has been increasing steadily. A number of limitations, however, make most of these methods not suitable to outdoor scenes taken under uncontrolled imaging conditions. The present work consists of a complete dense multiview stereo pipeline which circumvents these limitations, being able to handle large-scale scenes without sacrificing accuracy. Highly detailed reconstructions are produced within very reasonable time thanks to two key stages in our pipeline: a minimum s-t cut optimization over an adaptive domain that robustly and efficiently filters a quasidense point cloud from outliers and reconstructs an initial surface by integrating visibility constraints, followed by a mesh-based variational refinement that captures small details, smartly handling photo-consistency, regularization, and adaptive resolution. The pipeline has been tested over a wide range of scenes: from classic compact objects taken in a laboratory setting, to outdoor architectural scenes, landscapes, and cultural heritage sites. The accuracy of its reconstructions has also been measured on the dense multiview benchmark proposed by Strecha et al. [59], showing the results to compare more than favorably with the current state-of-the-art methods.",
"corpus_id": 9124253,
"score": -1,
"title": "High Accuracy and Visibility-Consistent Dense Multiview Stereo"
} |
{
"abstract": "Abstract The asymmetrical nitrosyl-deoxy hybrid haemoglobin, (αNO βNO)(αdeoxy βdeoxy), was prepared by removing oxygen with sodium dithionite from a mixture of oxyhaemoglobin and nitrosylhaemoglobin (Cassoly, 1978). This asymmetrical hybrid exhibited a distinctive triplet hyperfine structure in the electron paramagnetic resonance spectrum. This triplet has been shown to arise predominantly from the nitrosyl haem of an α subunit which has a deoxy-like structure (Nagai et al., 1978). By removing one or two carboxyl-terminal residues by carboxypeptidase digestion before mixing, one can obtain asymmetrical nitrosyl-deoxy hybrid haemoglobins in which only one of the four subunits is specifically modified. Eight such modified derivatives were examined by e.p.r. They were (desArg αNO βNO)(αdeoxy βdeoxy), (desArg-Tyr αNO βNO)(αdeoxy βdeoxy), (αNO desHis βNO)(αdeoxy βdeoxy), (αNO desHis-Tyr βNO)(αdeoxy βdeoxy), (αNO βNO)(desArg αdeoxy βdeoxy), (αNO βNO)(desArg-Tyr αdeoxy βdeoxy), (αNO βNO)(αdeoxy desHis βdeoxy) and (αNO βNO)(αdeoxy desHis-Tyr βdeoxy), where desArg, desArg-Tyr, desHis and desHis-Tyr indicate that the amino acids were removed from the carboxyl terminus of the subunit. The e.p.r. spectra for these eight derivatives have a more or less reduced relative intensity of the triplet, indicating that the non-covalent bonds involving carboxyl-terminal residues which stabilize the structure of deoxyhaemoglobin (Perutz, 1970) must all be intact in the unmodified asymmetrical nitrosyl-deoxy hybrid haemoglobin, (αNO βNO)(αdeoxy βdeoxy). By comparing the relative intensity of the triplet we were able to examine the effect of modification of one specific carboxyl terminus on the nitrosyl haem in the α1 subunit. The effect was not symmetric, but increased in the order α1β2, β1α1 (suffixes 1 and 2 as defined by Perutz (1965)). We attribute this order to the non-equivalence of intersubunit interactions.",
"corpus_id": 10339673,
"title": "Electron paramagnetic resonance study of intersubunit interactions in nitrosyl-deoxy asymmetrical hybrid haemoglobin."
} | {
"abstract": "Abstract Haemoglobin valency hybrids have been further investigated with a view to evaluating evidence for α-β interactions. Comparison of equilibrium oxygen binding of the compounds αIII(H2O)βII, αIII(F)βII, αIII(N3)βII and αIII(CN)βII shows that the derivatives possessing the α chain in the (low-spin) oxy conformation possess higher oxygen affinity than those possessing the same chain in the (high-spin) deoxy conformation. On the other hand, equilibrium titration of the aquo derivative by fluoride and azide showed a higher azide affinity and a slightly lower fluoride affinity for αIII(H2O)βII(CO) compared to the deoxy form αIII(H2O)βII. Essentially the same pattern of relations was obtained for the different forms of αIIβIII also. Electron paramagnetic resonance spectra of the hybrids at pH 6 showed no change of the g value of the 5.85 absorption of the ferri chain on change of spin state of the partner ferro chain. However, this was no longer the case at pH 9; the spectra of the alkaline form gave evidence of α-β interaction for the αIIIβII hybrid only, but not for αIIβIII. The electron paramagnetic resonance results suggest that α-β interaction in the hybrids may operate in the β → α direction only. The equilibrium and spectroscopic data are discussed in the general context of haem-haem interaction.",
"corpus_id": 45823807,
"title": "Reciprocal effects of change of subunit structure on ligand equilibria of haemoglobin valency hybrids. Attempted correlation with electron paramagnetic resonance spectra."
} | {
"abstract": "From the blue seed coats of Ophiopogon jaburan, a new flavonol glycoside was isolated as needles and determined to be kaempferol 3-O-β-d-galactoside-4′-O-β-d-glucoside (OK-2) by UV and NMR spectral analyses. OK-2 and kaempferol 3,4′-di-O-β-d-glucoside (OK-1), which was detected previously, in the blue seed coat were present in a molar ratio of about 13:7. OK-2 was newly found as a factor causing the blueing effects on ophionin, which is a main anthocyanin in the blue seed coats. The mixture of 4.8 × 10⁻³ M OK-2 and 2.5 × 10⁻³ M ophionin in McIlvaine's buffer solution (pH 5.6) showed a stable blue color, and the absorption spectrum of the mixture showed two absorption peaks and a shoulder in the visible region, coinciding with that of the fresh blue seed coat. The effect of ophionin and OK-2 co-pigmentation on the blue color of the seed coat of O. jaburan was discussed.",
"corpus_id": 45305346,
"score": 1,
"title": "Isolation of a new flavonol glycoside and its effects on the blue color of seed coats of Ophiopogon jaburan"
} |
{
"abstract": "A recent thrust in deep neural network (DNN) research has been toward binary approaches for compact and energy-sparing neuromorphic architectures utilizing emerging devices. However, approaches to deal with device process variations and the realization of stochastic behavior intrinsically within neural circuits remain underexplored. Herein, we leverage a novel probabilistic spintronic device for low-energy recognition operations that improves DNN performance through active in situ learning via the mitigation of device reliability challenges.",
"corpus_id": 155106850,
"title": "Leveraging Stochasticity for In Situ Learning in Binarized Deep Neural Networks"
} | {
"abstract": "Recent advances in development of memristor devices and cross-bar integration allow us to implement a low-power on-chip neuromorphic computing system (NCS) with small footprint. Training methods have been proposed to program the memristors in a crossbar by following existing training algorithms in neural network models. However, the robustness of these training methods has not been well investigated by taking into account the limits imposed by realistic hardware implementations. In this work, we present a quantitative analysis on the impact of device imperfections and circuit design constraints on the robustness of two popular training methods - “close-loop on-device” (CLD) and “open-loop off-device” (OLD). A novel variation-aware training scheme, namely, Vortex, is then invented to enhance the training robustness of memristor crossbar-based NCS by actively compensating the impact of device variations and optimizing the mapping scheme from computations to crossbars. On average, Vortex can significantly improve the test rate by 29.6% and 26.4%, compared to the traditional OLD and CLD, respectively.",
"corpus_id": 11420350,
"title": "Vortex: Variation-aware training for memristor X-bar"
} | {
"abstract": "Researchers continue to demonstrate the benefits of Mining Software Repositories (MSR) for supporting software development and research activities. However, as the mining process is time and resource intensive, they often create their own distributed platforms and use various optimizations to speed up and scale up their analysis. These platforms are project-specific, hard to reuse, and offer minimal debugging and deployment support. In this paper, we propose the use of MapReduce, a distributed computing platform, to support research in MSR. As a proof-of-concept, we migrate J-REX, an optimized evolutionary code extractor, to run on Hadoop, an open source implementation of MapReduce. Through a case study on the source control repositories of the Eclipse, BIRT and Datatools projects, we demonstrate that the migration effort to MapReduce is minimal and that the benefits are significant, as running time of the migrated J-REX is only 30% to 50% of the original J-REX's. This paper documents our experience with the migration, and highlights the benefits and challenges of the MapReduce framework in the MSR community.",
"corpus_id": 14502299,
"score": 1,
"title": "MapReduce as a general framework to support research in Mining Software Repositories (MSR)"
} |
{
"abstract": "Despite the increasing levels of pollution in many tropical African countries, not much is known about the strength and weaknesses of policy and institutional frameworks to tackle pollution and ecological status of rivers and their impacts on the biota. We investigated the ecological status of four large river basins using physicochemical water quality parameters and bioindicators by collecting samples from forest, agriculture, and urban landscapes of the Nile, Omo-Gibe, Tekeze, and Awash River basins in Ethiopia. We also assessed the water policy scenario to evaluate its appropriateness to prevent and control pollution. To investigate the level of understanding and implementation of regulatory frameworks and policies related to water resources, we reviewed the policy documents and conducted in-depth interviews of the stakeholders. Physicochemical and biological data revealed that there is significant water quality deterioration at the impacted sites (agriculture, coffee processing, and urban landscapes) compared to reference sites (forested landscapes) in all four basins. The analysis of legal, policy, and institutional framework showed a lack of cooperation between stakeholders, lack of knowledge of the policy documents, absence of enforcement strategies, unavailability of appropriate working guidelines, and disconnected institutional setup at the grass root level to implement the set strategies as the major problems. In conclusion, river water pollution is a growing challenge and needs urgent action to implement intersectoral collaboration for water resource management that will eventually lead toward integrated watershed management. Revision of policy and increasing the awareness and participation of implementers are vital to improve ecological quality of rivers.",
"corpus_id": 4521594,
"title": "River Water Pollution Status and Water Policy Scenario in Ethiopia: Raising Awareness for Better Implementation in Developing Countries"
} | {
"abstract": "Abstract Influence of landscape pattern on water quality is complex and scale dependent. Existing literature mainly focuses on examining this influence at a wide range of spatial scales from local to basin level or to eco-regional scales. Studies on identifying the critical riparian zone, with certain buffer width and length that exhibit the strongest association between landscape characteristics and stream water quality, are still limited. Such identification is helpful for better understanding the influence of the adjacent landscape on stream water quality and is critical for effective landscape planning and local river management. In this study, the urban area of Xiangyang City along the Hanjiang River was selected as a case study. Water quality samples were collected at eight sites in the examined river system of the Hanjiang River from 2009 to 2014. Landscape pattern analysis, redundancy analysis and stepwise multiple linear regression analysis were used to explore the quantitative associations between landscape patterns and water quality. The results indicate that the landscape metrics that were selected explain approximately 63–87% of the variations in stream water quality at multiple buffer widths in 2009 and 2014. The strongest linkage between landscape characteristics and water quality occurred in the riparian zone with the buffer width of 300 m where the explanatory ability of the landscape metrics still varied at different buffer lengths and increased from 500 m to 8 km. Urban built-up land was more positively associated with degraded water quality at the smaller buffer widths than at the larger buffer widths, whereas forest land exhibited a stronger contribution to water quality improvements at wider buffer widths than at narrower buffer widths. The situation was different for the different buffer lengths. 
Urban built-up land was more correlated with water quality at longer buffer lengths, and forest land had a stronger contribution at shorter buffer lengths. Overall, landscape configuration seemed to have stronger effects on the water quality than landscape composition. These findings provide important information regarding multi-scale measures for sustainable landscape management to improve surface water quality.",
"corpus_id": 90893412,
"title": "Identifying the critical riparian buffer zone with the strongest linkage between landscape characteristics and surface water quality"
} | {
"abstract": "Usability of CART algorithm for determining egg quality characteristics influencing fertility in the eggs of Japanese quail",
"corpus_id": 90403281,
"score": 0,
"title": "Usability of CART algorithm for determining egg quality characteristics influencing fertility in the eggs of Japanese quail"
} |
{
"abstract": "A new methodology to design a dual-band 3-way Bagley power divider is presented. The general method of using π-network as well as the proposed modified π-network has been discussed. The proposed π-network mimics a λ/2 line at two different frequencies. Design equations in closed form are obtained by using transmission line concepts and simple network analysis techniques. The proposed design is validated using Agilent ADS. This modification also leads to wider bandwidth as observed by a careful simulation.",
"corpus_id": 13231150,
"title": "A dual-band bagley power divider using modified Π-network"
} | {
"abstract": "A Wilkinson power divider operating not only at one frequency f/sub 0/, but also at its first harmonic 2f/sub 0/ is presented. This power divider consists of two branches of impedance transformer, each of which consists of two sections of 1/6-wave transmission-line with different characteristic impedance. The two outputs are connected through a resistor, an inductor, and a capacitor. All the features of a conventional Wilkinson power divider, such as an equal power split, impedance matching at all ports, and a good isolation between the two output ports, can be fulfilled at f/sub 0/ and 2f/sub 0/, simultaneously.",
"corpus_id": 28179627,
"title": "A dual-frequency Wilkinson power divider: for a frequency and its first harmonic"
} | {
"abstract": "We propose an algorithm to separate simultaneously speaking persons from each other, the \"cocktail party problem\", using a single microphone. Our approach involves a deep recurrent neural networks regression to a vector space that is descriptive of independent speakers. Such a vector space can embed empirically determined speaker characteristics and is optimized by distinguishing between speaker masks. We call this technique source-contrastive estimation. The methodology is inspired by negative sampling, which has seen success in natural language processing, where an embedding is learned by correlating and de-correlating a given input vector with output weights. Although the matrix determined by the output weights is dependent on a set of known speakers, we only use the input vectors during inference. Doing so will ensure that source separation is explicitly speaker-independent. Our approach is similar to recent deep neural network clustering and permutation-invariant training research; we use weighted spectral features and masks to augment individual speaker frequencies while filtering out other speakers. We avoid, however, the severe computational burden of other approaches with our technique. Furthermore, by training a vector space rather than combinations of different speakers or differences thereof, we avoid the so-called permutation problem during training. Our algorithm offers an intuitive, computationally efficient response to the cocktail party problem, and most importantly boasts better empirical performance than other current techniques.",
"corpus_id": 32755673,
"score": -1,
"title": "Monaural Audio Speaker Separation with Source Contrastive Estimation"
} |
{
"abstract": "Peptides derived from enzymatic digestions (cathepsin D and trypsin) were characterized and amino acid sequences determined by using their LC/MS spectra. A Frit-FAB interface that produces extensive peptide fragmentation and permits amino acid sequencing at the low picomole level is described for a model antigen, Staphylococcus aureus nuclease (Nase), and an enzyme of unknown structure, yeast aminopeptidase B. The amino acid sequences of peptides derived from digestion of Nase with cathepsin D (a relatively nonspecific endoprotease) were readily deduced and have provided insights into the nature of antigen processing. Frit-FAB LC/MS spectra of the Nase peptides contained a sufficient number of fragment ions to conclusively identify peptides with a mass below 2000 Da. Capillary LC/MS provided a means for the separation and identification of these enzymatically derived peptides in a fraction of the time that would have been required by gas-phase Edman sequence analysis. The optimized Frit-FAB experiment was consequently evaluated for the partial characterization of aminopeptidase B recently purified to homogeneity from Saccharomyces cerevisiae. Sequence-specific ions observed in the Frit-FAB mass spectra of these tryptic peptides were identical with those commonly observed in high-energy collision-induced dissociation (CID) spectra and included side-chain fragment ions that differentiated leucine from isoleucine. These fragment ions were used to deduce entire amino acid sequences for several of the tryptic peptides.",
"corpus_id": 2402559,
"title": "Optimization of the fragmentation in a frit-fast atom bombardment ion source for the sequencing of peptides at the picomole level."
} | {
"abstract": "The feasibility of using photodissociation of protonated peptide molecules to sequence specific fragment ions with a 193-nm pulsed laser beam in a magnetic deflection tandem mass spectrometer of EBEB configuration was demonstrated. Although the short pulse (15 ns) and low repetition rate (100 Hz) of the excimer laser permitted the irradiation of only ∼ 0.02% of the (M + H)+ ions exiting MS-1, a photon-induced decomposition spectrum of the heptapeptide angiotensio III (Mr 930.5) was produced that was practically the same (but with better signal-to-noise ratio) as that generated by collision-activated dissociation at the same low duty cycle. Because of the low and pulsed fragment ion currents, an array detector was used to record the spectra. A dependence between laser power and abundance of fragment ions was observed (increased power increases the relative abundance of ions of low mass). Laser power was varied from 6 to 80 mJ. Formation of fragment ions from a large peptide (melittin, M, 2844.75) was also observed. The results permit the design of modifications that may increase the fragment ion yield to 10% or higher, which would make photon-induced decomposition a useful method for magnetic deflection mass spectrometers.",
"corpus_id": 30493542,
"title": "Photon-induced dissociation with a four-sector tandem mass spectrometer"
} | {
"abstract": "High-quality impermeable concrete as cover of reinforcing steel is one of the best methods of preventing chlorides from initiating corrosion. American Association of State Highway and Transportation Officials (AASHTO) T 277 and American Society for Testing and Materials (ASTM) C 1202-91 Rapid Chloride Permeability Test were developed because of a need to rapidly measure permeability of concrete to chloride ions. Some criticisms have been made, mainly concerning the fact that conditions under which measurements are made may cause changes to the specimens. This work was designed to observe how changes in the testing procedure affect results. Factors such as temperature, AC impedance, initial DC current, charge passed, and chloride ion profiles were monitored during polarization of four different concretes. It was found that simple measurement of initial current or resistivity gave the same ranking as conventional tests for the four concretes and can replace the rapid chloride test with a considerable time saving.",
"corpus_id": 137028147,
"score": 0,
"title": "INVESTIGATION OF THE RAPID CHLORIDE PERMEABILITY TEST"
} |
{
"abstract": "BackgroundDespite the continuous production of genome sequence for a number of organisms, reliable, comprehensive, and cost effective gene prediction remains problematic. This is particularly true for genomes for which there is not a large collection of known gene sequences, such as the recently published chicken genome. We used the chicken sequence to test comparative and homology-based gene-finding methods followed by experimental validation as an effective genome annotation method.ResultsWe performed experimental evaluation by RT-PCR of three different computational gene finders, Ensembl, SGP2 and TWINSCAN, applied to the chicken genome. A Venn diagram was computed and each component of it was evaluated. The results showed that de novo comparative methods can identify up to about 700 chicken genes with no previous evidence of expression, and can correctly extend about 40% of homology-based predictions at the 5' end.ConclusionsDe novo comparative gene prediction followed by experimental verification is effective at enhancing the annotation of the newly sequenced genomes provided by standard homology-based methods.",
"corpus_id": 1627166,
"title": "Gene finding in the chicken genome"
} | {
"abstract": "Driven by competition, automation, and technology, the genomics community has far exceeded its ambition to sequence the human genome by 2005. By analyzing mammalian genomes, we have shed light on the history of our DNA sequence, determined that alternatively spliced RNAs and retroposed pseudogenes are incredibly abundant, and glimpsed the apparently huge number of non-coding RNAs that play significant roles in gene regulation. Ultimately, genome science is likely to provide comprehensive catalogs of these elements. However, the methods we have been using for most of the last 10 years will not yield even one complete open reading frame (ORF) for every gene--the first plateau on the long climb toward a comprehensive catalog. These strategies--sequencing randomly selected cDNA clones, aligning protein sequences identified in other organisms, sequencing more genomes, and manual curation--will have to be supplemented by large-scale amplification and sequencing of specific predicted mRNAs. The steady improvements in gene prediction that have occurred over the last 10 years have increased the efficacy of this approach and decreased its cost. In this Perspective, I review the state of gene prediction roughly 10 years ago, summarize the progress that has been made since, argue that the primary ORF identification methods we have relied on so far are inadequate, and recommend a path toward completing the Catalog of Protein Coding Genes, Version 1.0.",
"corpus_id": 1746286,
"title": "Genome annotation past, present, and future: how to define an ORF at each locus."
} | {
"abstract": "Cell lines are widely used as in vitro models of tumorigenesis. However, an increasing number of researchers have found that cell lines differ from their sourced tumour samples after long-term cell culture. The application of unsuitable cell lines in experiments will affect the experimental accuracy and the treatment of patients. Therefore, it is imperative to identify optimal cell lines for each cancer type. Here, we review the methods used to evaluate cell lines since 2005. Furthermore, gene expression, copy number and mutation profiles from The Cancer Genome Atlas and the Cancer Cell Line Encyclopedia are used to calculate similarity between tumours and cell lines. Then, the ideal cell lines to use for experiments for eight types of cancers are found by combining the results with Gene Ontology functional similarity. After verification, the optimal cell lines have the same genomic characteristics as their homologous tumour samples. The contaminated cell lines identified in previous research are also determined to be unsuitable in vitro cancer models here. Moreover, our study suggests that some of the commonly used cell lines are not suitable cancer models. In summary, we provide a reference for ideal cell lines to use in in vitro experiments and contribute to improving the accuracy of future cancer research. Furthermore, this research provides a foundation for identifying more effective treatment strategies.",
"corpus_id": 3919049,
"score": 1,
"title": "Optimization of cell lines as tumour models by integrating multi-omics data"
} |
{
"abstract": "Dopamine is an important neurotransmitter in vertebrate and invertebrate nervous systems and is widely distributed in the brain of the honey bee, Apis mellifera. We report here the functional characterization and cellular localization of the putative dopamine receptor gene, Amdop3, a cDNA clone isolated and identified in previous studies as AmBAR3 (Apis mellifera Biogenic Amine Receptor 3). The Amdop3 cDNA encodes a 694 amino acid protein, AmDOP3. Comparison of AmDOP3 to Drosophila melanogaster sequences indicates that it is orthologous to the D2-like dopamine receptor, DD2R. Using AmDOP3 receptors expressed in HEK293 cells we show that of the endogenous biogenic amines, dopamine is the most potent AmDOP3 agonist, and that activation of AmDOP3 receptors results in down regulation of intracellular levels of cAMP, a property characteristic of D2-like dopamine receptors. In situ hybridization reveals that Amdop3 is widely expressed in the brain but shows a pattern of expression that differs from that of either Amdop1 or Amdop2, both of which encode D1-like dopamine receptors. Nonetheless, overlaps in the distribution of cells expressing Amdop1, Amdop2 and Amdop3 mRNAs suggest the likelihood of D1:D2 receptor interactions in some cells, including subpopulations of mushroom body neurons.",
"corpus_id": 1129545,
"title": "Characterization of a D2-like dopamine receptor (AmDOP3) in honey bee, Apis mellifera."
} | {
"abstract": "Dopamine is found in both neuronal and non-neuronal tissues in the larval stage of the fruit fly, Drosophila melanogaster, and functions as a signaling molecule in the nervous system. Although dopaminergic neurons in the central nervous system (CNS) were previously thought solely to be interneurons, recent studies suggest that dopamine may also act as a neuromodulator in humoral pathways. We examined both application of dopamine on intact larval CNS-segmental preparations and isolated neuromuscular junctions (NMJs). Dopamine rapidly decreased the rhythmicity of the CNS motor activity. Application of dopamine on neuromuscular preparations of the segmental muscles 6 and 7 resulted in a dose-responsive decrease in the excitatory junction potentials (EJPs). With the use of focal, macro-patch synaptic current recordings the quantal evoked transmission showed a depression of vesicular release at concentrations of 10 microM. Higher concentrations (1 mM) produced a rapid decrement in evoked vesicular release. Dopamine did not alter the shape of the spontaneous synaptic currents, suggesting that dopamine does not alter the postsynaptic muscle fiber receptiveness to the glutaminergic motor nerve transmission. The effects are presynaptic in causing a reduction in the number of vesicles that are stimulated to be released due to neural activity.",
"corpus_id": 16838874,
"title": "Dopaminergic modulation of motor neuron activity and neuromuscular function in Drosophila melanogaster."
} | {
"abstract": "Abstract A combined cooling, heating and power (CCHP) microgrid with distributed cogeneration units and renewable energy sources provides an effective solution to energy-related problems, including increasing energy demand, higher energy costs, energy supply security, and environmental concerns. This paper presents an overall review of the modeling, planning and energy management of the CCHP microgrid. The performance of a CCHP microgrid from the technical, economical and environmental viewpoints are closely dependent on the microgrid’s design and energy management. Accurate modeling is the first and most important step for planning and energy management of the CCHP microgrid, so this paper first presents an review of modeling of the CCHP microgrid. With regard to planning of the CCHP microgrid, several widely accepted evaluation methods and indicators for cogeneration systems are given. Research efforts on the planning methods of the CCHP microgrid are then introduced. Finally, the energy management of the CCHP microgrid is briefly reviewed in terms of cogeneration decoupling, control strategies, emission reduction and problem solving methods.",
"corpus_id": 109150262,
"score": 0,
"title": "Modeling, planning and optimal energy management of combined cooling, heating and power microgrid: A review"
} |
{
"abstract": "The aim of this study is to determine which compounds are present into drinking water packaged in poly(ethylene terephtalate) bottles and to know the origin of these substances in relationship with the material. A screening procedure was established for the detection of unknown compounds into bottled water. A panel of water bottles has been tested after exposure to extreme conditions of temperature and UV radiation to accelerate the possible migration of substances. At the same time, physico-chemical characterization of polymeric material has been performed namely calorimetric analysis, IRTF and low-frequency mechanical spectroscopy. The results thus obtained allow understanding in a better way the migration kinetics of molecules inside the polymer, it means the pollution of the bottled water.",
"corpus_id": 18554643,
"title": "Characterization of poly(ethylene terephthalate) used in commercial bottled water"
} | {
"abstract": "Antimony is a regulated contaminant that poses both acute and chronic health effects in drinking water. Previous reports suggest that polyethylene terephthalate (PET) plastics used for water bottles in Europe and Canada leach antimony, but no studies on bottled water in the United States have previously been conducted. Nine commercially available bottled waters in the southwestern US (Arizona) were purchased and tested for antimony concentrations as well as for potential antimony release by the plastics that compose the bottles. The southwestern US was chosen for the study because of its high consumption of bottled water and elevated temperatures, which could increase antimony leaching from PET plastics. Antimony concentrations in the bottled waters ranged from 0.095 to 0.521 ppb, well below the US Environmental Protection Agency (USEPA) maximum contaminant level (MCL) of 6 ppb. The average concentration was 0.195+/-0.116 ppb at the beginning of the study and 0.226+/-0.160 ppb 3 months later, with no statistical differences; samples were stored at 22 degrees C. However, storage at higher temperatures had a significant effect on the time-dependent release of antimony. The rate of antimony (Sb) release could be fit by a power function model (Sb(t)=Sb 0 x[Time, h]k; k=8.7 x 10(-6)x[Temperature ( degrees C)](2.55); Sb 0 is the initial antimony concentration). For exposure temperatures of 60, 65, 70, 75, 80, and 85 degrees C, the exposure durations necessary to exceed the 6 ppb MCL are 176, 38, 12, 4.7, 2.3, and 1.3 days, respectively. Summertime temperatures inside of cars, garages, and enclosed storage areas can exceed 65 degrees C in Arizona, and thus could promote antimony leaching from PET bottled waters. Microwave digestion revealed that the PET plastic used by one brand contained 213+/-35 mgSb/kg plastic; leaching of all the antimony from this plastic into 0.5L of water in a bottle could result in an antimony concentration of 376 ppb. 
Clearly, only a small fraction of the antimony in PET plastic bottles is released into the water. Still, the use of alternative types of plastics that do not leach antimony should be considered, especially for climates where exposure to extreme conditions can promote antimony release from PET plastics.",
"corpus_id": 37362596,
"title": "Antimony leaching from polyethylene terephthalate (PET) plastic used for bottled drinking water."
} | {
"abstract": "The comparison of the various sources of food contamination with organic chemicals suggests that in the public, but also among experts, the perception of risk is often distorted. Firstly, neither pesticides nor environmental pollutants contribute the most; the amount of material migrating from food packaging into food may well be 100 times higher. Secondly, control of these large migrants is often lagging behind the standards set up for other sources, since many of the components (particularly those not being “starting materials”) have not been identified and, thus, not toxicologically evaluated. Finally, attitudes towards different types of food contaminants are divergent, also reflected by the legal measures: for most sources of food contamination there are strict rules calling for minimization, whereas the European packaging industry has even requested a further increase in the tolerance to as close as possible to the limit set by the toxicologists. This paper calls for a more realistic perception and more coherent legal measures—and improvements in the control of migration from packaging material.",
"corpus_id": 19961445,
"score": 2,
"title": "Food Contamination with Organic Materials in Perspective: Packaging Materials as the Largest and Least Controlled Source? A View Focusing on the European Situation"
} |
{
"abstract": "ABSTRACT Exploring compositional and functional characteristics of the rumen microbiome can improve the understanding of its role in rumen function and cattle feed efficiency. In this study, we applied metatranscriptomics to characterize the active rumen microbiomes of beef cattle with different feed efficiencies (efficient, n = 10; inefficient, n = 10) using total RNA sequencing. Active bacterial and archaeal compositions were estimated based on 16S rRNAs, and active microbial metabolic functions including carbohydrate-active enzymes (CAZymes) were assessed based on mRNAs from the same metatranscriptomic data sets. In total, six bacterial phyla (Proteobacteria, Firmicutes, Bacteroidetes, Spirochaetes, Cyanobacteria, and Synergistetes), eight bacterial families (Succinivibrionaceae, Prevotellaceae, Ruminococcaceae, Lachnospiraceae, Veillonellaceae, Spirochaetaceae, Dethiosulfovibrionaceae, and Mogibacteriaceae), four archaeal clades (Methanomassiliicoccales, Methanobrevibacter ruminantium, Methanobrevibacter gottschalkii, and Methanosphaera), 112 metabolic pathways, and 126 CAZymes were identified as core components of the active rumen microbiome. As determined by comparative analysis, three bacterial families (Lachnospiraceae, Lactobacillaceae, and Veillonellaceae) tended to be more abundant in low-feed-efficiency (inefficient) animals (P < 0.10), and one archaeal taxon (Methanomassiliicoccales) tended to be more abundant in high-feed-efficiency (efficient) cattle (P < 0.10). Meanwhile, 32 microbial metabolic pathways and 12 CAZymes were differentially abundant (linear discriminant analysis score of >2 with a P value of <0.05) between two groups. Among them, 30 metabolic pathways and 11 CAZymes were more abundant in the rumen of inefficient cattle, while 2 metabolic pathways and 1 CAZyme were more abundant in efficient animals. 
These findings suggest that the rumen microbiomes of inefficient cattle have more diverse activities than those of efficient cattle, which may be related to the host feed efficiency variation. IMPORTANCE This study applied total RNA-based metatranscriptomics and showed the linkage between the active rumen microbiome and feed efficiency (residual feed intake) in beef cattle. The data generated from the current study provide fundamental information on active rumen microbiome at both compositional and functional levels, which serve as a foundation to study rumen function and its role in cattle feed efficiency. The findings that the active rumen microbiome may contribute to variations in feed efficiency of beef cattle highlight the possibility of enhancing nutrient utilization and improve cattle feed efficiency through modification of rumen microbial functions.",
"corpus_id": 4770238,
"title": "Metatranscriptomic Profiling Reveals Linkages between the Active Rumen Microbiome and Feed Efficiency in Beef Cattle"
} | {
"abstract": "Ruminants fulfill their energy needs for growth primarily through microbial breakdown of plant biomass in the rumen. Several biotic and abiotic factors influence the efficiency of fiber degradation, which can ultimately impact animal productivity and health. To provide more insight into mechanisms involved in the modulation of fibrolytic activity, a functional DNA microarray targeting genes encoding key enzymes involved in cellulose and hemicellulose degradation by rumen microbiota was designed. Eight carbohydrate-active enzyme (CAZyme) families (GH5, GH9, GH10, GH11, GH43, GH48, CE1, and CE6) were selected which represented 392 genes from bacteria, protozoa, and fungi. The DNA microarray, designated as FibroChip, was validated using targets of increasing complexity and demonstrated sensitivity and specificity. In addition, FibroChip was evaluated for its explorative and semi-quantitative potential. Differential expression of CAZyme genes was evidenced in the rumen bacterium Fibrobacter succinogenes S85 grown on wheat straw or cellobiose. FibroChip was used to identify the expressed CAZyme genes from the targeted families in the rumen of a cow fed a mixed diet based on grass silage. Among expressed genes, those encoding GH43, GH5, and GH10 families were the most represented. Most of the F. succinogenes genes detected by the FibroChip were also detected following RNA-seq analysis of RNA transcripts obtained from the rumen fluid sample. Use of the FibroChip also indicated that transcripts of fiber degrading enzymes derived from eukaryotes (protozoa and anaerobic fungi) represented a significant proportion of the total microbial mRNA pool. FibroChip represents a reliable and high-throughput tool that enables researchers to monitor active members of fiber degradation in the rumen.",
"corpus_id": 3571443,
"title": "FibroChip, a Functional DNA Microarray to Monitor Cellulolytic and Hemicellulolytic Activities of Rumen Microbiota"
} | {
"abstract": "The fruits and vegetables consumption benefits have been known since ancient times; as they are natural antioxidants, they prevent or delay cellular damage following several mechanisms, such as electron transfer to free radicals or metal chelate catalysts, reduce the risk of cardiovascular diseases, and the harmful effects of radiation [1–3]. The fruits antibacterial, antiviral and anti-inflammatory activity, have been associated with their antioxidant power [4–6]. This is due to a widely distributed concentration, in seeds, peels, pulp and flowers, of non-enzymatic antioxidants such as vitamin C, vitamin E, polyphenols and anthocyanins. The flavonoids are the main antioxidants in natural products. Their basic skeleton contains the benzene A-ring connected to the benzene B-ring via a heterocyclic pyrane C-ring, Scheme 1. They are classified [7], into flavones, flavonols, flavan-3-ols, isoflavones, flavanones, anthocyanidins, dihydroflavonols, flavan-3,4-diols, coumarins, chalcones, dihydrochalcones and aurones, based on the B-ring relative position to the C-ring, as well as the functional groups (ketones, hydroxyls), and the presence or not of a double bond in the C-ring, Scheme 1. Seasonal availability and short postharvest life may limit consumption of fresh fruits, hence a large range of industrial techniques are commonly employed to preserve fruits: solar drying, microwave drying, osmotic dehydration, spray-drying, freezing and freeze-drying [8, 9]. These processes induce to the fruits physical, chemical, and biochemical changes. Therefore, it is important to consider the effect of the preserving process on the fruits antioxidant capacity [10]. Polyphenols extraction using modern environmentally techniques are recommended, because of their effectiveness and relatively low cost for obtaining natural antioxidants from fruits. 
Microwave assisted extraction (MAE) [11], ultrasound assisted extraction (bath and probe) (UAE) [12], ultrasound-microwave-assisted extraction (UMAE) [13], supercritical fluid extraction (SFE) [14] and pressurized solvent extraction (PSE) [15], were used. Flavonoids are natural phenolic derivatives that, in low concentration, can provide health benefits by preventing biomolecules (proteins, nucleic acids, lipids, sugars) oxidative damage through free radical mediated reactions. The flavonoids, in selected Mediterranean seasonal fruits: apricot, sour cherry, plum, pomegranate, date, prickly pear (cactus fruit), and nectarine, by RP-HPLC, coupled with photodiode array and electrochemical detectors, after microwave-ultrasound assisted extraction, using flavonoid standards, were detected. The total antioxidant capacity in the lyophilized fruit extracts, by differential pulse voltammetry, (electrochemical index-EI), integrated peak area, and chronoamperometry, was evaluated. In the lyophilized fruit extracts, and the catechin standard, the free radical scavenger efficient concentration (EC50), using the DPPH assay, was determined.",
"corpus_id": 2190639,
"score": 1,
"title": "Flavonoids in Selected Mediterranean Fruits: Extraction, Electrochemical Detection and Total Antioxidant Capacity Evaluation"
} |
{
"abstract": "Patterned magnetic nanowires are extremely well suited for data storage and logic devices. They offer non-volatile storage, fast switching times, efficient operation and a bistable magnetic configuration that are convenient for representing digital information. Key to this is the high level of control that is possible over the position and behaviour of domain walls (DWs) in magnetic nanowires. Magnetic random access memory based on the propagation of DWs in nanowires has been released commercially, while more dynamic shift register memory and logic circuits have been demonstrated. Here, we discuss the present standing of this technology as well as reviewing some of the basic DW effects that have been observed and the underlying physics of DW motion. We also discuss the future direction of magnetic nanowire technology to look at possible developments, hurdles to overcome and what nanowire devices may appear in the future, both in classical information technology and beyond into quantum computation and biology.",
"corpus_id": 6867201,
"title": "Nanowire spintronics for storage class memories and logic"
} | {
"abstract": "In this paper, we report on the synthesis of FeCo/Cu multisegmented nanowires by means of pulse electrodeposition in nanoporous anodic aluminum oxide arrays supported on silicon chips. By adjustment of the electrodeposition conditions, such as the pulse scheme and the electrolyte, alternating segments of Cu and ferromagnetic FeCo alloy can be fabricated. The segments can be built with a wide range of lengths (15-150 nm) and exhibit a close-to-pure composition (Cu or FeCo alloy) as suggested by energy-dispersive X-ray mapping results. The morphology and the crystallographic structure of different nanowire configurations have been assessed thoroughly, concluding that Fe, Co, and Cu form solid solution. Magnetic characterization using vibrating sample magnetometry and magnetic force microscopy reveals that by introduction of nonmagnetic Cu segments within the nanowire architecture, the magnetic easy axis can be modified and the reduced remanence can be tuned to the desired values. The experimental results are in agreement with the provided simulations. Furthermore, the influence of nanowire magnetic architecture on the magnetically triggered protein desorption is evaluated for three types of nanowires: Cu, FeCo, and multisegmented FeCo15nm/Cu15nm. The application of an external magnetic field can be used to enhance the release of proteins on demand. For fully magnetic FeCo nanowires the applied oscillating field increased protein release by 83%, whereas this was found to be 45% for multisegmented FeCo15nm/Cu15nm nanowires. Our work suggests that a combination of arrays of nanowires with different magnetic configurations could be used to generate complex substance concentration gradients or control delivery of multiple drugs and macromolecules.",
"corpus_id": 5124053,
"title": "Multisegmented FeCo/Cu nanowires: electrosynthesis, characterization, and magnetic control of biomolecule desorption."
} | {
"abstract": "A bio-photoelectrochemical cell (BPEC) based on a fuel-free self-circulation water-oxygen-water system was fabricated. It consists of Ni:FeOOH modified n-type bismuth vanadate (BiVO4 ) photoanode and laccase catalyzed biocathode. In this BPEC, irradiation of the photoanode generates photocurrent for photo-oxidation of water to oxygen, which is reduced to water again at the laccase biocathode. Of note, the by-products of two electrode reactions could continue to be reacted, which means the H2 O and O2 molecules are retained in an infinite loop of water-oxygen-water without any sacrificial chemical components. As a result, the assembled fuel-free BPEC exhibits good performance with an open-circuit potential of 0.97 V and a maximum power density of 205 μW cm-2 at 0.44 V. This BPEC based on a self-circulation system offers a fuel-free model to enhance multiple energy conversion and application in reality.",
"corpus_id": 10453384,
"score": 0,
"title": "Fuel-Free Bio-photoelectrochemical Cells Based on a Water/Oxygen Circulation System with a Ni:FeOOH/BiVO4 Photoanode."
} |
{
"abstract": "This artifact is based on EVF, an extensible and expressive Java visitor framework. EVF aims at reducing the effort involved in creation and reuse of programming languages. EVF an annotation processor that automatically generate boilerplate ASTs and AST for a given an Object Algebra interface. This artifact contains source code of the case study on \"Types and Programming Languages\", illustrating how effective EVF is in modularizing programming languages. There is also a microbenchmark in the artifact that shows that EVF has reasonable performance with respect to traditional visitors.",
"corpus_id": 506819,
"title": "EVF: An Extensible and Expressive Visitor Framework for Programming Language Reuse (Artifact)"
} | {
"abstract": "A type system is a syntactic method for automatically checking the absence of certain erroneous behaviors by classifying program phrases according to the kinds of values they compute. The study of type systems -- and of programming languages from a type-theoretic perspective -- has important applications in software engineering, language design, high-performance compilers, and security.This text provides a comprehensive introduction both to type systems in computer science and to the basic theory of programming languages. The approach is pragmatic and operational; each new concept is motivated by programming examples and the more theoretical sections are driven by the needs of implementations. Each chapter is accompanied by numerous exercises and solutions, as well as a running implementation, available via the Web. Dependencies between chapters are explicitly identified, allowing readers to choose a variety of paths through the material.The core topics include the untyped lambda-calculus, simple type systems, type reconstruction, universal and existential polymorphism, subtyping, bounded quantification, recursive types, kinds, and type operators. Extended case studies develop a variety of approaches to modeling the features of object-oriented languages.",
"corpus_id": 5088113,
"title": "Types and programming languages"
} | {
"abstract": "In order to solve the problem of state estimation and fault detection for a class of stochastic switched nonlinear system with unknown characteristics of observation noise change, a system state estimation algorithm based on interactive multiple model and unscented Kalman filter (IMM-UKF) is proposed. The algorithm uses the UKF to estimate the state of each subsystem at different time points, then fuses the state estimation results of different subsystems to obtain the final state estimation, and realizes the estimation of the real state of the system. In order to achieve real-time tracking of system observation noise, a fuzzy controller is established to adjust the observation noise of IMM- UKF in real time, and a fuzzy adaptive IMM-UKF algorithm (FAIMM-UKF) is proposed. For actuator failure in a class of stochastic switched nonlinear systems, FAIMM-UKF is used to estimate the system state. Based on the result of the state estimation, residual and residual evaluation functions are established to detect the actuator fault. Finally, the effectiveness of the proposed algorithm is verified by simulation experiment.",
"corpus_id": 171096895,
"score": 1,
"title": "Fault Detection for Stochastic Switched System Based on Fuzzy Adaptive Unscented Kalman Filter"
} |
{
"abstract": "Purpose - – The purpose of this paper is to present a multi-objective model to the optimum portfolio selection using genetic algorithm and analytic hierarchy process (AHP). Portfolio selection is a multi-objective decision-making problem in financial management. Design/methodology/approach - – The proposed approach solves the problem in two stages. In the first stage, the portfolio selection problem is formulated as a zero-one mathematical programming model to optimize two objectives, namely, return and risk. A genetic algorithm (GA) with multiple fitness functions called as Multiple Fitness Functions Genetic Algorithm is applied to solve the formulated model. The proposed GA results in several non-dominated portfolios being in the Pareto (efficient) frontier. A decision-making approach based on AHP is then used in the second stage to select the portfolio from among the solutions obtained by GA which satisfies a decision-maker’s interests at most. Findings - – The proposed decision-making system enables an investor to find a portfolio which suits for his/her expectations at most. The main advantage of the proposed method is to provide prima-facie information about the optimal portfolios lying on the efficient frontier and thus helps investors to decide the appropriate investment alternatives. Originality/value - – The value of the paper is due to its comprehensiveness in which seven criteria are taken into account in the selection of a portfolio including return, risk, beta ratio, liquidity ratio, reward to variability ratio, Treynor’s ratio and Jensen’s alpha.",
"corpus_id": 153220109,
"title": "Optimum portfolio selection using a hybrid genetic algorithm and analytic hierarchy process"
} | {
"abstract": "This paper describes a decision-making model of dynamic portfolio optimization for adapting to the change of stock prices based on an evolutionary computation method named genetic network programming (GNP). The proposed model, making use of the information from technical indices and candlestick chart, is trained to generate portfolio investment advice. Experimental results on the Japanese stock market show that the decision-making model using time adapting genetic network programming (TA-GNP) method outperforms other traditional models in terms of both accuracy and efficiency. A comprehensive analysis of the results is provided, and it is clarified that the TA-GNP method is effective on the portfolio optimization problem.",
"corpus_id": 36074754,
"title": "A model of portfolio optimization using time adapting genetic network programming"
} | {
"abstract": "Abstract A two-stage procedure for the design of a cellular manufacturing system is proposed. The first stage forms the part families. The use of clustering techniques with a new proximity measure is advocated for this stage. The proximity measure uses the manufacturing operations and the operations' sequences. The second stage forms the machine cells. An integer programming model is proposed for this stage. The solution of this model will specify the type and the number of machines in each cell and the assignment of the part families to the cells. The relevance of this approach in the design of flexible manufacturing systems is discussed.",
"corpus_id": 108875932,
"score": 2,
"title": "A framework for the design of cellular manufacturing systems"
} |
{
"abstract": "Endocarditis is an increasingly frequent complication of drug addiction. Precise localization of the site of involvement is necessary should antibiotic therapy fail and surgical therapy become indicated. This is a report of a patient with Pseudomonas endocarditis in whom the site of involvement was accurately localized noninvasively to the tricuspid valve by two-dimensional echocardiography. This was confirmed at the time of excision of the tricuspid valve.",
"corpus_id": 274184,
"title": "Tricuspid endocarditis in a drug addict; detection of tricuspid vegetations by two-dimensional echocardiography."
} | {
"abstract": "Between June 1980 and September 1981 we evaluated 24 cases of endocarditis from methicillin-resistant Staphylococcus aureus. All of the cases occurred in drug addicts and all were community-acquired. The patients ranged in age from 21 to 59 years and represented an older population than that generally reported for bacterial endocarditis in addicts. Men and women were equally represented (one man presented twice). This unusually high proportion of women may reflect a difference in the rate and location of carriage of methicillin-resistant S. aureus compared with that of methicillin-sensitive staphylococci. Three patients died, one of whom had signed out of the hospital on the 14th day and returned moribund 27 days later. Vancomycin treatment for 28 days was adequate therapy for most patients.",
"corpus_id": 22545607,
"title": "Community-acquired methicillin-resistant Staphylococcus aureus endocarditis in the Detroit Medical Center."
} | {
"abstract": "The pulmonary lesions in a case of mycoplasmal pneumonia are described. Mycoplasmal antigen was demonstrated in the lung by the indirect fluorescence antibody technic. The pulmonary changes were similar to those reported in cases of cold agglutinin-positive primary atypical pneumonia. In addition, cells morphologically indistinguishable from Reed-Sternberg cells were found in alveolar spaces, and caution is directed against placing undue emphasis on their presence. The use of immunofluorescent examination of pulmonary tissue as a diagnostic technic for patients with suspected mycoplasmal pneumonia is suggested.",
"corpus_id": 19596645,
"score": 2,
"title": "Mycoplasmal pneumonia in a patient with rheumatic heart disease."
} |
{
"abstract": "BackgroundPsoriasis is a multifactorial disease involving both genetic predisposition and external triggers, resulting in epidermal and immune dysfunctions. Regardless of the severity of the disease, patients require additional basic topical treatment with emollients. Basic skin care products are well known for their role in moisture retention and symptom control in psoriasis, yet patients underuse them. Dry skin and cutaneous inflammation are associated with an impaired epidermal barrier function. This breakdown of the skin barrier causes the release of proinflammatory mediators that exaggerate inflammation.Objectivesto provide recommendations for the use of emollients (including ceramides, urea, keratolytic agents, zinc salts, niacinamide), thermal water and skin care products in psoriasis.MethodsA review of the current literature from 2000 to 2012 using Medline and Ovid was performed by a working group of five European Dermatologists with clinical and research experience in psoriasis.ResultsEither alone or used adjunctively, basic topical therapy can restore and protect skin barrier function, increase remission times between flare-ups and enhance the effects of pharmaceutical therapy.ConclusionWe provide physicians with a tool to assist them in implementing basic skin care in an integrated disease management approach.",
"corpus_id": 360126,
"title": "Recommendations for adjunctive basic skin care in patients with psoriasis"
} | {
"abstract": "Abstract: Emollients or moisturizers can act as an important adjunctive therapy of topical treatment in psoriatic patients. However, the interest of emollients has never been clearly demonstrated; i.e. are they able to improve topical treatment efficacy and/or maintain continuous remission of the disease? The aim of this study was to evaluate the effect of an emollient on patients with mild plaque psoriasis during and after standard local corticosteroid therapy. Results showed that the use of an emollient can limit relapses after the end of corticotherapy, and maintain the improvement obtained after 1 month corticotherapy at clinical level (physician global assessment) and skin dryness.",
"corpus_id": 273302,
"title": "Emollient for maintenance therapy after topical corticotherapy in mild psoriasis"
} | {
"abstract": "Abstract Administration of sublethal amounts of actinomycin D, α-amanitin, cycloheximide and lead acetate to mice enhances the sensitivity to endotoxin 80- to 350-fold, as judged by the decrease in the LD 50 , whereas cyclophosphamide, methotrexate, 5-fluorouracil and azathioprine fail to enhance sensitivity. Pre-treatment of mice with actinomycin D or cycloheximide, and simultaneous administration of lead acetate does not markedly alter the clearance of a small but toxic dose (12·5 μg) of Cr 51 -labelled endotoxin from the circulation 60 min after endotoxin injection. These substances however, decrease the clearance rate of a colloidal carbon suspension. It is concluded that inhibition of RNA and protein synthesis is responsible for the sensitizing effect and that this inhibition is not operative through a decreased clearance of endotoxin, although the phagocytic capacity is impaired. A decreased synthesis of detoxifying enzymes may be the prime target of inhibitors of RNA and protein synthesis responsible for the events leading to increased sensitivity.",
"corpus_id": 1381968,
"score": 1,
"title": "Toxicity, clearance and distribution of endotoxin in mice as influenced by actinomycin D, cycloheximide, -amanitin and lead acetate."
} |
{
"abstract": "Evolutionary algorithms (EAs) require large scale computing resources when tackling real world problems. Such computational requirement is derived from inherently complex fitness evaluation functions, large numbers of individuals per generation, and the number of iterations required by EAs to converge to a satisfactory solution. Therefore, any source of computing power can significantly benefit researchers using evolutionary algorithms. We present the use of volunteer computing (VC) as a platform for harnessing the computing resources of commodity machines that are nowadays present at homes, companies and institutions. Taking into account that currently desktop machines feature significant computing resources (dual cores, gigabytes of memory, gigabit network connections, etc.), VC has become a cost-effective platform for running time consuming evolutionary algorithms in order to solve complex problems, such as finding substructure in the Milky Way Galaxy, the problem we address in detail in this chapter.",
"corpus_id": 2220204,
"title": "Evolutionary Algorithms on Volunteer Computing Platforms: The MilkyWay@Home Project"
} | {
"abstract": "Effective visualization is critical to developing, analyzing, and optimizing distributed systems. We have developed OverView, a tool for online/offline distributed systems visualization, that enables modular layout mechanisms, so that different distributed system high-level programming abstractions such as actors or processes can be visualized in intuitive ways. OverView uses by default a hierarchical concentric layout that distinguishes entities from containers allowing migration patterns triggered by adaptive middleware to be visualized. In this paper, we develop a force-directed layout strategy that connects entities according to their communication patterns in order to directly exhibit the application communication topologies. In force-directed visualization, entities’ locations are encoded with different colors to illustrate load balancing. We compare these layouts using quantitative metrics including communication to entity ratio, applied on common distributed application topologies. We conclude that modular visualization is necessary to effectively visualize distributed systems since no one layout is best for all applications.",
"corpus_id": 4876137,
"title": "Modular Visualization of Distributed Systems"
} | {
"abstract": "A ‘standard task graph set’ is proposed for fair evaluation of multiprocessor scheduling algorithms. Developers of multiprocessor scheduling algorithms usually evaluate them using randomly generated task graphs. This makes it difficult to compare the performance of algorithms developed in different research groups. To make it possible to evaluate algorithms under the same conditions so that their performances can be compared fairly, this paper proposes a standard task graph set covering many of the proposed task graph generation methods. This paper also evaluates as examples two heuristic algorithms (CP and CP/MISF), a practical sequential optimization algorithm (DF/IHS), and a practical parallel optimization algorithm (PDF/IHS) using the proposed standard task graph set. This set is available at http://www.kasahara.elec.waseda.ac.jp/schedule/. Copyright © 2002 John Wiley & Sons, Ltd.",
"corpus_id": 62555470,
"score": 1,
"title": "A standard task graph set for fair evaluation of multiprocessor scheduling algorithms"
} |
{
"abstract": "We describe linear-time algorithms for solving a class of problems that involve transforming a cost function on a grid using spatial information. These problems can be viewed as a generalization of classical distance transforms of binary images, where the binary image is replaced by an arbitrary function on a grid. Alternatively they can be viewed in terms of the minimum convolution of two functions, which is an important operation in grayscale morphology. A consequence of our techniques is a simple and fast method for computing the Euclidean distance transform of a binary image. Our algorithms are also applicable to Viterbi decoding, belief propagation, and optimal control.",
"corpus_id": 12212153,
"title": "Distance transforms of sampled functions"
} | {
"abstract": "Matching laser range scans observed at different points in time is a crucial component of many robotics tasks, including mobile robot localization and mapping. While existing techniques such as the Iterative Closest Point (ICP) algorithm perform well under many circumstances, they often fail when the initial estimate of the offset between scans is highly uncertain. This paper presents a novel approach to 2D laser scan matching. CRF-Matching generates a Condition Random Field (CRF) to reason about the joint association between the measurements of the two scans. The approach is able to consider arbitrary shape and appearance features in order to match laser scans. The model parameters are learned from labeled training data. Inference is performed efficiently using loopy belief propagation. Experiments using data collected by a car navigating through urban environments show that CRF-Matching is able to reliably and efficiently match laser scans even when no a priori knowledge about their offset is given. They additionally demonstrate that our approach can seamlessly integrate camera information, thereby further improving performance.",
"corpus_id": 17629144,
"title": "CRF-Matching: Conditional Random Fields for . . ."
} | {
"abstract": "Although Generative Adversarial Networks achieve state-of-the-art results on a variety of generative tasks, they are regarded as highly unstable and prone to miss modes. We argue that these bad behaviors of GANs are due to the very particular functional shape of the trained discriminators in high dimensional spaces, which can easily make training stuck or push probability mass in the wrong direction, towards that of higher concentration than that of the data generating distribution. We introduce several ways of regularizing the objective, which can dramatically stabilize the training of GAN models. We also show that our regularizers can help the fair distribution of probability mass across the modes of the data generating distribution, during the early phases of training and thus providing a unified solution to the missing modes problem.",
"corpus_id": 13002849,
"score": -1,
"title": "Mode Regularized Generative Adversarial Networks"
} |
{
"abstract": "For a two-component system, a derivative that specifies the concentration-dependence of one chemical potential can be calculated from the corresponding derivative of the other chemical potential by applying the Gibbs-Duhem Equation. To extend the practical utility of this binary thermodynamic linkage to systems having any number of components, we present a derivation based on a previously unrecognized recursive relationship. Thus, for each independently variable component, kappa, any derivative of its chemical potential, mukappa, with respect to one of the mole ratios {mkappa identical with nkappa/nomega} is related to as a characteristic series of progressively higher order derivatives of muomega for a single \"probe\" component, omega, with respect to certain of the {mkappa}. For aqueous solutions in which omega is solvent water and one or more of the solutes (kappa) is dilute, under typical conditions each sum of terms expressing a derivative of mukappa consists of at most a few numerically significant contributions, which can be quantified, or at least estimated, by analyzing osmometric data to determine how the single chemical potential muomega depends on the {mkappa} without neglecting any significant contributions from the other components. Expressions derived here also will provide explicit criteria for testing various approximations built into alternative analytic strategies for quantifying derivatives that specify the {mkappa} dependences of mukappa for selected components. Certain quotients of these derivatives are of particular interest in so far as they gauge important thermodynamic effects due to \"preferential interactions\".",
"corpus_id": 9330365,
"title": "Gibbs-Duhem-based relationships among derivatives expressing the concentration dependences of selected chemical potentials for a multicomponent system."
} | {
"abstract": "Noncovalent self-assembly of biopolymers is driven by molecular interactions between functional groups on complementary biopolymer surfaces, replacing interactions with water. Since individually these interactions are comparable in strength to interactions with water, they have been difficult to quantify. Solutes (osmolytes, denaturants) exert often large effects on these self-assembly interactions, determined in sign and magnitude by how well the solute competes with water to interact with the relevant biopolymer surfaces. Here, an osmometric method and a water-accessible surface area (ASA) analysis are developed to quantify and interpret the interactions of the remarkable osmolyte glycine betaine (GB) with molecular surfaces in water. We find that GB, lacking hydrogen bond donors, is unable to compete with water to interact with anionic and amide oxygens; this explains its effectiveness as an osmolyte in the Escherichia coli cytoplasm. GB competes effectively with water to interact with amide and cationic nitrogens (hydrogen bonding) and especially with aromatic hydrocarbon (cation-pi). The large stabilizing effect of GB on lac repressor-lac operator binding is predicted quantitatively from ASA information and shown to result largely from dehydration of anionic DNA phosphate oxygens in the protein-DNA interface. The incorporation of these results into theoretical and computational analyses will likely improve the ability to accurately model intra- and interprotein interactions. Additionally, these results pave the way for development of solutes as kinetic/mechanistic and thermodynamic probes of conformational changes and formation/disruption of molecular interfaces that occur in the steps of biomolecular self-assembly processes.",
"corpus_id": 9306307,
"title": "Interactions of the osmolyte glycine betaine with molecular surfaces in water: thermodynamics, structural interpretation, and prediction of m-values."
} | {
"abstract": "Semipermeable membrane devices (SPMD) and polar organic chemical integrative samplers (POCIS) were exposed to a cocktail of organic chemicals using a flow‐through system. Samplers were removed and analyzed every 7 d over a four‐week period in order to determine sampling rates (Rs) for individual compounds. Prior to laboratory exposure, half of the samplers were allowed to foul naturally for six weeks, in order to examine differences in uptake due to fouling. The amount of fouling ranged from 0.2 to 2.8 g dry weight/dm2 for POCIS and 0.1 to 1.4 g dry weight/dm2 for SPMDs, and the pattern of accumulation was also different between them. The Rs values were determined by fitting curves to time course uptake data and also by using performance reference compounds (PRCs) for SPMDs. Sampling rates ranged from 2.7 to 14.2 L/d for SPMDs and 0.01 to 0.27 L/d for POCIS. Fouled SPMDs showed a reduction in Rs (<20%) for all but one compound, and there was a similar reduction in the release of PRCs. However, PRC‐predicted Rs values were overall somewhat higher than those from fitted curves. Uptake of alkylated phenols in POCIS was generally higher (up to 55%) in fouled samplers. The reason for this is not known, but is possibly due to some reduction in interactions with the membrane in fouled samplers. There was no overall pattern in the relationship of sampling rate differences with log KOW or over time for either sampler. Release of compounds from POCIS after a drop in exposure water concentrations provides some encouragement for the application of a PRC approach to polar passive samplers.",
"corpus_id": 21866991,
"score": 1,
"title": "Small but Different Effect of Fouling on the Uptake Rates of Semipermeable Membrane Devices and Polar Organic Chemical Integrative Samplers"
} |
{
"abstract": "BackgroundThe maternal mortality ratio of Uganda is still high and the leading causes of maternal mortality are postpartum haemorrhage (PPH), severe pre-eclampsia and eclampsia. Criteria-based audit (CBA) is a way of improving quality of care that has not been commonly used in low income countries. This study aimed at finding out the quality of care provided to patients with these conditions and to find out if the implementation of recommendations from the audit cycle resulted in improvement in quality of care.MethodsThis study was a CBA following a time series study design. It was done in St. Francis Hospital Nsambya and it involved assessment of adherence to standards of care for PPH, severe pre-eclampsia and eclampsia. An initial audit was done for 3 consecutive months, then findings were presented to health workers and recommendations made; we implemented the recommendations in a subsequent month and this comprised three interventions namely continuing medical education (CME), drills and displaying guidelines; a re-audit was done in the proceeding 3 consecutive months and analysis compared adherence rates of the initial audit with those of the re-audit.ResultsPearson Chi-Square test revealed that the adherence rates of 7 out of 10 standards of care for severe pre-eclampsia/eclampsia were statistically significantly higher in the re-audit than in the initial audit; also, the adherence rates of 3 out of 4 standards of care for PPH were statistically significantly higher in the re-audit than in the initial audit.ConclusionThe giving of feedback on quality of care and the implementation of recommendations made during the CBA including CME, drills and displaying guidelines was associated with improvements in the quality of care for patients with PPH, severe pre-eclampsia and eclampsia.",
"corpus_id": 3792693,
"title": "Assessment of quality of care among in-patients with postpartum haemorrhage and severe pre-eclampsia at st. Francis hospital nsambya: a criteria-based audit"
} | {
"abstract": "‘The consumer of knowledge can never know what a dicky thing knowledge is until he has tried to produce it’. F.J. Roethlisberger, investigator at Hawthorne \n\nThere is a familiar anecdote that relates, with variations, that experiments with improved factory lighting increased the productivity of workers. The outcome seemed clear until someone turned the lighting down to below baseline, whereupon output increased still further. The moral of this tale, referred to as the Hawthorne effect, is that people change their behaviour when they think you are watching it. The story relates to the first of many experiments performed at the Hawthorne works of the Western Electric Company in Chicago from November 1924 onwards. The original aim was to test claims that brighter lighting increased productivity, but uncontrolled studies proved uninterpretable. The workers were therefore divided into matched control and test groups and, to the surprise of the investigators, productivity rose equally in both. In the next experiment, lighting was reduced progressively for the test group until, at 1.4 foot-candles, they protested that they could not see what they were doing. Until then the productivity of both groups had once again risen in parallel. Two volunteers went on to demonstrate that a high output was possible at 0.06 foot-candles, equivalent to moonlight.\n\nThe investigators next changed the light bulbs daily in the sight of the workers, telling them that the new bulbs were brighter. The women commented favourably on the change and increased their work-rate, even though the new bulbs were identical to those that had been removed. This and other manoeuvres showed beyond doubt that productivity related to what the subjects believed, and not to objective changes in their circumstances. These at least seem to be the main facts behind the popular legend, although these particular experiments were never written up, …",
"corpus_id": 45781769,
"title": "The Hawthorne studies-a fable for our times?"
} | {
"abstract": "Publisher Summary This chapter discusses the adhesion of cell. The coating of a solid adherend by a liquid adhesive, in terms of wettability, surface irregularity, and penetrability is described in the chapter. Adhesive phenomena are considered to play an important part in morphogenesis and in the metastasis of malignant disease. These aspects of adhesion are also reviewed in the chapter together with relevant information obtained from studies of cell cultures. The adhesion of two surfaces may result from their mechanical interlocking and/or interfacial forces. Chemical bonds are not usually considered as “adhesive” bonds in the physical sciences. Van der Waals' forces are also involved in adhesion. These may be further classified as (1) London forces, (2) Debye forces, and (3) Keesom forces. An estimate of the work of adhesion between two solids may be obtained from the “adhesion number.” In this method, quartz particles were allowed to settle onto a quartz plate. The particles were then counted. After this, the plate was inverted and the remaining particles were counted. This number expressed as a percentage of the total number of particles originally present on the plate was termed the adhesion number and gave a measure of the adhesion between the particles and the quartz surface. Modifications of this method, by which cells adhering to surfaces are detached by a stream of fluid instead of gravitational forces, are described by Fenn.",
"corpus_id": 37276009,
"score": 1,
"title": "The adhesion of cells."
} |
{
"abstract": "Nongranulomatous, nonspecific interstitial pneumonitis was a predominating or prominent histopathologic finding in 62 percent of 128 granuloma-containing specimens from open lung biopsies obtained from patients with sarcoidosis. Data from this study, combined with observations by others on the evolution of experimentally induced granulomas, indicate that interstitial pneumonitis represents a very early lesion, possibly the initial lesion, in pulmonary sarcoidosis. Because of the relatively large error of sampling inherent in the currently increasing practice of obtaining small specimens for lung biopsy via the flexible fiberoptic bronchoscope, we anticipate that interstitial pneumonitis will be seen as the only histopathologic finding in these specimens with increasing frequency. It is therefore important to recognize that interstitial pneumonitis is a characteristic, although nondiagnostic, morphologic feature of pulmonary sarcoidosis.",
"corpus_id": 824951,
"title": "Nongranulomatous interstitial pneumonitis in sarcoidosis. Relationship to development of epithelioid granulomas."
} | {
"abstract": "Pulmonary sarcoidosis is a granulomatous disease characterized by the accumulation of activated T cells in the lower respiratory tract. To evaluate the hypothesis that sarcoidosis is characterized by a selective activation and expansion of a limited repertoire of T cell receptor (TCR)-specific T cells, we analyzed TCRAV and TCRBV gene expression in bronchoalveolar lavage (BAL) T cells from sarcoidosis patients and, for comparison, from patients with other pulmonary diseases where lymphocyte accumulation is not observed. Increased expression of TCRAV9 and TCRAV14 in BAL T cells was observed in sarcoidosis patients compared to these controls. To ascertain whether the accumulation of AV9 and AV14 expressing BAL T cells in sarcoidosis was the result of clonal expansion, the lengths of the CDR3 regions in AV9 and AV14 transcripts were determined. Some individual patient samples showed evidence of oligoclonality. However, in most cases, the data were consistent with the presence of many different clones. These data suggest that the bulk of BAL T cells in sarcoid patients are either nonspecifically recruited or are responding to a complex mixture of antigens.",
"corpus_id": 1696619,
"title": "TCR expression and clonality analysis in pulmonary sarcoidosis."
} | {
"abstract": "This paper introduces a new method for document page segmentation. This method is based on the analysis of the background white space that surrounds the printed regions on the page. It does not make any assumptions about the shape of the regions as opposed to most earlier approaches which assume that printed regions are rectangular. It is capable of identifying and describing regions of complex shapes more accurately than existing methods. It requires no a priori knowledge. The background white space is covered with tiles and the contour of each region is identified by tracing through these white tiles that encircle it. The method can segment page images with severe skew without skew correction. The white tiles on the image can also be used in subsequent document analysis processes such as the classification of the image regions.",
"corpus_id": 46947302,
"score": 0,
"title": "Flexible page segmentation using the background"
} |
{
"abstract": "The distribution of internal shear stresses in a 2D dislocation system is investigated for when external shear stress is applied. This problem serves as a natural continuation of the previous work of Csikor and Groma (2004 Phys. Rev. B 58 2969), where an analytical result was given for the stress distribution function at zero applied stress. First, the internal stress distribution generated by a set of randomly positioned ideal dislocation dipoles is studied. Analytical calculations are carried out for this case. The theoretical predictions are checked by numerical simulations, showing perfect agreement. It is found that for real relaxed dislocation configurations the role of dislocation multipoles cannot be neglected, but the theory presented can still be applied.",
"corpus_id": 17045047,
"title": "The probability distribution of internal stresses in externally loaded 2D dislocation systems"
} | {
"abstract": "The dynamics of dislocations is reported to exhibit a range of glassy properties. We study numerically various versions of 2D edge dislocation systems, in the absence of externally applied stress. Two types of glassy behavior are identified (i) dislocations gliding along randomly placed, but fixed, axes exhibit relaxation to their spatially disordered stable state; (ii) if both climb and annihilation are allowed, irregular cellular structures can form on a growing length scale before all dislocations annihilate. In all cases both the correlation function and the diffusion coefficient are found to exhibit aging. Relaxation in case (i) is a slow power law, furthermore, in the transient process (ii) the dynamical exponent z approximately 6, i.e., the cellular structure coarsens relatively slowly.",
"corpus_id": 9012039,
"title": "Dislocation glasses: aging during relaxation and coarsening."
} | {
"abstract": "Abstract The question of the description of the elastic fields of dislocations and of the plastic strains generated by their motion is central to the connection between dislocation-based and continuum approaches of plasticity. In the present work, the homogenization of the elementary shears produced by dislocations is discussed within the frame of a discrete-continuum numerical model. In the latter, a dislocation dynamics simulation is substituted for the constitutive form traditionally used in finite element calculations. As an illustrative example of the discrete-continuum model, the stress field of single dislocations is obtained as a solution of the boundary value problem. The hybrid code is also shown to account for size effects originating from line tension effects and from stress concentrations at the tip of dislocation pile-ups.",
"corpus_id": 59449915,
"score": 2,
"title": "Homogenization method for a discrete-continuum simulation of dislocation dynamics"
} |
{
"abstract": "In recent years, there has been a global evolution in the way energy is generated and consumed, driven by climate change, energy independence and the impending decline of fossil fuels. This has led to rising interest in the deployment of multi-agent systems in energy domains, which inherently have uncertain and dynamic environments with limited resources. In such domains, the key challenge is to minimize energy consumption while satisfying the comfort level of building occupants under uncertainty (regarding agent negotiation actions). This paper presents a new development to enhance the performance of the Power Management in Smart Home simulator. This development is based on the anticipatory and multi-agent systems used in this simulator.",
"corpus_id": 10074099,
"title": "New development in Anticipatory Agent System used for Power Management in Smart Home Simulator"
} | {
"abstract": "Robert Rosen, 2nd edition, with contributions by Judith Rosen, John J. Klineman, and Mihai Nadin, New York, Springer, 2012, lx+472 pp., ISBN 978-1-4614-1268-7 Robert Rosen, a mathematical biologist...",
"corpus_id": 28158758,
"title": "Anticipatory systems: philosophical, mathematical, and methodological foundations"
} | {
"abstract": "The development of enabling infrastructure for the next generation of multi-agent systems consisting of large numbers of agents and operating in open environments is one of the key challenges for the multi-agent community.Current infrastructure support does not materially assist in the development of sophisticated agent coordination strategies. It is the need for and the development of such a high-level support structure that will be the focus of this paper. A domain-independent (generic) agent architecture is proposed that wraps around an agent's problem-solving component in order to make problem solving responsive to real-time constraints, available network resources, and the need to coordinate—both in the large and small—with problem-solving activities of other agents. This architecture contains five components, local agent scheduling, multi-agent coordination, organizational design, detection and diagnosis, and on-line learning, that are designed to interact so that a range of different situation-specific coordination strategies can be implemented and adapted as the situation evolves. The presentation of this architecture is followed by a more detailed discussion on the interaction among these components and the research questions that need to be answered to understand the appropriateness of this architecture for the next generation of multi-agent systems.",
"corpus_id": 5609948,
"score": 2,
"title": "Reflections on the Nature of Multi-Agent Coordination and Its Implications for an Agent Architecture"
} |
{
"abstract": "Sulfur aromatic compounds, such as mono-, di-, tri-, and tetraalkyl-substituted thiophene, benzothiophenes, dibenzothiophenes, are the molecular components of many fossils (petroleum, oil shale, tar sands, bitumen). Structural units of natural, cross-linked heteroaromatic polymers present in brown coals, turf, and soil are similar to those of sulfur aromatic compounds. Many sulfur aromatic compounds are found in the streams of petroleum refining and upgrading (naphthas, gas oils) and in the consumer products (gasoline, diesel, jet fuels, heating fuels). Besides fossils, the structural fragments of sulfur aromatic compounds are present in molecules of certain organic semiconductors, pesticides, small molecule drugs, and in certain biomolecules present in human body (pheomelanin pigments). Photocatalysis is the frontier area of physical chemistry that studies chemical reactions initiated by absorption of photons by photocatalysts, that is, upon electronic rather than thermal activation, under \"green\" ambient conditions. This review provides systematization and critical review of the fundamental chemical and physicochemical information on heterogeneous photocatalysis of sulfur aromatic compounds accumulated in the last 20-30 years. Specifically, the following topics are covered: physicochemical properties of sulfur aromatic compounds, major classes of heterogeneous photocatalysts, mechanisms and reactive intermediates of photocatalytic reactions of sulfur aromatic compounds, and the selectivity of these reactions. Quantum chemical calculations of properties and structures of sulfur aromatic compounds, their reactive intermediates, and the structure of adsorption complexes formed on the surface of the photocatalysts are also discussed.",
"corpus_id": 6402140,
"title": "Heterogeneous photocatalytic reactions of sulfur aromatic compounds."
} | {
"abstract": "Photolysis of dibenzothiophene sulfoxide results in the formation of dibenzothiophene and oxidized solvent. Though quantum yields are low, chemical yields of the sulfide are quite high. Yields of the oxidized solvents can also be high. Typical products are phenol from benzene, cyclohexanol, and cyclohexene from cyclohexane and 2-cyclohexenol and epoxycyclohexane from cyclohexene. A number of experiments designed to elucidate the mechanism of the hydroxylation were carried out, including measurements of quantum yields as a function of concentration, solvent, quenchers, and excitation wavelength. These data are inconsistent with a mechanism involving a sulfoxide dimer, which also does not properly account for the solvent oxidations. It is suggested that the active oxidizing agent may be atomic oxygen O(3P) or a closely related noncovalent complex, based on the nature of the oxidation chemistry, comparison to known rate constants for O(3P) reactivity, and the quantum yield data.",
"corpus_id": 8829221,
"title": "Photodeoxygenation of Dibenzothiophene Sulfoxide: Evidence for a Unimolecular S−O Cleavage Mechanism1"
} | {
"abstract": "This study examined the particulate emissions from a pre-emissions control era vehicle operated on both leaded and unleaded fuels for the purpose of establishing a historical benchmark. A pre-control vehicle was located that had been rebuilt with factory original parts to approximate an as-new vehicle prior to 1968. The vehicle had less than 20,000 miles on the rebuilt engine and exhaust. The vehicle underwent repeated FTP-75 tests to determine its regulated emissions, including particulate mass. Additionally, measurements of the particulate size distribution were made, as well as particulate lead concentration. These tests were conducted first with UTG96 certification fuel, followed by UTG96 doped with tetraethyl lead to approximate 1968 levels. Results of these tests, including transmission electron micrographs of individual particles from both the leaded and unleaded case are presented. The FTP composite PM emissions from this vehicle averaged 40.5 mg/mile using unleaded fuel. The results from the leaded fuel tests showed that the FTP composite PM emissions increased to an average of 139.5 mg/mile. Analysis of the particulate size distribution for both cases demonstrated that the mass-based size distribution of particles for this vehicle is heavily skewed towards the nano-particle range. The leaded-fuel tests showed a significant increase in mass concentration at the <0.1 micron size compared with the unleaded-fuel test case. The leaded-fuel tests produced lead emissions of nearly 0.04 g/mi, more than a 4-order-of-magnitude difference compared with unleaded-fuel results. Analysis of the size-fractionated PM samples showed that the lead PM emissions tended to be distributed in the 0.25 micron and smaller size range.",
"corpus_id": 15395861,
"score": 2,
"title": "Particulate Emissions from a Pre-Emissions Control Era Spark-Ignition Vehicle: A Historical Benchmark"
} |
{
"abstract": "The pyrolysis of rice husk was conducted in a fixed-bed reactor with a sweeping nitrogen gas to investigate the effects of pressure on the pyrolytic behaviors. The release rates of main gases during the pyrolysis, the distributions of four products (char, bio-oil, water and gas), the elemental compositions of char, bio-oil and gas, and the typical compounds in bio-oil were determined. It was found that the elevation of pressure from 0.1MPa to 5.0MPa facilitated the dehydration and decarboxylation of bio-oil, and the bio-oils obtained under the elevated pressures had significantly less oxygen and higher calorific value than those obtained under atmospheric pressure. The former bio-oils embraced more acetic acid, phenols and guaiacols. The elevation of pressure increased the formation of CH4 partially via the gas-phase reactions. An attempt is made in this study to clarify \"the pure pressure effect\" and \"the combined effect with residence time\".",
"corpus_id": 9247,
"title": "Pressurized pyrolysis of rice husk in an inert gas sweeping fixed-bed reactor with a focus on bio-oil deoxygenation."
} | {
"abstract": "Fast pyrolysis of rice husk was performed in a spout-fluid bed to produce water-soluble organics. The effects of mineral bed materials (red brick, calcite, limestone, and dolomite) on yield and quality of organics were evaluated with the help of principal component analysis (PCA). Compared to quartz sand, red brick, limestone, and dolomite increased the yield of the water-soluble organics by 6-55% and the heating value by 16-19%. The relative content of acetic acid was reduced by 23-43% with calcite, limestone and dolomite when compared with quartz sand. The results from PCA showed all minerals enhanced the ring-opening reactions of cellulose into furans and carbonyl compounds rather than into monomeric sugars. Moreover, calcite, limestone, and dolomite displayed the ability to catalyze the degradation of heavy compounds and the demethoxylation reaction of guaiacols into phenols. Minerals, especially limestone and dolomite, were beneficial to the production of water-soluble organics.",
"corpus_id": 30575747,
"title": "Application of mineral bed materials during fast pyrolysis of rice husk to improve water-soluble organics production."
} | {
"abstract": "In the present study water extractable arabinoxylans (WEAX) from a Mexican spring wheat flour (cv. Tacupeto F2001) were isolated, characterized and gelled and the gel rheological properties and microstructure were investigated. These WEAX presented an arabinose to xylose ratio of 0.66, a ferulic acid and diferulic acid content of 0.526 and 0.036 µg/mg WEAX, respectively and a Fourier Transform Infra-Red (FT-IR) spectrum typical of arabinoxylans. The intrinsic viscosity and viscosimetric molecular weight values for WEAX were 3.5 dL/g and 504 kDa, respectively. WEAX solution at 2% (w/v) formed gels induced by a laccase as cross-linking agent. Cured WEAX gels registered storage (G’) and loss (G’’) modulus values of 31 and 5 Pa, respectively and a diferulic acid content of 0.12 µg/mg WEAX, only traces of triferulic acid were detected. Scanning electron microscopy analysis of the lyophilized WEAX gels showed that this material resembles that of an imperfect honeycomb.",
"corpus_id": 14000129,
"score": 1,
"title": "Characterization of Water Extractable Arabinoxylans from a Spring Wheat Flour: Rheological Properties and Microstructure"
} |
{
"abstract": "Cultures of islets cells were obtained from the cadaveric pancreas of 16–25-week human fetuses. During culture two waves of mitotic activity were observed. An increase in the insulin concentration in the culture medium took place after a wave of mitotic activity.",
"corpus_id": 2374292,
"title": "Cultures of human fetal pancreatic islet cells"
} | {
"abstract": "Isolated pancreatic islets of normal hamsters were perifused either in a closed or in an open system. When the buffer was recirculated and the endogenous insulin was allowed to accumulate, the islets secreted significantly less insulin than when the system was open and the endogenous insulin was washed away. The addition of monocomponent insulin or of proinsulin to the perifusion buffer significantly decreased insulin secretion. The inhibitory action of proinsulin was significantly greater than that of monocomponent insulin. C peptide had no effect. When pancreatic islets were incubated in a fixed volume of stationary buffer containing unlabeled glucose (1.0 mg or 3.0 mg/ml) and glucose-U-14C (1.0 μCi/ml), the amount of insulin secreted and the 14CO2 produced by each islet decreased progressively as the number of islets in the sample increased. Under these conditions, the concentration of insulin required to inhibit insulin secretion increased with the concentration of glucose in the medium. Proinsulin did not alter the incorporation of leucine-4,5-3H into total extractable insulin (insulin + proinsulin). Thus, insulin and proinsulin appear to inhibit insulin release, but not insulin synthesis.",
"corpus_id": 12253591,
"title": "Insulin secretion and glucose uptake by isolated islets of the hamster. Effect of insulin, proinsulin and C-peptide."
} | {
"abstract": "Abstract. A new type of gas chimney exhibiting an unconventional linear planform is found. These chimneys are termed Linear Chimneys, which have been observed in 3-D seismic data offshore of Angola. Linear Chimneys occur parallel to adjacent faults, often within preferentially oriented tier-bound fault networks of diagenetic origin (also known as anisotropic polygonal faults, PFs), in salt-deformational domains. These anisotropic PFs are parallel to salt-tectonic-related structures, indicating their submission to horizontal stress perturbations generated by the latter. Only in areas with these anisotropic PF arrangements do chimneys and their associated gas-related structures, such as methane-derived authigenic carbonates and pockmarks, have linear planforms. In areas with the classic isotropic polygonal fault arrangements, the stress state is isotropic, and gas expulsion structures of the same range of sizes exhibit circular geometry. These events indicate that chimney's linear planform is heavily influenced by stress anisotropy around faults. The initiation of polygonal faulting occurred 40 to 80 m below the present day seafloor and predates Linear Chimney formation. The majority of Linear Chimneys nucleated in the lower part of the PF tier below the impermeable portion of fault planes and a regional impermeable barrier within the PF tier. The existence of polygonal fault-bound traps in the lower part of the PF tier is evidenced by PF cells filled with gas. These PF gas traps restricted the leakage points of overpressured gas-charged fluids along the lower portion of PFs, hence controlling the nucleation sites of chimneys. Gas expulsion along the lower portion of PFs preconfigured the spatial organisation of chimneys. Anisotropic stress conditions surrounding tectonic and anisotropic polygonal faults coupled with the impermeability of PFs determined the directions of long-term gas migration and linear geometries of chimneys. Methane-related carbonates that precipitated above Linear Chimneys inherited the same linear planform geometry, and both structures record the timing of gas leakage and palaeo-stress state; thus, they can be used as a tool to reconstruct orientations of stress in sedimentary successions. This study demonstrates that overpressure hydrocarbon migration via hydrofracturing may be energetically more favourable than migration along pre-existing faults.",
"corpus_id": 55780493,
"score": 0,
"title": "Formation of linear planform chimneys controlled by preferential hydrocarbon leakage and anisotropic stresses in faulted fine-grained sediments, offshore Angola"
} |
{
"abstract": "Introduction Development of strictures of hepaticojejunal anastomoses (HJA) is observed in 6–30% of patients and mortality after repeated reconstructive interventions ranges from 13% to 25%. Double balloon enteroscopy (DBE) allows one to visualize the zone of Roux-en-Y anastomosis after reconstructive operations on the bile ducts for differentiation between stricture of HJA and recurrent cholangitis. Aim Report on the first experience of DBE studies of the jejunal loop after reconstructive operations on the biliary tract. Material and methods During the period 2002–2012 we performed in our centre 86 hepaticojejunostomies after iatrogenic bile duct injuries. Mean age was 51 ±6 years. Patients with Roux-en-Y HJA and jejunum loop with Braun's bypass anastomosis who underwent DBE with endoscopic retrograde cholangiography (DBE-RChG) in our unit between February 2009 and December 2012 were enrolled in this study. A total of 33 procedures were performed during this period. All of them involved examination of HJA through a jejunum loop by DBE with capture of bile for bacteriology, Roux loop wall for biopsy and miniinvasive procedures. Results The DBE-RChG after visualization of the HJA zone was performed in 21 cases: 3 of them had the jejunum loop with Braun's bypass, 18 had HJA on the Roux loop. In 13 cases stricture of HJA was confirmed: in 6, reoperations were performed; in 7, miniinvasive procedures (3 laser vaporizations, 2 stone extractions, 1 lithotripsy, and 1 in which stone extraction was carried out first, followed by laser vaporization). The DBE-RChG was performed in 13 (61.9%) patients. The overall diagnostic success with Braun's bypass was 100%, and after Roux-en-Y reconstruction 10 of 18 cases (55.6%). With accumulating experience, in 2012 the diagnostic success of DBE-RChG of HJA on the Roux loop increased to 81.3%. Conclusions MRI cholangiography in our series frequently (10.3%) showed a false-positive result in favor of HJA strictures. DBE examination of HJA with additional cholangiography is a modern and precise method of detection of HJA strictures. DBE balloon dilation and argon-laser vaporization or DBE lithoextraction are new ways of miniinvasive treatment.",
"corpus_id": 299958,
"title": "The use of double balloon enteroscopy for diagnosis and treatment of strictures of hepaticojejunal anastomoses after primary correction of bile duct injuries"
} | {
"abstract": "Alperovich B.I. (Tomsk, Russia), Bagnenko S.F. (Saint Petersburg, Russia), Bebezov B.Kh. (Bishkek, Kyrgyzstan), Beburishvili A.G. (Volgograd, Russia), Vafin A.Z. (Stavropol, Russia), Vinnik Yu.S. (Krasnoyarsk, Russia), Vlasov A.P. (Saransk, Russia), Granov A.M. (Saint Petersburg, Russia), Grishin I.N. (Minsk, Belarus), Zarivchatsky M.F. (Perm, Russia), Karimov Sh.I. (Tashkent, Uzbekistan), Krasilnikov D.M. (Kazan, Russia), Lupaltsev V.I. (Kharkiv, Ukraine), Poluektov V.L. (Omsk, Russia), Prudkov M.I. (Yekaterinburg, Russia), Seisembaev M.A. (Almaty, Kazakhstan), Sovtsov S.A. (Chelyabinsk, Russia), Timerbulatov V.M. (Ufa, Russia), Chugunov A.N. (Kazan, Russia), Shtofin S.G. (Novosibirsk, Russia)",
"corpus_id": 78590410,
"title": "ANNALS OF SURGICAL HEPATOLOGY"
} | {
"abstract": "Proton electron double resonance imaging (PEDRI) is an emerging technique that utilizes the Overhauser effect to enable in vivo and in vitro imaging of free radicals in biological systems. Nitroxide spin probes enable measurement of tissue redox state based on their reduction to diamagnetic hydroxylamines. PEDRI instrumentation at 0.02 T was applied to assess the ability to image the in vivo distribution, clearance, and metabolism of nitroxide radicals in living mice. Using phantoms of 2,2,5,5‐tetramethyl‐3‐carboxylpyrrolidine‐N‐oxyl (PCA) in normal saline the dependence of the enhancement on RF power and spin probe concentration was determined. Enhancements of up to −23 were obtained in phantoms with 2 mM levels. Maximum enhancement of −7 was observed in vivo. Coronal images of nitroxide‐infused mice enabled visualization of the kinetics of spin probe uptake and clearance in different organs including the great vessels, heart, lungs, kidneys, and bladder with an in‐plane spatial resolution of 0.6 mm. PEDRI of living mice was also performed using 3‐carbamoyl‐proxyl and 2,2,6,6‐tetramethyl‐4‐oxopiperidine‐N‐oxyl to compare the different rate of clearance and metabolism among different nitroxide probes. PCA, due to its intravascular compartmentalization, provided the sharpest contrast for the vascular system and highest enhancement values in the PEDRI images among the three nitroxides. Magn Reson Med, 2006. © 2006 Wiley‐Liss, Inc.",
"corpus_id": 28530669,
"score": 1,
"title": "In vivo proton electron double resonance imaging of the distribution and clearance of nitroxide radicals in mice"
} |
{
"abstract": "Boost inverters are typically intended for systems where the average output voltage must be larger than the input dc voltage, such as uninterruptible power supplies (UPS) and PV systems, unlike the single-phase voltage source inverter (VSI), which uses a buck topology and whose average output voltage is always lower than the dc input voltage. Such inverters need two stages of power conversion and a larger number of switches, with a boost dc-dc converter between the dc source and the inverter, depending on the voltage and power levels. This work proposes a novel dc-to-ac boost inverter based on sinusoidal pulse-width modulation (SPWM) control that generates, in a single conversion stage, an output whose peak value exceeds the dc input, depending on the duty cycle of the converters. The proposed inverter reduces switching losses and achieves much higher efficiency than the conventional boost inverter owing to the reduced number of switches. The proposed modulation strategy reduces harmonics and energy loss at the inverter output, making the proposed inverter a sufficient new solution for many applications such as automotive electronics, PV systems, solar home applications and other power supply systems.",
"corpus_id": 160022688,
"title": "Design and Controlling of Proposed Efficient Boost-Inverter Implemented using Boost DC-DC Converter"
} | {
"abstract": "The objective of this work is to provide a continuous AC power supply by using a semi-isolated multi-input converter (S-MIC) for a hybrid PV/WIND power charger system. The DC output from the hybrid charger system is boosted and converted into AC. The S-MIC makes the system very simple in design, delivers continuous power, and also reduces the cost of the system. The boost inverter is used to boost the output voltage from the hybrid PV/WIND system. Simulation results reveal that the hybrid system provides constant power to the load.",
"corpus_id": 14105396,
"title": "Continuous AC supply using semi-isolated multi-input converter for hybrid PV/WIND power charger system"
} | {
"abstract": "The controller in a pulse-width-modulation (PWM) power converter has to stabilize the system and guarantee an almost constant output voltage in spite of the perturbations in the input voltage and output load over as large a bandwidth as possible. Boost and flyback power converters have a right-half-plane zero (RHPZ) in their transfer function from the duty cycle to the output voltage, which makes it difficult to achieve the aforementioned goals. Here, the authors propose to design a controller using H/sup /spl infin// control theory, via the solution of two algebraic Riccati equations. The almost optimal H/sup /spl infin// controller is of the same order as the converter and has a relatively low DC gain. The closed-loop characteristics of a typical low-power boost power converter with four different control schemes were compared by computer simulation. The H/sup /spl infin// control was found to be superior in a wide frequency range, while being outperformed by the others at extremely low frequencies. Good agreement was found between simulation results and experimental measurements.",
"corpus_id": 109422304,
"score": 2,
"title": "H/sup /spl infin// control applied to boost power converters"
} |
{
"abstract": "The possible association between finger dermatoglyphic patterns and altitude and surname distribution was analyzed in a sample of adult males from the province of Jujuy, Argentina. We also investigated the biological affinity of this population with other South American natives and admixed populations. Fingerprints were obtained from 996 healthy men, aged 18-20 years, from the highlands (HL: 2500m, Puna and Quebrada) and lowlands (LL: Valle and Selvas). Surnames were classified into native/autochthonous (A) or foreign (F), resulting in three surname classes: FF, when both paternal and maternal surnames were of foreign origin; FA, when one surname was foreign and the other was native; and AA, when both surnames were native. Frequencies of finger dermatoglyphic patterns - arches (A), radial loops (RL), ulnar loops (UL), and whorls (W) - were determined for each digit in relation to geographic location, altitude, and surname origin, resulting in the following categories: HL-FF, HL-FA, HL-AA, LL-FF, LL-FA, and LL-AA. The statistical analyses showed that UL and RL were more common in individuals of HL origin, whereas W and A were more frequent in the LL males (p<0.05). Significant associations were observed between finger dermatoglyphic patterns and surname origin when geographic altitude was considered. In the HL group, UL was associated with AA and FA; in the LL group, the presence of A was associated with FF and FA. The distribution of dermatoglyphic patterns shows that the population of Jujuy belongs to the Andean gene pool and that it has undergone differential levels of admixture related to altitude.",
"corpus_id": 5249937,
"title": "Surnames, geographic altitude, and digital dermatoglyphics in a male population from the province of Jujuy (Argentina)."
} | {
"abstract": "Context Every single person has got a unique dermal ridge pattern; this pattern is genetically determined. Dermal ridge patterns once established become fixed all throughout life. Fingerprint patterns offer a simple, convenient, and economical technique for recognition of some diseases. Aims The aim of this study is to find a relation between dermal ridge patterns and breast cancer among female Egyptian populations. Patients and methods A total of 500 patients with breast cancer and 500 women without cancer were included in our study. The fingerprints of all fingers of both hands of our patients and control group were obtained, using classic method of ink and paper. The fingerprints were then examined by a forensic medicine specialist for identification of the patterns and ridge count. Results The whorl pattern was the commonest pattern among the diseased group, representing 46%; this pattern was significantly increased when compared with the same pattern in the control group. It was found that the mean ridge count of the diseased group was less than that of control group. The frequency of six or more whorls was more common in the diseased group (46%) when compared with the same number in control group (13.4%). Conclusion Fingerprint patterns and ridge counts are easy, simple, noninvasive, cheap, and applicable methods for screening high-risk groups of breast cancer.",
"corpus_id": 211234094,
"title": "Fingerprint patterns, a novel risk factor for breast cancer in Egyptian populations: a case–control study"
} | {
"abstract": "Abstract Five different β(1→3) glucans were tested for immune adjuvant activity on the in vivo induction of alloreactive murine cytotoxic T-lymphocytes (CTL). The β(1→3) glucans, lentinan, pachyman, pachymaran, and two differently substituted hydroxyethylated pachymans strongly enhanced the in vivo generation of alloreactive CTL. The augmenting effect of i.p.-administered β(1→3) glucans exhibited a clear dose-response relationship and was strictly dependent on the injection schedule used. Injection of high doses of β(1→3) glucans as well as the injection during the late phase of the immune response markedly suppressed the magnitude of the lytic CTL activity induced. When the optimal conditions for enhanced CTL responses were chosen, the augmented CTL activity within spleen cells and mesenteric lymph node cells persisted for more than 25 days. Since β(1→3) glucans are chemically defined substances without obvious toxic side effects, they may be of potential use to augment in vivo antigen-specific T-cell responses.",
"corpus_id": 83507548,
"score": 1,
"title": "β(1→3) Glucan-mediated Augmentation of Alloreactive Murine Cytotoxic T-Lymphocytes in Vivo"
} |
{
"abstract": "1. Girling DJ. Adverse effect of anti mycobacterial drugs. Drugs 1982;23:56-74. 2. Blajchman MA, Lowry RC, Pettit JE, Stradling P. Rifampicin – induced immune thrombocytopenia. Br Med J 1970;3:24-6. 3. George JN. Drug induced thrombocytopenia: a systemic review of published case reports. Ann Intern Med 1998;129:886-90. 4. Garg R, Gupta V, Mehra S, Singh R, Prasad R. Rifampicin induced thrombocytopenia. Indian J tuberc 2007;54:94-6. 5. Poole G, Stradling P, Worrledge S. Potentially serious side effects of high-dose twice-weekly rifampicin. Br Med J 1971;3:343-7. 6. Mehta YS, Jijina FF, Badakere SS, Pathare AV, Mohanty D. Rifampicin induced immune thrombocytopenia. Tuberc Lung Dis 1996;77:558-62. 7. Banu Rekha VV, Adhilakshmi AR, Jawahar MS. Rifampicin-induced acute thrombocytopenia. Lung India 2005;22:122-4. 8. Hadfied JW. Rifampicin-induced thrombocytopenia. Postgrad Med J 1980;56:59-60. 9. Verma AK, Singh A, Chandra A, Kumar S, Gupta RK. Rifampicin-induced thrombocytopenia. Indian J Pharmacol 2010;42:240-2. 10. Bassi L, di Berardino L, Perna G, Silvestre LG. Antibodies against rifampicin in patients with tuberculosis after discontinuation of daily treatment (note). Am Rev Respir Dis 1976;114:1189-90. 11. Naranjo CA, Busto U, Sellers EM, Sandor P, Ruiz I, Roberts EA, et al. A method of estimating the probability of adverse drug reactions. Clin Pharmacol Ther 1981;30:239-45. 12. Bhasin DK, Sarode R, Puri S, Marwaha N, Singh K. Can rifampicin be restarted in patients with rifampicininduced thrombocytopenia? Tubercle 1991;72:306-71.",
"corpus_id": 1522927,
"title": "Analysis of asthma research in India"
} | {
"abstract": "BackgroundForeign body aspiration is common in children, especially those under 3 years of age. Chest radiography and CT are the main imaging modalities for the evaluation of these children. Management of children with suspected foreign body aspiration (SFBA) mainly depends on radiological findings.ObjectiveTo investigate the potential use of low-dose multidetector CT (MDCT) and virtual bronchoscopy (VB) in the evaluation and management of SFBA in children.Materials and methodsIncluded in the study were 37 children (17 girls, 20 boys; age 4 months to 10 years, mean 32 months) with SFBA. Chest radiographs were obtained prior to MDCT in all patients. MDCT was performed using a low-dose technique. VB images were obtained in the same session. Conventional bronchoscopy (CB) was performed within 24 h on patients in whom an obstructive abnormality had been found by MDCT and VB.ResultsObstructive pathology was found in 16 (43.25%) of the 37 patients using MDCT and VB. In 13 of these patients, foreign bodies were detected and removed via CB. The foreign bodies were located in the right main bronchus (n = 5), in the bronchus intermedius (n = 6), in the medial segment of the middle lobe bronchus (n = 1), and in the left main bronchus (n = 1). In the remaining three patients, the diagnosis was false-positive for an obstructive pathology by MDCT and VB; the final diagnoses were secretions (n = 2) and schwannoma (n = 1), as demonstrated by CB. In 21 patients in whom no obstructive pathology was detected by MDCT and VB, CB was not performed. These patients were followed for 5–20 months without any recurrent obstructive symptomatology.ConclusionsLow-dose MDCT and VB are non-invasive radiological modalities that can be used easily in the investigation of SFBA in children. MDCT and VB provide the exact location of the obstructive pathology prior to CB. 
If obstructive pathology is depicted with MDCT and VB, CB should be performed either for confirmation of the diagnosis or for the diagnosis of an alternative cause for the obstruction. In cases where no obstructive pathology is detected by MDCT and VB, CB may not be clinically useful.",
"corpus_id": 617954,
"title": "Utilization of low-dose multidetector CT and virtual bronchoscopy in children with suspected foreign body aspiration"
} | {
"abstract": "In this brochure Professor Penrose puts forward one novel, challenging, and highly original idea buried in a banal matrix of tedious metaphor and metalepsis about the behaviour of men in groups and their reactions to micro-organisms and viruses. Those who fail to derive any profit from the analogy between crowd diseases as Greenwood uses the term and crowd disorders as Penrose does may also miss the point which makes the publication of the essay more than worth while. True to the Galton Laboratory tradition, the author assumes that the reader, if also a mathematician, will immediately grasp the statistical theory he advances; and, if not, will be too dumb to do so. This is a pity, because a public of thoughtful people is getting more and more suspicious of statistical generalizations advanced for allegedly adequate theoretical reasons when there is, as for the so-called cube law, merely a somewhat exiguous empirical basis to support them. The Penrose square-root law has also to do with voting; and what follows is an attempt to fill in the argument which the author himself does not deign to elaborate. The elaboration is all the more pertinent because his hope that the reader \"will tolerate the necessary introduction of mathematical notation\" (p. 6) immediately precedes three gross errors in the formulae which follow, viz.:",
"corpus_id": 71492398,
"score": 0,
"title": "On the Objective Study of Crowd Behaviour"
} |
{
"abstract": "A new species of the genus Panagrellus, P. ulmi sp. n., has been found inside wetwood cankers of elms from the city of Tabriz, Iran. The new species is characterized by having small body size (0.91‒1.22 mm long in females and 0.82‒1.18 mm long in males), lateral field with three longitudinal incisures, lip region narrowing to distal end with six small lips and oral opening surrounded by six acute liplets, stoma with gymnostom shorter than cheilostom, cheilorhabdia not refringent, gymnorhabdia refringent, pharynx with metacorpus not swollen and isthmus slender, excretory pore at level of metacorpus, ovary very long without flexures, oviduct swollen, postvulval uterine sac long, 2.0‒3.4 times the corresponding body diameter, both female and male tails conoid-elongate, spicules with rounded and ventrally bent manubrium and lamina with dorsal anterior hump and fork-like bifurcate tip, gubernaculum with anterior dorsal handle-like manubrium, postcloacal genital papillae five pairs, two anterior subventral, one anterior subdorsal at same level than the first subventral, one posterior subventral and one posterior subdorsal both at same level. Description, measurements and illustrations are provided. In addition, species of Panagrellus and its relatives (Panagrobelus and Plectonchus) are analyzed. After this analysis, Plectonchus hunti is considered an intermediate species between Panagrellus and Panagrobelus, and is transferred, based on morphological and molecular evidence, to the latter genus as Panagrobelus hunti n. comb. On the other hand, Panagrellus (Panagrellinae) and Baujardia (Baujardinae), are two very similar genera according to both morphological and molecular evidence; we consider the respective subfamilies synonyms. Also, Plectonchus and Anguilluloides show great similarities, which justify considering Anguilluloides a junior synonym of the former genus. A. procerus is accordingly transferred as Plectonchus procerus n. comb. 
while Anguilluloides zondagi is considered a new junior synonym of Plectonchus molgos. Finally, emended diagnoses of the genera Panagrellus, Panagrobelus and Plectonchus, compendia, and keys to their species identification are included.",
"corpus_id": 569177,
"title": "Description of Panagrellus ulmi sp. n. (Rhabditida, Panagrolaimidae) from Iran, and comments on the species of the genus and its relatives."
} | {
"abstract": "Abstract The identity of Panagrellus pycnus, the type species of the genus Panagrellus, is discussed after studying specimens from a cultured population collected in Italy that fits the original material of the species. A new characterization is consequently provided as follows: body 0.93–1.32 mm long, lip region continuous with the adjoining body, stoma with gymnostom very reduced, pharynx with not swollen metacorpus, neck 161–203 µm long, excretory pore at level of the metacorpus, post-vulval uterine sac 99–162 µm long or 2.6–3.8 times as long as the body diameter divided in a short tubular proximal part and a long swollen distal part, vulva post-equatorial (V = 63–69), female tail conical elongate with acute terminus (133–170 µm, c = 6.8–8.1, c’ = 4.9–7.0), male tail conical elongate with acute terminus (104–137 µm, c = 7.8–10.9, c’ = 3.6–5.1), and spicules 70–81 µm long having angular hook-like and very curved ventrad lamina ending in a spatulate tip with a refringent forked axis. The evolutionary relationships of this species and the genus Panagrellus, as derived from the analyses of 18S and 28S rDNA fragments, are discussed. Additionally, the phylogenetic relationships among the members of the infraorder Panagrolaimomorpha is studied, being the genus Tarantobelus transferred to the family Panagrolaimidae and the new subfamily Tarantobelinae n. subfam. is proposed to accommodate it.",
"corpus_id": 239002649,
"title": "Redescription and phylogenetic analysis of the type species of the genus Panagrellus Thorne, 1938 (Rhabditida, Panagrolaimidae), P. pycnus Thorne, 1938, including the first SEM study"
} | {
"abstract": "The popular physical education movement involving exercise and athletic activity to achieve physical fitness was a significant factor in the early development of physical medicine and rehabilitation (PM&R) in the United States (U.S.) Influenced by European gymnastics programs in the late 19th and early 20th centuries, the movement in the U.S. was led by physicians Dudley Sargent, Luther Gullick, and R. Tait McKenzie [1,2]. McKenzie was a physical medicine clinician and teacher whose academic work in exercise physiology was influential in both physical education and medicine [3]. His teaching, writing, and leadership during World War I (WWI) influenced other pioneers in physiatry (Figure 1). World War II (WWII) further proved the value of fitness, physical training and restoration of function among the war wounded. Programs led by physiatrists George Deaver (Figure 2) and Howard Rusk promoted the application of rehabilitation principles and practice within the military [4]. During WWII, the Baruch Committee, established by financier, philanthropist, and statesman Bernard Baruch, emphasized the relevance of physical fitness in the development of the new field, and funded academic centers of physical medicine led by physiatrists with expertise in exercise and fitness [5]. These centers influenced the field of PM&R for decades. The tradition of “exercise is medicine,” pioneered by R. Tait McKenzie, has been endorsed in recent years by the American Medical Association (AMA) and the American College of Sports Medicine (ACSM) [2]. Exercise was also the central theme of the 2014 Annual Assembly of the American Academy of Physical Medicine and Rehabilitation (AAPM&R) in San Diego, CA. Our purpose in this discussion is to demonstrate how involvement in the science and practice of physical education, exercise, and fitness by physical medicine physicians in the first half of the 20th century influenced the future development of PM&R,",
"corpus_id": 31920703,
"score": 0,
"title": "Physical Education, Exercise, Fitness and Sports: Early PM&R Leaders Build a Strong Foundation"
} |
{
"abstract": "Scaling algorithms for entropic transport-type problems have become a very popular numerical method, encompassing Wasserstein barycenters, multi-marginal problems, gradient flows and unbalanced transport. However, a standard implementation of the scaling algorithm has several numerical limitations: the scaling factors diverge and convergence becomes impractically slow as the entropy regularization approaches zero. Moreover, handling the dense kernel matrix becomes unfeasible for large problems. To address this, we combine several modifications: A log-domain stabilized formulation, the well-known epsilon-scaling heuristic, an adaptive truncation of the kernel and a coarse-to-fine scheme. This permits the solution of larger problems with smaller regularization and negligible truncation error. A new convergence analysis of the Sinkhorn algorithm is developed, working towards a better understanding of epsilon-scaling. Numerical examples illustrate efficiency and versatility of the modified algorithm.",
"corpus_id": 966825,
"title": "Stabilized Sparse Scaling Algorithms for Entropy Regularized Transport Problems"
} | {
"abstract": "In this paper, we introduce a notion of barycenter in the Wasserstein space which generalizes McCann's interpolation to the case of more than two measures. We provide existence, uniqueness, characterizations, and regularity of the barycenter and relate it to the multimarginal optimal transport problem considered by Gangbo and Świech in [Comm. Pure Appl. Math., 51 (1998), pp. 23–45]. We also consider some examples and, in particular, rigorously solve the Gaussian case. We finally discuss convexity of functionals in the Wasserstein space.",
"corpus_id": 8592977,
"title": "Barycenters in the Wasserstein Space"
} | {
"abstract": "Large scale computer networks provide access to a bewilderingly large number and variety of resources, including retail products, network services, and people in various capacities. We consider the problem of allowing users to discover the existence of such resources in an administratively decentralized environment. We describe an approach for a system that accesses the distributed collection of repositories that naturally maintain resource information, rather than building a global database to register all resources. A key problem is organizing the resource space in a manner suitable to all participants. Rather than imposing an inflexible hierarchical organization, our approach allows the resource space organization to evolve in accordance with what resources exist and what types of queries users make. Concretely, a set of agents organize and search the resource space by constructing links between the repositories of resource information based on keywords that describe the contents of each repository, and the semantics of the resources being sought. The links form a general graph, with a flexible set of hierarchies embedded within the graph to provide some measure of scalability. The graph structure evolves over time through the use of cache aging protocols. Additional scalability is targeted through the use of probabilistic graph protocols. A prototype implementation and a measurement study are under way. hhhhhhhhhhhhhhhhhh 1 This material is based upon work supported in part by the National Science Foundation under Cooperative Agreement DCR-84200944, and by a grant from AT&T Bell Laboratories.",
"corpus_id": 19826838,
"score": 1,
"title": "The Networked Resource Discovery Project"
} |
{
"abstract": "ABSTRACT \n A 2-yr survey was conducted on golf courses in South Carolina to 1) document the species richness and seasonal activity of Scarabaeoidea; 2) assess any species compositional differences among three trap types (ultraviolet light, unbaited flight-intercept, and unbaited pitfall); and 3) identify any dominant taxa in each trap type. A total of 74,326 scarabaeoid beetles were captured, of which 77.4% were Aphodiinae (not identified to species). The remaining specimens belong to 104 species in 47 genera and 6 families. The most abundant species were Cyclocephala lurida Bland, Dyscinetus morator (F.), Euetheola humilis (Burmeister), Hybosorus illigeri Reiche, and Maladera castanea (Arrow). In all trap types, >90% of all specimens and taxa were collected between April and August. Ultraviolet light traps collected ∼94% of total specimens consisting of 83 taxa (of which 51 were unique to this trap type), whereas flight-intercept traps captured ∼2% of all specimens representing 53 taxa (18 of which were unique), and pitfall traps captured ∼4% of all specimens representing 15 taxa (no unique species; all species also captured by ultraviolet light traps). Indicator species analysis identified 2–3 and 10–13 taxa that were most frequently collected by flight-intercept and ultraviolet light traps, respectively. Flight-intercept traps complemented ultraviolet light traps by capturing more species of dung and carrion beetles and diurnal phytophagous scarab beetles. Results suggested that a similar survey for domestic or exotic scarabaeoid beetles in turfgrass systems should be conducted between April and August using ultraviolet light and flight-intercept traps at 13–58 sites.",
"corpus_id": 3645256,
"title": "A Comparison of Trap Types for Assessing Diversity of Scarabaeoidea on South Carolina Golf Courses"
} | {
"abstract": "Abstract. The purpose of this application, under Article 23.9.3 of the Code, is to conserve the widely used specific name Hybosorus illigeri Reiche, 1853 (Coleoptera: Scarabaeoidea, Hybosoridae), a globally widespread and common scarab beetle. The name is threatened by the very rarely used senior subjective synonyms Hybosorus pinguis Westwood, 1845, Hybosorus roei Westwood, 1845, and Hybosorus carolinus LeConte, 1847. Precedence of the name Hybosorus illigeri is proposed to maintain stability of nomenclature.",
"corpus_id": 92732651,
"title": "Case 3768 – Hybosorus illigeri Reiche, 1853 (Insecta, Coleoptera): proposed conservation by giving it precedence over Hybosorus pinguis Westwood, 1845, Hybosorus roei Westwood, 1845 and Hybosorus carolinus LeConte, 1847"
} | {
"abstract": "The performance of polyvinyl chloride polymer (PVC) dispensers loaded with two rates of ethyl (E,Z)‐2,4‐decadienoate (pear ester) plus the sex pheromone, (E,E)‐8,10‐dodecadien‐1‐ol (codlemone) of codling moth, Cydia pomonella (L.), was compared with similar dispensers and two commercial dispensers (Isomate® and CheckMate®) loaded only with codlemone. Dispenser evaluations were conducted in replicated small (0.1 ha) and large (2 ha) field trials in apple, Malus domestica (Borkhausen), during 2006 (Washington) and 2007 (Michigan, large plot study only). Data recorded included male captures in traps baited with virgin female moths and codlemone lures and direct observations of moth behaviour in treated plots. Volatile air collections of field‐aged dispensers were conducted under laboratory conditions. Disruption of male catch in codlemone‐baited traps was generally similar among dispenser treatments, except for two instances: lower moth catches with the single and dual‐component PVC dispensers, compared with Isomate®, during the first flight in the large plots in Michigan in 2007 and for the dual‐component PVC dispenser compared with the CheckMate® dispenser during the second flight in small plots in Washington in 2006. Levels of fruit injury were similar in large plots treated with all dispensers. Male moth catches in virgin female‐baited traps did not differ among dispenser treatments and were significantly lower than the untreated control. Behavioural observations of adult moths in the field verified anemotactic approaches within 20 cm of pheromone dispensers loaded with and without pear ester that lasted ca. 15 s on average. Field‐aged dual‐component dispensers released pear ester at a >5‐fold higher rate than codlemone over the first 8 weeks and this ratio declined to near unity by 18 weeks.",
"corpus_id": 1891060,
"score": 2,
"title": "Evaluation of novel semiochemical dispensers simultaneously releasing pear ester and sex pheromone for mating disruption of codling moth (Lepidoptera: Tortricidae)"
} |
{
"abstract": "Summary and ConclusionsLaws governing labor relations in the United States are written in a manner which gives significant monopoly power to unions and also creates an incentive for workers to support union security arrangements. These laws would serve to reduce the gains which workers receive from unionization. They would also serve to increase the surplus which unions extract from their members. The argument that an important part of these laws is the creation of rents which can be transferred to the politicians who support laws. An examination of the effect of right-to-work laws on political contribution received by candidates for office indicates that, as the theory would predict, the existence of such laws serves to significantly reduce the contributions from unions. This reduction occurs largely because these laws reduce the number of union members who contribute; they do not seem to change the contributions per union member. The authors hypothesize that it is the monopolization which increases per member contributions, though testing this hypothesis is not possible. Nonetheless, it is the authors' belief that the empirical results have demonstrated the form of the laws mandating unions is explained by the interaction between the union and the politicians who support unionization.",
"corpus_id": 153445901,
"title": "Union membership and campaign contributions"
} | {
"abstract": "Many government programs which appear to be designed to help some particular industry or group do not seem to be succeeding. The explanation offered here is that the program, when inaugurated, generated transitional gains for the individuals or companies in the industry, but that these have been fully capitalized, with the result that the people in the industry now are doing no better than normal. On the other hand, the termination of the particular scheme would, in general, lead to large losses for the entrenched interests.",
"corpus_id": 153505798,
"title": "The transitional gains trap"
} | {
"abstract": "This study advances an entry/exit model to analyse the scale efficiency of UK building societies. We find that there are considerable divergences across building societies in levels of scale efficiency and also in technological change during the sample period 1992-1997. The paper also finds that scale economies and technological change estimates are dependent on whether the econometrician balances a panel data set or utilises the entry/exit model based on Dionne et al’s (1998) specification. In general, scale economies in UK building societies are found to be more significant and more pervasive than in previous studies. JEL classification: C23; C52; G21",
"corpus_id": 16665772,
"score": 1,
"title": "No . 00 / 8 Economies of Scale in UK Building Societies : A ReAppraisal Using An Entry / Exit Model"
} |
{
"abstract": "Biological data on the temperature preferences of fish indicate that, in general, they will be attracted to thermal discharges in the winter. This attraction to warmer temperatures increases their vulnerability to cold shock if the discharge heat source is discontinued. A scheme is proposed to predict the near-field thermal plume environmental temperatures during a power transient. This method can be applied to any jet discharge for which a steady-state model exists. The proposed transient model has been applied to an operating reactor. The predicted results illustrate how very rapidly the maximum temperatures decrease after an abrupt shutdown. This model can be employed to help assess the impact where cold shock may be a problem. Such predictions could also be the basis for restrictions on scheduled midwinter plant shutdowns.",
"corpus_id": 8883965,
"title": "Cold shock: biological implications and a method for approximating transient environmental temperatures in the near-field region of a thermal discharge."
} | {
"abstract": "This study analyzed the community structure of macrobenthic organisms in the subtidal area suffering under the influence of thermal discharge from the Uljin nuclear power plant during 2012-2013 and reviewed the temporal change in the faunal composition of the macrobenthic community using data from p...",
"corpus_id": 135009964,
"title": "Community Structure of Macrobenthos around the Thermal Discharge Area of the Uljin Nuclear Power Plant in the East Sea, Korea"
} | {
"abstract": "[1] Solar flare enhancements to the soft X-ray (XUV) and extreme ultraviolet (EUV) spectral irradiance depend on the location of the flare on the solar disk. Most emission lines in the XUV region (∼0.1 to ∼25 nm) are optically thin and are weakly dependent on the location of the flare, but in the EUV region (∼25 to ∼120 nm), many important lines and continua are optically thick, so enhancements are relatively smaller for flares located near the solar limb, due to absorption by the solar atmosphere. The flare irradiance spectral model (FISM) was used to illustrate these location effects, assuming two X17 flares that are identical except that one occurs near disk center and the other near the limb. FISM spectra of these two flares were used as solar input to the National Center for Atmospheric Research (NCAR) thermosphere-ionosphere-mesosphere electrodynamics general circulation model (TIME-GCM) to investigate the ionosphere/thermosphere (I/T) response. Model simulations showed that in the E region ionosphere, where XUV dominates ionization, flare location does not affect I/T response. However, flare-driven changes in the F region ionosphere, total electron content (TEC), and neutral density in the upper thermosphere, are 2–3 times stronger for a disk-center flare than for a limb flare, due to the importance of EUV enhancement. Flare location did not affect the timing of the ionospheric response, but the thermospheric response was ∼20 min faster for the disk-center flare. Model simulations of I/T responses to an X17 flare on 28 October 2003 were consistent with measurements of TEC and neutral density changes.",
"corpus_id": 120007911,
"score": 1,
"title": "Flare location on the solar disk: Modeling the thermosphere and ionosphere response"
} |