query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "We describe micropower low-voltage (1.1V) low switching activity array multipliers for power-critical low speed (/spl les/5MHz) applications. These multipliers feature the lowest power dissipation (18.8 /spl mu/W/MHz or /spl sim/32% lower for 16-bit and 76.7 /spl mu/W/MHz or /spl sim/53% lower for 32-bit) and lowest energy-delay-product (but slightly reduced speed) of all multipliers compared by using a 0.35 /spl mu/m CMOS process. We obtain these attributes by virtually eliminating the spurious switching by means of proposed latch adders and chronologically timing their assertions by means of delay circuits. We also analyze the switching activity of the different multipliers and verify our results based on the post-layout computer simulations and on measurements on prototype IC.",
"corpus_id": 3207050,
"title": "Low-voltage micropower multipliers with reduced spurious switching"
} | {
"abstract": "In this paper, two short-time spectral amplitude estimators of the speech signal are derived based on a parametric formulation of the original generalized spectral subtraction method. The objective is to improve the noise suppression performance of the original method while maintaining its computational simplicity. The proposed parametric formulation describes the original method and several of its modifications. Based on the formulation, the speech spectral amplitude estimator is derived and optimized by minimizing the mean-square error (MSE) of the speech spectrum. With a constraint imposed on the parameters inherent in the formulation, a second estimator is also derived and optimized. The two estimators are different from those derived in most modified spectral subtraction methods, which are predominantly nonstatistical. When tested under stationary white Gaussian noise and semistationary Jeep noise, they showed improved noise suppression results.",
"corpus_id": 34527146,
"title": "A parametric formulation of the generalized spectral subtraction method"
} | {
"abstract": "Some striking advances have occurred in the use of high resolution electron microscopes in the field of solid state physics since the International Meeting of 1954. These have been concerned with the direct study by transmission of thin crystalline specimens in two distinct ways. In the first the aperture of the microscope is chosen so that some diffracted beams from the specimen pass through to the image and are recombined to form a periodic pattern, the form and spacing of which is closely related to the relative dispositions and spacings of the lattice planes in the crystal. Using this method, the basic periodicity of net planes of the lattice may be directly imaged and departures from perfect periodicity in the form of distortions and discontinuities arising from lattice imperfections such as dislocations may be studied. In the second method the aperture of the microscope objective is chosen so that all diffracted beams from the specimen are intercepted and contrast arises from changes in thickness and orientation and from lattice distortion of the crystal. In particular, the lattice disturbance associated with a dislocation line is sufficient to cause a large local change in the electron intensity scattered outside the objective aperture in the vicinity of the line, thereby making the line visible in the image. Both of these methods have required a complementary study of the diffraction pattern by the selected area technique. For this, the three stage design has been invaluable and the microscope has come into its own as an integrated research tool for the study of crystals and their imperfections. With the addition of a hot stage and means for applying stress to the specimen in situ very wide fields of investigation in physics, chemistry and metallurgy have been opened up.",
"corpus_id": 135869623,
"score": 1,
"title": "Observations on crystal lattices and imperfections by transmission electron microscopy through thin films"
} |
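Each row above pairs a query record with positive and negative candidate records, all sharing the fields `corpus_id`, `title`, and `abstract`, with an optional relevance `score` on candidate entries. A minimal sketch of loading one such record into a typed structure — the `PaperEntry` and `load_entry` names are hypothetical, not part of the dataset:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PaperEntry:
    corpus_id: int
    title: str
    abstract: str
    score: Optional[int] = None  # relevance label; present only on some candidates

def load_entry(d: dict) -> PaperEntry:
    # Tolerate the missing "score" field seen on query entries.
    return PaperEntry(
        corpus_id=d["corpus_id"],
        title=d["title"],
        abstract=d["abstract"],
        score=d.get("score"),
    )

row = {
    "corpus_id": 62534398,
    "title": "Dynamic Frame Length ALOHA",
    "abstract": "Adding frame structure to slotted ALOHA ...",
    "score": 1,
}
entry = load_entry(row)
```

Records without a `score` (the query entries) simply load with `score=None`, so the same helper covers all three columns.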
{
"abstract": "This paper presents the first steps in a series of on-going user evaluations of intelligent environments for supporting elderly users at home. We specifically focus on a comparison of elderly perce ...",
"corpus_id": 229122,
"title": "A Cross-Cultural Evaluation of Domestic Assistive Robots"
} | {
"abstract": "Taiwan has entered the aged society in March 2018, meaning that more social and technological resources are needed to solve the problems related to the elderly’s companion service. Companion robots are considered a solution to effectively meet the elderly’s service needs for family escort. However, little is known about the elderly’s acceptance of companion robots. The purpose of this study is to explore the elderly’s acceptance of companion robots from the perspective of user factors. The research was carried out by a mixed method of interviews and questionnaires. Independent sample t test and one-way analysis of variance were used for analysis. The results showed that there were significant differences in the attitude and perceived usefulness of companion robots in terms of education level, living conditions, professional background and technical experience. The research found that the elderly living with parents, with master’s (or doctor’s) education, medical professional background and experience in the use of scientific and technological products expressed more positive attitudes in the responses to the items on the constructs of attitude and perceived usefulness, while the attitude of those with primary school education and humanities professional background, with no experience in scientific and technological products, was relatively negative. Research shows that the acceptance of companion robots by the elderly was affected to some extent by user factors. These findings can provide reference for robot designers, industrial designers and other researchers.",
"corpus_id": 254170589,
"title": "Elderly’s acceptance of companion robots from the perspective of user factors"
} | {
"abstract": "Adding frame structure to slotted ALOHA makes it very convenient to control the ALOHA channel and eliminate instability. The frame length is adjusted dynamically according to the number of garbled, successful, and empty timeslots in the past. Each terminal that has a packet to transmit selects at random one of the n timeslots of a frame. Dynamic frame length ALOHA achieves a throughput (expected number of successful packets per timeslot) of 0.426 which compares favorably with the 1/e (\\approx0.368) upper bound of ordinary slotted ALOHA.",
"corpus_id": 62534398,
"score": 1,
"title": "Dynamic Frame Length ALOHA"
} |
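The 1/e bound quoted in the ALOHA abstract above can be checked numerically: with k terminals each picking one of n frame slots uniformly, the expected fraction of successful slots is k/n · (1 − 1/n)^(k−1), which tends to 1/e when the frame length is matched to the load (n = k). A quick sketch of that computation:

```python
import math

def expected_success_per_slot(k: int, n: int) -> float:
    # k terminals each pick one of n slots uniformly at random.
    # A slot carries a successful packet when exactly one terminal picks it:
    #   P(success in a given slot) = k/n * (1 - 1/n)**(k - 1)
    return (k / n) * (1 - 1 / n) ** (k - 1)

# With frame length matched to load (n = k), throughput approaches 1/e.
print(expected_success_per_slot(1000, 1000))  # ≈ 0.368
print(1 / math.e)
```

This is only the bound for ordinary frame-slotted ALOHA; the 0.426 figure in the abstract comes from the dynamic frame-length adjustment, which the sketch does not model.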
{
"abstract": "C1s deficiency is strongly associated with the development of human systemic lupus erythematosus (SLE); however, the mechanisms by which C1s deficiency contributes to the development of SLE have not yet been elucidated in detail. Using ICR-derived-glomerulonephritis (ICGN) mouse strain that develops SLE and very weakly expresses C1s in the liver, we investigated the protective roles of C1s against SLE. A genetic sequence analysis revealed complete deletion of the C1s1 gene, a mouse homolog of the human C1s gene, with partial deletion of the C1ra and C1rb genes in the ICGN strain. This deletion led to the absence of C1r/C1s and a low level of C1q in the circulation. In order to investigate whether the C1r/C1s deficiency induces SLE, we produced a congenic mouse strain by introducing the deletion region of ICGN into the C57BL/6 strain. Congenic mice exhibited no C1r/C1s and a low level of C1q in the circulation, but did not have any autoimmune defects. These results suggest that C1r/C1s deficiency is not sufficient to drive murine SLE and also that other predisposing genes exist in ICGN mice.",
"corpus_id": 3895508,
"title": "C1r/C1s deficiency is insufficient to induce murine systemic lupus erythematosus"
} | {
"abstract": "Genetic complete deficiency of the early complement components such as C1, C2 and C4 commonly results in a monogenetic form of systemic lupus erythematosus (SLE). However, previous studies have examined groups of complete complement deficient subjects for SLE, while a familial SLE cohort has not been studied for deficiencies of complement. Thus, we undertook the present study to determine the frequency of hereditary complete complement deficiencies among families with two or more SLE patients. All SLE patients from 544 such families had CH50 determined. Medical records were examined for past CH50 values. There were 66 individuals in whom all available CH50 values were zero. All but four of these had a SLE-affected relative with a non-zero CH50; thus, these families did not have monogenetic complement deficient related SLE. The four remaining SLE-affected subjects were in fact two sets of siblings in which three of the four SLE patients had onset of disease at <18 years of age. Both patients in one of these families had been determined to have C4 deficiency, while the other family had no clinical diagnosis of complement deficiency. In this second family, one of the SLE patients had had normal C4 and C3 values, indicating that either C1q or C2 deficiency was possible. Thus, only 2 of 544 SLE families had definite or possible complement deficiency; however, 1 of 7 families in which all SLE patients had pediatric onset and 2 of 85 families with at least 1 pediatric-onset SLE patent had complete complement deficiency. SLE is found commonly among families with hereditary complement deficiency but the reverse is not true. Complete complement deficiency is rare among families with two or more SLE patients, but is concentrated among families with onset of SLE prior to age 18. Lupus (2010) 19, 52—57.",
"corpus_id": 10991921,
"title": "Complete complement deficiency in a large cohort of familial systemic lupus erythematosus"
} | {
"abstract": "Eleven monoclonal antibodies directed against the subcomponent C1q of the first component of human complement, C1, were prepared and tested for binding to intact C1q and to the collagenous portion, the C1q stalks. All of the monoclonals bound well to the intact C1q. Eight out of the eleven exhibited strong binding to the collagenous stalks, while three bound very weakly, if at all, to the stalks and, thus, were presumed to bind to the pepsin-sensitive region which includes the C1q heads. For one of the latter monoclonals, this was confirmed by electron microscopy. Five of the monoclonals were purified by C1q affinity chromatography. When tested with C1 reassembled from its subunits, two of these purified monoclonal antibodies markedly enhanced the rate of spontaneous activation.",
"corpus_id": 7234478,
"score": 2,
"title": "Activation of C1 by monoclonal antibodies directed against C1q."
} |
{
"abstract": "Abstract: Seven patients with sleep apnea DOES and one with sleep apnea DIMS were treated with medroxyprogesterone acetate (MPA). The therapeutic effect was confirmed in most of them by polysomnographic recording. A marked increase of TST was observed in two patients whose AI and/or %SAT remarkably decreased. Contrarily, three patients exhibited a considerable decrease in TST, and AI and/or %SAT were reduced remarkably in two of them with MPA. After the MPA medication a few patients complained of disturbed nocturnal sleep. A significant positive correlation was observed between the decreased rate of TST and that of the mean duration of apneas. From these results, it was considered that MPA has a mild activating action on the arousal system in the CNS, and that the action may be partly responsible for the therapeutic effects of MPA on sleep apneas.",
"corpus_id": 1753984,
"title": "Medroxyprogesterone Acetate and Sleep Apnea"
} | {
"abstract": "We studied the effects of sleep fragmentation on arousal and ventilatory responses to hyperoxic hypercapnia, isocapnic hypoxia, and chemical stimulation of the larynx during sleep in 5 dogs. Sleep fragmentation was induced by repeatedly arousing the dogs with acoustic stimuli throughout 2 to 3 consecutive nights. Responses to respiratory stimuli were then studied during a subsequent daytime sleep. Arterial O2 saturation was measured with an ear oximeter, and sleep stage was determined by electroencephalographic and behavioral criteria. Hypercapnic and hypoxic ventilatory responses were unimpaired by sleep fragmentation. In contrast, alveolar PCO2 levels at arousal increased after sleep fragmentation, from a mean +/- SEM of 52.2 +/- 1.4 mm Hg to 55.6 +/- 1.5 mm Hg (p < 0.05) during slow-wave sleep, and from 57.9 +/- 1.5 mm Hg to 61.3 +/- 2.2 mm Hg (p < 0.05) during rapid-eye movement sleep. Similarly, arterial O2 saturation at arousal decreased after sleep fragmentation from 80.1 +/- 1.0% to 70.2 +/- 2.7% (p < 0.05) during slow-wave sleep, and from 66.3 +/- 3.6% to < 55% (p < 0.05) during rapid-eye-movement sleep. Arousal responses to laryngeal stimulation were also impaired after sleep fragmentation. We conclude that arousal responses to respiratory stimuli are decreased by sleep fragmentation.",
"corpus_id": 28809381,
"title": "Effect of sleep fragmentation on ventilatory and arousal responses of sleeping dogs to respiratory stimuli."
} | {
"abstract": "BACKGROUND\nThis study was conducted to determine the frequency of PIK3CA mutations and human epidermal growth factor receptor-2 (HER2) phosphorylation status (pHER2-Tyr1221/1222) and if PIK3CA, phosphatase and tensin homolog (PTEN), or pHER2 has an impact on outcome in HER2-positive early-stage breast cancer patients treated with adjuvant chemotherapy and trastuzumab.\n\n\nPATIENTS AND METHODS\nTwo hundred and forty HER2-positive early-stage breast cancer patients receiving adjuvant treatment (cyclophosphamide 600 mg/m2, epirubicin 60 mg/m2, and fluorouracil 600 mg/m2) before administration of 1 year trastuzumab were assessable. PTEN and pHER2 expression were assessed by immunohistochemistry. PIK3CA mutations (exons 9 and 20) were determined by pyrosequencing.\n\n\nRESULTS\nFive-year overall survival (OS) and invasive disease-free survival were 87.8% and 81.0%, respectively. Twenty-six percent of patients had a PIK3CA mutation, 24% were PTEN low, 45% pHER2 high, and 47% patients had increased PI3K pathway activation (PTEN low and/or PIK3CA mutation). No significant correlations were observed between the clinicopathological variables and PIK3CA, PTEN, and pHER2 status. In both univariate and multivariate analyses, patients with PIK3CA mutations or high PI3K pathway activity had a significant worse OS [multivariate: hazard ratio (HR) 2.14, 95% confidence interval (CI) 1.01-4.51, P=0.046; and HR 2.35, 95% CI 1.10-5.04, P=0.03].\n\n\nCONCLUSION\nPatients with PIK3CA mutations or increased PI3K pathway activity had a significantly poorer survival despite adequate treatment with adjuvant chemotherapy and trastuzumab.",
"corpus_id": 5015817,
"score": 1,
"title": "PIK3CA mutations, PTEN, and pHER2 expression and impact on outcome in HER2-positive early-stage breast cancer patients treated with adjuvant chemotherapy and trastuzumab."
} |
{
"abstract": "In this paper, we shed new light on the authenticity of the Corpus Caesarianum, a group of five commentaries describing the campaigns of Julius Caesar (100-44 BC), the founder of the Roman empire. While Caesar himself has authored at least part of these commentaries, the authorship of the rest of the texts remains a puzzle that has persisted for nineteen centuries. In particular, the role of Caesar’s general Aulus Hirtius, who has claimed a role in shaping the corpus, has remained in contention. Determining the authorship of documents is an increasingly important authentication problem in information and computer science, with valuable applications, ranging from the domain of art history to counter-terrorism research. We describe two state-of-the-art authorship verification systems and benchmark them on 6 present-day evaluation corpora, as well as a Latin benchmark dataset. Regarding Caesar’s writings, our analysis allow us to establish that Hirtius’s claims to part of the corpus must be considered legitimate. We thus demonstrate how computational methods constitute a valuable methodological complement to traditional, expert-based approaches to document authentication.",
"corpus_id": 4644879,
"title": "Authenticating the Writings of Julius"
} | {
"abstract": "The identification of pseudepigraphic texts – texts not written by the authors to which they are attributed – has important historical, forensic and commercial applications. We introduce an unsupervised technique for identifying pseudepigrapha. The idea is to identify textual outliers in a corpus based on the pairwise similarities of all documents in the corpus. The crucial point is that document similarity not be measured in any of the standard ways but rather be based on the output of a recently introduced algorithm for authorship verification. The proposed method strongly outperforms existing techniques in systematic experiments on a blog corpus.",
"corpus_id": 541156,
"title": "Automatically Identifying Pseudepigraphic Texts"
} | {
"abstract": "In this paper we will stress-test a recently proposed technique for computational authorship verification, ‘‘unmasking'', which has been well received in the literature. The technique envisages an experimental set-up commonly referred to as ‘‘authorship verification'', a task generally deemed more difficult than so-called ‘‘authorship attribution''. We will apply the technique to authorship verification across genres, an extremely complex text categorization problem that so far has remained unexplored. We focus on five representative contemporary English-language authors. For each of them, the corpus under scrutiny contains several texts in two genres (literary prose and theatre plays). Our research confirms that unmasking is an interesting technique for computational authorship verification, especially yielding reliable results within the genre of (larger) prose works in our corpus. Authorship verification, however, proves much more difficult in the theatrical part of the corpus.",
"corpus_id": 7804586,
"score": -1,
"title": "Cross-Genre Authorship Verification Using Unmasking"
} |
{
"abstract": "In this paper, a mobile e-health-management system is presented which extends authors’ previous works on mobile physiological signal monitoring. This system integrates a wearable ring-type pulse monitoring sensor with a smart phone and provides a mobile “exercise-333” health management mechanism. All physiological measurements are transmitted to the smart phone through Bluetooth. The user can monitor his/her own pulse and temperature from the smart phone where the health management mechanism helps him/her to develop a healthy life style: taking exercise 3 times a week and at least lasting for 30 minutes with heart rate over 130 each time. With the popularity and mobility of smart phones, this system effectively provides the needs for mobile health management.",
"corpus_id": 359615,
"title": "A Smart-Phone-Based Health Management System Using a Wearable Ring-Type Pulse Sensor"
} | {
"abstract": "The main goal of this study is to build a secure information ecosystem that connects patients, doctors, medical and insurance companies, sports organizations, fitness centers, manufacturers of telemedicine devices and medical systems for constant monitoring, long-term analysis and quick alerting over sensitive patient's data. This paper provides the extended literature analysis on topic and summarizes state-of-the-art in development of Personal Medical Wearable Device for Distance Healthcare Monitoring X73-PHD mHealth.",
"corpus_id": 20251725,
"title": "An Approach to Automate Health Monitoring in Compliance with Personal Privacy"
} | {
"abstract": "In this paper, we provide another view of the basic difficulty of a nonclassical information structure decision problem. Computational considerations led us to the result that, under the assumption of linear information structure, the partially nested structure is the only class of structures which are equivalent to a static one.",
"corpus_id": 119815775,
"score": 1,
"title": "Another look at the nonclassical information structure problem"
} |
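The first abstract in the row above describes an "exercise-333" rule (exercise 3 times a week, at least 30 minutes per session, heart rate over 130). As an illustration only — the function name and tuple layout are hypothetical, not from the paper — the rule reduces to a simple weekly check:

```python
def meets_exercise_333(sessions):
    """sessions: list of (duration_minutes, avg_heart_rate) tuples for one week."""
    # A session counts only if it lasts >= 30 minutes with heart rate over 130.
    qualifying = [s for s in sessions if s[0] >= 30 and s[1] > 130]
    return len(qualifying) >= 3

week = [(35, 140), (30, 135), (45, 150), (20, 160)]
print(meets_exercise_333(week))  # True: three sessions clear the 30-min / HR>130 bar
```

The 20-minute session is excluded despite its high heart rate, matching the rule's requirement that each session itself last at least 30 minutes.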
{
"abstract": "The crystal structure of a previously unreported polymorph (form II) of 2,4-dinitrophenylhydrazine (DNPH), C6H6N4O4, was determined at 90 K. The first polymorph (form I) is described in the monoclinic space group P21/c [Okabe et al. (1993 ▶). Acta Cryst. C49, 1678–1680; Wardell et al. (2006 ▶). Acta Cryst. C62, o318–320], whereas form II is in the monoclinic space group Cc. The molecular structures in forms I and II are closely similar, with the nitro groups at the 2- and 4-positions being almost coplanar with the benzene ring [dihedral angles of 3.54 (1) and 3.38 (1)°, respectively in II]. However, their packing arrangements are completely different. Form I exhibits a herringbone packing motif, whereas form II displays a coplanar chain structure. Each chain in form II is connected to adjacent chains by the intermolecular interaction between hydrazine NH2 and 2-nitro groups, forming a sheet normal to (101). The sheet is stabilized by N—H⋯π interactions.",
"corpus_id": 2022129,
"title": "A polymorph of 2,4-dinitrophenylhydrazine"
} | {
"abstract": "The identification of trigger bonds, bonds that break to initiate explosive decomposition, using computational methods could help direct the development of novel, “green” and efficient high energy density materials (HEDMs). Comparing bond densities in energetic materials to reference molecules using Wiberg bond indices (WBIs) provides a relative scale for bond activation (%ΔWBIs) to assign trigger bonds in a set of 63 nitroaromatic conventional energetic molecules. Intramolecular hydrogen bonding interactions enhance contributions of resonance structures that strengthen, or deactivate, the CNO2 trigger bonds and reduce the sensitivity of nitroaniline‐based HEDMs. In contrast, unidirectional hydrogen bonding in nitrophenols strengthens the bond to the hydrogen bond acceptor, but the phenol lone pairs repel and activate an adjacent nitro group. Steric effects, electron withdrawing groups and greater nitro dihedral angles also activate the CNO2 trigger bonds. %ΔWBIs indicate that nitro groups within an energetic molecule are not all necessarily equally activated to contribute to initiation. %ΔWBIs generally correlate well with impact sensitivity, especially for HEDMs with intramolecular hydrogen bonding, and are a better measure of trigger bond strength than bond dissociation energies (BDEs). However, the method is less effective for HEDMs with significant secondary effects in the solid state. Assignment of trigger bonds using %ΔWBIs could contribute to understanding the effect of intramolecular interactions on energetic properties. © 2018 Wiley Periodicals, Inc.",
"corpus_id": 3433052,
"title": "Trigger bond analysis of nitroaromatic energetic materials using wiberg bond indices"
} | {
"abstract": "Skeletal muscle injury resulting in tissue loss poses unique challenges for surgical repair. Despite the regenerative potential of skeletal muscle, if a significant amount of tissue is lost, skeletal myofibers will not grow to fill the injured area completely. Prior work in our lab has shown the potential to fill the void with an extracellular matrix (ECM) scaffold, resulting in restoration of morphology, but not functional recovery. To improve the functional outcome of the injured muscle, a muscle-derived ECM was implanted into a 1 x 1 cm(2), full-thickness defect in the lateral gastrocnemius (LGAS) of Lewis rats. Seven days later, bone-marrow-derived mesenchymal stem cells (MSCs) were injected directly into the implanted ECM. Partial functional recovery occurred over the course of 42 days when the LGAS was repaired with an MSC-seeded ECM producing 85.4 +/- 3.6% of the contralateral LGAS. This was significantly higher than earlier recovery time points (p < 0.05). The specific tension returned to 94 +/- 9% of the contralateral limb. The implanted MSC-seeded ECM had more blood vessels and regenerating skeletal myofibers than the ECM without cells (p < 0.05). The data suggest that the repair of a skeletal muscle defect injury by the implantation of a muscle-derived ECM seeded with MSCs can improve functional recovery after 42 days.",
"corpus_id": 713176,
"score": 0,
"title": "Repair of traumatic skeletal muscle injury with bone-marrow-derived mesenchymal stem cells seeded on extracellular matrix."
} |
{
"abstract": "Receptor tyrosine kinases MET and epidermal growth factor receptor (EGFR) are critically involved in initiation of liver regeneration. Other cytokines and signaling molecules also participate in the early part of the process. Regeneration employs effective redundancy schemes to compensate for the missing signals. Elimination of any single extracellular signaling pathway only delays but does not abolish the process. Our present study, however, shows that combined systemic elimination of MET and EGFR signaling (MET knockout + EGFR‐inhibited mice) abolishes liver regeneration, prevents restoration of liver mass, and leads to liver decompensation. MET knockout or simply EGFR‐inhibited mice had distinct and signaling‐specific alterations in Ser/Thr phosphorylation of mammalian target of rapamycin, AKT, extracellular signal–regulated kinases 1/2, phosphatase and tensin homolog, adenosine monophosphate–activated protein kinase α, etc. In the combined MET and EGFR signaling elimination of MET knockout + EGFR‐inhibited mice, however, alterations dependent on either MET or EGFR combined to create shutdown of many programs vital to hepatocytes. These included decrease in expression of enzymes related to fatty acid metabolism, urea cycle, cell replication, and mitochondrial functions and increase in expression of glycolysis enzymes. There was, however, increased expression of genes of plasma proteins. Hepatocyte average volume decreased to 35% of control, with a proportional decrease in the dimensions of the hepatic lobules. Mice died at 15‐18 days after hepatectomy with ascites, increased plasma ammonia, and very small livers. 
Conclusion: MET and EGFR separately control many nonoverlapping signaling endpoints, allowing for compensation when only one of the signals is blocked, though the combined elimination of the signals is not tolerated; the results provide critical new information on interactive MET and EGFR signaling and the contribution of their combined absence to regeneration arrest and liver decompensation. (Hepatology 2016;64:1711‐1724)",
"corpus_id": 2363409,
"title": "Combined systemic elimination of MET and epidermal growth factor receptor signaling completely abolishes liver regeneration and leads to liver decompensation"
} | {
"abstract": "Paraspeckles are subnuclear structures formed around NEAT1 lncRNA. Paraspeckles became enlarged after proteasome inhibition caused by NEAT1 transcriptional activation, leading to protein sequestration into paraspeckles. The NEAT1-dependent sequestration affects the transcription of several genes, arguing for a novel role for lncRNA in gene regulation.",
"corpus_id": 6868496,
"title": "NEAT1 long noncoding RNA regulates transcription via protein sequestration within subnuclear bodies"
} | {
"abstract": "BackgroundWe examined models to predict disease activity transitions from moderate to low or severe and associated factors in patients with rheumatoid arthritis (RA).MethodsData from RA patients enrolled in the Corrona registry (October 2001 to August 2014) were analyzed. Clinical Disease Activity Index (CDAI) definitions were used for low (≤10), moderate (>10 and ≤22), and severe (>22) disease activity states. A Markov model for repeated measures allowing for covariate dependence was used to model transitions between three (low, moderate, severe) states and estimate population transition probabilities. Mean sojourn times were calculated to compare length of time in particular states. Logistic regression models were used to examine impacts of covariates (time between visits, chronological year, disease duration, age) on disease states.ResultsData from 29,853 patients (251,375 visits) and a sub-cohort of 9812 patients (46,534 visits) with regular visits (every 3–9 months) were analyzed. The probability of moving from moderate to low or severe disease by next visit was 47% and 18%, respectively. Patients stayed in moderate disease for mean 4.25 months (95% confidence interval: 4.18–4.32). Transition probabilities showed 20% of patients with low disease activity moved to moderate or severe disease within 6 months; >35% of patients with moderate disease remained in moderate disease after 6 months. Results were similar for the regular-visit sub-cohort. Significant interactions with prior disease state were seen with chronological year and disease duration.ConclusionA substantial proportion of patients remain in moderate disease, emphasizing the need for treat-to-target strategies for RA patients.",
"corpus_id": 3679805,
"score": 1,
"title": "Clinical and demographic factors associated with change and maintenance of disease severity in a large registry of patients with rheumatoid arthritis"
} |
{
"abstract": "A new methodology for system-on-chip-level logic-IP-internal electromigration verification is presented in this paper, which significantly improves accuracy by comprehending the impact of the parasitic RC loading and voltage-dependent pin capacitance in the library model. It additionally provides an on-the-fly retargeting capability for reliability constraints by allowing arbitrary specifications of lifetimes, temperatures, voltages, and failure rates, as well as interoperability of the IPs across foundries. The characterization part of the methodology is expedited through the intelligent IP-response modeling. The ultimate benefit of the proposed approach is demonstrated on a 28-nm design by providing an on-the-fly specification of retargeted reliability constraints. The results show a high correlation with SPICE and were obtained with an order of magnitude reduction in the verification runtime.",
"corpus_id": 626313,
"title": "A Fast and Retargetable Framework for Logic-IP-Internal Electromigration Assessment Comprehending Advanced Waveform Effects"
} | {
"abstract": "State-of-the-art timing tools are built around the use of current source models (CSMs), which have proven to be fast and accurate in enabling the analysis of large circuits. As circuits become increasingly exposed to process and temperature variations, there is a strong need to augment these models to account for thermal effects and for the impact of adaptive body biasing, a compensatory technique that is used to overcome on-chip variations. However, a straightforward extension of CSMs to incorporate timing analysis at multiple body biases and temperatures results in unreasonably large characterization tables for each cell. We propose a new approach to compactly capture body bias and temperature effects within a mainstream CSM framework. Our approach features a table reduction method for compaction of tables and a fast and novel waveform sensitivity method for timing evaluation under any body bias and temperature condition. On a 45-nm technology, we demonstrate high accuracy, with mean errors of under 4% in both slew and delay as compared to HSPICE. We show a speedup of over five orders of magnitude over HSPICE and a speedup of about over conventional CSMs.",
"corpus_id": 9522539,
"title": "Compact Current Source Models for Timing Analysis Under Temperature and Body Bias Variations"
} | {
"abstract": "Empirically characterized equation- and table-based cell models have been applied in static timing analysis for decades. These models have been extended to handle a variety of environmental and circuit phenomena over the years. This has given rise to a profusion of cell models that are used to verify circuit functionality and performance. The recent invention of a second-generation of current source models shows the promise of a unified electrical cell model that comprehensively addresses most of the effects that are perceived as accuracy limiters. In this paper, we describe these accuracy limiters and present comprehensive results for a particular current source model [11].",
"corpus_id": 13974057,
"score": 2,
"title": "A “true” electrical cell model for timing, noise, and power grid verification"
} |
{
"abstract": "Various government agencies around the world have proposed vegetable oils and their conversion to biodiesel as a renewable alternative to fossil fuels. Due to its adaptability to marginal soils and environments, the cultivation of Jatropha curcas is frequently mentioned as the best option for producing biodiesel. In the present work the current situation of proven and potential reserves of fossil fuel, and the production and consumption model for the same are analyzed, in order to later review the sustainability of the production process which begins with the cultivation of J. curcas , and culminates with the consumption of biodiesel. A review of the following topics is proposed in order to improve the sustainability of the process: areas destined for cultivation, use of external (chemical) inputs in cultivation, processes for converting the vegetable oil to biodiesel, and, above all, the location for ultimate consumption of the biofuel.",
"corpus_id": 2431876,
"title": "Does Biodiesel from Jatropha Curcas Represent a Sustainable Alternative Energy Source"
} | {
"abstract": "Abstract Jatropha curcas L. is a multipurpose shrub of significant economic importance because of its several potential industrial and medicinal uses. Four provenances of J. curcas from different agro-climatic regions of Mexico (1. Castillo de Teayo, 2. Pueblillo 3. Coatzacoalcos and 4. Yautepec), that differed in morphological characteristics, were studied. The seed kernels were rich in crude protein, CP (31–34.5%) and lipid (55–58%). The neutral detergent fibre contents of extracted J. curcas meals were between 3.9% and 4.5% of dry matter (DM). The gross energy of kernels ranged from 31.1 to 31.6 MJ/kg DM. The contents of starch and total soluble sugars were below 6%. The levels of essential amino acids, except lysine, were higher than that of the FAO/WHO reference protein for a five year old child in all the meal samples on a dry matter basis. The major fatty acids found in the oil samples were oleic (41.5–48.8%), linoleic (34.6–44.4%), palmitic (10.5–13.0%) and stearic (2.3–2.8%) acids. We also found previously unreported cis-11-eicosenoic acid (C20:1) and cis-11,14-eicosadienoic acid (C20:2) in the oil. Phorbolesters were present in high concentrations in the kernels of Coatzacoalcos (3.85 mg/g dry meal), but were not detected in the samples from Castillo de Teayo, Pueblillo and Yautepec. Trypsin inhibitors (33.1–36.4 mg trypsin inhibited g−1 dry meal), phytates (8.5–9.3% of dry meal as phytic acid equivalent), saponins (2.1–2.9% of dry meal) and lectins (0.35–1.46 mg/ml of the minimum amount of the sample required to show the agglutination) were the other major antinutrients present in all the seed meals. Different treatments were attempted on the seed meal samples to neutralize the antinutrients present in them. Trypsin inhibitors were easily inactivated with moist heating at 121 °C for 25 min. Phytate levels were slightly decreased by irradiation at 10 kGy. Measured saponin contents were reduced by ethanol extraction and irradiation. 
Extraction with ethanol, followed by treatment with 0.07% NaHCO3 considerably decreased lectin activity. The same treatment also decreased the phorbolester content by 97.9% in seeds from Coatzacoalcos. The in vitro digestibility of defatted meal (DM) was between 78.6% and 80.6%. It increased to about 86% on heat treatment.",
"corpus_id": 82905664,
"title": "Chemical composition, toxic/antimetabolic constituents, and effects of different treatments on their levels, in four provenances of Jatropha curcas L. from Mexico"
} | {
"abstract": "The degree of physical and chemical deterioration of biodiesel produced from rapeseed and used frying oil was studied under different storage conditions. These produced drastic effects when the fuel was exposed to daylight and air. However, there were no significant differences between undistilled biodiesel made from fresh rapeseed oil and used frying oil. The viscosity and neutralization numbers rose during storage owing to the formation of dimers and polymers and to hydrolytic cleavage of methyl esters into fatty acids. However, even for samples studied under different storage conditions for over 150 d the specified limits for viscosity and neutralization numbers had not been reached. In European biodiesel specifications there will be a mandatory limit for oxidative stability, because it may be a crucial parameter for injection pump performance. The value for the induction period of the distilled product was very low. The induction period values for the undistilled samples decreased very rapidly during storage, especially with exposure to light and air.",
"corpus_id": 83885953,
"score": 2,
"title": "Long storage stability of biodiesel made from rapeseed and used frying oil"
} |
{
"abstract": "As workloads and data move to the cloud, it is essential that software writers are able to protect their applications from untrusted hardware, systems software, and co-tenants. Intel® Software Guard Extensions (SGX) enables a new mode of execution that is protected from attacks in such an environment with strong confidentiality, integrity, and replay protection guarantees. Though SGX supports memory oversubscription via paging, virtualizing the protected memory presents a significant challenge to Virtual Machine Monitor (VMM) writers and comes with a high performance overhead. This paper introduces SGX Oversubscription Extensions that add additional instructions and virtualization support to the SGX architecture so that cloud service providers can oversubscribe secure memory in a less complex and more performant manner.",
"corpus_id": 38419099,
"title": "Intel® Software Guard Extensions (Intel® SGX) Architecture for Oversubscription of Secure Memory in a Virtualized Environment"
} | {
"abstract": "The rise of the Cloud Computing paradigm has led to security concerns, taking into account that resources are shared and mediated by a Hypervisor which may be targeted by rogue guest VMs and remote attackers. In order to better define the threats to which a cloud server's Hypervisor is exposed, we conducted a thorough analysis of the codebase of two popular open-source Hypervisors, Xen and KVM, followed by an extensive study of the vulnerability reports associated with them. Based on our findings, we propose a characterization of Hypervisor Vulnerabilities comprised of three dimensions: the trigger source (i.e. where the attacker is located), the attack vector (i.e. the Hypervisor functionality that enables the security breach), and the attack target (i.e. the runtime domain that is compromised). This can be used to understand potential paths different attacks can take, and which vulnerabilities enable them. Moreover, most common paths can be discovered to learn where the defenses should be focused, or conversely, least common paths can be used to find yet-unexplored ways attackers may use to get into the system.",
"corpus_id": 6909552,
"title": "Characterizing hypervisor vulnerabilities in cloud computing servers"
} | {
"abstract": "This paper presents a low-power intermediate frequency (IF) limiting amplifier (LA) and received signal strength indicator (RSSI). The LA and RSSI are designed for ZigBee™ receiver at 2MHz IF. To save power, two local loops for offset correction are used in LA chain and a sensitivity of -56dBm is achieved. Each LA gain stage employs cascade diodes load to avoid driving the diode load into velocity saturation region. The indication rang is 50dB within ±2dB linearity error. The core area is 0.11×0.31mm2 using a SMIC 0.18-μm CMOS technology. The overall power consumption is 1mW from a 1.8V supply voltage.",
"corpus_id": 22293916,
"score": -1,
"title": "A 1mW CMOS limiting amplifier and RSSI for ZigBee™ applications"
} |
{
"abstract": "A monoclonal antibody against murine interferon-beta (MuIFN-beta) was prepared using standard methods. Antibodies were immobilized by coupling to Sepharose and used for large-scale purification of poly(I) . poly(C)-induced mouse L cell IFN. Antibodies isolated from the serum of one nude mouse which was transplanted with the anti-MuIFN-beta antibodies producing hybridoma were able to bind at least 7 X 10(7)U MuIFN-beta. In one single antibody affinity chromatography step MuIFN-alpha was separated from MuIFN-beta and a 1000-fold purification of MuIFN-beta was obtained. The purified material had a specific activity of 5 X 10(8)U/mg protein. The recovery from the antibody column was 100%. SDS-PAGE analysis of the purified material revealed the presence of one single protein band with a molecular weight of 33 kD, representing MuIFN-beta.",
"corpus_id": 1438069,
"title": "Large-scale, one-step purification of murine interferon-beta using a monoclonal antibody."
} | {
"abstract": "Rauscher murine leukemia virus (R‐MuLV) induces a rapidly developing erythroleukemia in BALB/c mice. Previously, we have shown that mouse interferon‐α/β (Mu IFN‐α/β) applied shortly after virus inoculation efficiently inhibits the leukemic process (Hekman et al., 1981). Here we describe the effect of Mu IFN‐α/β on an established leukemia. Varying doses of Mu IFN‐α/β were injected over 3 days, starting 8 to 12 days after virus inoculation. The effect of Mu IFN‐α/β on the leukemic process was monitored by measuring the spleen weight, reverse transcriptase activity in the serum and, in selected experiments, by microscopic examination of sections of the spleen using standard histological and immunological staining techniques. Depending on the spleen weight at the start of its application (maximal about 450 mg), Mu IFN‐α/β caused a dramatic reduction in the number of virus‐infected erythroleukemic cells in the spleen. Also, R‐MuLV disappeared from the serum within 3 days. If Mu IFN‐α/β was injected into R‐MuLV‐infected mice with an already 10‐fold enlarged spleen, it could only stop further development of leukemia. Results obtained with crude Mu IFN‐α/β preparations were confirmed with absolutely pure Mu IFN‐β.",
"corpus_id": 33915674,
"title": "The effect of murine interferon‐alpha/beta on an established rauscher murine leukemia virus‐induced erythroleukemia in balb/c mice"
} | {
"abstract": "The Data Encryption Standard (DES) is the best known and most widely used cryptosystem for civilian applications. It was developed at IBM and adopted by the National Buraeu of Standards in the mid 70's, and has successfully withstood all the attacks published so far in the open literature. In this paper we develop a new type of cryptanalytic attack which can break DES with up to eight rounds in a few minutes on a PC and can break DES with up to 15 rounds faster than an exhaustive search. The new attack can be applied to a variety of DES-like substitution/permutation cryptosystems, and demonstrates the crucial role of the (unpublished) design rules.",
"corpus_id": 11633934,
"score": 0,
"title": "Advances in Cryptology-CRYPTO’ 90"
} |
{
"abstract": null,
"corpus_id": 21224888,
"title": "TRUST BREAKTHROUGH IN THE SHARING ECONOMY: AN EMPIRICAL STUDY OF AIRBNB1"
} | {
"abstract": "Customer loyalty or repeat purchasing is critical for the survival and success of any store. By focusing on online stores, this study investigates the moderating role of habit on the relationship between trust and repeat purchase intention. Prior research on online behavior continuance models perceives usefulness, trust, satisfaction, and perceived value as the major determinants of continued usage or loyalty, overlooking the important role of habit. We define habit in the context of online shopping as the extent to which buyers tend to shop online automatically without thinking. Building on recent research on the continued usage of IS and repeat purchasing, we develop a model suggesting that habit acts as a moderator between trust and repeat purchase intention, while familiarity, value and satisfaction are the three antecedents of habit. Data collected from 454 customers of the Yahoo!Kimo shopping mall provide strong support for the research model. The results indicate that a higher level of habit reduces the effect of trust on repeat purchase intention. The data also show that value, satisfaction, and familiarity are important to habit formation and thus relevant within the context of online repeat purchasing. The implications for theory and practice and suggestions for future research are also discussed.",
"corpus_id": 39444763,
"title": "Re-examining the influence of trust on online repeat purchase intention: The moderating role of habit and its antecedents"
} | {
"abstract": "Abstract The central purpose of this survey is to provide readers an insight into the recent advances and challenges in on-line active learning . Active learning has attracted the data mining and machine learning community since around 20 years. This is because it served for important purposes to increase practical applicability of machine learning techniques, such as (i) to reduce annotation and measurement costs for operators and measurement equipments, (ii) to reduce manual labeling effort for experts and (iii) to reduce computation time for model training. Almost all of the current techniques focus on the classical pool-based approach, which is off-line by nature as iterating over a pool of (unlabeled) reference samples a multiple times to choose the most promising ones for improving the performance of the classifiers. This is achieved by (time-intensive) re-training cycles on all labeled samples available so far. For the on-line, stream mining case, the challenge is that the sample selection strategy has to operate in a fast, ideally single-pass manner. Some first approaches have been proposed during the last decade (starting from around 2005) with the usage of machine learning (ML) oriented incremental classifiers , which are able to update their parameters based on selected samples, but not their structures. Since 2012, on-line active learning concepts have been proposed in connection with the paradigm of evolving models , which are able to expand their knowledge into feature space regions so far unexplored. This opened the possibility to address a particular type of uncertainty, namely that one which stems from a significant novelty content in streams, as, e.g., caused by drifts, new operation modes, changing system behaviors or non-stationary environments. 
We will provide an overview about the concepts and techniques for sample selection and active learning within these two principal major research lines (incremental ML models versus evolving systems), a comparison of their essential characteristics and properties (raising some advantages and disadvantages), and a study on possible evaluation techniques for them. We conclude with an overview of real-world application examples where various on-line AL approaches have been already successfully applied in order to significantly reduce user’s interaction efforts and costs for model updates.",
"corpus_id": 29763327,
"score": -1,
"title": "On-line active learning: A new paradigm to improve practical useability of data stream modeling methods"
} |
{
"abstract": "This paper present a simple watermarking approach based on the rotation of low frequency components of image blocks. The rotation process is performed with less distortion by projection of the samples on specific lines according to message bit. To have optimal detection Maximum Likelihood criteria has been used. Thus, by computing the distribution of rotated noisy samples the optimum decoder is presented and its performance is analytically investigated. The privilege of this proposed algorithm is its inherent robustness against gain attack as well as its simplicity. Experimental results confirm the validity of the analytical derivations and also high robustness against common attacks.",
"corpus_id": 229060,
"title": "Blind image watermarking based on sample rotation with optimal detector"
} | {
"abstract": "Access to multimedia data has become much easier due to the rapid growth of the Internet. While this is usually considered an improvement of everyday life, it also makes unauthorized copying and distributing of multimedia data much easier, therefore presenting a challenge in the field of copyright protection. Digital watermarking, which is inserting copyright information into the data, has been proposed to solve the problem. In this paper, we first discuss the features that a practical digital watermarking system for ownership verification requires. Besides perceptual invisibility and robustness, we claim that the private control of the watermark is also very important. Second, we present a novel wavelet-based watermarking algorithm. Experimental results and analysis are then given to demonstrate that the proposed algorithm is effective and can be used in a practical system.",
"corpus_id": 33603207,
"title": "A wavelet-based watermarking algorithm for ownership verification of digital images"
} | {
"abstract": "μCRL is a process algebraic language for specification and verification of distributed systems. μCRL allows to describe temporal properties of distributed systems but it has no explicit reference to time. In this work we propose a manner of introducing discrete time without extending the language. The semantics of discrete time we use makes it possible to reduce the time progress problem to the diagnostics of “no action is enabled” situations. The synchronous nature of the language facilitates the task. We show some experimental verification results obtained on a timed communication protocol.",
"corpus_id": 18658337,
"score": 1,
"title": "Timed Verification with µCRL"
} |
{
"abstract": "Motors have been exploited as their own sensors for diagnostic and operating conditions at least, since the dawn of modern computing. Contracting systems theory offers a new level of precision in detecting small parameter and state changes in an electric machine for load fault detection and diagnostics from the motor terminals. The presented offline method successfully solves the motor inverse problem to reconstruct the characteristic instantaneous angular speed and load torque signals of the motor during periodic operation. The solution includes a motor parameter estimation step that reflects the specific temperatures and magnetic saturation of the motor during data acquisition. This identification or inversion method is suitable for induction motors driving periodic loads with and without rotor angle-dependent loading. A practical condition monitoring application is demonstrated: valve and cylinder fault detection in reciprocating compressors.",
"corpus_id": 526120,
"title": "Self-Sensing Induction Motors for Condition Monitoring"
} | {
"abstract": "Fault diagnosis in electromechanical drives has been widely investigated over the last decades, and many diagnostic methods have been proposed based on the reported faults. Additionally to already published works which have dealt with gear faults inside low reduction ratio gearboxes, this paper aims to present a simple and effective method for the early diagnosis of evolving faults in a high reduction ratio gear transmission system used in cement kiln drives. The under study system basically consists of a three-phase induction motor mechanically connected to a gearbox. The output shaft of the gearbox drives a gear pinion which, in turn, rotates a girth gear rim surrounding the cement kiln. The identification of mechanical vibrations due to backlash phenomena appearing between the pinion gear and the girth gear rim of the kiln is realized using the motor current signature analysis and processing the motor electromagnetic torque. The proposed diagnostic method is presented, analyzing the experimental results from an under-scale laboratory simulating system.",
"corpus_id": 26285021,
"title": "Detection of Backlash Phenomena Appearing in a Single Cement Kiln Drive Using the Current and the Electromagnetic Torque Signature"
} | {
"abstract": "Although geomorphic observations suggest that the Sierra Nevada has tilted so that the crest has risen 1–2 km since late Miocene time, deuterium and oxygen-18 isotope concentrations in Cenozoic geologic materials decrease eastward across California and Nevada similarly to those in modern, orographically induced precipitation, as if little change in Sierra Nevadan elevations has occurred since Eocene time. Orographic precipitation, however, depends on the amount of moisture in the atmosphere, which in turn can be much larger in warm air, as in Eocene or Oligocene time and in summer, than in the cooler air characteristic of present-day, dominantly winter, precipitation. Moreover, the integrated rainout of vapor, and hence presumably in stable isotope concentrations in the remaining vapor, depends largely on the difference in heights traversed by air masses, not slopes of mountain ranges. Thus, if due simply to orographically induced rainout, both Eocene and Oligocene variations in deuterium isotopes across the Sierra Nevada and Miocene–Quaternary differences in deuterium and oxygen isotopes between the Great Valley of California and the Basin and Range place only weak constraints on the slope or past elevations of the Sierra Nevada. They do not necessarily contradict the inference that the crest of the Sierra Nevada has risen 1000 m or more since late Miocene time.",
"corpus_id": 2818367,
"score": 0,
"title": "Deuterium and oxygen isotopes, paleoelevations of the Sierra Nevada, and Cenozoic climate"
} |
{
"abstract": "This article describes X-NIndex, a novel approach for large XML documents with stable structure. The definition for the large XML document with stable structure is given while the concept of XML document tree coordi- nate(X-DTC) is introduced. The significant advantage of X-NIndex to other XML query schemas is shown and the experimental results are present.",
"corpus_id": 334612,
"title": "X-NIndex: A High Performance Stable and Large XML Document Query Approach and Experience in TOP500 List Data"
} | {
"abstract": "The World Wide Web promises to transform human society by making virtually all types of information instantly available everywhere. Two prerequisites for this promise to be realized are a universal markup language and a universal query language. The power and flexibility of XML make it the leading candidate for a universal markup language. XML provides a way to label information from diverse data sources including structured and semi-structured documents, relational databases, and object repositories. Several XML-based query languages have been proposed, each oriented toward a specific category of information. Quilt is a new proposal that attempts to unify concepts from several of these query languages, resulting in a new language that exploits the full versatility of XML. The name Quilt suggests both the way in which features from several languages were assembled to make a new query language, and the way in which Quilt queries can combine information from diverse data sources into a query result with a new structure of its own.",
"corpus_id": 1466378,
"title": "Quilt: An XML Query Language for Heterogeneous Data Sources"
} | {
"abstract": "In this paper, a radial basis function (RBF) neural network adaptive sliding mode control system based on feedback linearization approach is developed for the current compensation of three-phase active power filter(APF). RBF neural network is used to approximate the switch function of IGBT in APF combined with feedback linearization approach. The weights of RBF neural network are adjusted by means of adaptive method and the stability of the system can be guaranteed. With this method, the harmonic current of non-linear load can be eliminated and the quality of power system can be well improved. The advantages of the adaptive control, neural network control and sliding mode control are combined together to achieve the control task. Simulation results demonstrate that the control system has good control performance and can compensate harmonic current effectively.",
"corpus_id": 17905240,
"score": 1,
"title": "Adaptive neural sliding mode control of active power filter using feedback linearization"
} |
{
"abstract": "This paper studies the power allocation problems for a multiple-input multiple-output (MIMO) antenna system with various partial channel state information (CSI) feedback. Through the derivation of new upper bounds of the ergodic capacity for both the cases of channel mean and covariance information feedback, we devise simple closed-form power allocation solutions for the MIMO systems with two transmit antennas. These results are then extended to systems with any number of antennas at both sides. Unlike previous approaches which require computationally complex numerical optimizations over random processes, our solutions in closed-form offer a water-filling interpretation and perform nearly the same as the global optimum.",
"corpus_id": 1322742,
"title": "Near-Optimal Power Allocation for MIMO Systems with Partial CSI Feedback"
} | {
"abstract": "This paper studies the effect of the imperfect channel state information (CSI) on the performance of adaptive orthogonal frequency division multiplexing (OFDM) systems. To perform resource allocation, CSI must be fed back to the transmitter. Such feedback CSI is always imperfect due to the time-varying channel, the noisy channel estimation and the limited feedback. First, we analyze the imperfection of the feedback CSI from these three aspects. Then, we propose an efficient method to perform power allocation, where Jensen's inequality is used to approximate the objective of the power allocation in order to enhance the computational efficiency. Simulations show that performance loss due to the CSI imperfection and this approximation is very small for a proper frame length.",
"corpus_id": 14193442,
"title": "Efficient Power Allocation for OFDM with Imperfect Channel State Information"
} | {
"abstract": "PurposeThis manuscript utilised in vivo multispectral imaging to demonstrate the efficacy of two different nanomedicine formulations for targeting prostate cancer.MethodsPegylated hyperbranched polymers were labelled with fluorescent markers and targeting ligands against two different prostate cancer markers; prostate specific membrane antigen (PSMA) and the protein kinase, EphrinA2 receptor (EphA2). The PSMA targeted nanomedicine utilised a small molecule glutamate urea inhibitor of the protein, while the EphA2 targeted nanomedicine was conjugated to a single-chain variable fragment based on the antibody 4B3 that has shown high affinity to the receptor.ResultsHyperbranched polymers were synthesised bearing the different targeting ligands. In the case of the EphA2-targeting nanomedicine, significant in vitro uptake was observed in PC3 prostate cancer cells that overexpress the receptor, while low uptake was observed in LNCaP cells (that have minimal expression of this receptor). Conversely, the PSMA-targeted nanomedicine showed high uptake in LNCaP cells, with only minor uptake in the PC3 cells. In a dual-tumour xenograft mouse model, the nanomedicines showed high uptake in tumours in which the receptor was overexpressed, with only minimal non-specific accumulation in the low-expression tumours.ConclusionsThis work highlighted the importance of clearly defining the target of interest in next-generation nanomedicines, and suggests that dual-targeting in such nanomedicines may be a means to achieve greater efficacy.",
"corpus_id": 4962112,
"score": 0,
"title": "Targeting Nanomedicines to Prostate Cancer: Evaluation of Specificity of Ligands to Two Different Receptors In Vivo"
} |
{
"abstract": "Sexual orientation is a complex area. It is unclear to date how people precisely establish their preferred sexual objects. This paper presents two cases, whose heterosexual preference were changed in late adolescent age after a severe psychological event, to draw attention to the study of the possible underlying mechanism. Case 1, one identical male twin, was seriously punished at age 12 years, as he loved a girl in his classroom. Afterwards, he feared to contact with girls, and became attracted to young men at age 17 years, and kept same-sex sexual behaviors since then. However, his twin brother is always heterosexual. Case 2, a girl at age 16 years, was unexpectedly betrayed by her boyfriend, she bore great pain and distress in the beginning. Since then, she had a definite opinion that men were unbelievable, and gradually turned her heterosexual preference and had same-sex sexual behaviors with a girl classmate more than 3 years. Our case presentation indicates that severe frustration of primary heterosexual desires or behaviors and the successive cognitive regulation might lead susceptible adolescents into reorienting their sexual preference. The role of prefrontal cortex and related neuromodulatory pathways were discussed.",
"corpus_id": 54880734,
"title": "The Change of Heterosexual Preference in Adolescents: Implications of Stress and Cognitive Regulation on Sexual Orientation"
} | {
"abstract": "The scientific community is very interested in the biological aspects of gender disorders and sexual orientation. There are different levels to define an individual's sex: chromosomal, gonadic, and phenotypic sex. Concerning the psychological sex, men and women are different by virtue of their own gender identity, which means they recognize themselves as belonging to a determinate sex. They are different also as a result of their own role identity, a set of behaviors, tendencies, and cognitive and emotional attitudes, commonly defined as \"male\" and \"female\". Transsexuality is a disorder characterized by the development of a gender identity opposed to phenotypic sex, whereas homosexuality is not a disturbance of gender identity but only of sexual attraction, expressing sexual orientation towards people of the same sex. We started from a critical review of literature on genetic and hormonal mechanisms involved in sexual differentiation. We re-examined the neuro-anatomic and functional differences between men and women, with special reference to their role in psychosexual differentiation and to their possible implication in the genesis of homosexuality and identity gender disorders. Homosexuality and transsexuality are conditions without a well defined etiology. Although the influence of educational and environmental factors in humans is undeniable, it seems that organic neurohormonal prenatal and postnatal factors might contribute in a determinant way in the development of these two conditions. This \"organicistic neurohormal theory\" might find support in the study of particular situations in which the human fetus is exposed to an abnormal hormonal environment in utero.",
"corpus_id": 1028637,
"title": "Biological aspects of gender disorders."
} | {
"abstract": "No-one, wrote Frank Beach, a notable contributor to the experimental study of hormones and sexual behaviour, ever died from lack of sex. But the personal, social and legal aspects of sexual behaviour are a pervasive pre-occupation in all humans. The variety and vagaries of sex can have severe implications, and the existence of homosexuality and disorders of gender identity demand some sort of explanation (Bancroft, 2008). Neuroscience can ask itself, therefore, why it has contributed so little to understanding human sexuality. One reason is our overall ignorance about the brain, which hinders attempts to relate particular patterns of brain activity to an observable behaviour in a way that contributes to understanding. Another is the effect of sexual mores on the study of sexuality itself: studying sex is still considered a slightly risque career, and made difficult by the politics, constraints and prejudices of human societies. It took the AIDS epidemic to convince many governments and funding bodies that studying sex was important and respectable. Most of our information on the neurobiology of sex comes from animal studies (Becker et al. , 2005), but nearly all of what we know about variations in human sexuality, including hetero- and homo-sexuality, and disorders of gender identity (transsexualism), comes from clinical material, anecdotes or even fiction (the three overlap).\n\nIt is now well-known that sex-determination is heavily reliant upon the sry gene encoded on the Y chromosome. But genes, of course, do nothing themselves: they activate molecules that are the mechanisms, and prominent amongst these is testosterone. During development, the male fetus secretes testosterone—and production may continue into early post-natal life. This has dramatic effects on the internal reproductive organs, promoting the masculine arrangement. But, here we are …",
"corpus_id": 42259423,
"score": -1,
"title": "Who do we think we are? The brain and gender identity."
} |
{
"abstract": "With our increasing appreciation of the true complexity of diseases and pathophysiologies, it is clear that this knowledge needs to inform the future development of pharmacotherapeutics. For many disorders, the disease mechanism itself is a complex process spanning multiple signaling networks, tissues, and organ systems. Identifying the precise nature and locations of the pathophysiology is crucial for the creation of systemically effective drugs. Diseases once considered constrained to a limited range of organ systems, e.g., central neurodegenerative disorders such as Alzheimer’s disease (AD), Parkinson’s disease (PD), and Huntington’s disease (HD), the role of multiple central and peripheral organ systems in the etiology of such diseases is now widely accepted. With this knowledge, it is increasingly clear that these seemingly distinct neurodegenerative disorders (AD, PD, and HD) possess multiple pathophysiological similarities thereby demonstrating an inter-related continuum of disease-related molecular alterations. With this systems-level appreciation of neurodegenerative diseases, it is now imperative to consider that pharmacotherapeutics should be developed specifically to address the systemic imbalances that create the disorders. Identification of potential systems-level signaling axes may facilitate the generation of therapeutic agents with synergistic remedial activity across multiple tissues, organ systems, and even diseases. Here, we discuss the potentially therapeutic systems-level interaction of the glucagon-like peptide 1 (GLP-1) ligand–receptor axis with multiple aspects of the AD, PD, and HD neurodegenerative continuum.",
"corpus_id": 2245306,
"title": "Systems-Level G Protein-Coupled Receptor Therapy Across a Neurodegenerative Continuum by the GLP-1 Receptor System"
} | {
"abstract": "Autism spectrum disorders (ASD) are complex heterogeneous neurodevelopmental disorders of an unclear etiology, and no cure currently exists. Prior studies have demonstrated that the black and tan, brachyury (BTBR) T+ Itpr3tf/J mouse strain displays a behavioral phenotype with ASD-like features. BTBR T+ Itpr3tf/J mice (referred to simply as BTBR) display deficits in social functioning, lack of communication ability, and engagement in stereotyped behavior. Despite extensive behavioral phenotypic characterization, little is known about the genes and proteins responsible for the presentation of the ASD-like phenotype in the BTBR mouse model. In this study, we employed bioinformatics techniques to gain a wide-scale understanding of the transcriptomic and proteomic changes associated with the ASD-like phenotype in BTBR mice. We found a number of genes and proteins to be significantly altered in BTBR mice compared to C57BL/6J (B6) control mice controls such as BDNF, Shank3, and ERK1, which are highly relevant to prior investigations of ASD. Furthermore, we identified distinct functional pathways altered in BTBR mice compared to B6 controls that have been previously shown to be altered in both mouse models of ASD, some human clinical populations, and have been suggested as a possible etiological mechanism of ASD, including “axon guidance” and “regulation of actin cytoskeleton.” In addition, our wide-scale bioinformatics approach also discovered several previously unidentified genes and proteins associated with the ASD phenotype in BTBR mice, such as Caskin1, suggesting that bioinformatics could be an avenue by which novel therapeutic targets for ASD are uncovered. As a result, we believe that informed use of synergistic bioinformatics applications represents an invaluable tool for elucidating the etiology of complex disorders like ASD.",
"corpus_id": 2160734,
"title": "Hippocampal Transcriptomic and Proteomic Alterations in the BTBR Mouse Model of Autism Spectrum Disorder"
} | {
"abstract": "The variable lymphocyte receptor B (VLRB) of jawless vertebrates has a similar function to the antibodies produced by jawed vertebrates, and has been considered as an alternative source to mammalian antibodies for use in biological research. We developed a modified yeast display vector system (pYD8) to display recombinant hagfish VLRB proteins on the extracellular surface of yeast for the isolation of antigen-specific VLRBs. After observing an up-regulation in the VLRB response in hagfish immunized with hemagglutinin 1 of avian influenza virus H9N2 subtype (H9N2-HA1), the antigen-specific VLRBs decorated on the yeast's surface were selected by quantitative library screening through magnetic-activated cell sorting (MACS) and fluorescent-activated cell sorting (FACS). We also demonstrated a strong specificity of the antigen-specific VLRBs, when expressed as a secreted protein using a mammalian expression system. Together, our findings suggest that the pYD8 vector system could be useful for screening antigen-specific hagfish VLRBs, and the specificity of secreted VLRB may have potential for a variety of biological applications.",
"corpus_id": 58540298,
"score": 1,
"title": "Development of a modified yeast display system for screening antigen-specific variable lymphocyte receptor B in hagfish (Eptatretus burgeri)."
} |
{
"abstract": "In the past few decades, it has been widely accepted that forest loss due to human actions alter the interactions between organisms. We studied the relationship between forest fragment size and arbuscular mycorrhizal fungi (AMF) and dark septate endophytes (DSE) colonization, and the AMF spore communities in the rhizosphere of two congeneric Euphorbia species (native and exotic/invasive). We hypothesized that these fungal variables will differ with fragment size and species status, and predicted that (a) AMF and DSE colonization together with AMF spore abundance and diversity would be positively related to forest fragment size; (b) these relationships will differ between the exotic and the native species; and (c) there will be a negative relationship between forest fragment size and the availability of soil nutrients (NH4+, NO3−, and phosphorus). This study was performed in the eight randomly selected forest fragments (0.86–1000 ha), immersed in an agricultural matrix from the Chaquean region in central Argentina. AMF root colonization in the native and exotic species was similar, and was positively related with forest fragment size. Likewise, AMF spore diversity and spore abundance were higher in the larger fragments. While DSE root colonization in the native host was positively related with forest fragment size, DSE colonization in the exotic host showed no relationship. Soil nutrients contents were negatively related with forest fragment size. In addition, NH4+ and NO3− were negatively correlated with AMF spores abundance and root colonization and with DSE colonization in the native species. The results observed in this study show how habitat fragmentation might affect the interaction between key soil components, such as rhizospheric plant-fungal symbiosis and nutrient availability. These environmental changes may have important consequences on plant community composition and nutrient dynamics in this fragmented landscape.",
"corpus_id": 1427213,
"title": "Forest fragment size and nutrient availability: complex responses of mycorrhizal fungi in native–exotic hosts"
} | {
"abstract": "Forest fragmentation is an increasingly common feature across the globe, but few studies examine its influence on biogeochemical fluxes. We assessed the influence of differences in successional trajectory and stem density with forest patch size on biomass quantity and quality and N transformations in the soil at an experimentally fragmented landscape in Kansas, USA. We measured N-related fluxes in the laboratory, not the field, to separate effects of microclimate and fragment edges from the effects of inherent biomass differences with patch size. We measured net N mineralization and N2O fluxes in soil incubations, gross rates of ammonification and nitrification, and microbial biomass in soils. We also measured root and litterfall biomass, C:N ratios, and δ13C and δ15N signatures; litterfall [cellulose] and [lignin]; and [C], [N], and δ13C and δ15N of soil organic matter. Rates of net N mineralization and N2O fluxes were greater (by 113% and 156%, respectively) in small patches than in large, as were gross rates of nitrification. These differences were associated with greater quantities of root biomass in small patch soil profiles (664.2 ± 233.3 vs 192.4 ± 66.2 g m−2 for the top 15 cm). These roots had greater N concentration than in large patches, likely generating greater root derived organic N pools in small patches. These data suggest greater rates of N cycling in small forested patches compared to large patches, and that gaseous N loss from the ecosystem may be related to forest patch size. The study indicates that the differences in successional trajectory with forest patch size can impart significant influence on soil N transformations in fragmented, aggrading woodlands.",
"corpus_id": 21304968,
"title": "Soil nitrogen and carbon dynamics in a fragmented landscape experiencing forest succession"
} | {
"abstract": "Desertification in Spain is a largely society-driven process, which can be effectively managed only through an understanding of ecological, socio-cultural and economic driving forces. This calls for a more active role of decision makers and other stakeholders. We present a promising approach, involving stakeholders in the scenario development process and linking these narrative storylines with an integrated quantitative model. Within the framework of a larger EC-financed project, dealing with desertification in the Mediterranean region, multiscale scenarios were developed for Europe, the Northern Mediterranean and four local areas. In the same project a Policy Support System (PSS) was developed. The main objective of the present exercise was to establish a link between the qualitative scenarios and the PSS for the watershed of the Guadalentín River in Spain. From the results of two scenario workshops, three scenarios were selected, all linked to the same Mediterranean scenario. Our selection aimed at maximising both the variety in the narrative storylines and the expected output of the PSS. The scenarios were subsequently formalised, ensuring that the same information was present for all three scenarios; semi-quantified (\"translated\") by linking them to the main entry points of the PSS; and quantified by parameterisation of the model. Although model runs have not yet been carried out, preliminary results indicate the potential for the constructed quantitative scenarios. The paper illustrates the practical potential and pitfalls of linking qualitative storylines and quantitative models. Future research should, however, also focus on the more fundamental theoretical obstacles that are easily overlooked.",
"corpus_id": 5885032,
"score": 1,
"title": "Linking Narrative Storylines and Quantitative Models To Combat Desertification in the Guadalentín, Spain"
} |
{
"abstract": "In a randomized controlled trial, Hugh MacPherson and colleagues investigate the effectiveness of acupuncture and counseling compared with usual care alone for the treatment of depression symptoms in primary care settings. Please see later in the article for the Editors' Summary",
"corpus_id": 2812576,
"title": "Acupuncture and Counselling for Depression in Primary Care: A Randomised Controlled Trial"
} | {
"abstract": "Spinal cord injury (SCI) often results in death of spinal neurons and atrophy of muscles which they govern. Thus, following SCI, reorganizing the lumbar spinal sensorimotor pathways is crucial to alleviate muscle atrophy. Tail nerve electrical stimulation (TANES) has been shown to activate the central pattern generator (CPG) and improve the locomotion recovery of spinal contused rats. Electroacupuncture (EA) is a traditional Chinese medical practice which has been proven to have a neural protective effect. Here, we examined the effects of TANES and EA on lumbar motor neurons and hindlimb muscle in spinal transected rats, respectively. From the third day postsurgery, rats in the TANES group were treated 5 times a week and those in the EA group were treated once every other day. Four weeks later, both TANES and EA showed a significant impact in promoting survival of lumbar motor neurons and expression of choline acetyltransferase (ChAT) and ameliorating atrophy of hindlimb muscle after SCI. Meanwhile, the expression of neurotrophin-3 (NT-3) in the same spinal cord segment was significantly increased. These findings suggest that TANES and EA can augment the expression of NT-3 in the lumbar spinal cord that appears to protect the motor neurons as well as alleviate muscle atrophy.",
"corpus_id": 973092,
"title": "Tail Nerve Electrical Stimulation and Electro-Acupuncture Can Protect Spinal Motor Neurons and Alleviate Muscle Atrophy after Spinal Cord Transection in Rats"
} | {
"abstract": "Purpose of reviewWe review recently published literature concerning early morbidity and mortality during antiretroviral therapy (ART) among patients in resource-limited settings. We focus on articles providing insights into this burden of disease and strategies to address it. Recent findingsIn sub-Saharan Africa, mortality rates during the first year of ART are very high (8–26%), with most deaths occurring in the first few months. This figure compares with 3–13% in programmes in Latin America and the Caribbean and 11–13% in south-east Asia. Risk factors generally reflect late presentation with advanced symptomatic disease. Key causes of morbidity and mortality include tuberculosis (TB), acute sepsis, cryptococcal meningitis, malignancy and wasting syndrome/chronic diarrhoea. Current literature shows that the fundamental need is for much earlier HIV diagnosis and initiation of ART. In addition, further studies provide data on the role of screening and prophylaxis against opportunistic diseases (particularly TB, bacterial sepsis and cryptococcal disease) and the management of specific opportunistic diseases and complications of ART. Effective and sustainable delivery of these interventions requires strengthening of programmes. SummaryStrategies to address this disease burden should include earlier HIV diagnosis and ART initiation, screening and prophylaxis for opportunistic infections, optimized management of specific diseases and treatment complications, and programme strengthening.",
"corpus_id": 455960,
"score": 1,
"title": "Strategies to reduce early morbidity and mortality in adults receiving antiretroviral therapy in resource-limited settings"
} |
{
"abstract": "When a thin elastic structure comes in contact with a liquid interface, capillary forces can be large enough to induce elastic deformations. This effect becomes particularly relevant at small scales where capillary forces are predominant, for example in microsystems (micro-electro-mechanical systems or microfluidic devices) under humid environments. In order to explore the interaction between capillarity and elasticity, we have developed a macroscopic model system in which an initially immersed vertical elastic rod is raised through a horizontal liquid surface. We follow a combined approach of experiments, theory and numerical simulations to study this system. In spite of its apparent simplicity, our experiment reveals a complex phase diagram, involving large hysteretic behaviour. We employ Kirchhoff equations for thin elastic rods and use path-following methods from which we obtain a variety of equilibrium states and associated transitions that are in excellent qualitative and quantitative agreement with those observed experimentally.",
"corpus_id": 1697243,
"title": "Piercing a liquid surface with an elastic rod: Buckling under capillary forces"
} | {
"abstract": "Water-walking insects and spiders rely on surface tension for static weight support and use a variety of means to propel themselves along the surface. To pass from the water surface to land, they must contend with the slippery slopes of the menisci that border the water's edge. The ability to climb menisci is a skill exploited by water-walking insects as they seek land in order to lay eggs or avoid predators; moreover, it was a necessary adaptation for their ancestors as they evolved from terrestrials to live exclusively on the water surface. Many millimetre-scale water-walking insects are unable to climb menisci using their traditional means of propulsion. Through a combined experimental and theoretical study, here we investigate the meniscus-climbing technique that such insects use. By assuming a fixed body posture, they deform the water surface in order to generate capillary forces: they thus propel themselves laterally without moving their appendages. We develop a theoretical model for this novel mode of propulsion and use it to rationalize the climbers' characteristic body postures and predict climbing trajectories consistent with those reported here and elsewhere.",
"corpus_id": 9854930,
"title": "Meniscus-climbing insects"
} | {
"abstract": "The nonlinear elastic response of large arteries subjected to finite deformations due to action of biaxial principal stresses, is described by simple constitutive equations. Generalized measures of strain and stress are introduced to account for material nonlinearity. This also ensures the existence of a strain energy density function. The orthotropic elastic response is described via quasi-linear relations between strains and stresses. One nonlinear parameter which defines the measures of strain and stress, and three elastic moduli are assumed to be constants. The lateral strain parameters (equivalent to Poisson's ratios in infinitesimal deformations) are deformation dependent. This dependence is defined by empirical relations developed via the incompressibility condition, and by the introduction of a fifth material parameter. The resulting constitutive model compares well with biaxial experimental data of canine carotid arteries.",
"corpus_id": 20482408,
"score": 2,
"title": "A model for the nonlinear elastic response of large arteries."
} |
{
"abstract": "In this study, a photovoltaic (PV) modules site installed from 1997 to 2017 (20 years of outdoor exposure) in the hot, humid region of Kumasi, Ghana in Sub-Saharan Africa was selected in order to study the aging phenomenon and rate of degradation due to long-term exposure. The main purpose of this work was to correlate the performance of 14 PV modules using data from infra-red thermal imaging (hot spot tests), current-voltage (I-V) tests and visual inspection. The modules were first visually inspected followed by electrical performance tests using an I-V curve tracer. Hot spot testing of each module was performed to enable further characterization. The results of the visual inspection using the United States National Renewable Energy Laboratory (NREL) checklist did not show any major observable defects. The results also show that the higher the temperature difference in the hot spot tests, the higher the rate of power degradation. Eleven modules failed the hot spot tests according to the criteria indicated in the literature. The average power degradation rate was 1.36%/year, which is above the industry-accepted range of 0.7–1.0%/year. The results provide evidence of a positive correlation between temperature difference and performance parameters such as power degradation (Pdeg), power performance factor (PPF) and power drop (Pdrop). The power performance factor for all 14 modules fell below the average 80% standard set by most manufacturers for modules operating within the 25-year warranty.",
"corpus_id": 3345222,
"title": "Correlation of Infrared Thermal Imaging Results with Visual Inspection and Current-Voltage Data of PV Modules Installed in Kumasi, a Hot, Humid Region of Sub-Saharan Africa"
} | {
"abstract": "Abstract The optical degradation induced by long-term (about 15 years) field exposure on c-Si photovoltaic modules belonging to the large-scale Delphos ENEA PV plant, located in Manfredonia (South of Italy), was investigated by making comparative reflectance measurements on the exposed modules, after their dismounting and cleaning, and on the original, unexposed counterparts. Four types of module fabrication technologies were analyzed: Helios single-Si, Pragma single-Si, Pragma multi-Si and Ansaldo multi-Si. Siemens multi-Si modules, of recent technology and exposed for 5 years, were taken as reference. The electrical loss measured for the single PV generators of the Delphos plant, each corresponding to a particular module technology, after a monitoring period of about 10 years, resulted to range between 11–22% for the output power and 9–14% for the output current. The aging effects on the dismounted and cleaned modules appeared as the discoloration of ARC layer, particularly at the center of the cells, and as the formation of stains distributed over the cell surface, likely due to the browning of the EVA. The spectral measurements of the total hemispherical reflectance, carried out under direct light at near-normal incidence, showed that the discoloration of ARC is associated to a decrease of the reflectance in the blue region (400–500 nm), and a resulting levelling of the spectral reflectance curves. The spectrally integrated measurements of reflectance carried out at diffuse white light, on the other hand, have provided evidence of an increase of the total hemispherical reflectance for exposed modules, particularly marked for the multi-Si modules, which correlates quite well with the extent of current loss measured on the single PV generators of Delphos plant.",
"corpus_id": 94130681,
"title": "Optical degradation of long-term, field-aged c-Si photovoltaic modules"
} | {
"abstract": "This paper characterizes and compares the degradation observed in thin-film module performance. Three commercially available thin-film modules comprising a-Si:H, a-Si:H/a-SiGe:H/a-SiGe:H and CuInSe2 technologies were used in this study. After an initial indoor assessment the modules were deployed outdoors and periodically taken down for indoor assessment. Results obtained indicate that the a-Si modules degraded by the classical Staebler–Wronski effect. The CuInSe2 module, though known to have long-term performance stability, also degraded in this study. The CuInSe2 module showed shunting behaviour before outdoor exposure. This shunting behaviour was enhanced when the module was deployed outdoors under open-circuit conditions. A comparison of the modules’ performances outdoors indicates that the low bandgap CuInSe2 material performs best at high air mass values. This paper emphasizes the importance of being able to analyze module degradation.",
"corpus_id": 108752509,
"score": 2,
"title": "Characterization of degradation in thin-film photovoltaic module performance parameters"
} |
{
"abstract": "A TRAPATT diode has been fabricated using a variation of silicon planar technology. Its design combines the advantages of the surface stability of the planar process with low parasitic capacitance usually associated only with mesa devices. Since shallow diffusions may be used, the device retains an excellent heat-dissipation capability.",
"corpus_id": 1857378,
"title": "An efficient passivated TRAPATT diode structure"
} | {
"abstract": "A Fairchild FD-300 diode has been made to operate in the TRAPATT mode of oscillation in an inexpensive circuit constructed of commercially available RF components. The best performance achieved was a pulsed output of 68 watts at 630 MHz with an efficiency of 12 percent. Since the diodes are inexpensive and readily available and the mount used was simple to construct, it is felt that this information will open this interesting area of research to those with limited budgets.",
"corpus_id": 62760273,
"title": "A poor man's TRAPATT oscillator"
} | {
"abstract": "A high-performance IMPATT diode test circuit has been developed which is very effective in reducing spurious oscillations. In this circuit, a 6-GHz germanium diode has been tested at 12.1-percent CW efficiency and 0.620-watt output.",
"corpus_id": 62549674,
"score": 2,
"title": "Circuit for testing high-efficiency IMPATT diodes"
} |
{
"abstract": "BALB/c mice were infected with Leishmania mexicana amazonensis and/or Plasmodium yoelii in order to determine the impact of multiple parasitic infection on the efficacy of chemotherapeutic agents. Uninfected, P. yoelii-infected, L.m. amazonensis-infected, and L.m. amazonensis and P. yoelii-infected mice were inoculated with cimetidine (80 mg kg-1 day-1) or pentostam (200 mg kg-1 day-1) once a day for an initial 20-day period, and once a week thereafter. Leishmania mexicana amazonensis lesion development and P. yoelii parasitaemia were the criteria used to assay disease severity. Mice infected with both P. yoelii and L.m. amazonensis developed more severe disease than did animals infected with either parasite alone. Cimetidine and pentostam each slowed the development of L.m. amazonensis in animals infected with only that parasite and in animals infected with both P. yoelli and L.m. amazonensis. However, mice treated with pentostam developed more severe P. yoelii infections than did control animals, whereas cimetidine significantly reduced P. yoelii parasitaemia in all instances.",
"corpus_id": 661453,
"title": "The effect of pentostam and cimetidine on the development of leishmaniasis (Leishmania mexicana amazonensis) and concomitant malaria (Plasmodium yoelii)."
} | {
"abstract": "Summary Human natural killer (NK) cell cytotoxicity against K‐562 target was found to be markedly depressed by quinacrine, mefloquine, pyrimethamine and chloroquine. The most potent were quinacrine and mefloquine. Two other drugs tested, quinine and primaquine, displayed no significant effects even after long incubation periods with effector cells. The results showed that the inhibitory effects were not related to an inhibition of effector cell binding to target or to a toxic effect on NK cells.",
"corpus_id": 29444833,
"title": "The effect of anti‐malarial drugs on human natural killer cells in vitro"
} | {
"abstract": "The lack of detectable tumor-specific cytotoxicity by the peripheral blood lymphocytes of patients with cancer may be due to a lack of cytotoxic lymphocytes or the presence of suppressor lymphocytes that inhibit cytotoxic cells. Unfractionated peripheral blood lymphocytes from 12 of 28 patients with osteogenic sarcoma were cytotoxic to osteogenic sarcoma cells in vitro (P less than 0,001). When the peripheral blood lymphocytes from patients whose lymphocytes were not cytotoxic underwent fractionation, a tumor-specific cytotoxic subpopulation was isolated from 11 of 13 patients (P less than 0.0001). Lymphocytes that inhibited cytotoxic activity of autologous tumor-specific cytotoxic lymphocytes were found in four of 10 patients with osteogenic sarcoma but not in six normal controls. Inhibitor lymphocytes form rosettes with sheep erythrocytes and adhere to nylon, whereas cytotoxic lymphocytes have a receptor for C3 but no surface immunoglobulin. The lack of tumor-specific lymphocytotoxicity in some patients can be due to inhibitor lymphocytes.",
"corpus_id": 34622439,
"score": 2,
"title": "Concomitant presence of tumor-specific cytotoxic and inhibitor lymphocytes in patients with osteogenic sarcoma."
} |
{
"abstract": "Advances in the use of noninvasive neuroimaging to study the neural correlates of pathological and non-pathological anxiety have shone new light on the underlying neural bases for both the development and manifestation of anxiety. This review summarizes the most commonly observed neural substrates of the phenotype of anxiety. We focus on the neuroimaging paradigms that have shown promise in exposing this relevant brain circuitry. In this way, we offer a broad overview of how anxiety is studied in the neuroimaging laboratory and the key findings that offer promise for future research and a clearer understanding of anxiety.",
"corpus_id": 17586418,
"title": "Neuroimaging and Anxiety: the Neural Substrates of Pathological and Non-pathological Anxiety"
} | {
"abstract": "Successful control of affect partly depends on the capacity to modulate negative emotional responses through the use of cognitive strategies (i.e., reappraisal). Recent studies suggest the involvement of frontal cortical regions in the modulation of amygdala reactivity and the mediation of effective emotion regulation. However, within-subject inter-regional connectivity between amygdala and prefrontal cortex in the context of affect regulation is unknown. Here, using psychophysiological interaction analyses of functional magnetic resonance imaging data, we show that activity in specific areas of the frontal cortex (dorsolateral, dorsal medial, anterior cingulate, orbital) covaries with amygdala activity and that this functional connectivity is dependent on the reappraisal task. Moreover, strength of amygdala coupling with orbitofrontal cortex and dorsal medial prefrontal cortex predicts the extent of attenuation of negative affect following reappraisal. These findings highlight the importance of functional connectivity within limbic-frontal circuitry during emotion regulation.",
"corpus_id": 18335240,
"title": "Amygdala–frontal connectivity during emotion regulation"
} | {
"abstract": "We designed and deployed automatic alt-text (AAT), a system that applies computer vision technology to identify faces, objects, and themes from photos to generate photo alt-text for screen reader users on Facebook. We designed our system through iterations of prototyping and in-lab user studies. Our lab test participants had a positive reaction to our system and an enhanced experience with Facebook photos. We also evaluated our system through a two-week field study as part of the Facebook iOS app for 9K VoiceOver users. We randomly assigned them into control and test groups and collected two weeks of activity data and their survey feedback. The test group reported that photos on Facebook were easier to interpret and more engaging, and found Facebook more useful in general. Our system demonstrates that artificial intelligence can be used to enhance the experience for visually impaired users on social networking sites (SNSs), while also revealing the challenges with designing automated assistive technology in a SNS context.",
"corpus_id": 10857293,
"score": -1,
"title": "Automatic Alt-text: Computer-generated Image Descriptions for Blind Users on a Social Network Service"
} |
{
"abstract": "Understanding factors mediating hybridization between native and invasive species is crucial for conservation. We assessed the spatial distribution of hybridization between invasive rainbow trout (...",
"corpus_id": 218929812,
"title": "Abiotic conditions are unlikely to mediate hybridization between invasive rainbow trout and native Yellowstone cutthroat trout in a high-elevation metapopulation"
} | {
"abstract": "Hybridization between rainbow trout (Oncorhynchus mykiss (Walbaum, 1792)) and westslope cutthroat trout (Oncorhynchus clarkii lewisi (Girard, 1856)) occurs commonly when rainbow trout are introduced into the range of westslope cutthroat trout. Typically, hybridization is most common in warmer, lower elevation habitats, but much less common in colder, higher elevation habitats. We assessed the tolerance to cold water temperature (i.e., critical thermal minimum, CTMin) in juvenile rainbow trout and westslope cutthroat trout to test the hypothesis that westslope cutthroat trout better tolerate low water temperature, which may explain the lower prevalence of rainbow trout and interspecific hybrids in higher elevation, cold-water habitats (i.e., the \"elevation refuge hypothesis\"). All fish had significantly lower CTMin values (i.e., were better able to tolerate low temperatures) when they were acclimated to 15 °C (mean CTMin = 1.37 °C) versus 18 °C (mean CTMin = 1.91 °C; p < 0.001). Westslope cutthroat trout tended to have lower CTMin than rainbow trout from two populations, second-generation (F2) hybrids between two rainbow trout populations, and backcrossed rainbow trout at 15 °C (cross type × acclimation temperature interaction; p = 0.018). Differential adaptation to cold water temperatures may play a role in influencing the spatial distribution of hybridization between sympatric species of trout.",
"corpus_id": 4504372,
"title": "Cold tolerance performance of westslope cutthroat trout (Oncorhynchus clarkii lewisi) and rainbow trout (Oncorhynchus mykiss) and its potential role in influencing interspecific hybridization"
} | {
"abstract": "Supportive breeding and stocking performed with non‐native or domesticated fish to support sport fishery industry is a common practice throughout the world. Such practices are likely to modify the genetic integrity of natural populations depending on the extent of genetic differences between domesticated and wild fish and on the intensity of stocking. The purpose of this study is to assess the effects of variable stocking intensities on patterns of genetic diversity and population differentiation among nearly 2000 brook charr (Salvelinus fontinalis) from 24 lakes located in two wildlife reserves in Québec, Canada. Our results indicated that the level of genetic diversity was increased in more intensively stocked lakes, mainly due to the introduction of new alleles of domestic origin. As a consequence, the population genetic structure was strongly homogenized by intense stocking. Heavily stocked lakes presented higher admixture levels and lower levels of among lakes genetic differentiation than moderately and un‐stocked lakes. Moreover, the number of stocking events explained the observed pattern of population genetic structure as much as hydrographical connections among lakes in each reserve. We discuss the implications for the conservation of exploited fish populations and the management of stocking practices.",
"corpus_id": 28871653,
"score": -1,
"title": "Loss of genetic integrity correlates with stocking intensity in brook charr (Salvelinus fontinalis)"
} |
{
"abstract": "Abstract The objective of managing the lands in a watershed to maintain or enhance a dependable water yield of low salinity differs fundamentally from that of enhancing agricultural production in situ. The challenge is to devise strategies compatible with both. Vegetative management to increase evapotranspiration reduces salt emissions; it also reduces water yield and, if achieved by forestation, agricultural production. However, U.S. experience indicates that crop selection to increase water use in recharge areas is an effective practice to ameliorate downslope saline seeps. It appears the physico-chemical principles that control salt and water flow through geologic systems, and the effects of vegetation thereon, are well established. This is true, at least, for systems where the predominant salt is NaCl derived from deposition in rainfall. The mathematical tools to make use of these principles are also adequate. The data base, however, frequently is not sufficient to describe the system, nor is our ability to make the necessary field measurements at a reasonable cost. Aside from economic considerations, potential solutions for dryland salinity problems must be related to the specific site conditions. They may include interception drainage, drainage of water from perched water tables, reduction of hydrostatic pressure in artesian systems, as well as soil and crop management systems. The viability of these (or other) solutions can only be assessed after adequate delineation of the site conditions, including identification of the recharge area, description of the subsurface conditions with evaluation of the hydraulic properties of the aquifer materials traversed by the flux, and sufficient information to derive the flow paths. In addition, the time dependence of the flow system must be considered. 
Whereas flow problems have most often been solved in terms of potential distributions, it will be helpful to pay more explicit attention to velocity fields and transit times. Examples of specific situations, real or imagined, will be used to illustrate the points made above. A parallel will be drawn with similar problems under irrigated agriculture.",
"corpus_id": 154336574,
"title": "Dryland management for salinity control"
} | {
"abstract": "Abstract The present level of salinity in the Shapur and Dalaki river basin (southern Iran) is hardly influenced by human activities and may be denoted as “natural” salinity. This paper aims to describe the engineering measures for the salinity control of the river water in this basin. Among possible salt disposal measures, collection and evaporation of polluted sources in ponds is the most practicable and feasible one. However, greater benefits can be gained by implementation of salt mitigation measures. The model dyresm was used to simulate the salinity distribution in the planned Jarreh reservoir. Results of the simulation indicate that the Jarreh storage reservoir can regulate and reduce the salt concentration of the irrigation water to a range between 1500 and 2400 mg l −1 compared with between 1000 and 4200 mg l −1 for the original river salinity. Furthermore, the diversion of the most saline inflow in summer also decreases salinity.",
"corpus_id": 154264614,
"title": "A regional approach to salinity management in river basins. A case study in southern Iran"
} | {
"abstract": "Abstract. We describe an implementation of the Ecosystem Demography (ED) concept in the Community Land Model. The structure of CLM(ED) and the physiological and structural modifications applied to the CLM are presented. A major motivation of this development is to allow the prediction of biome boundaries directly from plant physiological traits via their competitive interactions. Here we investigate the performance of the model for an example biome boundary in eastern North America. We explore the sensitivity of the predicted biome boundaries and ecosystem properties to the variation of leaf properties using the parameter space defined by the GLOPNET global leaf trait database. Furthermore, we investigate the impact of four sequential alterations to the structural assumptions in the model governing the relative carbon economy of deciduous and evergreen plants. The default assumption is that the costs and benefits of deciduous vs. evergreen leaf strategies, in terms of carbon assimilation and expenditure, can reproduce the geographical structure of biome boundaries and ecosystem functioning. We find some support for this assumption, but only under particular combinations of model traits and structural assumptions. Many questions remain regarding the preferred methods for deployment of plant trait information in land surface models. In some cases, plant traits might best be closely linked to each other, but we also find support for direct linkages to environmental conditions. We advocate intensified study of the costs and benefits of plant life history strategies in different environments and the increased use of parametric and structural ensembles in the development and analysis of complex vegetation models.",
"corpus_id": 15251937,
"score": 1,
"title": "Taking off the training wheels: the properties of a dynamic vegetation model without climate envelopes, CLM4.5(ED)"
} |
{
"abstract": "This study investigates changes in productivity of general insurance firms in Malaysia for the period from 2008 to 2011. Moreover, this study examines the impact of intellectual capital on changes in productivity. In the first stage, this study applies the Malmquist productivity index (MPI) of data envelopment analysis (DEA) and the MPI with bootstrapping approach to evaluate changes in productivity. In the second stage, this study examines the impact of intellectual capital on changes in productivity through OLS and Tobit regressions. Our MPI findings indicate that all but one sample firms experienced growth in productivity over the sample period. Moreover, the use of the MPI with bootstrapping approach provides an effective analysis of MPI estimates. Our regression analysis reveals that VAIC™ and its individual components have significantly positive impacts on changes in productivity. We suggest that general insurers in Malaysia should invest in intellectual capital, including to improve their managerial skills, to gain sustainable growth in productivity. Our findings corroborate the initiative carried out by the Malaysian government, which continuously emphasize the importance of IC. The findings of this study may lead to a better understanding of the relative changes in total productivity of general insurance firms. By identifying changes in efficiency and changes in technology, better management decisions can be made to achieve greater productivity. Moreover, through the bootstrap estimation, we are able to determine whether estimated increases or decreases are statistically significant.",
"corpus_id": 153783249,
"title": "Intellectual capital and productivity of Malaysian general insurers"
} | {
"abstract": "Purpose – This paper aims to investigate the effect of intellectual capital (IC) on the operating efficiency of non-life insurance firms in China. Design/methodology/approach – The authors use a dynamic data envelopment analysis model called dynamic slacks-based measure (DSBM) model to estimate the operating efficiency of 32 Chinese non-life insurance firms. Using a panel data set for the period from 2006 to 2010, the authors run ordinary least squares (OLS) regressions to find the relationship between IC and efficiency performance. Findings – The authors find that the insurers have almost monotonically decreasing efficiency for the period from 2006 to 2010. Regression results show that human capital, structural capital and relational capital are significantly and positively related to operating efficiency. Research limitations/implications – This study suggests that managers of the Chinese non-life insurers should devote attention to the investments in IC to stay sustainable. Originality/value – This is ...",
"corpus_id": 35393156,
"title": "Dynamic efficiency: intellectual capital in the Chinese non-life insurance firms"
} | {
"abstract": "This article explores employee attitudes towards trade union membership in the post-communist Baltic countries of Estonia, Latvia and Lithuania. It reports on a comparative empirical social survey of attitudes towards representation. We suggest that in addition to those employees who are union members and those who fall within an identifiable ‘representation gap’, there is a sizeable group of ‘undecided’ employees who could be persuaded to join trade unions, if they could see the relevance of collective representation. We argue that this relatively large group could be specific to the Central and East European countries, and employees who fall within the commonly understood representation gap in other countries can be found within this undecided group in Baltic countries. Trade unions therefore face a considerable challenge in proving their relevance to such employees, a problem that has wider resonances in a European context but may be more difficult to resolve in the Central and East European countries.",
"corpus_id": 154664852,
"score": 1,
"title": "The Paradox of Post-Communist Trade Unionism: ‘You Can't Want What You Can't Imagine’"
} |
{
"abstract": "Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present some undesirable properties such as the overestimation of variances at the edges of lactation. Describing genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied for genetic model for production traits; 2) linear splines with 10 knots; and 3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and log-likelihood function. These criteria indentified Legendre polynomials and linear splines with 10 knots reduced to 3 parameters models as the most useful. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time demanding and can reduce convergence difficulties, because convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomials model.",
"corpus_id": 8061294,
"title": "Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium."
} | {
"abstract": "Daily milk weights from 1006 lactations on 775 Holstein-Friesian cows in 42 herds and monthly test-day weights from 102 540 lactations on 73 717 cows in 17 481 herd-year-seasons were used to study the influence of covariances among milk weighings within a lactation on three models for describing the shape of the lactation curve for individual cows. The models included a gamma function, an inverse quadratic polynomial function, and a regression model of yields on day in lactation (linear and quadratic) and on log of 305 divided by day in lactation (linear and quadratic). For each model, several variance-covariance matrices of the observation vector were used. Models were compared on the basis of squared deviations of predicted versus actual milk weights and on the correlation between predicted and actual weights through the lactation averaged over cows. Better predictions were observed when covariances among test-day yields were ignored while models could be ranked regression model, gamma function, and inv...",
"corpus_id": 85852165,
"title": "ACCOUNTING FOR COVARIANCES AMONG TEST DAY MILK YIELDS IN DAIRY COWS"
} | {
"abstract": "Abstract Test-day (TD) milk yields from Spanish Holstein cows were analysed in three independent data sets (35 615, 35 209 and 27 272 TD records, respectively) with a set of random regression models. Wilmink and Ali-Schaeffer lactation functions and, Legendre polynomials (RRL) of varying order (up to six coefficients) on additive genetic (AG) and permanent environmental (PE) effects were used. The analysis of the eigenvalues and eigenvectors of the AG and PE random regression (co)variance matrices revealed the possibility of reducing the dimension of the RRL submodels, particularly for the AG effects. Lactational submodels provided the largest daily AG variance estimates at the onset of lactation, as well as low or even negative genetic correlations between peripheral TD. Polynomials of higher order (four or above) showed oscillatory patterns with larger variances and lower genetic correlations predicted for the extremes of lactation. Model performance was assessed using a broad range of criteria. The results showed a strong consistency among data sets in terms of models ranking. Lactational models showed a worse performance than the RRL models with the same number of parameters. For RRL models, all criteria except the Bayesian information criterion, favoured the most complex model. This criterion selected a model with 2–3 coefficients for the AG effects and 5–6 coefficients for the PE effects.",
"corpus_id": 121091258,
"score": 2,
"title": "Comparing alternative random regression models to analyse first lactation daily milk yield data in Holstein–Friesian cattle"
} |
{
"abstract": "This paper is concerned with finite element methods of least-squares type for the approximate numerical solution of incompressible, viscous flow problems. Our main focus is on issues that are critical for the success of the finite element methods, such as decomposition of the Navier-Stokes equations into equivalent first-order systems, mathematical prerequisites for the optimality of the methods, and the use of mesh-dependent norms. In conclusion we present a novel application of least-squares principles involving an optimal boundary control problem for fluid flows.",
"corpus_id": 1021311,
"title": "LEAST SQUARES FINITE ELEMENT METHODS FOR VISCOUS , INCOMPRESSIBLE FLOWS"
} | {
"abstract": "In this paper, we develop and analyze mixed finite element methods for the Stokes and Navier-Stokes equations. Our mixed method is based on the pseudostress-pressure-velocity formulation. The pseudostress is approximated by the Raviart-Thomas, Brezzi-Douglas-Marini, or Brezzi-DouglasFortin-Marini elements, the pressure and the velocity by piecewise discontinuous polynomials of appropriate degree. It is shown that these sets of finite elements are stable and yield optimal accuracy for the Stokes problem. For the pseudostress-pressure-velocity formulation of the stationary Navier-Stokes equations, the well-posedness and error estimation results are established. By eliminating the pseudostress variables in the resulting algebraic system, we obtain cell-centered finite volume schemes for the velocity and pressure variables that preserve local balance of momentum.",
"corpus_id": 1255689,
"title": "Mixed methods for stationary Navier-Stokes equations based on pseudostress-pressure-velocity formulation"
} | {
"abstract": "An optimized artificial immune network-based classification model, namely OPTINC, was developed for remote sensing-based land use/land cover (LULC) classification. Major improvements of OPTINC compared to a typical immune network-based classification model (aiNet) include (1) preservation of the best antibodies of each land cover class from the antibody population suppression, which ensures that each land cover class is represented by at least one antibody; (2) mutation rates being self-adaptive according to the model performance between training generations, which improves the model convergence; and (3) incorporation of both Euclidean distance and spectral angle mapping distance to measure affinity between two feature vectors using a genetic algorithm-based optimization, which helps the model to better discriminate LULC classes with similar characteristics. OPTINC was evaluated using two sites with different remote sensing data: a residential area in Denver, CO with high-spatial resolution QuickBird image and LiDAR data, and a suburban area in Monticello, UT with HyMap hyperspectral imagery. A decision tree, a multilayer feed-forward back-propagation neural network, and aiNet were also tested for comparison. Classification accuracy, local homogeneity of classified images, and model sensitivity to training sample size were examined. OPTINC outperformed the other models with higher accuracy and more spatially cohesive land cover classes with limited salt-and-pepper noise. OPTINC was relatively less sensitive to training sample size than the neural network, followed by the decision tree.",
"corpus_id": 129803305,
"score": 1,
"title": "An artificial immune network approach to multi-sensor land use/land cover classification"
} |
{
"abstract": ": Stroke prevention with oral anticoagulants in patients with atrial fi brillation predisposes for bleeding. As a result, in select patient groups anticoagulation is withheld because of a perceived unfavorable risk-bene fi t ratio. Reasons for withholding anticoagulation can vary greatly between clinicians, often leading to discussion in daily clinical practice on the best approach. To guide clinical decision-making, we have reviewed available evidence on the most frequently reported reasons for withholding anticoagulation: previous bleeding, frailty and age, and an overall high bleeding risk. general contraindications OAC should generally be withheld in patients Feasibility of OAC treatment in terms of medication adherence should always be and monitored.",
"corpus_id": 261307361,
"title": "When to withhold oral anticoagulation in atrial fi brillation – an overview of frequent clinical discussion topics"
} | {
"abstract": null,
"corpus_id": 3152700,
"title": "Apixaban versus Antiplatelet drugs or no antithrombotic drugs after anticoagulation-associated intraCerebral HaEmorrhage in patients with Atrial Fibrillation (APACHE-AF): study protocol for a randomised controlled trial"
} | {
"abstract": "The Task Force for the management of atrial fibrillation of the European Society of Cardiology (ESC) \n\nDeveloped with the special contribution of the European Heart Rhythm Association (EHRA) of the ESC \n\nEndorsed by the European Stroke Organisation (ESO)",
"corpus_id": 39558530,
"score": -1,
"title": "2016 ESC Guidelines for the management of atrial fibrillation developed in collaboration with EACTS."
} |
{
"abstract": "The UK's innovative and productive performance remains a subject of considerable concern, not least because of its increasing productivity gap, but also because of concerns relating to manufacturing's reliance on gaining process efficiencies. Resolution of this position has been argued to involve a move up the value chain, through such means as New Product Development (NPD). Many examples of 'good' NPD practice exist within the literature. However, these examples and the associated determinants of NPD success principally focus on large organisations. In this paper, SMEs are argued as being different in their approach to developing new products, specifically focusing on their journey to create a NPD capability. This exploratory research incorporates two detailed case studies on UK based manufacturing SMEs in order to contribute to new knowledge and understanding, by identifying the key enablers to create a NPD capability.",
"corpus_id": 1804784,
"title": "Creating a New Product Development capability: the organisational enablers for moving up the value chain"
} | {
"abstract": "This paper describes the construction of a large panel data set covering about 2600 firms in the U.S. manufacturing sector for up to twenty years which contains annual data on financial variables, employment, research and development expenditures, and aggregate patent applications. This data set is to be used in a larger study of R&D, inventive output and technological change. In the present paper we present preliminary results on the R&D and patenting behavior of the 1976 cross section of these firms. We find an elasticity of R&D with respect to sales of close to unity, with both very small and very large firms being slightly more R&D intensive than average. Because only 60% of the firms report R&D expenditures, we attempt to correct for selectivity bias and find that though the correction is small, it increases the estimated complementarity between capital intensity and R&D intensity. In exploring the relationship of the patenting activity of these firms to their contemporaneous R&D expenditures, we look with some care at the choice of econometric specifications since the discrete nature of the patents variable for our smaller firms may cause difficulties with the conventional log linear model. The choice of specification does indeed make a difference, and the negative binomial model, which is a Poisson-type model with a disturbance, is preferred. Substantively, we find a much larger output of patents per R&D dollar for the small firms, with a decreasing propensity to patent with size of R&D programs throughout the sample. However, this conclusion is highly tentative both because of its sensitivity to specification and choice of sample and also because we expect that errors in variables bias due to our focus on R&D and patent applications in a single year is far worse for the small firms.",
"corpus_id": 153009543,
"title": "Who Does R&D and Who Patents?"
} | {
"abstract": "The current study employs the hedonic paradigm model (Hirschman & Holbrook, 1982) to investigate the interceding function of emotions on the relationship between personality (i.e., risk taking) and attitude toward mixed martial arts. This study also examines sport-media (e.g., television) consumption of a nontraditional sport. Structural equation modeling was used to examine the proposed model incorporating risk taking, pleasure, arousal, attitude, and actual consumption behavior. The study found a significant mediation effect of emotion (pleasure and arousal) in the relationship between risk taking and attitude. In addition, attitude showed a direct and significant influence on actual media-consumption behavior. Theoretical and practical implications of the results are discussed, along with future directions for research.",
"corpus_id": 52832382,
"score": 1,
"title": "Examining Television Consumers of Mixed Martial Arts: The Relationship Among Risk Taking, Emotion, Attitude, and Actual Sport-Media-Consumption Behavior"
} |
{
"abstract": "This paper presents a novel approach to background subtraction which aims to extract moving objects in video stream. To this end, a novel background model is proposed by using both working backgrounds and candidate backgrounds, which can be transferred to each other according to an adaptive mechanism. The input image (video frame) is compared and evaluated with these dual-class backgrounds (DCB) to detect foreground objects. Furthermore, for robust background modeling a novel background updating scheme is proposed based on the life-value which represents the existing time of a background sample, and the access-time which represents the number of valid visits of a background sample. Experiments on a standard dataset demonstrated the effectiveness and robustness of the proposed approach by comparing it with the previous typical background subtraction techniques.",
"corpus_id": 3648798,
"title": "Background subtraction using dual-class backgrounds"
} | {
"abstract": "Many applications of computer vision, motion captures nowadays are an active research field. Supported by camera innovation in high definition technology and high-speed processing unit technology make higher degree on object detection standard. We can see it from the increasing number of new methods that have improvement in accuracy. In automatic vehicle surveillance area, Spatial Mixture Gaussian model becomes well-known moving based object detection via background subtraction technique in this decades. This method models particular pixel as mixture of Gaussians distribution with regard to pixel's higher probability of occurrences and variance of each Gaussians in the mixture model. Although, this model has threshold to control the sensitivity of object's motion, it has problem with separating an object from its shadow. This is happening because the shadow attaches to the object. Since they always move in tandem, as the result, detected object area will merge and shadow and object will form into a single unity that is difficult to separate. In accordance with detection, occluded object because of a shadow will decrease detector's accuracy. Therefore, we need to remove shadow, in order to maintain detector's quality of accuracy. Challenge in doing so is there is exist dynamic illumination condition which resulting a nonuniform shadow pixel value. This can cause failure of threshold-based linear shadow casting technique. To solve above-mentioned problem, we need a shadow filter that can adapt to the illumination changes. In this experiment, we have successfully implemented an adaptive shadow filter based on DSD algorithm to improve background subtraction method. Our proposed method has a stable result in outdoor environment dataset and it is proven to be able applied to traffic surveillance video application.",
"corpus_id": 24192584,
"title": "Background subtraction using spatial mixture of Gaussian model with dynamic shadow filtering"
} | {
"abstract": "Dislocations have been found to extend for considerable distances outside of planar diffused structures in silicon and to affect the electrical properties of the diffused junctions. The mechanism of dislocation propagation outside of phosphorus‐diffused structures has been studied by x‐ray diffraction microscopy and other techniques. It is shown that these dislocations are propagated through an anomalously large compressive stress that results from large strains in some high‐concentration phosphorus‐diffused structures. These strains cannot be attributed to the residual effects of substitutional phosphorus atomic mismatch with the silicon lattice. The anomalous stress and dislocations usually appear after an oxidizing diffusion or drive‐in cycle at temperatures less than 1150°C. Also, the dislocations are much less likely to occur in (100) or (110) oriented surf aces as opposed to (111) surfaces.",
"corpus_id": 98614472,
"score": 0,
"title": "Strain Effects Around Planar Diffused Structures"
} |
{
"abstract": "The molecular size distribution and biochemical composition of the dissolved organic carbon released from natural communities of lake phytoplankton (photosynthetically produced dissolved organic carbon [PDOC]) and subsequently used by heterotrophic bacteria were determined in three lakes differing in trophic status and concentration of humic substances. After incubation of epilimnetic lake water samples with H14CO3- over one diel cycle, the phytoplankton were removed by size-selective filtration. The filtrates, still containing most of the heterotrophic bacteria, were reincubated in darkness (heterotrophic incubation). Differences in the amount and composition of PDO14C between samples collected before the heterotrophic incubation and samples collected afterwards were considered to be a result of bacterial utilization. The PDO14C collected at the start of the heterotrophic incubations always contained both high (>10,000)- and low (<1,000)-molecular-weight (MW) components and sometimes contained intermediate-MW components as well. In general, bacterial turnover rates of the low-MW components were fairly rapid, whereas the high-MW components were utilized slowly or not at all. In the humic lake, the intermediate-MW components accounted for a large proportion of the net PDO14C and were subject to rapid bacterial utilization. This fraction probably consisted almost entirely of polysaccharides of ca. 6,000 MW. Amino acids and peptides, other organic acids, and carbohydrates could all be quantitatively important parts of the low-MW PDO14C that was utilized by the heterotrophic bacteria, but the relative contributions of these fractions differed widely. 
It was concluded that, generally, low-MW components of PDOC are quantitatively much more important to the bacteria than are high-MW components, that PDOC released from phytoplankton does not contain substances of quantitative importance as bacterial substrates in all situations, and that high-MW components of PDOC probably contribute to the buildup of refractory, high-MW dissolved organic carbon in pelagic environments.",
"corpus_id": 2985857,
"title": "Biochemical Composition of Dissolved Organic Carbon Derived from Phytoplankton and Used by Heterotrophic Bacteria"
} | {
"abstract": "Abstract The upper few millimeters of intertidal sediment supports a varied biomass of microbial consortia and microphytobenthos. Many of these organisms release extracellular polymers into the surrounding sediment matrix that can result in sediment cohesion and the increased stability of the sediment. The relationship between the heterotrophic and autotrophic components of these biofilms is not well understood. A combination of mesocosm and field investigations were used to investigate the relationship between microbial production rate (algae and bacteria), the extracellular carbohydrates, biomass, and stability in conjunction with a variety of environmental factors. An inverse relationship was found between rates of algal production and sediment stability both in the field and in laboratory mesocosms, though the relationship was significant only in the field (P < 0.001). Stability of sediments increased with increasing bacterial production rate (P < 0.001). Positive correlations were found between sediment stability and a range of other variables, including algal biomass (P < 0.001), colloidal-S EPS (P < 0.001), colloidal-S carbohydrate (P < 0.01), colloidal-S EDTA (P < 0.01), and sediment water content (P < 0.001). Using the data acquired, a preliminary model was developed to predict changes in sediment stability. Chlorophyll a, water content, and colloidal-S EPS were found to be the most important predictors of stability in intact cores incubated under laboratory conditions. Differences observed in patterns of the surface (0–2 mm) distribution of colloidal-S carbohydrate and chlorophyll a when expressed on a dry weight or areal basis were attributed to effects of dewatering and concomitant changes in wet bulk density. The polymeric carbohydrate (colloidal-S EPS) component of the biofilms was not found to be a constant fraction of the colloidal-S carbohydrate extract, varying from 16 to 58%, and the percentage of polymer decreased logarithmically as chlorophyll a concentrations increased and the biofilms matured (P < 0.001). Changes in the relationships between these variables over the period of biofilm development and maturation highlight the difficulties in their use to predict sediment stability. Exopolymer concentrations were more closely correlated with algal biomass than with bacterial numbers. Rates of algal carbon fixation were considerably greater than those for bacteria, suggesting that the algae have a much greater potential for exopolymer production. It is suggested that the microphytobenthos secretions make a more important contribution to sediment stability.",
"corpus_id": 3039558,
"title": "Interrelationships between Rates of Microbial Production, Exopolymer Production, Microbial Biomass, and Sediment Stability in Biofilms of Intertidal Sediments"
} | {
"abstract": "Viable bacteria were recovered from estuarine waters passed through a 0.2-μm polycarbonate membrane filter. The recovery method included the use of a dilute nutrient broth for primary enrichment followed by conditioning of the organism to a dilute nutrient solid medium. These bacteria were gram-negative rods and coccobacilli having an NaCl requirement and, upon initial culturing, low nutritional requirements. In response to increased nutrient preparations, these microorganisms underwent an increase in size and growth rate, giving rise to visible colonies. Phenotypic characterization suggests that species of Vibrio, Aeromonas, Pseudomonas, and Alcaligenes were among the isolates. The abundance and the nutritional requirements of these ultramicrobacteria imply that they represent a class of microorganisms which have successfully adjusted to poor nutrient conditions.",
"corpus_id": 34840436,
"score": 2,
"title": "Isolation and Characterization of Ultramicrobacteria from a Gulf Coast Estuary"
} |
{
"abstract": "In this article, we derive an approximate asymptotic analytical expression for the long-time chronoamperometric current response at an inlaid microband (or laminar) electrode. The expression is applicable when the length of the microband is much greater than the width, so that the diffusion of the electrochemical species can be regarded as two-dimensional. We extend the previously known result for the diffusion-limited current response (Aoki, K. et al. J. Electroanal. Chem. 1987, 225, 19–32 and Phillips, C.G. J. Electroanal. Chem. 1992, 333, 11–32) to accommodate quasi-reversible reactions and unequal diffusion coefficients of the oxidant and the reductant. Comparison with numerical calculations validates the analytical expression, and we demonstrate that unequal diffusion coefficients can substantially change the current response. Finally, we discuss the form of the long-time current response for a one-step, one-electron redox reaction if the rate constants are modelled in the Butler–Volmer framework, and indicate the importance of choosing the width of the microband appropriately to allow accurate experimental determination of the standard kinetic rate constant and the electron transfer coefficient.",
"corpus_id": 174907,
"title": "The Long-Time Chronoamperometric Current at an Inlaid Microband (or Laminar) Electrode"
} | {
"abstract": "In the first paper in this series we derived an exponentially expanding mesh designed specifically to give a fast, efficient solution to the problem of simulation of diffusion processes at microdisc electrodes to a pre-determined level of accuracy. In this paper we make use of this mesh to consider the problem of linear sweep voltammetry and show that for the simulated values of the peak current for a fully reversible reaction we can obtain agreement with previous analytic results to within 0.25% at all parameter values of interest. We go on to consider irreversible and quasi-reversible systems, and demonstrate good qualitative agreement with previously described numerical and analytical results. We again make use of the FIRM and the ADI finite difference methods, and discuss when each of these methods is most appropriate.",
"corpus_id": 94650409,
"title": "An exponentially expanding mesh ideally suited to the fast and efficient simulation of diffusion processes at microdisc electrodes. 3. Application to voltammetry"
} | {
"abstract": "The application of fast-scan cyclic voltammetry methods to the high-speed microband channel electrode (Rees et al. J. Phys. Chem. 99 (1995) 7096) is reported. Theory is presented to simulate cyclic voltammograms for a simple electron transfer under high convective flow rates within the high-speed channel. Experiments are reported for the oxidation of 9,10-diphenylanthracene (DPA) in acetonitrile solution containing 0.10 M tetrabutylammonium perchlorate (TBAP) for both 12.5 and 40 μm platinum microband electrodes using a range of scan rates from 50 to 3000 V s−1 and centre-line flow velocities from 12 to 25 m s−1. Analysis of the voltammograms yielded values for k0 and α for DPA which were measured to be 0.80±0.27 and 0.52±0.07 cm s−1, respectively. The range of applicability of this method was also investigated. Experiments are also presented using steady-state linear sweep voltammetry to obtain accurate measurements of the heterogeneous kinetic parameters for DPA at a platinum microband electrode. The measured value of k0 for DPA was found to be 0.94±0.16 cm s−1, with α=0.53±0.02 and a formal oxidation potential of 1.40±0.01 V (vs. Ag).",
"corpus_id": 93224396,
"score": 2,
"title": "The application of fast scan cyclic voltammetry to the high speed channel electrode"
} |
{
"abstract": "Sign language is used as a communication medium among deaf and dumb people to convey the message with each other. A person who can talk and hear properly (normal person) cannot communicate with deaf and dumb person unless he/she is familiar with sign language. Same case is applicable when a deaf and dumb person wants to communicate with a normal person or blind person. In order to bridge the gap in communication among deaf and dumb community and normal community, lot of research work has been carried out to automate the process of sign language interpretation with the help of image processing and pattern recognition techniques. This paper proposes optimized approaches of implementing the famous Viola Jones algorithm with LBP features for hand gesture recognition which will recognize Indian sign language gestures in a real time environment. The performance analysis of the proposed approaches is presented along with the experimental results. An optimized algorithm has been implemented in the form of an android application and tested with real time data.",
"corpus_id": 29032700,
"title": "Indian Sign Language Interpreter with Android Implementation"
} | {
"abstract": "Natural communication between humans involves hand gestures, which has an impact on research in human-robot interaction. In a real-world scenario, understanding human gestures by a robot is hard due to several challenges like hand segmentation. To recognize hand postures this paper proposes a novel convolutional implementation. The model is able to recognize hand postures recorded by a robot camera in real-time, in a real-world application scenario. The proposed model was also evaluated with a benchmark database and showed better results than the ones reported in the benchmark paper.",
"corpus_id": 15116076,
"title": "A Multichannel Convolutional Neural Network for Hand Posture Recognition"
} | {
"abstract": null,
"corpus_id": 2647599,
"score": -1,
"title": "Key Factors Affecting User Experience of Mobile Crowdsourcing Applications"
} |
{
"abstract": "During 2009, a total of 10,844 laboratory-confirmed cases of pandemic (H1N1) 2009 were reported in Beijing, People’s Republic of China. However, because most cases were not confirmed through laboratory testing, the true number is unknown. Using a multiplier model, we estimated that ≈1.46–2.30 million pandemic (H1N1) 2009 infections occurred.",
"corpus_id": 1127812,
"title": "Estimates of the True Number of Cases of Pandemic (H1N1) 2009, Beijing, China"
} | {
"abstract": "A major concern about the emergence of the novel strain of influenza A/H1N1 is the severity of illness it causes. Tini Garske and colleagues propose methods to obtain accurate estimates of the case fatality ratio as the pandemic unfolds",
"corpus_id": 26484486,
"title": "Assessing the severity of the novel influenza A/H1N1 pandemic"
} | {
"abstract": "PurposeThe ideal approach to complex ventral hernia repair is frequently debated. Differences in processing techniques among biologic materials may impact hernia repair outcomes. This study evaluates the outcomes of hernia repair with a terminally sterilized human acellular dermal matrix (TS-HADM) (AlloMax® Surgical Graft, by C. R. Bard/Davol, Inc., Warwick, RI, USA) treated with low-dose gamma irradiation.MethodsA single-arm multi-center retrospective observational study of patients undergoing hernia repair with TS-HADM was performed. Data analyses were exploratory only; no formal hypothesis testing was pre-specified.ResultsSeventy-eight patients (43F, 35M) underwent incisional hernia repair with a TS-HADM. Mean follow-up was 20.5 months. Preoperative characteristics include age of 56.6 ± 11.1 years, BMI 36.7 ± 9.9 kg/m2, and mean hernia defect size 187 cm2. Sixty-five patients underwent component separation technique (CST) with a reinforcing graft. Overall, 21.8 % developed recurrences. Recurrences occurred in 15 % of patients repaired with CST. Major wound complications occurred in 31 % of patients overall. Based upon CDC surgical wound classification, major wound complications were seen in 26, 40, 56, and 50 % of Class 1, 2, 3, and 4 wounds, respectively. No grafts required removal.ConclusionsHernia recurrences are not uncommon following complex abdominal wall reconstruction. Improved outcomes are seen when a TS-HADM is utilized as reinforcement to primary fascial closure.",
"corpus_id": 4533357,
"score": 1,
"title": "Complex ventral hernia repair with a human acellular dermal matrix"
} |
{
"abstract": "PROF. B. SAHNI'S important observations1 have necessitated a reconsideration of this problem. In order to review the geological evidence on the ground, an excursion was arranged to examine several sections which had led E. R. Gee, of the Geological Survey of India, and other geologists to the conclusion that the Saline Series of the Salt Range is of Cambrian or pre-Cambrian age. Prof. Sahni was unfortunately unable to take part in the excursion, the party consisting of the undersigned.",
"corpus_id": 4026998,
"title": "Age of the Saline Series in the Punjab Salt Range"
} | {
"abstract": "Abstract The Punjab Saline Series has been assigned to the Cambrian and also to the Eocene. Gee has described field evidence which points to a Cambrian age, but Sahni has found microscopic plant and insect remains which have been interpreted as demonstrating an Eocene age. The application of Schultze's technique (also adopted by Sahni) to samples from the undoubted Cambrian rocks of the Salt Range has shown that these too occasionally contain a microflora which includes elements that are not so far known to be indigenous to Cambrian rocks. Its occurrence awaits explanation.",
"corpus_id": 129189618,
"title": "Evidence Bearing on the Age of the Saline Series in the Salt Range of the Punjab"
} | {
"abstract": "Summary The Tertiary of the Punjab is divided into two principal parts, the marine and estuarine Eocene (Nummulitic) system below, and, unconformably overlying this, a great succession of nonmarine beds representing continental sedimentation from the Oligocene to the late Pliocene or early Quaternary. This nonmarine succession, here named the “Ni-madric System,” includes the Murree and Siwalik series. It comprises over 20,000 feet of fairly well stratified, alternating sandy and silty beds, mainly fine-grained, but coarsely conglomeratic in its uppermost division. It is interpreted as the product of mainly fluviatile, but in part eolian, deposition over fairly flat plains, under a semitropical climate of medium rainfall. The plains extended into the region now occupied by the Himalaya. Evidence of the validity of thickness measurements is afforded by the comparative constancy of individual fine-grained formations, the great thicknesses superimposed in single sections, and by the record here presented of a well 6,000 . . .",
"corpus_id": 129796643,
"score": 2,
"title": "Tertiary Stratigraphy and Orogeny of the Northern Punjab"
} |
{
"abstract": "We prove that under CH there are ω1 non-homeomorphic Kunen compact L-spaces. Moreover there exist models of ZFC that have 2ω1 many non-homeomorphic Kunen spaces.",
"corpus_id": 5101388,
"title": "There are many Kunen compact L-spaces"
} | {
"abstract": "Abstract The continuum Hypothesis implies that there is a compact Hausdorff space which is hereditarily Lindelof but not separable. The space is the support of a Borel probability measure for which the measure-0 subsets, the first-category subsets, and the separable subsets all coincide.",
"corpus_id": 122212540,
"title": "A compact L-space under CH"
} | {
"abstract": "We show that the minimal cardinality of a dense subset of the measure algebra is the same as the minimal cardinality of a base of the ideal of Lebesgue measure zero subsets of the real line.",
"corpus_id": 34825001,
"score": 2,
"title": "On dense subsets of the measure algebra"
} |
{
"abstract": "Hatching eggs from commercial broiler breeder flocks at 34 and 37 wk of age (young) and at 59 and 61 wk of age (old) were stored for 2 d at 18 degrees C and 75% RH and then subjected to turning frequencies of 24 or 96 times daily up to 8, 10, 12, or 14 d of incubation at standard conditions to determine if an increased turning frequency would facilitate an early cessation of turning. Turning was discontinued after the respective days were completed. Eggs remained in setter trays until combined at 18 d to complete hatching in a single machine. The young flocks exhibited significantly better fertile hatchability, as expected, but there was no overall effect due to differences in cessation of turning from 8 to 14 d of incubation (range 88.9 to 89.2%). However, turning 96 times daily produced significantly better fertile hatchability that was largely due to a significant interaction of flock age and turning frequency; the beneficial effect of increased turning frequency, largely reduced late-embryonic mortality, was more evident in the eggs from the older flocks.",
"corpus_id": 3655189,
"title": "Effect of flock age, cessation of egg turning, and turning frequency through the second week of incubation on hatchability of broiler hatching eggs."
} | {
"abstract": "Abstract A total of approximately 29,000 eggs were used in five experiments to evaluate the effect of duration of incubation time in turning trays on the hatchability of hen eggs. Chicks hatched in the final experiment were also used to evaluate the effects of transfer time on the subsequent performance of broiler chicks grown to slaughter age. Eggs were transferred from turning to hatcher trays at various hourly intervals after 13 to 20 days of incubation without any significant effects (P>.05) on hatchability. Incubator transfer time was not significantly (P Broiler chicks hatched from eggs transferred after 16, 17, 18, 19, and 20 days of incubation were not significantly (P>.05) affected in their subsequent performance by the time of egg transfer during incubation.",
"corpus_id": 85275549,
"title": "The Effect of Transferring Hen Eggs from Turning to Stationary Trays After 13 to 20 Days of Incubation on Subsequent Hatchability and General Performance"
} | {
"abstract": "Background: Percutaneous coronary interventions cause anxiety in patients, although these procedures are lifesaving. Aim: The aim of this study was to determine the effect of nature sounds and earplug interventions on the anxiety of patients after percutaneous coronary interventions. Methods: A randomized controlled trial design was used in this study. A total of 114 patients who were scheduled to undergo percutaneous coronary intervention were allocated to three groups in a randomized manner: two intervention groups (nature sound group, earplug group) and one control group. The Visual Analog Scale, State Anxiety Inventory and physiological parameters were used to measure anxiety. Data were collected from the patients at three time points: immediately before, immediately after and 30 minutes after the interventions. Results: The respiratory rates and the Visual Analog Scale and State Anxiety Inventory scores of patients in the nature sound and earplug groups immediately after and 30 minutes after the interventions were significantly lower than those of the control group (p < 0.05). No differences were found when comparing respiratory rates, Visual Analog Scale scores and State Anxiety Inventory scores between patients in the nature sound group and patients in the earplug group (p > 0.05). No changes were observed in the pulse and systolic/diastolic blood pressure values of patients in the control and intervention groups (p > 0.05). Conclusions: It was determined that nature sounds and earplug interventions are effective in reducing the anxiety of patients following percutaneous coronary intervention.",
"corpus_id": 195328615,
"score": 0,
"title": "The effect of nature sounds and earplugs on anxiety in patients following percutaneous coronary intervention: A randomized controlled trial"
} |
{
"abstract": "Exposure assessment is a main component of epidemiologic studies and variability in exposure. This assessment is considered as a common approach for such phenomenon. A total of 129 dust samples were collected randomly from 197 personnel from a cement factory located in Ilam province, during 2009 in Iran. The between- and within-group components of variability were determined to assess the contrast in exposure level between the Similar Exposure Groups (SEGs) and to calculate the within-worker geometric standard deviation of the theoretical exposure-response slope. Results were analyzing by one-way random effects model. According to the mentioned model, the probability of long-term mean exposure exceeding to the occupational exposure limit (OEL) was assessed for each SEGs. The arithmetic means (AM) of total dust levels ranged from 0.04 to 39.37 mg/m(3). The geometric means (GM) of total dust were higher in the crusher (20.84 mg/m(3)), packing (17.29 mg/m(3)), kiln (16.78 mg/m(3)), cement mill (14.90 mg/m(3)), and raw mill (10.44 mg/m(3)). However, the figures for the maintenance and administration parts were 3.77 mg/m(3) and 1.01 mg/m(3), respectively. The random effects model data demonstrated that the F-value calculated was greater than the critical F-value approximately 59% of the variability in the exposure was due to differences between groups. Based on these finding, the order of probability of the long-term mean exposure exceeding (Z) to the OEL of 10 mg/m(3) for total dust which were in kiln (100%), packing (100%), cement mill (90%), crusher (73%), raw mill (60%) administration (2.3%) and the maintenance parts (0%).",
"corpus_id": 1887907,
"title": "Variability in total dust exposure in a cement factory."
} | {
"abstract": "A frequent practical problem of research in developing countries is the lack of reliable records on occupational hazards. To improve this situation, this article suggests and evaluates a two-phase method for estimating particle exposure. The first phase uses the focal group, or homogeneous group, technique to reconstruct the production process and estimate the level of dust exposure. The second phase applies the technique of individual history of exposure to hazards at work, an index that accumulates current and previous exposure. This method was introduced in a Portland cement plant to assess the dust-exposure levels of workers and to evaluate its usefulness in the association between estimated exposure levels and the frequency of health effects--particularly respiratory effects--that occurred as a result of such exposures. The results obtained from the analysis of the production process and of the exposure levels determined by the cement workers showed that it is possible to reconstruct the history of exposure to cement dust during each worker's occupational history. The results also showed that estimated exposure is related to respiratory damage; higher exposure resulted in more serious diseases. This supports the usefulness of the suggested methodology.",
"corpus_id": 33514910,
"title": "Risk indicator of dust exposure and health effects in cement plant workers."
} | {
"abstract": "Abstract This special issue on sustainable energy and environmental protection concerns with latest developments related to sustainable and renewable energy. In this editorial introduction, the editor is highlighting the different articles presented and discussed in this issue. Main area of this issue can be summarized as follows: PV (photovoltaics) and Solar Energy, Wind Energy, Hydrogen and Fuel Cell, Energy Efficiency, Eco-Design and Energy and Environment Planning and Management. The contents of this issue will be discussed in details in the following sections.",
"corpus_id": 31611163,
"score": 1,
"title": "Developments in sustainable energy and environmental protection"
} |
{
"abstract": "The mechanism of invasion of human red blood cells by Plasmodium falciparum merozoites has been studied by several indirect methods. Red blood cells of the S+s+U+ and S-s-U- blood group phenotypes were trypsin treated and their susceptibility to invasion measured. Trypsin-treated S+s+U+ cells lack the portion of glycophorin A which bears the MN blood group determinants but possess glycophorin B, whereas trypsin-treated S-s-U- cells lack both the glycophorin A MN determinants and the glycophorin B molecule. Since the treated S-s-U- cells showed an even greater loss in susceptibility to invasion than the treated S+s+U+ cells, we conclude that glycophorin B does have a role in merozoite recognition, although it appears less important than glycophorin A. Attempts to decrease invasion by pretreatment with glycosidases were unsuccessful, except for the previously reported effect of neuraminidase. N-acetyl-D-glucosamine decreases the appearance of ring-stage parasites after in vitro reinvasion of P. falciparum. However, the persistence of intact and lysed schizont-infected cells when N-acetyl-D-glucosamine was present, several hours after disappearance of these cells from control cultures, leads us to conclude that this sugar has a deleterious effect on terminal stages of parasite maturation. It is therefore not possible to conclude that N-acetyl-D-glucosamine inhibits merozoite attachment and reinvasion specifically by competition for the receptor.",
"corpus_id": 837864,
"title": "Studies on the role of red blood cell glycoproteins as receptors for invasion by Plasmodium falciparum merozoites."
} | {
"abstract": "The effect of protease inhibitors on invasion of rhesus erythrocytes by Plasmodium knowlesi merozoites was evaluated. Chymostatin, N-alpha-p-tosyl-L-lysine chloromethyl ketone (TLCK), and L-1-tosylamide-2-phenylethylchloromethyl ketone (TPCK) inhibited invasion. Leupeptin, antipain, pepstatin, and phenylmethylsulfonyl fluoride (PMSF) had no effect. TLCK and TPCK inhibited attachment of merozoites to host erythrocytes. Chymostatin had no adverse effect on attachment, and in its presence junction formation between the merozoite and host erythrocyte occurred. Both chymostatin and leupeptin inhibited normal rupture of schizont-infected erythrocytes. It is suggested that proteolytic activity may be important both in the rupture of schizont-infected erythrocytes and in the invasion of erythrocytes by malaria parasites.",
"corpus_id": 1407589,
"title": "Plasmodium knowlesi: studies on invasion of rhesus erythrocytes by merozoites in the presence of protease inhibitors."
} | {
"abstract": "The purpose of this study was to evaluate the effect of repeated nonthermal atmospheric discharge (NADC) exposure without peroxide or water on the bleaching of cyclically stained teeth and to evaluate the surface roughness and microhardness of enamel exposed to NADC. Specimens of 5×5 mm were prepared from extracted bovine teeth. Staining with tea and exposure to NADC were repeated five times and color was measured at each step. Other specimens were prepared and surface roughness (Ra) and microhardness (Vickers hardness) were measured after NADC exposure. The repeated NADC exposure without peroxide or water showed a bleaching effect on stained bovine teeth. The surface roughness was not changed by NADC exposure. Lower microhardness was shown after NADC exposure.",
"corpus_id": 76667085,
"score": 1,
"title": "Effect of nonthermal atmospheric discharge on stain removal of tooth."
} |
{
"abstract": "Wilmes and colleagues have introduced the Hybrid Hyrax* expansion appliance to avoid anchorage loss in such cases.7-15 In a minimally invasive procedure, two mini-implants are placed in the paramedian area of the anterior palate to support anchorage in the sagittal and transverse dimensions.16 The upper permanent or deciduous molars can thus be stabilized in their positions while the maxilla is orthopedically displaced in an anterior direction. The appliance reduces transverse forces on the dentition during maxillary expansion, resulting in less buccal tipping, root damage, and gingival dehiscence.8-13,15 Because of these perceived advantages, several groups of authors developed a modified concept of hybrid anchorage for expansion, called micro-implant-assisted rapid palatal expansion (MARPE), using four mini-implants and four anchor teeth.17-19 Liou and colleagues have described a method to enhance the stimulatory orthopedic effect of",
"corpus_id": 3517463,
"title": "The easy driver for placement of palatal mini-implants and a maxillary expander in a single appointment."
} | {
"abstract": "OBJECTIVE\nTo evaluate the treatment effects of a hybrid hyrax-facemask (FM) combination in growing Class III patients.\n\n\nMATERIAL AND METHODS\nA sample of 16 prepubertal patients (mean age, 9.5 ± 1.6 years) was investigated by means of pre- and posttreatment cephalograms. The treatment comprised rapid palatal expansion with a hybrid hyrax, a bone- and toothborne device. Simultaneously, maxillary protraction using an FM was performed. Mean treatment duration was 5.8 ± 1.6 months. The treatment group was compared with a matched control group of 16 untreated Class III subjects. Statistical comparisons were performed with the Mann-Whitney U-test.\n\n\nRESULTS\nSignificant improvement in skeletal sagittal values could be observed in the treatment group over controls: SNA: 2.4°, SNB: -1.7°, Co-Gn: -2.3 mm, Wits appraisal: 4.5 mm. Regarding vertical changes, maintenance of vertical growth was obtained as shown by a small nonsignificant increase of FMA and a small significant decrease of the Co-Go-Me angle.\n\n\nCONCLUSIONS\nThe hybrid hyrax-FM combination was found to be effective for orthopedic treatment in growing Class III patients in the short term. Favorable skeletal changes were observed both in the maxilla and in the mandible. No dentoalveolar compensations were found.",
"corpus_id": 207360579,
"title": "Effectiveness of maxillary protraction using a hybrid hyrax-facemask combination: a controlled clinical study."
} | {
"abstract": "Health outcomes for patients with major chronic illnesses depend on the appropriate use of proven pharmaceuticals and other therapeutic technologies, and effective self-management by patients. Effective chronic illness care then bases clinical decisions on the best, rigorous scientific evidence, or evidence-based medicine. Effective support for patient self-management includes efforts to increase patient participation in care and collaborative goal-setting and planning of treatment. These interventions appear somewhat consistent with recent conceptualizations of patient-centered care. The consistent delivery of proven therapies and information and support for self-management requires practice systems organized for that purpose. The Chronic Care Model is a compilation of those practice system changes shown to improve chronic care. This paper explores the concept of patient-centeredness and its relationship to the Chronic Care Model. We conclude that the Model is both evidence-based and patient-centered and that these can be properties of health systems, and not just of individual practitioners.",
"corpus_id": 40112833,
"score": 1,
"title": "Finding common ground: patient-centeredness and evidence-based chronic illness care."
} |
{
"abstract": "The expectation of improvement in patient survival with administration of new chemotherapy agents for metastatic breast carcinoma (MBC) is not consistently supported by data from clinical trials, which are often underpowered and have not detected moderate survival advantage. The aim of this study was to evaluate the impact of new agents on prognosis of MBC patients enrolled in clinical trials of first‐line chemotherapy.",
"corpus_id": 2322491,
"title": "Survival of metastatic breast carcinoma patients over a 20‐year period"
} | {
"abstract": "The authors performed a randomized trial comprising patients with metastatic breast carcinoma (MBC). They used a noninferiority design to evaluate whether the results of sequential administration of epirubicin and paclitaxel were not markedly worse than the concomitant administration in terms of objective response rates (ORRs). Toxicity profile, quality of life (QOL), and pharmacoeconomic evaluations were evaluated as well.",
"corpus_id": 44404364,
"title": "Concomitant versus sequential administration of epirubicin and paclitaxel as first‐line therapy in metastatic breast carcinoma"
} | {
"abstract": "This series is intended to included works that deal with the politics, international relations and political economy of Middle Eastern countries or regional organizations. Also of interest to the series are works on social forces, ideological discourses and strategic affairs pertaining to the Middle East.",
"corpus_id": 152350483,
"score": 0,
"title": "Britain and the Egyptian Nationalist Movement, 1936-1952"
} |
{
"abstract": "Seed size distinguishes most crops from their wild relatives and is an important quality trait for the grain legume cowpea. In order to breed cowpea varieties with larger seeds we introgressed a rare haplotype associated with large seeds at the Css-1 locus from an African buff seed type cultivar, IT82E-18 (18.5 g/100 seeds), into a blackeye seed type cultivar, CB27 (22 g/100 seed). Four recombinant inbred lines derived from these two parents were chosen for marker-assisted breeding based on SNP genotyping with a goal of stacking large seed haplotypes into a CB27 background. Foreground and background selection were performed during two cycles of backcrossing based on genome-wide SNP markers. The average seed size of introgression lines homozygous for haplotypes associated with large seeds was 28.7g/100 seed and 24.8 g/100 seed for cycles 1 and 2, respectively. One cycle 1 introgression line with desirable seed quality was selfed for two generations to make families with very large seeds (28–35 g/100 seeds). Field-based performance trials helped identify breeding lines that not only have large seeds but are also desirable in terms of yield, maturity, and plant architecture when compared to industry standards. A principal component analysis was used to explore the relationships between the parents relative to a core set of landraces and improved varieties based on high-density SNP data. The geographic distribution of haplotypes at the Css-1 locus suggest the haplotype associated with large seeds is unique to accessions collected from Southeastern Africa. Therefore this quantitative trait locus has a strong potential to develop larger seeded varieties for other growing regions which is demonstrated in this work using a California pedigree.",
"corpus_id": 7926290,
"title": "Introgression of a rare haplotype from Southeastern Africa to breed California blackeyes with larger seeds"
} | {
"abstract": "The warm-season legume, cowpea (Vigna unguiculata), is an important crop that performs well in marginal environments. The effects of high temperature are among the most substantial challenges faced by growers of cowpea. Heat injury during late reproductive development sterilizes pollen such that no fruit is set. To study the inheritance of this trait and to deliver resources to breed cowpea with enhanced tolerance to heat, we performed a quantitative trait locus (QTL) analysis using 141 individuals from a recombinant inbred population made from a cross between cowpea varieties CB27 and IT82E-18. Five regions, which represent 9 % of the cowpea genome, explain 11.5–18.1 % of the phenotypic variation and are tagged with 48 transcript-derived single nucleotide polymorphism markers. Favorable haplotypes were donated by CB27 for four of these regions while IT82E-18 was the source of tolerance explained by the fifth QTL. Homeologous regions in soybean contain several genes important for tolerance to heat, including heat shock proteins, heat shock transcription factors, and proline transporters. This work presents essential information for marker-assisted breeding and supports previous findings concerning heat-induced male sterility in cowpea.",
"corpus_id": 15575179,
"title": "Markers for breeding heat-tolerant cowpea"
} | {
"abstract": "Sparse species have chronically small local population sizes, even though they occur in several habitats over a wide geographic range. Greenhouse de Wit replacement series with seven species of sparse and common perennial grasses of tallgrass prairie were performed with seedlings and tiller fragments for 5, 10, and 15, mo. As younger and older seedlings, sparse grasses overyielded and were advantaged by the interaction with common grasses. The common grasses underyielded and were disadvantaged in mixture with sparse grasses. As tillers, the interaction was less antagonistic, and both common and sparse grasses either overyielded or were unaffected by the interaction. Seedlings of sparse species were largest when planted in low proportion, surrounded by individuals of a common grass. Because the sparse species are not disadvantaged by interactions with their common neighbors, their competitive abilities are not implicated as a cause of their local rarity. Rather, the good competitive abilities of these spar...",
"corpus_id": 56427988,
"score": 1,
"title": "Competitive Abilities of Sparse Grass Species: Means of Persistence or Cause of Abundance"
} |
{
"abstract": "Aims Fractures of the navicular can occur in isolation but, owing to the intimate anatomical and biomechanical relationships, are often associated with other injuries to the neighbouring bones and joints in the foot. As a result, they can lead to long‐term morbidity and poor function. Our aim in this study was to identify patterns of injury in a new classification system of traumatic fractures of the navicular, with consideration being given to the commonly associated injuries to the midfoot. Patients and Methods We undertook a retrospective review of 285 consecutive patients presenting over an eightyear period with a fracture of the navicular. Five common patterns of injury were identified and classified according to the radiological features. Type 1 fractures are dorsal avulsion injuries related to the capsule of the talonavicular joint. Type 2 fractures are isolated avulsion injuries to the tuberosity of the navicular. Type 3 fractures are a variant of tarsometatarsal fracture/dislocations creating instability of the medial ray. Type 4 fractures involve the body of the navicular with no associated injury to the lateral column and type 5 fractures occur in conjunction with disruption of the midtarsal joint with crushing of the medial or lateral, or both, columns of the foot. Results In order to test the reliability and reproducibility of this new classification, a cohort of 30 patients with a fracture of the navicular were classified by six independent assessors at two separate times, six months apart. Interobserver reliability and intraobserver reproducibility both had substantial agreement, with kappa values of 0.80 and 0.72, respectively. Conclusion We propose a logical, all‐inclusive, and mutually exclusive classification system for fractures of the navicular that gives associated injuries involving the lateral column due consideration. 
We have shown that this system is reliable and reproducible and have described the rationale for the subsequent treatment of each type.",
"corpus_id": 3367082,
"title": "A new and reliable classification system for fractures of the navicular and associated injuries to the midfoot"
} | {
"abstract": "Abstract Fractures of the tarsal navicular are relatively uncommon, and generally are a result of acute trauma or chronic overload in the form of stress fractures. Their importance arise from being somewhat difficult to detect, and if missed can result in significant morbidity with midfoot arthrosis, especially since its complex anatomy and blood supply make the navicular susceptible to osteonecrosis. It is recommended to have a high index of suspicion along with the use of advanced imaging techniques to ensure fractures are not missed. The aetiology and current management concepts of tarsal navicular fractures are reviewed in order to guide optimal treatment for patients. Salvage options for delayed diagnosis and ongoing pain and functional limitation include arthrodesis of the midfoot which will also be discussed.",
"corpus_id": 81427909,
"title": "Navicular fractures: aetiology and management"
} | {
"abstract": "Two case reports are presented of adrenal insufficiency due to bilateral adrenal haemorrhage following surgery. This unusual complication with its non-specific manifestations may result in unexpected clinical deterioration of the postoperative patient. Corticosteroid replacement and repletion of sodium and water deficits should be given promptly when adrenal haemorrhage is suspected.",
"corpus_id": 30809412,
"score": 1,
"title": "Adrenal insufficiency secondary to postoperative bilateral adrenal haemorrhage."
} |
{
"abstract": "The topography of the celiac trunk and superior and inferior mesenteric arteries was studied by dissection in 27 embalmed cadavers. Variant vascular patterns were noted in four subjects. These consisted of: (1) an accessory right hepatic artery from the superior mesenteric artery, (2) an anomalous middle colic artery from the proximal segment of the splenic artery, and (3) two instances of an accessory left colic artery originating from the superior mesenteric artery. The precarious course of the middle colic artery (coming from the splenic artery) and its dominance in the formation of the marginal artery were thought to predispose the ascending and transverse colon to an increased risk of vascular damage. These cases also illustrate two variant patterns of formation of the marginal artery. In the case of the anomalous middle colic artery, the only contribution of the superior mesenteric artery to the marginal artery was through the anastomosis of its ileocolic branch with the right branch of the aberrant middle colic artery. In subjects with accessory left colic arteries, the superior mesenteric artery played a dominant role in the formation of the marginal artery by contributing the accessory left colic artery, which supplied the splenic flexure and the proximal part of the descending colon. These arterial variations underscore the importance of doing vascular studies prior to major abdominal surgery. © 1995 WiIey‐Liss, Inc.",
"corpus_id": 782004,
"title": "Anomalous origins of colic arteries"
} | {
"abstract": "A case is reported of an anomalous origin of the middle colic artery. The middle colic artery originated from the coeliac trunk (CT) instead of the superior mesenteric artery, the normal place of origin. The colon receives its blood supply from the superior and inferior mesenteric arteries. Since modern colon surgery requires a more detailed anatomy of blood supply, many articles have been published on the anatomy and variations of the arteries of the colon. However, the incidence of such an anomaly is low and there have been few previous reports. These arterial variations underscore the importance of performing vascular studies prior to major abdominal surgery.",
"corpus_id": 8727134,
"title": "The middle colic artery originating from the coeliac trunk."
} | {
"abstract": "instruments, the bladder and pipe, or old-fashioned pewter syringe. Moreover, Dr. O'Beirne has fully shown that greater benefit is to be derived in cases of incarcerated hernia'and obstinate constipation from passing up a long tube-(the tube of a stomach-pump answers very well)-into the colon, than from the use of the ordinary short enema pipe. The long tube relieves the bowels of their flatus; and of course by diminishing the bulk of the contents of the abdomen, renders the return of the hernia more easy.* In the old standing cases, occurring to aged people with large herniee, the surgeon may be justified in waiting some time to try the effect of his remedies; but in the acute cases occurring to young people, it may be laid down as a general rule that, if the taxis, bleeding, warmbath, and opium do not succeed, it is the safest plan, on the average, to perform an operation for dividing the stricture without further delay,-using the other remedies only if the patient will not consent to the operation.\"-p. 425. It may be seen, from the above observations, that we have no just ground of quarrel of any omission on the author's part; 'tis by the peccadillo of taking from one department to enhance the apparent merit of the other that has slightly raised our bile. But we love to forgive, and we unbend with pleasure on the present occasion. We trust, however, that in the next edition with which, doubtless, ere long the admirable press of Bentley will again teem, our author will take care to show what we are convinced he truly understands-that the difference between \" operative surgery and \"practical surgery\" is not so great as his arrangements of sections would imply, and that a large proportion of the latter strictly belongs to the former, and should, therefore, according to the present plan of the work, find a place in the appropriate section. 
We wish that our limits would permit us to make some further quotations; but in good faith, for our readers' sake, we must recommend the original. It is a useful hand-book for the practitioner, and we should deem a teacher of surgery unpardonable who did not recommend it to his pupils. In our own opinion, it is admirably adapted to the wants of the student; and with congratulations to the author and publishers-for the latter deserve much credit for the handsome appearance of the volume-on the success of the undertaking, we leave the present edition as a piquant proportion of the ample store of knowledge which it is the good fortune of the rising youth in the profession to be so cheaply provided with in the present day.",
"corpus_id": 52118295,
"score": 2,
"title": "Elements of Anatomy"
} |
{
"abstract": "This article considers smoking behavior among young people in Canada, looking in particular for evidence on why young people take up smoking. Using data from the National Population Health Survey, we find that reported knowledge about the health effects of own smoking is less useful than might have been expected in explaining why some young people smoke but that responses to a question about whether people worry too much about the health effects of second-hand smoke is informative. We also find that for subjects too young to have begun their own household formation, the number of people in their household who regularly smoke in the house is an informative variable. In particular, among young people aged 12–14 years, having a household member who regularly smokes inside the house (as opposed to having none) increases the probability that the young person will smoke by 2%, whereas for the those aged 15–19, having a household member who regularly smokes inside the house increases the probability that the young person will smoke by 18%.",
"corpus_id": 551080,
"title": "Risky Behavior in Youth: An Analysis of the Factors Influencing Youth Smoking Decisions in Canada"
} | {
"abstract": "textabstractDuring adolescence young people are known to try out a range of risk behaviours,\nincluding smoking. Even though the detrimental health consequences of smoking are well\nknown, the prevalence of smoking among Dutch adolescents remains high. Until today,\nefforts to control adolescent smoking are mainly focused on the prevention of smoking,\nwhereas fewer efforts are made towards facilitating smoking cessation. Since the chance\nof a successful attempt to cease smoking diminishes the longer that people smoke, it is\nimportant that cessation interventions also focus on adolescents. However, compared to\nthe many reports on predictors of smoking initiation, the literature addressing adolescent\nsmoking cessation is rather limited, and the field is still considered to be underdeveloped.\nTo facilitate the planning and development of programs to promote cessation among\nadolescents who smoke, the current thesis presents a number of studies that focus\non identifying and studying potential determinants of smoking cessation, as well\nas determinants of important parameters of successful cessation such as readiness\nto quit smoking and undertaking quit attempts. Multiple levels of influence on the\nprocess of adolescent smoking cessation are considered and tested, including addiction,\npsychological and environmental factors. In addition, predictions and assumptions of\nseveral theories that are frequently used in explaining health behaviour, such as the\nTranstheoretical Model and Social Cognitive Theory, were tested in their applications to\nadolescent smoking cessation.",
"corpus_id": 141543128,
"title": "Dawning dependence: Processes underlying smoking cessation in adolescence = Opkomende afhankelijkheid: Onderliggende processen van stoppen met roken in de adolescentie"
} | {
"abstract": "This paper presents preliminary data on how an assessment instrument with a unique structure can be used to identify common incorrect ideas from prior coursework at the beginning of a biochemistry course, and to determine whether these ideas have changed by the end of the course. The twenty-one multiple-choice items address seven different concepts, with a parallel structure for distractors across each set of items to capture consistent incorrect responses. For the current study, the instrument was administered as a pre-test and post-test in majors level biochemistry courses, and the results from two different groups are presented. These results indicated that students performed better on the post-test, resulting in positive mean gain scores for each concept. The structure of the instrument allows data analysis that helped uncover persistent incorrect ideas for some of the concepts, including bond energy and protein alpha helix structure, even after a semester of instruction in biochemistry. The persistent incorrect idea for the protein alpha helix structure uncovered by this assessment has not been reported before in the literature. These results confirm the need to use a robust diagnostic instrument to assess students’ understanding of basic concepts at the beginning of the semester, but also stress the need to assess students near the end of the course to gain insight on the effectiveness of instruction. Since each group of students is different, biochemistry instructors are encouraged to use the instrument to identify problems with their own students’ incoming ideas rather than rely on published results to inform instruction. In addition to providing assistance for instructors of biochemistry in planning targeted instructional interventions, we anticipate that data collected from this instrument can also be used to identify potential modifications for prerequisite courses.",
"corpus_id": 56922192,
"score": 0,
"title": "Uncovering students' incorrect ideas about foundational concepts for biochemistry"
} |
{
"abstract": "To determine the global root number of an elliptic curve defined over a number field, one needs to understand all the local root numbers. These have been classified except at places above 2, and in this paper we attempt to complete the classification. At places above 2, we express the local root numbers in terms of norm residue symbols in the case when wild inertia acts through a cyclic quotient, and in terms of root numbers of explicit 1‐dimensional characters in the case when wild inertia acts through a quaternionic quotient.",
"corpus_id": 1651746,
"title": "Root numbers of elliptic curves in residue characteristic 2"
} | {
"abstract": "For an elliptic curve E over a number field K, we prove that the algebraic rank of E goes up in infinitely many extensions of K obtained by adjoining a cube root of an element of K. As an example, we briefly discuss E=X_1(11) over Q, and how the result relates to Iwasawa theory.",
"corpus_id": 18711759,
"title": "Ranks of elliptic curves in cubic extensions"
} | {
"abstract": "Using Harper’s anti-deficit achievement framework as a theoretical guide, the purpose of this phenomenological study was to investigate the academic and social experiences of four nontraditional, high-achieving, Black male undergraduates attending one historically Black university. Findings show that the participants were intrinsically motivated to succeed in college to make a better future for themselves and their families. Support from their peers, family, and children also played a role in their success. Last, the university cultivated a campus environment that affirmed the participants’ identities as Black males and nontraditional students. These findings present a counternarrative to deficit-oriented research about Black males generally and nontraditional Black male collegians specifically.",
"corpus_id": 59449359,
"score": 0,
"title": "(Re)defining the Narrative"
} |
{
"abstract": "Atherosclerosis begins in childhood, progresses silently through a long preclinical stage, and eventually manifests clinically, usually from middle age. Over the last 30 years, it has become clear that the initiation and progression of disease, and its later activation to increase the risk of morbid events, depends on profound dynamic changes in vascular biology.1 The endothelium has emerged as the key regulator of vascular homeostasis, in that it has not merely a barrier function but also acts as an active signal transducer for circulating influences that modify the vessel wall phenotype.2 Alteration in endothelial function precedes the development of morphological atherosclerotic changes and can also contribute to lesion development and later clinical complications.3\n\nAppreciation of the central role of the endothelium throughout the atherosclerotic disease process has led to the development of a range of methods to test different aspects of its function, which include measures of both endothelial injury and repair. These have provided not only novel insights into pathophysiology, but also a clinical opportunity to detect early disease, quantify risk, judge response to interventions designed to prevent progression of early disease, and reduce later adverse events in patients.\n\nThe present review summarizes current understanding of endothelial biology in health and disease, the strengths and weaknesses of current testing strategies, and their potential applications in clinical research and patient care.\n\nAlthough only a simple monolayer, the healthy endothelium is optimally placed and is able to respond to physical and chemical signals by production of a wide range of factors that regulate vascular tone, cellular adhesion, thromboresistance, smooth muscle cell proliferation, and vessel wall inflammation. The importance of the endothelium was first recognized by its effect on vascular tone. 
This is achieved by production and release of several vasoactive molecules that relax or constrict the vessel, as …",
"corpus_id": 93248,
"title": "Endothelial function and dysfunction: testing and clinical relevance."
} | {
"abstract": "The Centers for Disease Control and Prevention (CDC) in collaboration with the American Heart Association (AHA) convened a workshop in Atlanta, Ga, on March 14 and 15, 2002, titled, “CDC/AHA Workshop on Inflammatory Markers and Cardiovascular Disease: Applications to Clinical and Public Health Practice,” which was intended to address issues about the appropriate selection and use of inflammatory markers to predict cardiovascular disease (CVD) risk.1 Three concurrent discussion groups on issues related to laboratory, clinical, and population science were held. This report details the discussions and findings of the laboratory science group.\n\n\n\n1. Of the inflammatory markers identified, C-reactive protein (CRP) has the analyte and assay characteristics that are the most conducive for use in practice.\n\n2. To obtain a CRP concentration in metabolically stable patients, 2 measurements, fasting or nonfasting, should be made (optimally 2 weeks apart) and the results averaged. If the CRP level is >10 mg/L, then the test should be repeated and the patient examined for sources of infection or inflammation.\n\n3. CRP results should be expressed only as milligrams per liter and expressed to 1 decimal point.\n\n4. Risk assessment should be modeled after the lipids approach via 3 risk categories: low risk, average risk, and high risk. On the basis of the CRP population distributions, the following tertiles are recommended for categorizing patients: low risk, 3.0 mg/L. It should be recognized that other acute inflammatory conditions may result in mildly to moderately increased CRP levels, such as inflammatory bowel disease,2 rheumatoid arthritis,3 and long-term alcoholism.4\n\n5. Performance goals for CRP measurement, similar to those developed for total cholesterol, HDL and LDL cholesterol, and triglycerides, need to be developed with a view toward better characterization of the total allowable error required to measure CRP reliably. …",
"corpus_id": 6969007,
"title": "CDC/AHA Workshop on Markers of Inflammation and Cardiovascular Disease: Application to Clinical and Public Health Practice: report from the clinical practice discussion group."
} | {
"abstract": "Denture stomatitis, a common disorder affecting denture wearers, is characterized as inflammation and erythema of the oral mucosal areas covered by the denture. Despite its commonality, the etiology of denture stomatitis is not completely understood. A search of the literature was conducted in the PubMed electronic database (through November 2009) to identify relevant articles for inclusion in a review updating information on the epidemiology and etiology of denture stomatitis and the potential role of denture materials in this disorder. Epidemiological studies report prevalence of denture stomatitis among denture wearers to range from 15% to over 70%. Studies have been conducted among various population samples, and this appears to influence prevalence rates. In general, where reported, incidence of denture stomatitis is higher among elderly denture users and among women. Etiological factors include poor denture hygiene, continual and nighttime wearing of removable dentures, accumulation of denture plaque, and bacterial and yeast contamination of denture surface. In addition, poor-fitting dentures can increase mucosal trauma. All of these factors appear to increase the ability of Candida albicans to colonize both the denture and oral mucosal surfaces, where it acts as an opportunistic pathogen. Antifungal treatment can eradicate C. albicans contamination and relieve stomatitis symptoms, but unless dentures are decontaminated and their cleanliness maintained, stomatitis will recur when antifungal therapy is discontinued. New developments related to denture materials are focusing on means to reduce development of adherent biofilms. These may have value in reducing bacterial and yeast colonization, and could lead to reductions in denture stomatitis with appropriate denture hygiene.",
"corpus_id": 22561229,
"score": 1,
"title": "Epidemiology and etiology of denture stomatitis."
} |
{
"abstract": "Abstract Since the mid 1990s, after the pacification of Central America, the region has experienced a sustained economic growth. Additionally, the Central American governments have been able to increase the population׳s access to electricity, e.g. the percentage of Central American population with access to electricity in 1995 and 2010 was 59% and 86%, respectively. The aforementioned reasons and the need to reduce electricity costs in order to remain competitive in a global economy have produced a transformation of the power scenario in Central America. The present paper presents a review about the power generation scenario of Central America within the framework of the new Regional Interconnected Electric System. It also briefly analyzes the trends of the power generation profile with a special emphasis on the renewable energy sources. As it can be inferred from the analysis presented in this paper, the Central American power scenario will mainly be shaped by the participation of the private sector and the development of the recently created regional electricity market. Additionally, it is clear the willingness of all the Central American countries to move away from oil-fired power generation. The lack of up-front capital needed to develop large renewable energy projects (mainly hydropower) can favor the development of gas-fired and/or coal-fired power stations. Nevertheless, the regional electricity market may favor the viability of large power generation projects.",
"corpus_id": 154948637,
"title": "A review on the Central America electrical energy scenario"
} | {
"abstract": "Today, electricity tariffs play an essential role in the electricity retail market as they are the key factor for the decision-making of end-users. Additionally, tariffs are necessary for increasing competition in the electricity market. They have a great impact on load energy management. Moreover, tariffs are not taken as a fixed approach to expense calculations only but are influenced by many other factors, such as electricity generation, transmission, distribution costs, and governmental taxation. Thus, electricity pricing differs significantly between countries or between regions within a country. Improper tariff calculation methodologies in some areas have led to high-power losses, unnecessary investments, increased operational expenses, and environmental pollution due to the non-use of available sustainable energy resources. Due to the importance of electricity tariffs, the authors of this paper have been inspired to review all electricity tariff designs used worldwide. In this paper, 103 references from the last ten years are reviewed, showing a detailed comparison between different tariff designs and demonstrating their main advantages and drawbacks. Additionally, this paper reviews the utilized electricity tariffs in different countries, focusing on one of the most important countries in the Middle East and North Africa regions (Egypt). Finally, some recommended solutions based upon the carried-out research are discussed and applied to the case study for electricity tariff improvement in this region. This review paper can help researchers become aware of all the electricity tariff designs used in various countries, which can lead to their design improvements by using suitable software technologies. Additionally, it will increase end-users’ awareness in terms of deciding on the best electricity retail markets as well as optimizing their energy usage.",
"corpus_id": 253567215,
"title": "A Review of Electricity Tariffs and Enabling Solutions for Optimal Energy Management"
} | {
"abstract": "Many regions of the world feel the pressure to interconnect electric power systems internationally. Regional integrations of the electricity sector have become part of free trade and common market initiatives, though the steps individual national jurisdictions take towards developing integrated systems vary. In this article, we review three regions concerned with common market initiatives and at different stages of integration processes that involve infrastructural, regulatory, and commercial decisions. First, we examine the North European countries in the Nordic Council, then countries in the Southern Cone of South America in MERCOSUR, and finally Mexico, the United States and Canada, linked under NAFTA. This comparative study highlights the potential, but also the many hurdles, that electricity sector integrations face. The study suggests a framework for measuring the level of electricity sector integration that could be applied to other regions.",
"corpus_id": 14157934,
"score": 2,
"title": "Measuring international electricity integration: a comparative study of the power systems under the Nordic Council, MERCOSUR, and NAFTA"
} |
{
"abstract": "This paper compares the innovation performance of established pharmaceutical firms and biotech companies, controlling for differences in the scale and scope of research. We develop a structural model to analyze more than 3,000 drug research and development projects advanced to preclinical and clinical trials in the United States between 1980 and 1994. Key to our approach is careful attention to the issue of selection. Firms choose which compounds to advance into clinical trials. This choice depends not only on the technical promise of the compound, but also on commercial considerations such as the expected profitability of the market or concerns about product cannibalization. After controlling for selection, we find that (a) even after controlling for scale and scope in research, established pharmaceutical firms are more innovative than newly entered biotech firms; (b) older biotech firms display selection behaviors and innovation performances similar to established pharmaceutical firms; and (c) compounds licensed during preclinical trials are as likely to succeed as internal compounds of the licensor, which is inconsistent with the “lemons” hypothesis in technology markets.",
"corpus_id": 1127092,
"title": "A Breath of Fresh Air? Firm Type, Scale, Scope, and Selection Effects in Drug Development"
} | {
"abstract": "The research questions studied in this paper concern the role of demand side strategy for a firm engaged in duopolistic competition in quality and price. A demand side strategy research looks towards markets and consumers unlike the traditional resource side strategy research that looks upstream — into the firm’s resources and its supply side. We examine whether following a demand side strategy would benefit the firm, the consumers or both. In a market where two firms are competing with each other, we first find the equilibrium quality and price levels for the traditional case where the two firms optimize their own profit function. We use this case as a benchmark case for comparison. Next, we let one firm (the lower quality firm) adopt a demand side strategy operationalized by an objective function where the profit is augmented by consumer surplus. The equilibrium results show that a demand side strategy would increase the product quality level in the market, and improve the adopter firm’s competitiveness at the same time increasing the market consumer surplus. We also study the case where the higher quality firm adopts demand side strategy and compare the results with the two cases mentioned above. Overall, we find that adopting a demand side strategy would benefit the adopter firm’s profitability and the consumers. We, therefore, find evidence of what the strategy literature has been predicting about the role of demand side strategy.",
"corpus_id": 2574974,
"title": "Role of demand-side strategy in quality competition"
} | {
"abstract": "While a lot of attention has been paid to those characteristics of capabilities that give firms a competitive advantage, a lot less attention has been given to supporting empirical evidence and to the deployment of these capabilities. This paper presents a model for mapping firm capabilities into customer value and competitive advantage in different markets. With empirical evidence from cholesterol drugs, I illustrate how the model can be used to estimate customer value and competitive advantage from technological capabilities. Copyright © 2002 John Wiley & Sons, Ltd.",
"corpus_id": 42344630,
"score": 2,
"title": "Mapping technological capabilities into product markets and competitive advantage: the case of cholesterol drugs"
} |
{
"abstract": "ii Acknowledgements iii List of Tables vi List of Figures vii CHAPTER ONE: INTRODUCTION TO THE STUDY 1 Background of the Problem 2 Statement of the Problem 7 Purpose of the Study 8 Research Questions 8 Rationale 8 Theoretical Framework 9 Scope and Limitations of the Study 11 Summary 12 CHAPTER TWO: REVIEW OF RELATED LITERATURE 14 Professional Learning 14 Literacy Resources 26 Effective Literacy Practices 34 Summary 41 CHAPTER THREE: METHODOLOGY AND RESEARCH DESIGN 43 The Project: The Reading and Writing Initiative 43 Research Design 44 Selection of Participants 45 Instrumentation 48 Data Collection 50 Data Analysis 51 Methodological Assumptions 51 Ethical Considerations 53 Summary 53 CHAPTER FOUR: PRESENTATION OF RESULTS 55 The Design Process for the Guide 55 The Guide and Its Components 71 Educators’ Perceptions of the Design Process and the Guide 74 Summary 96 CHAPTER FIVE: SUMMARY, DISCUSSION, AND IMPLICATIONS 98 Summary of the Study 99 Discussion 103 Implications 114 Conclusion 119",
"corpus_id": 153073454,
"title": "Exploring the Design Process and Components of an Elementary Literacy Guide in an Ontario School Board Initiative"
} | {
"abstract": "We argue that professional development should address five aspects of school capacity: teachers' knowledge, skills, and dispositions; professional community; program coherence; technical resources; and principal leadership. A two-year study of nine urban elementary schools in the United States found considerable variation in schools' use of professional development to address capacity. More comprehensive professional development occurred through both externally developed programs and school-based initiatives. Comprehensive professional development was most strongly related to the school's initial level of capacity and principal leadership, less related to per teacher funding, least related to external assistance and district/state policy. Implications are discussed.",
"corpus_id": 144315457,
"title": "Professional Development That Addresses School Capacity: Lessons from Urban Elementary Schools"
} | {
"abstract": "There have been many survey papers in the area of project scheduling in recent years. These papers have primarily emphasized modeling and algorithmic contributions for speciic classes of project scheduling problems, such as Net Present Value (NPV) maximization and makespan minimization, with and without resource constraints. Paralleling these developments has been the research in the area of project scheduling decision support, with its emphasis on data sets, data generation methods, and so on, that are essential to benchmark, evaluate, and compare the new models, algorithms and heuristic techniques. These investigations have extended the frontiers of research and application in all areas of project scheduling and management. In this paper, we survey the vast literature in this area with a perspective that integrates models, data, and optimal and heuristic algorithms, for the major classes of project scheduling problems. We also include recent surveys that have compared commercial project scheduling systems. Finally, we present an overview of web{based decision support systems and discuss the potential of this technology in enabling and facilitating researchers and practitioners in identifying new areas of inquiry and application.",
"corpus_id": 16971142,
"score": 1,
"title": "An integrated survey of project scheduling"
} |
{
"abstract": "When treatment failures occur during the course of a clinical trial, the treatment regimen following failure may be changed. This change in therapy complicates comparisons among the original treatment arms. As in some clinical trials with dropouts, intent-to-treat analysis can yield a large bias. We examine the use of multiple imputation to replace observations after treatment failure has occurred. As a sensitivity analysis, this approach is compared to existing methods for handling treatment failures - removing treatment failure subjects, removing data after the onset of treatment failure, and imputation the last observation prior to treatment failure for all subsequent observations - in addition to an analysis of all collected data based on randomized treatment assignment. A data set from the Asthma Clinical Research Network is used to demonstrate the methods.",
"corpus_id": 476897,
"title": "Including multiple imputation in a sensitivity analysis for clinical trials with treatment failures."
} | {
"abstract": "Objective There is little information about which intimate partner violence (IPV) policies and services assist in the identification of IPV in the emergency department (ED). The objective of this study was to examine the association between a variety of resources and documented IPV diagnoses. Methods Using billing data assembled from 21 Oregon EDs from 2001 to 2005, we identified patients who were assigned a discharge diagnosis of IPV. We then surveyed ED directors and nurse managers to gain information about IPV-related policies and services offered by participating hospitals. We combined billing data, survey results, and hospital-level variables. Multivariate analysis assessed the likelihood of receiving a diagnosis of IPV depending on the policies and services available. Results In 754 597 adult female ED visits, IPV was diagnosed 1929 times. Mandatory IPV screening and victim advocates were the most commonly available IPV resources. The diagnosis of IPV was independently associated with the use of a standardized intervention checklist (odds ratio: 1.71; 95% confidence interval: 1.04–2.82). Public displays regarding IPV were negatively associated with IPV diagnosis (odds ratio 0.56; 95% confidence interval: 0.35–0.88). Conclusion IPV remains a rare documented diagnosis. Most common hospital-level resources did not demonstrate an association with IPV diagnoses; however, a standardized intervention checklist may play a role in clinician's likelihood of diagnosing IPV.",
"corpus_id": 24822974,
"title": "Association between emergency department resources and diagnosis of intimate partner violence"
} | {
"abstract": "Introduction\nIt remains unclear if Gulf War (GW) veterans have a higher risk of developing motor neuron disorder. We intended to establish baseline neurophysiological values, including thenar motor unit number estimate (MUNE) and isometric hand grip (IHG) strength, to compare future follow-ups of deployed GW veterans with or without muscular complaints.\n\n\nMaterials and Methods\nWe evaluated 19 GW veterans with self-reported weakness, cramps, or excessive muscle fatigue (Ill-19) and compared them with 18 controls without such muscular complaints (C-18). We performed MUNE on hand thenar muscles using adapted multipoint stimulation method for Ill-19 and 15 controls (C-15). We measured IHG strength (maximum force, endurance, and fatigue level) on Ill-19 and C-18 with a hand dynamometer. We performed nerve conduction studies on all study participants to determine which subjects had mild carpal tunnel syndrome (CTS). We compared the MUNE and IHG strength measures between Ill group and controls and between those with CTS and those without CTS.\n\n\nResults\nWe obtained thenar MUNE of Ill-19 (95% CI of mean: 143-215; mean age: 46 yr) and compared it with that of C-15 (95% CI of mean: 161-230; mean age: 45 yr), and 95% of CI of mean among IHG strength variables (maximum force: 324-381 Newton; endurance: 32-42 s; fatigue level: 24%-33%) compared with C-18 (maximum force: 349-408 Newton; endurance: 35-46 s; fatigue level: 21%-27%). There was no significant difference in either MUNE or IHG strength between Ill-19 group and controls. The MUNE and IHG maximum forces were significantly lower in those with CTS compared with those without CTS. As a surrogate of mild CTS, the median versus ulnar distal sensory latency on nerve conduction study was only weakly associated with MUNE, maximum force, and fatigue level, respectively.\n\n\nConclusion\nTo our knowledge, no published study on MUNE reference values of military veteran population has been available. 
The quantifiable values of both thenar MUNE and IHG strength of military veterans serve as baselines for our longitudinal follow-up of motor neuron function of deployed troops. These reference values are also useful for other laboratories to study veterans' motor system with or without mild CTS.",
"corpus_id": 4464692,
"score": 1,
"title": "Motor Unit Number Estimate and Isometric Hand Grip Strength in Military Veterans with or Without Muscular Complaints: Reference Values for Longitudinal Follow-up."
} |
{
"abstract": "Looks at the need for quality education that will propel the African continent into the future. The assesment of theory and industrial needs are addressed in the light of future demands. Change of current educational practices and forcusting on future trends of economic demands is emphasised.",
"corpus_id": 2965839,
"title": "The Future of Education and Its Challenges in Africa."
} | {
"abstract": "Throughout the post-independence period, every African country has struggled with the problematic role of higher education in development. Until the mid-1990s the role of higher education in development programmes and policies in Africa was to some extent an anomaly, with the majority of education development projects focusing on the primary school level. International donors and partners regarded universities, for the most part, as institutional enclaves without deep penetration into the development needs of African communities. As such, higher education was seen as a non-focal sector or even as a ‘luxury ancillary’. The latter view was for many years propagated, for example, by the World Bank (Brock-Utne, 2002; Hayward, 2004; Mamdami, 2008; Maassen et al., 2007; Psacharopoulos, 1986; Sawyerr, 2004)",
"corpus_id": 150864860,
"title": "Higher Education and Economic Development in Africa"
} | {
"abstract": "We document five stylized facts of economic growth. (1) The \"residual\" rather than factor accumulation accounts for most of the income and growth differences across nations. (2) Income diverges over the long run. (3) Factor accumulation is persistent while growth is not persistent and the growth path of countries exhibits remarkable variation across countries. (4) Economic activity is highly concentrated, with all factors of production flowing to the richest areas. (5) National policies closely associated with long-run economic growth rates. We argue that these facts do not support models with diminishing returns, constant returns to scale, some fixed factor of production, and that highlight the role of factor accumulation. Empirical work, however, does not yet decisively distinguish among the different theoretical conceptions of \"total factor productivity growth.\" Economists should devote more effort towards modeling and quantifying total factor productivity.",
"corpus_id": 5898661,
"score": 2,
"title": "It's Not Factor Accumulation: Stylized Facts and Growth Models"
} |
{
"abstract": "This paper describes a new and robust method for estimating circular motion geometry from an uncalibrated image sequence. Under circular motion, all the camera centers lie on a circle, and the mapping of the plane containing this circle to the horizon line in the image can be modelled as a 1D projection. A 2◊ 2 homography is introduced in this paper to relate the projections of the camera centers in two 1D views. It is shown that the two imaged circular points and the rotation angle between the two views can be derived directly from the eigenvectors and eigenvalues of such a homography respectively. The proposed 1D geometry can be nicely applied to circular motion estimation using either point correspondences or silhouettes. The method introduced here is intrinsically a multiple view approach as all the sequence geometry embedded in the epipoles is exploited in the computation of the homography for a view pair. This results in a robust method which gives accurate estimated rotation angles and imaged circular points. Experimental results are presented to demonstrate the simplicity and applicability of the new method.",
"corpus_id": 1254527,
"title": "1d camera geometry and its application to circular motion estimation"
} | {
"abstract": "Keypoint extraction and matching has been widely studied by the computer vision community, mostly focused on pinhole camera models. In this paper we perform a comparative analysis of four keypoint extraction algorithms applied to full spherical images, particularly in the context of pose estimation. Two of the methods chosen for the comparative study, namely A-KAZE and ASIFT, have been designed considering a perspective camera model, but were already applied in an omnidirectional structure from motion pipeline, generating successful results in the literature. The other two algorithms are properly adapted versions of the traditional descriptors SIFT and ORB to the spherical domain, subbed SSFIT and SPHORB. We conduct our tests on captures of omnidirectional cameras, both synthetic and real, arbitrarily translated and rotated with known ground-truth transformations. The extracted keypoints are fed to the well-known 8-point algorithm with RANSAC, allowing to estimate the relative camera poses. These poses (translation vector and rotation matrix) are then compared to the ground-truth transformation parameters, generating the error metrics used in our analysis. Our results indicated that spherical descriptors SSIFT and SPHORB did not produce better results than planar descriptors A-KAZE and ASIFT in the context of pose estimation, particularly in the evaluation with real image pairs.",
"corpus_id": 24233203,
"title": "Evaluation of Keypoint Extraction and Matching for Pose Estimation Using Pairs of Spherical Images"
} | {
"abstract": "Chit-chat models are known to have several problems: they lack specificity, do not display a consistent personality and are often not very captivating. In this work we present the task of making chit-chat more engaging by conditioning on profile information. We collect data and train models to (i) condition on their given profile information; and (ii) information about the person they are talking to, resulting in improved dialogues, as measured by next utterance prediction. Since (ii) is initially unknown our model is trained to engage its partner with personal topics, and we show the resulting dialogue can be used to predict profile information about the interlocutors.",
"corpus_id": 6869582,
"score": -1,
"title": "Personalizing Dialogue Agents: I have a dog, do you have pets too?"
} |
{
"abstract": "SummaryA pot experiment was conducted to study the transformations of organic and inorganic N in soil and its availability to maize plants. Inorganic N was in the form of15N labelled ammonium sulphate (As) and15N labelledSesbania aculeata (Sa), a legume, was used as organic N source. Plants utilized 20% of the N applied as As; presence of Sa reduced the uptake to 14%. Only 5% of the Sa-N was taken up by the plants and As had no effect on the availability of N from Sa. Losses of N from As were found to be 40% which were reduced to 20% in presence of Sa. Losses of N were also observed from Sa which increased in the presence of As. Application of As had no effect on the availability of soil or Sa-N. However, more As-N was transported into microbial biomass and humus components in the presence of Sa.Plants derived almost equal amounts of N from different sourcesi.e., soil, Sa and As. However, more As-N was transported into the shoots whereas the major portion of nitrogen in the roots was derived from Sa.",
"corpus_id": 8588218,
"title": "Transformations in soil and availability to plants of15N applied as inorganic fertilizer and legume residues"
} | {
"abstract": "A comprehensive model of terrestrial N dynamics has been developed and coupled with the geographically explicit terrestrial C cycle component of the Integrated Science Assessment Model (ISAM). The coupled C‐N cycle model represents all the major processes in the N cycle and all major interactions between C and N that affect plant productivity and soil and litter decomposition. Observations from the LIDET data set were compiled for calibration and evaluation of the decomposition submodel within ISAM. For aboveground decomposition, the calibration is accomplished by optimizing parameters related to four processes: the partitioning of leaf litter between metabolic and structural material, the effect of lignin on decomposition, the climate control on decomposition and N mineralization and immobilization. For belowground decomposition, the calibrated processes include the partitioning of root litter between decomposable and resistant material as a function of litter quality, N mineralization and immobilization. The calibrated model successfully captured both the C and N dynamics during decomposition for all major biomes and a wide range of climate conditions. Model results show that net N immobilization and mineralization during litter decomposition are dominantly controlled by initial N concentration of litter and the mass remaining during decomposition. The highest and lowest soil organic N storage are in tundra (1.24 Kg N m−2) and desert soil (0.06 Kg N m−2). The vegetation N storage is highest in tropical forests (0.5 Kg N m−2), and lowest in tundra and desert (<0.03 Kg N m−2). N uptake by vegetation is highest in warm and moist regions, and lowest in cold and dry regions. Higher rates of N leaching are found in tropical regions and subtropical regions where soil moisture is higher. The global patterns of vegetation and soil N, N uptake and N leaching estimated with ISAM are consistent with measurements and previous modeling studies. 
This gives us confidence that ISAM framework can predict plant N availability and subsequent plant productivity at regional and global scales and furthermore how they can be affected by factors that alter the rate of decomposition, such as increasing atmospheric [CO2], climate changes, litter quality, soil microbial activity and/or increased N.",
"corpus_id": 11341433,
"title": "Integration of nitrogen cycle dynamics into the Integrated Science Assessment Model for the study of terrestrial ecosystem responses to global change"
} | {
"abstract": "Summary1.The effect of temperature on growth, longevity and egg production in Chirocephalus diaphanus reared in the laboratory is described.2.The maximum growth rate occurred at 25°C and the growth rate decreased with decreasing temperature.3.Sexual maturity occurred earliest at 25°C and occurred later with decreasing temperature.4.The greatest longevity of a culture occurred at 10°C, the least at 25°C. The greatest final length was attained in animals kept at 10°C, the smallest final length occurred in animals kept at 25°C.5.Egg production was highest at 10°C and lowest at 5°C, whilst the greatest total number of eggs produced per female was 178 at 10°C.6.The results of this study are compared with those previously published for other species of Anostraca.Résumé1.L'effet de la température sur la croissance, la durée de vie et la ponte du Chirocephalus diaphanus élevé en laboratoire se décrit.2.La croissance maximum s'est produite à 25°C, elle a diminué lorsqu'on a abaissé la température.3.C'est à 25°C que la puberté s'est produite le plus tôt. Avec des températures plus basses celle-ci est arrivée plus tard.4.A 10°C, nous avons obtenu la culture avec la durée de vie la plus longue. C'est à 25°C qu'a été observée la durée de vie la plus réduite. A 10°C, les Chirocephalus diaphanus ont atteint leur longueur maximum. Ils ont été les moins longs à 25°C.5.La ponte a atteint son maximum à 10°C; son minimum à 5°C, quant au nombre maximum d'oeufs déposés par une femelle, il a été de 178 à 10°C.6.Les résultats donnés ici sont comparés à ceux ont été publiés auparavant pour les autres espèces de Anostracés.",
"corpus_id": 32438303,
"score": 1,
"title": "The effect of temperature on growth, longevity and egg production in Chirocephalus diaphanus prévost (Crustacea: Anostraca)"
} |
{
"abstract": "According to data received from an international survey, almost 6800 students are enrolled in software engineering degree programs in 11 countries, as of January, 2001. A total of 94 academic programs in software engineering are in place at 60 universities with 350 full-time faculty and nearly 200 part-time faculty teaching hundreds of undergraduate and graduate courses in the discipline. Over 5500 people have obtained degrees in software engineering since 1979. The authors are conducting the first of an ongoing annual survey of international academic software engineering programs, as a joint ACM/IEEE-CS project. This status report covers: history, audience, initial survey, initial partial results available on the WWW, request for evaluation of WWW-site, request for additional questions for next version of survey, time-line for next version of the survey, \"lessons learned,\" and some future directions. The annual report and survey results will be posted on a wide variety of Web pages. A more current report, based on the sabbatical of the first author, will be presented at the conference. The sabbatical involves the initial development of an \"International Software Engineering University Consortium-ISEUC\". A sample scenario for an employee in industry who becomes a student in ISEUC is given.",
"corpus_id": 726005,
"title": "Academic software engineering: what is and what could be? Results of the first annual survey for international SE programs"
} | {
"abstract": "In the November, 2001 issue of Crosstalk, the emphasis was on “distributed software development” with several provocative articles. Elizabeth Starrett, in the editorial column, wisely asked which distributed development concept the reader would prefer: distributed development of software, development of distributed software, or distributed development of distributed software. Reading the entire article ignited a spark – what about the “distributed development of software professionals?” This is exactly the focus of this paper – the distributed development of software professionals around the world with the assistance of international universities recognized for their software engineering expertise, combined with the use of hybrid learning technologies, for providing high-quality credit and non-credit courses at all levels. Providing software engineering (SE) training and education on a global basis is a priority of several organizations. The primary markets are corporations wanting to develop reliable, robust, and useful software products in a timely and efficient fashion, but whose professionals do not currently have state-of-the-art knowledge or skills. As a response, the author instigated the International Software Engineering University Consortium ISEUC in 2000. Other “players” include individual universities, university consortia, ACM, IEEE, U.S. Department of Defense and book publishers. ISEUC is a worldwide consortium of universities designed to provide SE courses via distributed learning, primarily using the Internet. ISEUC, a group of 35 universities, was selected from the 100+ responders to a SE survey funded in 1999 by ACM and IEEE-CS. ISEUC was slated to begin initial operations in September 2003, based on the results of visits to Australia, Canada, the U.K., and the U.S.A. This paper gives a description of ISEUC,",
"corpus_id": 59975661,
"title": "Distributed Development Of Software Engineering Professionals"
} | {
"abstract": "In this paper, we propose a new method for stitching multiple fluoroscopic images taken by a C-arm instrument. We employ an X-ray radiolucent ruler with numbered graduations while acquiring the images, and the image stitching is based on detecting and matching ruler parts in the images to the corresponding parts of a virtual ruler. To achieve this goal, we first detect the regular spaced graduations on the ruler and the numbers. After graduation labeling, for each image, we have the location and the associated number for every graduation on the ruler. Then, we initialize the panoramic X-ray image with the virtual ruler, and we “paste” each image by aligning the detected ruler part on the original image, to the corresponding part of the virtual ruler on the panoramic image. Our method is based on ruler matching but without the requirement of matching similar feature points in pairwise images, and thus, we do not necessarily require overlap between the images. We tested our method on eight different datasets of X-ray images, including long bones and a complete spine. Qualitative and quantitative experiments show that our method achieves good results.",
"corpus_id": 2966638,
"score": 1,
"title": "Ruler Based Automatic C-Arm Image Stitching Without Overlapping Constraint"
} |
{
"abstract": "THE PHARMACIST SUPPLY IN THE UNITED STATES, 1994-2009: A POPULATION ECOLOGY PERSPECTIVE By Kevin Shawn Joseph Lett A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy at Virginia Commonwealth University Major Director: Director: Dr. Michael A. Pyles The U.S. healthcare system is a complex segment of our society that is constantly evolving with changes to various areas such as education, financing, safety, and health. There continues to be a critical examination of how healthcare professionals are trained and utilized as healthcare demands increase. One category of healthcare professionals that has evolved over time to address societal needs is pharmacists. Pharmacists have kept their traditional function of dispensing medications while expanding into multiple areas of expertise and training from patient counseling and drug therapy, to being part of multidisciplinary teams treating acute care patients. According to the National Association of Boards of Pharmacy (NABP) in 2009 there were approximately 265,000 licensed pharmacists in the U.S. (NABP, 2010). The Health Resources and Services Administration (HRSA) reported the settings with the largest number of positions are chain pharmacies (77,300), hospitals (49,200), and independent pharmacies (36,200) (DHHS, 2008). The ratio of pharmacists per 100,000 population is expected to increase from 68.9 pharmacists per 100,000 population to 76.7 per 100,000 between 1995 and 2020 (Gershon, Cultice, & Knapp, 2000). This increase in the pharmacist to population ratio is consistent with a growth rate of 13% during this time period of time. Until 1998, the supply of pharmacists in the U.S. appeared to be in reasonable balance with demand. Market forces gradually upset the delicate balance between the supply of pharmacists and the demand for their services between 1998 and 2009. 
In particular, a precipitous increase in the volume of prescription written and filled during this time period contributed to upsetting this delicate balance between the supply of pharmacists and demand (Cooksey, Walton, Stankewicz, & Knapp, 2003). Researchers have noted a number of environmental factors affecting the pharmacist supply in the U. S. This inquiry explores these factors within the context of the population ecology theoretical framework. In addition to the volume of prescriptions, additional environmental factors believed to have a discernible impact on the pharmacist supply include, the number of physicians, size of the business industry and insurance coverage. Previous studies on pharmacists supply have pointed to income, physician population, and population among other variables that predict the demand for pharmacists (Walton, Cooksey, Knapp, Quist, & Miller, 2004; Cherry, D.K., Woodwell, D.A., & Rechtsteiner 2007; Walton, Knapp, Miller, & Schumock 2007). U. S. physicians wrote over 4 billion prescriptions in 2007 (Medical Expenditure Panel Survey, 2008). Physicians are the primary healthcare providers that generate prescriptions to be filled. Consequently, the number of physicians is believed to be a significant environmental factor affecting the supply of pharmacists. There were approximately 940,000 physicians in the U. S. in 2008. Projections call for continuous growth of the number of physicians well into the future (Smart, 2010). Another important environmental factor potentially impacting the demand for pharmacists is the size of the business industry. In 2006, the health plan offer rate for large or medium organizations (50 or more employees) was 96.7% compared to 61.2% for small organizations (50 or less employees) (Sommers & Crimmel, 2008; Crimmel & Sommers, 2008). 
Insurance coverage has the potential to have a positive impact on the demand for pharmacists because it provides the opportunity to obtain required prescriptions (Ranji, Wyn, Salganicoff, & Yu, 2007; Weinick, Byron, & Bierman, 2005). The population ecology theoretical framework has been used in the study of restaurants, newspapers, and physicians and their interactions with their surrounding environments. The theoretical framework proved to be beneficial in the exploration of the pharmacist supply vis-a-vis the environment. The primary constructs in the population ecology theory are carrying capacity and density. Carrying capacity consists of two sub-constructs: munificence and concentration. Density points to the current pharmacists supply and its impact on the future pharmacist supply. Numerous variables have been used in previous empirical studies of the pharmacist supply. Among the indicators of munificence in previous studies in the extant literature on pharmacist supply are total population, elderly population, hospitals, and median household income. In the present inquiry, total population was found to be a statistically significant environmental factor affecting the pharmacist supply. This was hypothesized that there is a positive linear relationship between total population and the pharmacist supply. The number of hospitals with pharmacies was also found to be a statistically significant environmental factor affecting the pharmacist supply. Hospital pharmacies are important venues wherein pharmacists can demonstrate their unique expertise and make discernible contributions to desirable health care outcomes when pharmaceutical interventions are required. In light of this empirical finding, it seems reasonable that a growth in hospital pharmacies corresponds with an increased demand for pharmacists (Kaboli, Hoth, McClimon, & Schnipper, 2006). 
Measures of the concentration dimension included the number of hospital beds per 100,000 population, employer volume and size and the number of insured. The only putative indicator of concentration that was found to be statistically significant in this inquiry was the number of employers with 20 or more employees. Previous pharmacist supply was found to be a significant environmental factor affecting the pharmacist supply in the future. Thus, density is a significant environmental factor affecting the pharmacist supply. Five of the 13 hypotheses tested in this inquiry were accepted. These findings are consistent with related findings in the extant literature on the pharmacist supply. Empirical findings from this inquiry are believed to make significant contributions to the literature on the pharmacist supply. The population ecology theoretical framework appears to be a suitable tool for exploring environmental factors affecting the pharmacist supply. Recommendations for future research are presented in the final chapter.",
"corpus_id": 68757048,
"title": "The Pharmacist Supply in the United States, 1994-2009: A Population Ecology Perspective"
} | {
"abstract": null,
"corpus_id": 750096,
"title": "Predicting the impact of Medicare Part D implementation on the pharmacy workforce."
} | {
"abstract": "Issues related to women's physical health and health care reflect the broader concerns of women who must function within systems that have been constructed by and for men. The health care system in this country is still very much a male-dominated institution in which the demands on women to fit a male model are especially cogent (Lee, 1975). Because women traditionally have had primary responsibility for the care of children and for the ill and aging in their families, they typically assume greater responsibility in health matters than do men. Yet there is evidence that women as a group, and particularly women of limited educational, social, and economic resources, encounter significant obstacles to obtaining adequate diagnosis and treatment of medical disorders. From a prevention perspective also, less attention has been paid to the health risks of women than to those of men.",
"corpus_id": 5602458,
"score": -1,
"title": "Women's health issues."
} |
{
"abstract": "In this paper we establish a suprising fundamental identity for Parseval frames in a Hilbert space. Several variations of this result are given, including an extension to general frames. Finally, we discuss the derived results.",
"corpus_id": 1427531,
"title": "A fundamental identity for Parseval frames"
} | {
"abstract": "Abstract We use two appropriate bounded invertible operators to define a controlled frame with optimal frame bounds. We characterize those operators that produces Parseval controlled frames also we state a way to construct nearly Parseval controlled frames. We introduce a new perturbation of controlled frames to obtain new frames from a given one. Also we reduce the distance of frames by appropriate operators and produce nearly dual frames from two given frames which are not dual frames for each other.",
"corpus_id": 124060047,
"title": "SOME RESULTS ON CONTROLLED FRAMES IN HILBERT SPACES"
} | {
"abstract": "Abstract We discuss how infrared region influence on short distance physics via new object, called “short string”. This object exists in confining theories and violates the operator product expansion. Most analytical results are obtained for the dual Abelian Higgs theory, while phenomenological arguments are given for QCD.",
"corpus_id": 41879716,
"score": 1,
"title": "Short strings and gluon propagator in the infrared region"
} |
{
"abstract": "Augmented Reality (AR) can extend digital world to real world. Augmented Reality not just developed; it had been around there for many years. Actual term of AR came around in 1990's. AR experienced by using headset or mobiles. AR has many applications in military, medical field, entertainment (games) and education. In this paper different AR games are analyzed that are categorize as entertainment and educational games, some tools to develop augmented reality games are also discussed. Some parameters are also used and describe in this paper to analyze survey papers and to analyze tools for making augmented reality games. As most people think that games are just for fun but AR games usage for educational purpose increases; for making lectures attractive, grasping attention of students, teaching history and for higher retention. Augmented reality in mobile games enhances learning methodologies and also provides paramount entertainment.",
"corpus_id": 24785608,
"title": "Emerging trends in augmented reality games"
} | {
"abstract": "Augmented Reality (AR) refers to the science and technology which enables 3D/2D computer graphics to overlay video frames in real time so as to add contextual information to deepen people's understanding of the real scene. In particular, outdoor AR as an emerging technology opens up new possibilities and applications for Location Based Services (LBS). As with the increasing amount of 3D city models and development of 3D computer graphics technology, insufficient research attention has been put into 3D city model based AR navigation, this paper aims to contribute to outdoor AR tracking research by using 3D city models and a 3D game engine. The camera pose and position are calculated through 2D/3D correspondence between the live video frame and 3D city models. The augmented live video feed is rendered in a 3D game engine while the 3D model runs in the background to facilitate the real time registration of virtual objects into the real scene as well as position determination of the moving camera.",
"corpus_id": 16840882,
"title": "Outdoor augmented reality tracking using 3D city models and game engine"
} | {
"abstract": "Learning mathematics is one of the most important aspects that determine the future of learners. However, mathematics as one of the subjects is often perceived as being complicated and not liked by the learners. Therefore, we need an application with the use of appropriate technology to create visualization effects which can attract more attention from learners. The application of Augmented Reality technology in digital game is a series of efforts made to create a better visualization effect. In addition, the system is also connected to a leaderboard web service in order to improve the learning motivation through competitive process. Implementation of Augmented Reality is proven to improve student's learning motivation moreover implementation of Augmented Reality in this game is highly preferred by students.",
"corpus_id": 16814116,
"score": -1,
"title": "ARmatika: 3D game for arithmetic learning with Augmented Reality technology"
} |
{
"abstract": "In this paper, we propose an unsupervised face clustering algorithm called “Proximity-Aware Hierarchical Clustering” (PAHC) that exploits the local structure of deep representations. In the proposed method, a similarity measure between deep features is computed by evaluating linear SVM margins. SVMs are trained using nearest neighbors of sample data, and thus do not require any external training data. Clus- ters are then formed by thresholding the similarity scores. We evaluate the clustering performance using three challenging un- constrained face datasets, including Celebrity in Frontal-Profile (CFP), IARPA JANUS Benchmark A (IJB-A), and JANUS Challenge Set 3 (JANUS CS3) datasets. Experimental results demonstrate that the proposed approach can achieve significant improvements over state-of-the-art methods. Moreover, we also show that the proposed clustering algorithm can be applied to curate a set of large-scale and noisy training dataset while maintaining sufficient amount of images and their variations due to nuisance factors. The face verification performance on JANUS CS3 improves significantly by finetuning a DCNN model with the curated MS-Celeb-1M dataset which contains over three million face images.",
"corpus_id": 1018804,
"title": "A Proximity-Aware Hierarchical Clustering of Faces"
} | {
"abstract": "We present a novel clustering algorithm for tagging a face dataset (e. g., a personal photo album). The core of the algorithm is a new dissimilarity, called Rank-Order distance, which measures the dissimilarity between two faces using their neighboring information in the dataset. The Rank-Order distance is motivated by an observation that faces of the same person usually share their top neighbors. Specifically, for each face, we generate a ranking order list by sorting all other faces in the dataset by absolute distance (e. g., L1 or L2 distance between extracted face recognition features). Then, the Rank-Order distance of two faces is calculated using their ranking orders. Using the new distance, a Rank-Order distance based clustering algorithm is designed to iteratively group all faces into a small number of clusters for effective tagging. The proposed algorithm outperforms competitive clustering algorithms in term of both precision/recall and efficiency.",
"corpus_id": 206591583,
"title": "A rank-order distance based clustering algorithm for face tagging"
} | {
"abstract": "With the emergence of GPU computing, deep neural networks have become a widely used technique for advancing research in the field of image and speech processing. In the context of object and event detection, sliding-window classifiers require to choose the best among all positively discriminated candidate windows. In this paper, we introduce the first GPU-based non-maximum suppression (NMS) algorithm for embedded GPU architectures. The obtained results show that the proposed parallel algorithm reduces the NMS latency by a wide margin when compared to CPUs, even clocking the GPU at 50% of its maximum frequency on an NVIDIA Tegra K1. In this paper, we show results for object detection in images. The proposed technique is directly applicable to speech segmentation tasks such as speaker diarization.",
"corpus_id": 4211178,
"score": -1,
"title": "Work-efficient parallel non-maximum suppression for embedded GPU architectures"
} |
{
"abstract": "In order to test the social mechanisms through which organizational climate emerges, this article introduces a model that combines transformational leadership and social interaction as antecedents of climate strength (i.e., the degree of within-unit agreement about climate perceptions). Despite their longstanding status as primary variables, both antecedents have received limited empirical research. The sample consisted of 45 platoons of infantry soldiers from 5 different brigades, using safety climate as the exemplar. Results indicate a partially mediated model between transformational leadership and climate strength, with density of group communication network as the mediating variable. In addition, the results showed independent effects for group centralization of the communication and friendship networks, which exerted incremental effects on climate strength over transformational leadership. Whereas centralization of the communication network was found to be negatively related to climate strength, centralization of the friendship network was positively related to it. Theoretical and practical implications are discussed.",
"corpus_id": 5841690,
"title": "Transformational leadership and group interaction as climate antecedents: a social network analysis."
} | {
"abstract": "Ray, Baker, and Plowman's (2011) study of organizational mindfulness highlights latent tensions in the mindfulness literature and promising avenues for future research. Their study provides a springboard for reconciling the literature by differentiating organizational mindfulness from mindful organizing, establishing where organizational mindfulness and mindful organizing are most important, and clarifying how and when each construct can be most fruitfully deployed in research and practice. Clearer theorizing leads to a set of research questions that seek to integrate multiple conceptions of individual and organizational mindfulness, establish their individual and organizational antecedents, explore the consequences for individuals and organizations, and in so doing, further increase the relevance of organizational mindfulness for business schools.",
"corpus_id": 8567805,
"title": "Organizational Mindfulness and Mindful Organizing: A Reconciliation and Path Forward"
} | {
"abstract": "The connections in many networks are not merely binary entities, either present or not, but have associated weights that record their strengths relative to one another. Recent studies of networks have, by and large, steered clear of such weighted networks, which are often perceived as being harder to analyze than their unweighted counterparts. Here we point out that weighted networks can in many cases be analyzed using a simple mapping from a weighted network to an unweighted multigraph, allowing us to apply standard techniques for unweighted graphs to weighted ones as well. We give a number of examples of the method, including an algorithm for detecting community structure in weighted networks and a simple proof of the maximum-flow--minimum-cut theorem.",
"corpus_id": 1054844,
"score": -1,
"title": "Analysis of weighted networks"
} |
{
"abstract": "Non-stationarities are ubiquitous in EEG signals. They are especially apparent in the use of EEG-based brain–computer interfaces (BCIs): (a) in the differences between the initial calibration measurement and the online operation of a BCI, or (b) caused by changes in the subject's brain processes during an experiment (e.g. due to fatigue, change of task involvement, etc). In this paper, we quantify for the first time such systematic evidence of statistical differences in data recorded during offline and online sessions. Furthermore, we propose novel techniques of investigating and visualizing data distributions, which are particularly useful for the analysis of (non-)stationarities. Our study shows that the brain signals used for control can change substantially from the offline calibration sessions to online control, and also within a single session. In addition to this general characterization of the signals, we propose several adaptive classification schemes and study their performance on data recorded during online experiments. An encouraging result of our study is that surprisingly simple adaptive methods in combination with an offline feature selection scheme can significantly increase BCI performance.",
"corpus_id": 9565267,
"title": "Towards adaptive classification for BCI"
} | {
"abstract": "Classifying motion intentions in brain?computer interfacing (BCI) is a demanding task as the recorded EEG signal is not only noisy and has limited spatial resolution but it is also intrinsically non-stationary. The non-stationarities in the signal may come from many different sources, for instance, electrode artefacts, muscular activity or changes of task involvement, and often deteriorate classification performance. This is mainly because features extracted by standard methods like common spatial patterns (CSP) are not invariant to variations of the signal properties, thus should also change over time. Although many extensions of CSP were proposed to, for example, reduce the sensitivity to noise or incorporate information from other subjects, none of them tackles the non-stationarity problem directly. In this paper, we propose a method which regularizes CSP towards stationary subspaces (sCSP) and show that this increases classification accuracy, especially for subjects who are hardly able to control a BCI. We compare our method with the state-of-the-art approaches on different datasets, show competitive results and analyse the reasons for the improvement.",
"corpus_id": 2489635,
"title": "Stationary common spatial patterns for brain-computer interfacing"
} | {
"abstract": "In recent years, several visual programming languages and tools are emerging, which allow young students to easily program applications. Particularly, the block-based language used by Scratch has been the standard in most school initiatives to introduce Computational thinking (CT) in courses unrelated to computing. However, CT competences are not specifically included in the curricula of many Higher Education degrees that future teachers of Primary and Secondary Education have to complete. This paper describes a workshop for teachers' training on CT. It is based on the block-based common language of Scratch, but focused on enhancing teachers' skills to develop mobile applications with a tool based on the MIT's AppInventor. This workshop provided some insights on the capabilities of future teachers in the use of programming tools1.",
"corpus_id": 21142399,
"score": -1,
"title": "Bringing computational thinking to teachers' training: a workshop review"
} |
{
"abstract": "We introduce Concentrated Differential Privacy, a relaxation of Differential Privacy enjoying better accuracy than both pure differential privacy and its popular \"(epsilon,delta)\" relaxation without compromising on cumulative privacy loss over multiple computations.",
"corpus_id": 14861086,
"title": "Concentrated Differential Privacy"
} | {
"abstract": "In this article, we introduce a new and general privacy framework called Pufferfish. The Pufferfish framework can be used to create new privacy definitions that are customized to the needs of a given application. The goal of Pufferfish is to allow experts in an application domain, who frequently do not have expertise in privacy, to develop rigorous privacy definitions for their data sharing needs. In addition to this, the Pufferfish framework can also be used to study existing privacy definitions. We illustrate the benefits with several applications of this privacy framework: we use it to analyze differential privacy and formalize a connection to attackers who believe that the data records are independent; we use it to create a privacy definition called hedging privacy, which can be used to rule out attackers whose prior beliefs are inconsistent with the data; we use the framework to define and study the notion of composition in a broader context than before; we show how to apply the framework to protect unbounded continuous attributes and aggregate information; and we show how to use the framework to rigorously account for prior data releases.",
"corpus_id": 377285,
"title": "Pufferfish: A framework for mathematical privacy definitions"
} | {
"abstract": "State-of-the-art deep neural networks have achieved impressive results on many image classification tasks. However, these same architectures have been shown to be unstable to small, well sought, perturbations of the images. Despite the importance of this phenomenon, no effective methods have been proposed to accurately compute the robustness of state-of-the-art deep classifiers to such perturbations on large-scale datasets. In this paper, we fill this gap and propose the DeepFool algorithm to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers. Extensive experimental results show that our approach outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.1",
"corpus_id": 12387176,
"score": -1,
"title": "DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks"
} |
{
"abstract": "This paper identifies the existence of two kinds of agency problem by focusing on rights issues by Chinese firms. It begins by examining the context in which state-designated managers are strongly encouraged by the government to provide funds for its use, often at the expense of their companies' interests. The statistics presented in this paper further show how Chinese managers transfer these cash funds, raised from their public shareholders, via rights issues, to the state as cash dividends. An empirical examination of 459 rights issues by Chinese firms from 1999 to 2004 shows how Chinese managers have inflated their earnings prior to rights issues in order to gain regulatory permission and thus process the cash transfers more smoothly. This is evidence of the first type of agency problem: managers behaving in a way that is against the interests of their shareholders. In addition, during rights issues. existing shareholders can purchase new shares at lower prices and as a result there is a substantial drop in both the firm's share price and its returns. Wealth reallocation from the nation to subscribed shareholders occurs during this period, which implies that state-designated managers disregard this obligation to safeguard national assets. This practice is a sign of the second type of agency problem: Chinese state-designated managers violating the national interest for the sake of their own self-interest.",
"corpus_id": 153035661,
"title": "Rights Issues in China as Evidence for the Existence of Two Types of Agency Problems"
} | {
"abstract": "Purpose of this paper is to investigate whether HR slack leads to improve firm performance, and what is the impact of HR slack in absolute and in relative terms on firm performance in a developing country. It also examines how ownership types moderate the HR slack-performance relationship. The longitudinal data-set of 11,985 firms-year observations from 2000-2009 were used and generalized linear models (GLMs) employed for analyzing data. The findings reveal that (1) absolute HR slack (AHRS) leads to enhance firm performance; (2) AHRS is positively and relative HR slack (RHRS) is negatively affected firm performance; (3) both AHRS and RHRS have inverse U-shaped effects on firm performance. (4) AHRS is positively influenced on performance of both state-owned enterprises (SOEs), and private owned enterprises (POEs). RHRS is negatively affect performance of SOEs. It can be concluded that both absolute and relative HR slacks lead to increase the firm performance up to a certain level, thereafter, firm performance is declined (Curvilinear relationship). The paper is original in its contribution to the organizational slack–firm performance literature by examining the relevance of absolute and relative HR slacks as indispensable sources.",
"corpus_id": 154369265,
"title": "Impact of Human Resource Slacks on Firm Performance: Evidence from a Developing Country"
} | {
"abstract": "In a takeover, wealth transfers from bidder and target equityholders to target debtholders can occur if target debt is coinsured by either the bidder's assets or by the synergy itself. Such wealth transfers reduce bidder and target shareholder gains and could poison the acquisition. With a sample from 1979-90, the author finds that, as the coinsurance potential of a firm's debt--measured as the amount of relatively risky debt outstanding--increases, its likelihood of being acquired decreases. In particular, he finds this coinsurance deterrent to be strongest during the 1985-90 period and strongest for firms with public debt outstanding. Copyright 1996 by University of Chicago Press.",
"corpus_id": 153365163,
"score": 2,
"title": "Targeting Capital Structure: The Relationship between Risky Debt and the Firm's Likelihood of Being Acquired"
} |
{
"abstract": "The paper uses Family Expenditure Survey data to estimate a reduced form, cross-section model, of mortgage demand. A double hurdle model is estimated which takes account of potential mortgage rationing. The model contrasts with the usual limited dependent variable model (Tobit) where zero values for purchases are observed and treated as equilibrium observations. The empirical specification allows for the deregulation of financial services in the 1980s, and tests for the impact of this on mortgage demand. The null hypothesis that capital markets are perfect and that rationing was not a binding constraint over the sample under study was rejected. The research seeks to better understand the impact of credit rationing upon household behavior. Copyright 1995 by Blackwell Publishing Ltd",
"corpus_id": 153491409,
"title": "Rationing, Mortgage Demand and the Impact of Financial Deregulation"
} | {
"abstract": "This paper derives measures of the average and marginal incidence of a tax or subsidy in imperfect competition, in the context of the UK housing market. We argue that one form of mortgage, common in the UK but not elsewhere (the endowment mortgage), exists primarily because of the structure of taxation in the UK. We estimate the determinants of the choice of the type of mortgage, and the size of mortgage conditional on the choice, using data from the Building Societies Association on 43 000 individual mortgages taken out between 1985 and 1989. The estimated parameters are an input to the incidence measures. Results suggest that between 70 and 80% of the additional subsidy to endowment mortgages is captured by lenders, rather than borrowers.",
"corpus_id": 15184089,
"title": "Measuring tax incidence: an application to mortgage provision in the UK"
} | {
"abstract": "This paper investigates the impact of migration on Italian inbound tourism flows in a dynamic panel data framework. Arrivals, expenditure and nights from 65 countries are analysed for the period 2005–2011. The migration variable is defined at both origin and destination in order to assess the pushing and pulling forces. Estimates were performed using both aggregated flows and flows disaggregated to separate the visiting friends and relatives (VFRs) from two non-VFR categories, namely holiday and business. The results suggest the presence of a strong migration-tourism nexus, which clearly goes beyond VFRs. Moreover, the effects of the different determinants vary according to the way in which the tourism market is segmented and, within each segment, to the way in which tourism demand is measured.",
"corpus_id": 55096977,
"score": 1,
"title": "Migration and inbound tourism: an Italian perspective"
} |
{
"abstract": "We present a predictive control scheme to regulate the fluid pressure in the reservoir rock of deep geothermal systems. Controlling the fluid pressure profile is important to avoid strong seismic events during hydraulic stimulation. The introduced predictive controller builds on a nonlinear, uncertain, and non-differentiable model describing the pressurization and seismicity in the reservoir. Since measurements of system states are limited, we additionally design an unscented Kalman filter to solve the observation problem.",
"corpus_id": 9586313,
"title": "Predictive pressure control in deep geothermal systems"
} | {
"abstract": "In order to deal with the computational burden of optimal control, it is common practice to reduce the degrees of freedom by fixing the input or its derivatives to be constant over several time-steps. This policy is referred to as \"move blocking\". This paper addresses two issues. First, a survey of various move blocking strategies is presented and the shortcomings of these blocking policies, such as the lack of stability and constraint satisfaction guarantees, are illustrated. Second, a novel move blocking scheme, \"moving window blocking\" (MWB), is presented. In MWB, the blocking strategy is time-dependent such that the scheme yields stability and feasibility guarantees for the closed-loop system. Finally, the results of a large case-study are presented that illustrate the advantages and drawbacks of the various control strategies discussed in this paper.",
"corpus_id": 8669653,
"title": "Move blocking strategies in receding horizon control"
} | {
"abstract": "Swarm-like seismic activity including six moderate events (Mj = 5.1–5.4) occurred in 1989, 1990 and 1997 in the same area as the 2000 Western Tottori Earthquake (Mj = 7.3). For each time period, we carried out temporary seismic observations in and around the source area and processed the data together with data from permanent stations, to determine the hypocenters precisely. In this study we also redetermined the earthquake locations in each seismic activity using a two-step master event technique with common master events, so that the accuracy in the relative locations of the events was improved. The purpose of this study is to clarify the relationship between the preceding seismic activity and the mainshock in 2000 by comparing the hypocenter distributions. The relocated hypocenter distributions show that the three preceding swarms occurred in different parts of the same fault plane as the 2000 Western Tottori Earthquake. The b-values of the preceding swarms were low (0.51–0.67), suggesting a high stress level in the area. The mainshock initiated in the area of the preceding swarms. The rupture propagated with relatively small slip (∼1 m) in the area for the first three seconds. Then, it developed to main rupture with large slip (2–4 m) outside the area toward the southeast.",
"corpus_id": 56269391,
"score": 1,
"title": "Swarm-like seismic activity in 1989, 1990 and 1997 preceding the 2000 Western Tottori Earthquake"
} |
{
"abstract": "Objective: To determine the risk factors associated with tooth loss between the ages of 18 and 26. Methods: Dental examinations at ages 18 and 26 were conducted on Study members in the Dunedin Multidisciplinary Health and Development Study, and sociodemographic and dental service use data were collected using a self–report questionnaire. At age 15, an estimate of socio–economic status (SES) for each Study member had been obtained by classifying the occupation of the male parent. A case of tooth loss was defined as an individual who had lost one or more teeth (excluding third molars) due to caries between ages 18 and 26. Logistic regression and Poisson analysis were used to model the occurrence of tooth loss. Results: Among the 821 study members who were examined at both ages, one or more teeth were lost because of caries by 85 (10.3%). After controlling for sex, SES and visiting pattern, baseline caries experience predicted subsequent tooth loss, with the odds increasing by 2.8 for every increase by 1 in the number of decayed surfaces present at age 18. Episodic dental visitors had 3.1 times the odds of their routine visiting counterparts of losing a tooth over the observation period. The number of teeth lost was, on average, 2.3 times higher among episodic dental visitors. Conclusions: Socio–economic inequalities in tooth loss appear to begin early in the life course, and are modified by individuals’ SES and dental visiting patterns.",
"corpus_id": 24394369,
"title": "Socio–Economic and Behavioural Risk Factors for Tooth Loss from Age 18 to 26 among Participants in the Dunedin Multidisciplinary Health and Development Study"
} | {
"abstract": "UNLABELLED\nTooth loss diminishes oral function and quality of life, and national health targets aim to reduce population levels of tooth loss.\n\n\nOBJECTIVES\nThe purpose of this study was to determine tooth loss incidence and predictors of tooth loss among older adults in South Australia.\n\n\nMETHODS\nData were obtained from a cohort study of a stratified random sample of community-dwelling dentate people aged 60+ years. Interviews and oral examinations were conducted among 911 individuals at baseline and among 693 of them (76.1%) 2 years later. Incidence rates and relative risks were calculated for population subgroups and multivariate logistic regression was used to construct risk prediction models. A method was developed to calculate 95% confidence intervals (95% CI) for relative risks (RR) from logistic regression models using a Taylor series approximation.\n\n\nRESULTS\nSome 19.5% (95% CI = 15.4-23.6%) of people lost one or more teeth during the 2 years. Men, people with a recent extraction, people who brushed their teeth infrequently, smokers and people born outside Australia had significantly (P < 0.05) greater risk of tooth loss. Baseline clinical predictors of tooth loss included more missing teeth, retained roots, decayed root surfaces, periodontal pockets and periodontal recession. In a multivariate model that controlled for baseline clinical predictors, former smokers (RR = 2.55, 95% CI = 1.48-4.40) and current smokers (RR = 2.06, 95% CI = 0.92-4.62) had similarly elevated risks of tooth loss compared with non-smokers.\n\n\nCONCLUSIONS\nThe findings from this population suggest that a history of smoking contributes to tooth loss through mechanisms in addition to clinical disease processes alone.",
"corpus_id": 1911626,
"title": "Two-year incidence of tooth loss among South Australians aged 60+ years."
} | {
"abstract": "The traditional measure of caries, the DMF index, either as prevalence or incidence of disease, has become highly positively skewed among children and young adults. Most discussion of skewed distributions has focused on the properties of statistical analyses using such data or the implications for sample sizes and subject selection in clinical trials. This paper examines the full range of epidemiologic studies, their aims and constitutive interest in order to identify the measurement problems associated with skewed DMF index data. Constitutive interests include: description; documentation; explanation and prediction; evaluation; advocacy; and, experimentation. 'New' outcome measures that would assist in reaching the aims and constitutive interests of the epidemiology of caries include caries severity grading, variants of prevalence, extent and severity and their combination into case definitions, and weighting of the components of the DMF index. Research questions for each area of 'new' outcome measures are identified as steps in the codifying of their use in the epidemiology of caries.",
"corpus_id": 24014486,
"score": -1,
"title": "Skewed distributions--new outcome measures."
} |
{
"abstract": "To determine the cost‐effectiveness of the Dexcom G6 real‐time continuous glucose monitoring (rt‐CGM) system compared with both the self‐monitoring of blood glucose (SMBG) and the Abbott FreeStyle Libre 1 and 2 intermittently scanned CGM (is‐CGM) devices in people with type 1 diabetes receiving multiple daily insulin injections in Denmark.",
"corpus_id": 259193195,
"title": "Cost‐utility of real‐time continuous glucose monitoring versus self‐monitoring of blood glucose and intermittently scanned continuous glucose monitoring in people with type 1 diabetes receiving multiple daily insulin injections in Denmark"
} | {
"abstract": null,
"corpus_id": 2471985,
"title": "Health State Utilities Associated with Glucose Monitoring Devices."
} | {
"abstract": "Introduction The objective of this study was to give an overview of prevalence, incidence and mortality of type 1 (T1D) and type 2 diabetes (T2D) in Denmark, and their temporal trends. Research design and methods We constructed a diabetes register from existing population-based healthcare registers, including a classification of patients as T1D or T2D, with coverage from 1996 to 2016. Using complete population records for Denmark, we derived prevalence, incidence, mortality and standardized mortality ratio (SMR). Results The overall prevalence of diabetes at 2016 was 0.5% for T1D and 4.4% for T2D, with annual increases since 1996 of 0.5% for T1D and 5.5% for T2D. Incidence rates of T1D decreased by 3.5% per year, with increase for persons under 25 years of age and a decrease for older persons. T2D incidence increased 2.5% per year until 2011, decreased until 2014 and increased after that, similar in all ages. The annual decrease in mortality was 0.3% for T1D and 2.9% for T2D. The mortality rate ratio between T1D and T2D was 1.9 for men and 1.6 for women. SMR decreased annually 2% for T1D and 0.5% for T2D. Conclusions Incidence and prevalence of diabetes is increasing, but mortality among patients with diabetes in Denmark is decreasing faster than the mortality among persons without diabetes. T1D carries a 70% higher mortality than T2D.",
"corpus_id": 219169867,
"score": -1,
"title": "Prevalence, incidence and mortality of type 1 and type 2 diabetes in Denmark 1996–2016"
} |
{
"abstract": "Being one of the leading signaling protocols of VoIP applications, SIP protocol becomes popular in IP-based multimedia services, and securing SIP has become a priority. In this paper, we develop a novel authentication scheme which relies only on one way hash functions. In contrast to the computationally expensive asymmetric RSA signature scheme, our scheme is efficient in signing and verifying procedures. And hash tree is exploited to store and verify key information. Our scheme can be used in SIP entities which have less computation power and limited memory.",
"corpus_id": 1166196,
"title": "A Hash Tree Based Authentication Scheme in SIP Applications"
} | {
"abstract": "This document defines new functionality for negotiating the security mechanisms used between a Session Initiation Protocol (SIP) user agent and its next-hop SIP entity. This new functionality supplements the existing methods of choosing security mechanisms between SIP entities.",
"corpus_id": 32630881,
"title": "Security Mechanism Agreement for the Session Initiation Protocol (SIP)"
} | {
"abstract": "This paper presents a new approach for code similarity on High Level programs. Our technique is based on Fast Dynamic Time Warping, that builds a warp path or points relation with local restrictions. The source code is represented into Time Series using the operators inside programming languages that makes possible the comparison. This makes possible subsequence detection that represent similar code instructions. In contrast with other code similarity algorithms, we do not make features extraction. The experiments show that two source codes are similar when their respective Time Series are similar.",
"corpus_id": 105653,
"score": 1,
"title": "Code Similarity on High Level Programs"
} |
{
"abstract": "It is known that convex polygonal lines on Z² with the endpoints fixed at 0 = (0, 0) and n = (n₁, n₂) and with edges of non-negative slope, have a limit shape under the scaling Z² ↦ n₁⁻¹ Z² as n → ∞. If n₂/n₁ → c then the limit shape is identified as a parabolic arc with equation √(c(1 − u)) + √v = √c. In probabilistic terms, this result amounts to a functional Law of Large Numbers under the uniform distribution on the set Lₙ of such polygons. In the present paper, we consider a converse problem, i.e. that of approximation of convex curves by convex lattice polygons. Let γ be the graph of a strictly convex, increasing C³-function on [0, 1], having non-degenerate curvature. We show that for any such γ, one can construct a probability measure Pₙ on the space Lₙ so that under the law Pₙ, the curve γ is indeed the limit shape of polygons from Lₙ as n → ∞.",
"corpus_id": 818636,
"title": "Approximation of Convex Curves by Random Lattice Polygons"
} | {
"abstract": "THE theory of Fourier integrals arises out of the elegant pair of reciprocal formulæ. The Laplace Transform. By David Vernon Widder. (Princeton Mathematical Series.) Pp. x + 406. (Princeton: Princeton University Press; London: Oxford University Press, 1941.) 36s. net.",
"corpus_id": 4090976,
"title": "The Laplace Transform"
} | {
"abstract": "This paper focuses on the problem of disturbance compensation in motion control. Disturbance observer (DOB) and sliding mode control (SMC) are both famous approaches to solve the problem. However, the DOB does not deal with nonlinear disturbance sufficiently. The SMC has chattering phenomenon which limits its application. In this paper, the idea of sliding mode assist disturbance observer (SMADO) is proposed, which avoids the shortness and associates the advantages of both SMC and DOB. In the proposal, the DOB helps the SMC to decrease the switching gain, which weakens the chattering. On the other hand, the SMC assists the DOB in compensating nonlinear disturbance. Based on the basic design, some improvements are also given. To weaken the chattering further, the smooth design is implemented. Moreover, to further systematize the idea of SMADO, the systematical adjustment method of the switching gain is investigated for the basic and smooth SMADO, respectively. The designs are proved by Lyapunov-based method. The validity of the proposals is confirmed by experiments.",
"corpus_id": 37940307,
"score": 1,
"title": "A novel sliding mode assist disturbance observer"
} |
{
"abstract": "The speed of Probabilistic Power Flow (PPF) analysis for hybrid AC/DC grids (ACDCPPF) can be significantly decreased if the corresponding adjustment strategy of Voltage Source Converter (VSC) on the reactive power is not properly designed. To address this issue, a new strategy on the reactive power adjustment of VSC is proposed. A Monte Carlo Simulation (MCS) method is used to model uncertainties in the stochastic behaviors of Photovoltaic (PV) stations and loads. The feasibility and the effectiveness of the improved ACDCPPF are validated using a modified IEEE 9-Bus test system. The simulation results indicate that with the proposed adjustment strategy and the empirical optimal value, the number of AC/DC alternate iterations in a VSC's reactive power limited AC/DC system can be greatly reduced in each sampled Deterministic Power Flow (DPF), which leads to a significant improvement in the computing speed of the ACDCPPF analysis.",
"corpus_id": 7650784,
"title": "VSC's reactive power limited probabilistic power flow for AC/DC grids incorporating uncertainties"
} | {
"abstract": "As a matter of course, the unprecedented ascending penetration of distributed energy resources, mainly harvesting renewable energies such as wind and solar, is concomitant with environmentally friendly concerns. This type of energy resources are innately uncertain and bring about more uncertainties in the power system context, consequently, necessitates probabilistic analysis of the system performance. Moreover, the uncertain parameters may have a considerable level of correlation to each other, in addition to their uncertainties. The two point estimation method (2PEM) is recognised as an appropriate probabilistic method. This study proposes a new methodology for probabilistic power flow studies for such a problem by modifying the 2PEM. The original 2PEM cannot handle correlated uncertain variables, but the proposed method has been equipped with this ability. To justify the impressiveness of the method, two case studies namely the Wood & Woollenberg 6-bus and the IEEE118-bus test systems are examined using the proposed method, then the obtained results are compared against the Monte Carlo simulation results. Comparison of the results justifies the effectiveness of the method in the respected area with regards to both accuracy and execution time criteria.",
"corpus_id": 110905860,
"title": "Probabilistic power flow of correlated hybrid wind-photovoltaic power systems"
} | {
"abstract": "A combination of high energy costs, uniform solar resource, and an active solar industry combine to make Hawaii a good location for cost effective applications of solar water heating. The non-freezing climate allows for simple solar water heating system designs. In the mild climate of Hawaii, solar water heating can displace a large fraction of a home’s electricity use since heating and cooling loads are small. In 1998, sixty-two solar water heaters were installed at Kiai Kai Hale US Coast Guard Housing Area in Honolulu, HI as a pilot project under a grant from the US DOE Federal Energy Management Program (FEMP). The systems are active, open loop systems with a single tank (electric water heater with the bottom element disabled). An assessment of these pilot units will help inform a Coast Guard decision regarding implementing solar water heating on the remaining 256 units in the housing area, and may be useful information for other government and utility programs. On 25 houses with solar water heating and 25 identical houses without solar, instruments were installed to measure on/off cycles of the electric water heaters and the tank outlet temperature. This paper describes the results the monitoring for a six week period From June 11 to July 25, 2002, with a statistical extrapolation to estimate annual savings. Demand savings are estimated at 1.62 kW/house, energy savings at 3,008 kWh/house/year, and annual cost savings per house is estimated at $380/year due to solar. For a system cost of $3,200 ($4,000 minus a $800 utility rebate) and a 25 year present worth factor of 17.1, the savings to investment ratio (SIR) is 2.03, so this solar water heating application is cost effective according to Federal regulation 10CFR436 (which requires SIR>1.0). The annual solar fraction is estimated at 74% and annual solar water heating system efficiency is estimated at 24%. 
This paper describes the statistical design of the survey; the measured load profiles; the energy, demand, and cost savings; and the observed condition of the systems. The paper includes a discussion of application of the International Performance Measurement and Verification Protocol (IPMVP) applied to renewable energy systems.Copyright © 2003 by ASME",
"corpus_id": 109524355,
"score": 1,
"title": "Time-of-Use Monitoring of U.S. Coast Guard Residential Water Heaters With and Without Solar Water Heating in Honolulu, HI"
} |
{
"abstract": "This paper presents a formula that estimates the growth rate of a transverse coupled bunch instability driven by quadrupole higher order modes (HOMs) in electron storage rings. Thus far, quadrupole HOMs are usually ignored in HOM driven instability studies for electron storage rings due to their weak nature compared to the lower orders. However, they may become relevant when high gradient SC multi-cell cavities with their potentially strong impedance spectrum are operated at high currents in a third generation or future synchrotron light source. An example is BESSY VSR, a scheme where 1.7 ps and 15 ps long bunches (rms) can be stored simultaneously in the BESSY II storage ring [1]. With the presented formula, instability thresholds are discussed for a recent BESSY VSR cavity model and different beam parameters. INTRODUCTION Coupled bunch instabilities (CBIs) driven by higher order modes (HOMs) of RF cavities or other narrow-band impedances are a classic type of instability encountered in many storage rings. An extended study of CBIs relating to BESSY VSR, the upgrade scheme of BESSY II to obtain short and long bunches simultaneously [1,2], has recently been published [3]. This paper presents findings related to quadrupole HOMs. The consequence of a CBI is either beam loss, usually in the transverse plane, or a saturation of the beam oscillation at very large amplitudes if it is occurring in the longitudinal plane. In both cases, the instability must be avoided to ensure the high beam quality promised by all low emittance storage rings. Common methods to suppress CBIs include the usage of an active bunch-by-bunch feedback (BBFB) in each plane that tracks the dipole motion, i.e., center of mass motion, of the beam and applies kicks such that the oscillation amplitude is reduced. 
Typically, studies of HOM driven CBIs consider only two special cases, namely the dipole motion in the longitudinal or transverse plane caused by the interaction with monopole or dipole HOMs respectively. Formulas to estimate the growth rate of these instabilities based on the machine parameters and the impedance spectrum are used frequently in the literature, see for example [3] and references therein. If the growth rates are compared to a damping rate, e.g. the damping rate of the BBFB, a threshold impedance is found. The threshold impedance for the longitudinal and transverse dipole CBIs driven by monopole and dipole HOMs is given by [4] Z∥_{0,th}(ω, τ_d⁻¹) = (τ_d⁻¹ ω_s)/(ω α) · (4π E/e)/(ω_rev I) (1) and Z⊥_{1,th}(τ_d⁻¹) = (τ_d⁻¹/β) · (4π E/e)/(ω_rev I) (2), respectively, where the first index of Z represents the order m of the impedance, ω is the angular frequency at which the impedance is sampled, ω_rev the angular revolution frequency, ω_s the angular synchrotron frequency, τ_d⁻¹ the damping rate, α the momentum compaction factor, E the beam energy, e the elementary charge, and β the betatron function at the position of the cavity. As an example, Fig. 1 shows the impedance of the 1.5 GHz cavity model of the BESSY VSR project [5] compared to the threshold impedance based on the BESSY II machine parameters and the present BBFB damping performance [1–3].",
"corpus_id": 3898212,
"title": "Calculation of Transverse Coupled Bunch Instabilities in Electron Storage Rings Driven By Quadrupole Higher Order Modes"
} | {
"abstract": "CW SRF Cavities have been used very successfully in the past in synchrotron light sources to provide high power acceleration. Here we present a novel application of higher harmonic systems of two different frequencies (1.5 GHz and 1.75 GHz) to generate a beating of accelerating voltage. With such a system it is possible to store \"standard\" (some 10 ps long) and \"short\" (ps and sub-ps long) pulses simultaneously in the storage ring. This opens up new possibilities for light source users to perform picosecond dynamic and high-resolution experiments at the same facility. The demands on the SRF system and RF control are substantial and a new design, based on waveguide damping, is currently being developed. This system will be used for a major upgrade of the BESSY II facility to the BESSY Variable Pulse length Storage Ring (BESSY VSR) for a next-generation storage-ring light source. We will discuss the concept, challenges and designs for BESSY VSR from the SRF point of view.",
"corpus_id": 3908888,
"title": "BESSY VSR: A Novel Application of SRF for Synchrotron Light Sources"
} | {
"abstract": "Ball screws are robust and economical linear positioning systems widely employed in high-speed and high-precision machines. Due to precision and stability requirements, the preload force is considered one of the main parameters defining the axial stiffness and the maximum axial load of the ball screw feed drives. In high-speed motions, thermal effects are also considerably relevant regarding positioning precision and dynamic stability of the machine. The temperature increase and the thermal gradient between the screw, the balls and the nuts result in geometrical variations and, consequently, variations in the preload force. This paper presents a numerical modelling strategy to predict the preload variation due to temperature increase using a thermo-mechanical 3D finite element method (FEM)-based model for double nut-ball screw drives. Two different thermo-mechanical coupling strategies are compared, and the obtained results are validated with experimental measurements for different initial preload and linear speeds. In the mechanical analysis, the nut-screw ball contact interface, the offset-based preloading and the restrictions of the ball bearings are included in the model, while the thermal analysis considers heat generation and heat diffusion. The causes of the thermal preload variation are discussed considering the ball load distribution and the axial and radial thermal displacements of the contacting points.",
"corpus_id": 116178970,
"score": 0,
"title": "Thermo-mechanical modelling of ball screw preload force variation in different working conditions"
} |
{
"abstract": "1 Fontan F, Baudet E. Surgical repair of tricuspid atresia. Thorax 197 1;26:240-5. 2 Kreutzer GO,Vargas FL, Schlichter AJ, et al. Atriopulmonary anastomosis. J Thorac Cardiovasc Surg 1982;83: 427-36. 3 Leung MP, Benson LN, Smallhorn JF, Williams WG, Trusler GA, Freedom RM. Abnormal cardiac signs after Fontan type of operation: indicators of residua and sequelae. Br Heart J 1989;61:52-8. 4 Wooley CF, Fontana ME, Kilman JW, Ryan JM. Atrial systolic murmur, tricuspid opening snap, and right atrial pressure pulse. Am J Med 1985;78:375-84. 5 Craige EC. On the genesis of heart sounds. Circulation 1976;53:207-9. 6 Craige E, Fortuin NJ. Opening snap in mitral stenosis. Am Heart J 1975;89:128-34. 7 Crews TL, Pridie RB, Benham R, Leatham A. Auscultatory and phonocardiographic findings in Ebstein's anomaly. Correlation of first heart sound with ultrasonic records of tricuspid valve movement. Br Heart J 1972;34:681-7. 8 Rodbard S, LibanoffAJ. The mitral closing snap. Am Heart J 1972;83:19-22. 9 Nakazawa M, Nojima K, Okuda H, et al. Flow dynamics in the main pulmonary artery after the Fontan procedure in patients with tricuspid atresia and single ventricle. Circulation 1987;75:1117-23. 10 DiSessa TG, Child JS, Perloff JK, et al. Systemic venous and pulmonary arterial flow patterns after Fontan's procedure for tricuspid atresia and single ventricle. Circulation 1984;70:898-902. 11 Nakazawa M, Nakanishi T, Okuda H, et al. Dynamics of right heart flow in patients after Fontan procedure. Circulation 1984;69:306-12. 12 Keren G, Sonnenblick EH, Lejenifel TH. Mitral annulus motion. Circulation 1988;78:621-9. 13 Zaky A, Grabhorn L, Feigenbaum H. Movement of the mitral ring: a study in ultrasonography. Cardiovasc Res 1967;1:121-31.",
"corpus_id": 2210497,
"title": "Views from the past"
} | {
"abstract": "Despite increasing use of Fontan or modified Fontan repairs, the comparative hemodynamic efficacy of different types of connections are unresolved. Accordingly, we undertook a prospective study designed to determine postoperative flow patterns after Fontan's operation. Seven subjects had tricuspid atresia and eight had single ventricle. Ages ranged from 5 to 38 years (mean 16.4). Ten subjects had nonvalved right atrial-to-pulmonary arterial connection, and four had nonvalved right atrial-to-right ventricular communication. A valved conduit established continuity between the right atrium and right ventricle in one subject. Doppler flow profiles were recorded in the pulmonary artery and in the superior and inferior venae cavae of each. A reference electrocardiogram was used for timing purposes. In 14 patients, forward flow in the pulmonary artery was biphasic. Flow began at the end of the T wave (early ventricular diastole), peaked at or before the P wave (atrial systole), and returned to baseline by the peak of the R wave. Forward flow recommenced at the peak of the R wave (ventricular systole) and returned to baseline at the end of the T wave. Flow in the superior vena cava varied, and could not be recorded in three subjects. Between the end of the P wave and peak of the R wave (atrial systole) flow was reversed in eight, absent in three, and forward in one patient. Forward flow occurred between the peak of the R wave and the end of the T wave and was either continuous or biphasic. Fourteen patients had adequate studies of inferior vena cava flow; reversed flow during atrial systole occurred in 10 subjects.(ABSTRACT TRUNCATED AT 250 WORDS)",
"corpus_id": 1561029,
"title": "Systemic venous and pulmonary arterial flow patterns after Fontan's procedure for tricuspid atresia or single ventricle."
} | {
"abstract": "Combined M-mode, two-dimensional and Doppler echocardiographic studies were used to assess the postoperative status of 33 patients who had undergone the modified Fontan procedure. Twenty-four patients had surgical repair with use of a simple direct right atrium to pulmonary artery anastomosis. The remaining patients had repair with use of a prosthesis or associated Glenn shunt. Twenty-seven patients were studied early in the postoperative period (2 months or less) and the remaining patients were studied up to 6 years postoperatively. A total of 36 examinations were performed. Of the 33 patients, 13 had tricuspid atresia, 12 had double inlet left ventricle with hypoplastic right ventricular outlet chamber and 8 had complex lesions with atrioventricular canal, double outlet right ventricle or a hypoplastic ventricle. Postoperative assessment by M-mode and two-dimensional echocardiography demonstrated normal or mildly reduced ventricular function (ejection fraction greater than 40%) in 22 patients. In 24 patients, a \"normal\" flow pattern was observed in the pulmonary artery by pulsed Doppler echocardiography, with predominant diastolic flow and accentuation by atrial systole somewhat similar to the venous flow pattern observed in the superior vena cava. \"Abnormal\" flow patterns (disorganized systolic flow, absence of atrial waves and little or no increase with inspiration) were observed in nine patients with reduced ventricular function or residual shunt. Continuous wave Doppler study also demonstrated mild dynamic subaortic obstruction in two patients. Combined pulsed and continuous wave studies showed atrioventricular valve insufficiency in 10 patients. Follow-up studies revealed a satisfactory clinical course in most patients. Three patients died approximately 4 to 8 months after their Fontan operation.",
"corpus_id": 22247833,
"score": 2,
"title": "Functional assessment of the Fontan operation: combined M-mode, two-dimensional and Doppler echocardiographic studies."
} |
{
"abstract": "This paper presents a control strategy of a micro-wind turbine system (WTS) in AC MicroGrid (MG) with the capability of providing ancillary services to the MG and participating to its regulation. The WTSs can be controlled to operate at their maximum efficiency or as controllable source by tracking a power reference set by the energy management system. Hence, the active control of micro-wind turbine in MG can enhance the MG control by providing some ancillary services such as voltage profile control and frequency stabilization. The control structure is implemented in Matlab/Simulink environment and is validated through dynamic simulations. The obtained results demonstrates the effectiveness of the control structure.",
"corpus_id": 42598675,
"title": "Design and control strategy of micro-wind turbine based PMSM in AC MicroGrid"
} | {
"abstract": "Microgrid is an aggregation of distributed generators (DGs) and energy storage systems (ESS) through corresponding power interface, such as synchronous generators, asynchronous generators and power electronic devices. Without the support from the public grid, the control and management of an autonomous microgrid is more complex due to its poor equivalent system inertia. To investigate microgrid dynamic stability, a small-signal model of a typical microgrid containing asynchronous generator based wind turbine, synchronous diesel generator, power electronic based energy storage and power network is proposed in this paper. The small-signal model of each of the subsystem is established respectively and then the global model is set up in a global reference axil frame. Eigenvalues distributions of the microgrid system under certain steady operating status are identified to indicate the damping of the oscillatory terms and its effect on system stability margin. Eigenvalues loci analysis is also presented which helps identifying the relationship among the dynamic stability, system configuration and operation status, such as the variation of intermittent generations and ESS with different control strategies. The results obtained from the model and eigenvalues analysis are verified through simulations and experiments on a study microgrid system.",
"corpus_id": 23865006,
"title": "Investigation of the Dynamic Stability of Microgrid"
} | {
"abstract": "The emergence of dark silicon - a fundamental design constraint absent in the past generations - brings intriguing challenges and opportunities in microprocessor design. To gracefully embrace dark silicon, design methodologies must adapt themselves to identify progressive systems that can effectively exploit the growing dark silicon. We demonstrate that relying on traditional design metrics may lead to sub-optimal design choices with the rise of the dark silicon area. We provide a new metric to guide a dark silicon aware system design and propose a stochastic optimization algorithm for dark silicon aware multicore system design. Our design approach shows 7-23% benefit in upcoming technology generations.",
"corpus_id": 16083338,
"score": -1,
"title": "Designing for dark silicon: a methodological perspective on energy efficient systems"
} |
{
"abstract": "A quasi-passive leg exoskeleton is presented for load-carrying augmentation during walking. The exoskeleton has no actuators, only ankle and hip springs and a knee variabledamper. Without a payload, the exoskeleton weighs 11.7 kg and requires only 2 Watts of electrical power during loaded walking. For a 36 kg payload, we demonstrate that the quasi-passive exoskeleton transfers on average 80% of the load to the ground during the single support phase of walking. By measuring the rate of oxygen consumption on a study participant walking at a self-selected speed, we find that the exoskeleton slightly increases the walking metabolic cost of transport (COT) as compared to a standard loaded backpack (10% increase). However, a similar exoskeleton without joint springs or damping control (zero-impedance exoskeleton) is found to increase COT by 23% compared to the loaded backpack, highlighting the benefits of passive and quasi-passive joint mechanisms in the design of efficient, low-mass leg exoskeletons.",
"corpus_id": 1426956,
"title": "A QUASI-PASSIVE LEG EXOSKELETON FOR LOAD-CARRYING AUGMENTATION"
} | {
"abstract": "The first energetically autonomous lower extremity exoskeleton capable of carrying a payload has been demonstrated at U.C. Berkeley. This paper summarizes the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX). The anthropomorphically-based BLEEX has seven degrees of freedom per leg, four of which are powered by linear hydraulic actuators. The selection of the degrees of freedom and their ranges of motion are described. Additionally, the significant design aspects of the major BLEEX components are covered.",
"corpus_id": 5520039,
"title": "On the mechanical design of the Berkeley Lower Extremity Exoskeleton (BLEEX)"
} | {
"abstract": "Information graphics (infographics) in popular media are highly structured knowledge representations that are generally designed to convey an intended message. This paper presents a novel methodology for retrieving infographics from a digital library that takes into account a graphic's structural and message content. The retrieval methodology can be summarized thus: 1) hypothesize requisite structural and message content from a natural language query, 2) measure the relevance of each candidate infographic to the requisite structural and message content hypothesized from the user query, and 3) integrate these relevance measurements via a linear combination model in order to produce a ranked list of infographics in response to the user query. The methodology has been implemented and evaluated, and it significantly outperforms a baseline method that treats queries and graphics as bags of words.",
"corpus_id": 12556067,
"score": -1,
"title": "A novel methodology for retrieving infographics utilizing structure and message content"
} |
{
"abstract": "Uterine blood was sampled by venepuncture or an indwelling catheter in a total of 33 cyclic gilts and 26 mated animals subsequently confirmed to contain embryos; jugular blood was obtained simultaneously from catheterised animals. Prostaglandin F2 alpha and progesterone were determined by radioimmunoassay of the plasma. The concentration of PGF2 alpha in uterine venous blood of cyclic animals remained below 1.0 ng/ml until the corpora lutea were 12 days old. Highest PGF2 alpha values were associated with 15-17 day corpora lutea, with a mean of 5.9 ng/ml for six samples on Day 17. Likewise, the PGF2 alpha concentration in the uterine blood of mated animals did not exceed 1.0 ng/ml until the corpora lutea were older than 12 days, and a mean value of 6.0 ng/ml was found by acute sampling with 15-day corpora lutea. The highest mean concentrations of PGF2 alpha in uterine blood from a series of 14 catheterised pregnant animals were 2.8 and 2.3 ng/ml, respectively, with 15- and 16-day corpora lutea. Values for PGF2 alpha on the 17th, 18th and 19th days of pregnancy showed a downward trend. There was considerable day to day variation in the mean uterine and peripheral concentrations of progesterone in mated animals, but there was no sustained depression in response to elevated PGF2 alpha concentrations. The results suggest that exocrine secretion of PGF2 alpha into the uterine lumen of pigs under the influence of trophoblastic oestrogens does not provide a sufficient explanation for the establishment of the corpora lutea of pregnancy. Further attention should be devoted to the luteotrophic--as distinct from anti-luteolytic--rôle of pig conceptuses at the time of maternal recognition of pregnancy. Circumstantial evidence for luteal sensitivity to chorionic gonadotrophins is included.",
"corpus_id": 535504,
"title": "Uterine secretion of prostaglandin F2 alpha in anaesthetized pigs during the oestrous cycle and early pregnancy."
} | {
"abstract": "Ovarian progesterone induces essential changes leading to a temporary state of uterine receptivity for conceptus implantation. Estrogens secreted by the porcine conceptus on days 11 and 12 of pregnancy provide the initial signal for maternal recognition of pregnancy and maintenance of a functional corpus luteum (CL) for continued production of progesterone. As prostaglandins F(2)(α) (PGF(2)(α)) and E(2) (PGE(2)) exert opposing actions on the CL, a tight control over their synthesis and secretion is critical either for the initiation of luteolysis or maintenance of pregnancy. One of the supportive mechanisms by which conceptus inhibits luteolysis is changing PG synthesis in favor of luteoprotective PGE(2). Conceptus PGE(2) could be amplified by PGE(2) feedback loop in the endometrium. In pigs, as in other species, implantation and establishment of pregnancy is associated with upregulation of expression of proinflammatory factors, which include cytokines, growth factors, and lipid mediators. The conceptus produces inflammatory mediators: interferon γ and interferon δ, interleukins IL1B and IL6, and PGs, which probably activate inflammatory pathways in the endometrium. The endometrium responds to these embryonic signals by enhancing further progesterone-induced uterine receptivity. Understanding the mechanisms of pregnancy establishment is required for translational research to increase reproductive efficiencies and fertility in humans and animals.",
"corpus_id": 419781,
"title": "Novel insights into the mechanisms of pregnancy establishment: regulation of prostaglandin synthesis and signaling in the pig."
} | {
"abstract": "This study was conducted to confirm our previous reports that group housing lowered basal heart rate and various evoked heart-rate responses in Sprague-Dawley male and female rats and to extend these observations to spontaneously hypertensive rats. Heart rate data were collected by using radiotelemetry. Initially, group- and single-housed rats were evaluated in the same animal room at the same time. Under these conditions, group-housing did not decrease heart rate in undisturbed male and female rats of either strain compared with single-housed rats. Separate studies then were conducted to examine single-housed rats living in the room with only single-housed rats. When group-housed rats were compared with these single-housed rats, undisturbed heart rates were reduced significantly, confirming our previous reports for Sprague-Dawley rats. However, evoked heart rate responses to acute procedures were not reduced universally in group-housed rats compared with either condition of single housing. Responses to some procedures were reduced, but others were not affected or were significantly enhanced by group housing compared with one or both of the single-housing conditions. This difference may have been due, in part, to different sensory stimuli being evoked by the various procedures. In addition, the variables of sex and strain interacted with housing condition. Additional studies are needed to resolve the mechanisms by which evoked cardiovascular responses are affected by housing, sex, and strain.",
"corpus_id": 26264226,
"score": 1,
"title": "Heart rates of male and female Sprague-Dawley and spontaneously hypertensive rats housed singly or in groups."
} |
{
"abstract": "Abstract We present new karyotype records for six Proechimys species from the Brazilian Amazon. P. echinothrix from the region of Purus River had 2n = 32 chromosomes and a FN = 58, while P. cuvieri from the region of the Japurá River presented 2n = 28 and FN = 46. All individuals presented hybridization with an 18S rDNA probe in a single chromosome pair, with the exception of P. cuvieri from the Japurá region, which presented a third signal in one of the homologs of pair 1. No ITS were found in any of the individuals. Our data supports the hypothesis that the P. cuvieri population from the Japurá Basin and P. echinothrix from the lower Purus are new taxonomic entities. Our data expand the geographic distribution of the cytotype (2n = 40, FN = 54) described for P. gardneri from the Madeira River, and the cytotype (2n = 46, FN = 50), described for P. guyannensis, as well as the recently-described cytotype of P. goeldii (2n = 16, FN = 14). No clear pattern of chromosomal evolution has yet been defined in Proechimys, despite the considerable karyotypic diversity of the genus.",
"corpus_id": 219284752,
"title": "New karyotype records for the genus Proechimys (Rodentia: Echimyidae) from Brazilian Amazonia"
} | {
"abstract": null,
"corpus_id": 1914473,
"title": "Proechimys (Rodentia, Echimyidae): characterization and taxonomic considerations of a form with a very low diploid number and a multiple sex chromosome system"
} | {
"abstract": "Primary studies on whole sequenced genomes focused on single genes and little attention was directed to repetitive DNAs, and duplicated segments. Singleand few-copy sequences correspond to a small fraction of the genomes. For example, only 1.5% of the human genome is composed of coding sequences (Horvath et al., 2001). On the other hand, the eukaryote genome contains several types of DNA sequences present in multiple copies that, in some instances, can represent large portions of the genome (Charlesworth et al., 1994). Although extensively studied for the past three decades, the molecular forces that propagate and maintain repetitive DNAs in the genome are still being discussed.",
"corpus_id": 40717259,
"score": -1,
"title": "Chromosomes and Repetitive DNAs: A Contribution to the Knowledge of the Fish Genome"
} |
{
"abstract": "Mobile ad hoc networks are characterized by multi-hop wireless links, the absence of any cellular infrastructure, and frequent host mobility. Due to the dynamic nature of the network topology and the resource constraints, routing in MANETs is a very challenging task. The advantage of multi-path routing in MANETs is not obvious because the traffic along the multiple paths will interfere with each other. In this paper, the Ad Hoc On Demand Multi-path Distance Vector (AOMDV) routing protocol for mobile ad hoc networks has been examined. This algorithm computes alternate loop-free routes and node-disjoint paths. The mechanism used from the AOMDV is not effective in all the situations. We show a particular situation in which the mechanism used by the AOMDV fails and we a solution for this particular condition is proposed. We have analysed the performances of our correction through a tool simulator called NS-2 and we obtained improvements in terms of overhead control and delay.",
"corpus_id": 1339686,
"title": "A correction for ad hoc on demand multipath distance vector routing protocol (AOMDV)"
} | {
"abstract": "Mobile Ad-hoc Networks (MANET) is considered as a new paradigm of infrastructure-less mobile wireless communication systems. Routing in MANETs is considered as a challenging task due to the unpredictable changes in the network topology. In this paper, routing protocols used in MANETs are explained in detail and performance of both pro-active routing protocols like Destination Sequenced Distance Vector(DSDV),Optimized Link State Routing(OLSR), routing and reactive routing protocols like Dynamic Source Routing(DSR),Ad-hoc On demand Distance Vector(AODV) routing is evaluated. The major goal of this paper is to analyze the performance of well known MANET routing protocols in different mobility cases under low, medium and high density scenario. The performance is analyzed with respective Packet Delivery Fraction (PDF), Throughput and Normalized routing overhead. In order to make these optimizations, internal information and control must be shared between layers, which by definition violate the OSI standard. This is termed as Cross-Layer Design. A new Cross-Layer approach has been discussed for routing optimization in MANETs and to provide better QoS. Simulations are carried out using Network Simulator (Ns-2).",
"corpus_id": 110115492,
"title": "Routing Optimization In Mobile Ad-Hoc Networks Through Cross-Layer Design"
} | {
"abstract": "ABSTRACT To systematically review experimental evidence regarding animal-assisted therapies (AAT) for children or adolescents with or at risk for mental health conditions, we reviewed all experimental AAT studies published between 2000–2015, and compared studies by animal type, intervention, and outcomes. Studies were included if used therapeutically for children and adolescents (≤21 years) with or at risk for a mental health problem; used random assignment or a waitlist comparison/control group; and included child-specific outcome data. Of 1,535 studies, 24 met inclusion criteria. Of 24 studies identified, almost half were randomized controlled trials, with 9 of 11 published in the past two years. The largest group addresses equine therapies for autism. Findings are generally promising for positive effects associated with equine therapies for autism and canine therapies for childhood trauma. The AAT research base is slim; a more focused research agenda is outlined.",
"corpus_id": 3900708,
"score": 0,
"title": "Animal-assisted therapies for youth with or at risk for mental health problems: A systematic review"
} |
{
"abstract": "This paper describes an efficient connectionist knowledge representation and reasoning system that combines rule-based reasoning with reasoning about inheritance and classification within an IS-A hierarchy. In addition to a type hierarchy, the proposed system can encode generic facts such as 'Cats prey on birds' and rules such a s 'if z preys on y then y is scared of z ' and use them to infer that Tweety (who is a Canary) is scared of Sylvestor (who is a Cat). The system can also encode qualified rules such as 'if an animate agent walks into a solid object then the agent gets hurt'. The proposed system can answer queries in time that is only proportional to the length of the shortest derivation of the query and is independent of the sire of the knowledge base. The system maintains and propagates variable bindings using temporally synchronous i.e., in-phase firing of appropriate nodes. *This work was supported by NSF grant IRI 8805465 and ARO grant ARO-DAA2984-9-0027.",
"corpus_id": 1209729,
"title": "Combining a Connectionist Type Hierarchy With a Connectionist Rule-Based Reasoner"
} | {
"abstract": "Unclear Distinctions lead to Unnecessary Shortcomings: Examining the rule vs fact, role vs ller, and type vs predicate distinctions from a connectionist representation and reasoning perspective Venkat Ajjanagadde Wilhelm-Schickard Institute, Universitaet Tuebingen Sand 13, D-72076 Tuebingen, Germany venkat@occam.informatik.uni-tuebingen.de Abstract This paper deals with three distinctions pertaining to knowledge representation, namely, the rules vs facts distinction, roles vs llers distinction, and predicates vs types distinction. Though these distinctions may indeed have some intuitive appeal, the exact natures of these distinctions are not entirely clear. This paper discusses some of the problems that arise when one accords these distinctions a prominent status in a connectionist system by choosing the representational structures so as to re ect these distinctions. The example we will look at in this paper is the connectionist reasoning system developed by Ajjanagadde & Shastri(Ajjanagadde & Shastri 1991; Shastri & Ajjanagadde 1993). Their1 system performs an interesting class of inferences using activation synchrony to represent dynamic bindings. The rule/fact, role/ ller, type/predicate distinctions gure predominantly in the way knowledge is encoded in their system. We will discuss some signi cant shortcomings this leads to. Then, we will propose a much more uniform scheme for representing knowledge. The resulting system enjoys some signi cant advantages over Ajjanagadde & Shastri's system, while retaining the idea of using synchrony to represent bindings. Introduction Given a particular piece of knowledge, can one unambiguously decide whether it is a rule or a fact? Are there entities which always act as roles and never as llers? Are there entities which always act as llers and never as roles? What is a type and what is a general predicate? 
In spite of the fact that the rule/fact, role/ ller, type/predicate distinctions get mentioned not too infrequently in general AI parlance, an attempt to clearly state the distinctions faces di culties (Some of the difculties will be listed in the following section). This paper illustrates that taking these rather unclear distinctions and according them prominent representa1This paper was written in third person for double-blind reviewing. tional status in a connectionist network may not be a desirable thing to do. Speci cally, the example we consider here is the connectionist reasoning system(Ajjanagadde & Shastri 1991; Shastri & Ajjanagadde 1993) developed by Ajjanagadde & Shastri (Henceforth A & S). Their system performs an interesting class of inferences extremely fast. A major idea underlying their approach is the use of activation synchrony to represent dynamic bindings. We consider the idea of using synchrony to represent bindings to be indeed e cient, elegant, and as discussed in (Ajjanagadde & Shastri 1991; Shastri & Ajjanagadde 1993), neurologically plausible. However, the system of A & S has some shortcomings. These shortcomings are due to the representational methodologies A & S have chosen and are not due to the use of synchrony itself. The major reason for the shortcomings of their representational schemes can be diagnosed to be the prominence A & S have accorded to the distinctions of rules & facts, roles & llers, types & predicates. The representational structures in their system directly re ect these distinctions. For example, Fig. 1 shows how A & S encode the following knowledge base: give(x,y,z) ) own(y,z) ; buy(x,y) ) own(x,y); own(x,y) ) can-sell(x.y) ; give(john,mary,book1); buy(mike,house3)",
"corpus_id": 15297573,
"title": "Unclear Distinctions Lead to Unnecessary Shortcomings: Examining the Rule Vs Fact, Role Vs Ller, and Type Vs Predicate Distinctions from a Connectionist Representation and Reasoning Perspective"
} | {
"abstract": "The present paper focuses on analysis of bone x-ray images to identify any abnormality zone for possible bone diseases or fractures. The proposed research work is useful in the field of bio-medical imaging. The proposed work involves image pre-processing steps such as denoising, histogram smoothing, segmentation and edge detection, in order to enhance the given image quality and convert it into denoised image to identify and separate the region of interest (ROI) manually using algorithms for calculating bone mineral density (BMD) value to analyse and identify bone diseases and fracture risk. In this work we have to calculate some Gray Level Co-occurrence Matrix (GLCM) features and other some more features like mean, median and standard deviation are also derived. At the final stage we have created a database of both the normal and eroded bone images by applying BMD and GLCM features for classification and comparison of values in order to determine whether the input image is infected or not.",
"corpus_id": 31219899,
"score": 1,
"title": "A New Approach to Identify the Fracture Zone and Detection of Bone Diseases of X-ray Image"
} |
{
"abstract": "The radiographic anatomy of the temporomandibular joint in the dog and cat is described in dorsoventral and oblique projections. The positioning for different oblique views in conventional radiography and technical details of computed tomography are reviewed. Typical radiographic features of craniomandibular osteopathy, dysplasia, luxation, subluxation, fractures, ankylosis, degenerative joint disease, infection, and neoplasia involving the temporomandibular joint are discussed.",
"corpus_id": 69512,
"title": "Imaging of the canine and feline temporomandibular joint: a review."
} | {
"abstract": "Four cases of temporomandibular joint ankylosis in dogs and cats were treated successfully by unilateral condylectomy. In all four cases, the ability to open the mouth was reestablished and in three cases was maintained long-term.",
"corpus_id": 210072396,
"title": "Temporomandibular Ankylosis: Treatment by Unilateral Condylectomy in Two Dogs and Two Cats"
} | {
"abstract": "Dai-kenchu-to (TJ-100) is an herbal medicine used to shorten the duration of intestinal transit by accelerating intestinal movement. However, intestinal movement in itself has not been evaluated in healthy volunteers using radiography, fluoroscopy, and radioisotopes because of exposure to ionizing radiation. The purpose of this study was to evaluate the effect of TJ-100 on intestinal motility using cinematic magnetic resonance imaging (cine MRI) with a steady-state free precession sequence. Ten healthy male volunteers received 5 g of either TJ-100 or lactose without disclosure of the identity of the substance. Each volunteer underwent two MRI examinations after taking the substances (TJ-100 and lactose) on separate days. They drank 1200 mL of tap water and underwent cine MRI after 10 min. A steady-state free precession sequence was used for imaging, which was performed thrice at 0, 10, 20, 30, 40, and 50 min. The bowel contraction frequency and distention score were assessed. Wilcoxon signed-rank test was used, and differences were considered significant at a P-value <0.05. The bowel contraction frequency tended to be greater in the TJ-100 group and was significantly different in the ileum at 20 (TJ-100, 8.95 ± 2.88; lactose, 4.80 ± 2.92; P < 0.05) and 50 min (TJ-100, 9.45 ± 4.49; lactose, 4.45 ± 2.65; P < 0.05) between the groups. No significant differences were observed in the bowel distention scores. Cine MRI demonstrated that TJ-100 activated intestinal motility without dependence on ileum distention.",
"corpus_id": 3767307,
"score": 1,
"title": "Acceleration of small bowel motility after oral administration of dai-kenchu-to (TJ-100) assessed by cine magnetic resonance imaging"
} |
{
"abstract": "Background—Despite extensive proximal ablation, all potentials frequently cannot be eliminated from the left pulmonary veins (PV). Methods and Results—PV electrograms were analyzed during sinus rhythm, coronary sinus, and left atrial appendage (LAA) pacing, and PV and LAA angiography performed. During pacing, an initial low-amplitude slow potential was recorded on the anterior aspect of the left superior PV and anticipated with shortest activation time by LAA pacing. Its timing coincided with posterior LAA activation, shown to be immediately adjacent to the left superior PV by angiography. In the left inferior PV, the first potential was smaller and less sharp, coinciding with adjacent low LA activation. Angiographically, the LAA was at least 15 mm from the left inferior PV. The second sharper potential in both left PVs was eliminated by proximal ablation. Conclusion—Far field LAA activity consistently adds to PV myocardial electrograms in the left superior PV whereas lower, less sharp extravenous potentials in the left inferior PV originate from the inferior LA. They can be identified by LAA and coronary sinus pacing.",
"corpus_id": 3211607,
"title": "Left Atrial Appendage Activity Masquerading as Pulmonary Vein Potentials"
} | {
"abstract": "The ConfiDENSE™ module (Carto3 v4) allows rapid annotation of endocardial electrograms acquired by multielectrode (ME) mapping. However, its accuracy in assessing atrial voltages is unknown.",
"corpus_id": 3754490,
"title": "Accuracy of left atrial bipolar voltages obtained by ConfiDENSE multielectrode mapping in patients with persistent atrial fibrillation"
} | {
"abstract": "Chronic alcoholics who had been abstinent from alcohol for more than 2 years were evaluated with the thyrotropin-releasing hormone (TRH) test. The findings suggest the following profound disturbances in the hypothalamic-pituitary-thyroid axis: 1) a \"euthyroid sick syndrome,\" evidenced by low levels of triiodothyronine (T3), high levels of reverse T3, and normal levels of thyroxine (T4) (this syndrome implies a decreased 5'-deiodination of T4 to T3 and of reverse T3 to its lesser iodinated metabolites), 2) an increased binding capacity for thyroid hormones, evidenced by a decreased T3-uptake value and an increased level of T4-binding globulin, and 3) thyroid-stimulating hormone (TSH) blunting in 31% of patients. Paradoxically, there was a positive correlation between basal T4 and delta max TSH in subjects with blunted TSH, but baseline TSH levels were reduced in subjects with and without blunted TSH.",
"corpus_id": 431505,
"score": 1,
"title": "Thyrotropin-releasing hormone (TRH) in abstinent alcoholic men."
} |
{
"abstract": "Synaptic Transmission is a multiscale process, revealed by various approaches, going from the molecular to the cellular level. This correspondence points out to our recent contributions in this field. We now provide the physical and mathematical foundation, leading to the rational quantification of the analysis of the stochastic steps, underlying synaptic transmission.",
"corpus_id": 2416848,
"title": "The Complexity of Synaptic Transmission Revealed by a Multiscale Analysis Approach From The Molecular to The Cellular Level"
} | {
"abstract": "The mean first passage time (MFPT) for a Brownian particle to reach a small target in cellular microdomains is a key parameter for chemical activation. Although asymptotic estimations of the MFPT are available for various geometries, these formula cannot be applied to degenerated structures where one dimension of is much smaller compared to the others. Here we study the narrow escape time (NET) problem for a Brownian particle to reach a small target located on the surface of a flat cylinder, where the cylinder height is comparable to the target size, and much smaller than the cylinder radius. When the cylinder is sealed, we estimate the MFPT for a Brownian particle to hit a small disk located centrally on the lower surface. For a laterally open cylinder, we estimate the conditional probability and the conditional MFPT to reach the small disk before exiting through the lateral opening. We apply our results to diffusion in the narrow synaptic cleft, and compute the fraction and the mean time for neurotransmitters to find their specific receptors located on the postsynaptic terminal. Finally, we confirm our formulas with Brownian simulations.",
"corpus_id": 15907625,
"title": "The Narrow Escape Problem in a Flat Cylindrical Microdomain with Application to Diffusion in the Synaptic Cleft"
} | {
"abstract": "The diversity of bacterial communities at three sites impacted by acid mine drainage (AMD) from the Yinshan Mine in China was studied using comparative sequence analysis of two molecular markers, the 16S rRNA and gyrB genes. The phylogenetic analyses retrieved sequences from six classes of bacteria, Nitrospira, Alphaproteobacteria, Gammaproteobacteria, Deltaproteobacteria, Acidobacteria, and Actinobacteria, as well as sequences related to the plastid of the cyanobacterium Cyanidium acidocaldarium and also some unknown bacteria. The results of phylogenetic analyses based on gyrB and 16S rRNA were compared. This confirmed that gyrB gene analysis may be a useful tool, in addition to the comparative sequence analysis of the 16S rRNA gene, for the analysis of microbial community compositions. Moreover, the Mantel test showed that the geochemical characteristics, especially the pH value and the concentration of iron, strongly influenced the composition of the microbial communities.",
"corpus_id": 30906661,
"score": 1,
"title": "Bacterial diversity based on 16S rRNA and gyrB genes at Yinshan mine, China."
} |
{
"abstract": "This handbook follows a range of other reports and publications on diaspora involvement in \ndevelopment and peacebuilding (COWI, 2009; De Haas, 2006; GTZ, 2009; Sinatti, 2010; \nSinatti et al., 2010). It has been written mainly for European practitioners and policymakers, \nand was developed as a result of our observation that there is now a markedly increased interest \namong European actors in ‘engaging diasporas’ that is not necessarily matched with confidence \non how to approach the task. During our research for the handbook, a common refrain that we \nheard was: ‘Our organization is very interested in engaging the diaspora, but we need to gain \nexperience on how to do this.’ In this document, examples are presented from various projects \nwithin five European countries: Finland, Germany, Italy, the Netherlands and Norway. In these \ncountries, most diaspora-focused initiatives are relatively recent. By bringing together lessons \nlearned from the experiences in these five countries, we hope the present document will facilitate \nan exchange of knowledge and experience between different European actors.",
"corpus_id": 153208135,
"title": "Participation of Diasporas in Peacebuilding and Development"
} | {
"abstract": "Officially recorded remittance flows to developing countries reached $316 billion in 2009, down 6 percent from $336 billion in 2008. With improved prospects for the global economy, remittance flows to developing countries are expected to increase by 6.2 percent in 2010 and 7.1 percent in 2011, a faster pace of recovery in 2010 than our earlier forecasts. The decline in remittance flows to Latin America that began with the onset of financial crisis in the United States appears to have bottomed out since the last quarter of 2009. Remittance flows to South Asia (and to a smaller extent East Asia) continued to grow in 2009 although at markedly slower pace than in the pre-crisis years. Flows to Europe and Central Asia and Middle-East and North Africa fell more than expected in 2009. These regional trends reveal that: (a) the more diverse the migration destinations, the more resilient are remittances; (b) the lower the barriers to labor mobility, the stronger the link between remittances and economic cycles in that corridor; and (c) exchange rate movements produce valuation effects, but they also influence the consumption-investment motive for remittances. The resilience of remittances during the financial crisis has highlighted their importance in countries facing external financing gaps. Remittances are now being factored into sovereign ratings in middle-income countries and debt sustainability analysis in low-income countries. Countries are also becoming increasingly aware of the income and wealth of overseas diaspora as potential sources of capital. Some countries are showing interest in financial instruments such as diaspora bonds and securitization of future remittances to raise international capital.",
"corpus_id": 166940088,
"title": "Outlook for Remittance Flows 2010-11 : Remittance Flows to Developing Countries Remained Resilient in 2009, Expected to Recover During 2010-11"
} | {
"abstract": "Abstract While large-scale ERP deployments have been prevalent in the private-sector, there have been few attempts to deploy them in the public sector. This paper describes the first large-scale, public-sector ERP implementation, which integrates systems for over 50 different agencies in the state of Pennsylvania government. Over 20 individuals were interviewed during three years to identify and describe issues, success factors, implementation strategies, and lessons learned as compared to private-sector ERP implementations.",
"corpus_id": 40032531,
"score": 1,
"title": "The ImaginePA Project: The First Large-Scale, Public Sector ERP Implementation"
} |
{
"abstract": "In the title compound, {[Zn(SO4)(C8H6N4)]·H2O}n, the ZnII atom is in a distorted octahedral environment. The ZnII atoms are bridged by both 2,2′-bipyrimidine and sulfate ligands, thus forming a three-dimensional polymeric metal–organic solid that contains uncoordinated water molecules in the interstitial space. O—H⋯O hydrogen bonding consolidates the crystal structure.",
"corpus_id": 8303634,
"title": "Poly[[(μ-2,2′-bipyrimidine-κ4 N 1,N 1′:N 3,N 3′)(μ-sulfato-κ2 O:O′)zinc(II)] monohydrate]"
} | {
"abstract": "Abstract New 2,2′-bipyrimidine (bpm)-based copper(II) coordination polymers have been synthesized and characterized. The structure of [Cu(bpm)(SO4)](H2O)n (1) contains zigzag chains which are constructed of Cu-bpm-Cu units, sulfate ions and additional bridging bpms. Sulfate ions coordinate to copper(II) ions, and link the chains to form a three-dimensional bundle structure. The crystal structure of [Cu2(bpm)(suc)0.5(ClO4)2(OH)(H2O)2]n (2) consists of a chain of bpm-bridged dinuclear copper(II) units linked by a carboxylate group from the succinate anion and a hydroxo group. Coordinated perchlorate ions also bridge the adjacent chains. The chain structure of [Cu(bpm)1.5(suc)0.5](ClO4)(H2O)2n (3) consists of the bpm-bridged dinuclear copper(II) units, amphimonodentate succinate dianions and terminal bpms. The succinate dianion acts as a bridging ligand between the dimers to yield a one-dimensional zigzag chain in the crystal. The terminal bpm stacks with a nearest-neighbor terminal bpm on an adjacent chain to form a linkage for a two-dimensional sheet. The present work affords a new strategy to build multi-dimensional coordination polymers, which is based on the use of [Cu-bpm-Cu]4+ copper(II) dinuclear units as ‘building blocks’. The geometries around the pyrimidyl rings of bpm are similar to each other, whereas the geometry of the copper atoms is different. The additional linking ligand as a peripheral ligand coordinates to the dimer unit to control the plasticity of the coordination sphere of copper(II); this makes the modification of the symmetry of its magnetic orbital easy. The magnetic susceptibilities were measured from 2 to 300 K and analyzed as antiferromagnetic Heisenberg S = 1/2 alternating chains to yield J = −38.8 cm−1, α = 0.93 (1), J = −132.2 cm−1, α = 0.22 (2) and J = −4.5 cm−1, α = 0.60 (3).",
"corpus_id": 98391822,
"title": "Mixed ligand copper(II) coordination polymers constructed by Cu-bpm-Cu dimer unit (bpm = 2,2′-bipyrimidine) as a building block. Crystal structures and magnetic properties of [Cu(bpm)(SO4)] (H2O)n, [Cu2(bpm)(suc)0.5(ClO4)2(OH)(H2O)2]n and [Cu(bpm)1.5(suc)0.5](ClO4)(H2O)2n (suc = succinate)"
} | {
"abstract": "The aim of this study was to show that multilayer fractal Brunauer−Emmett−Teller (mfBET) theory can be used as a tool to obtain information about the distribution of water in cellulose powder particles of varying crystallinity. Microcrystalline cellulose, agglomerated micronized cellulose, low-crystallinity cellulose, and cellulose powders from green and brown algae were characterized by scanning electron microscopy and mfBET analysis on water and nitrogen adsorption isotherms. The distribution of water in the cellulose materials was found to be characterized by a fractal dimension smaller than 1.5 for all powders. The results showed that for highly crystalline cellulose materials, such as Cladophora cellulose, the cellulose−water interactions take place mainly on cellulose fibril surfaces adjacent to open pores without causing any significant swelling of the material. For less ordered celluloses the water interaction was found to take place inside the bulk material and the water uptake process caused the...",
"corpus_id": 95602960,
"score": 1,
"title": "Fractal Dimension of Cellulose Powders Analyzed by Multilayer BET Adsorption of Water and Nitrogen"
} |
{
"abstract": "We compared the toxicity of subchronic exposure to equivalent masses of particles from sugar cane burning and traffic. BALB/c mice received 3 intranasal instillations/week during 1, 2 or 4 weeks of either distilled water (C1, C2, C4) or particles (15μg) from traffic (UP1, UP2, UP4) or biomass burning (BP1, BP2, BP4). Lung mechanics, histology and oxidative stress were analyzed 24h after the last instillation. In all instances UP and BP groups presented worse pulmonary elastance, airway and tissue resistance, alveolar collapse, bronchoconstriction and macrophage influx into the lungs than controls. UP4, BP2 and BP4 presented more alveolar collapse than UP1 and BP1, respectively. UP and BP had worse bronchial and alveolar lesion scores than their controls; BP4 had greater bronchial lesion scores than UP4. Catalase was higher in UP4 and BP4 than in C4. In conclusion, biomass particles were more toxic than those from traffic after repeated exposures.",
"corpus_id": 2351097,
"title": "Respiratory toxicity of repeated exposure to particles produced by traffic and sugar cane burning"
} | {
"abstract": "BackgroundSugar cane harvesting by burning on Maui island is an environmental health issue due to respiratory effects of smoke. Volcanic smog (“vog”) from an active volcano on a neighboring island periodically blankets Maui and could confound a study of cane smoke’s effects since cane burning is not allowed on vog days. This study examines the association between cane burning and emergency department (ED) visits, hospital admissions, and prescription fills for acute respiratory illnesses.MethodsThis retrospective study controlled for confounders that could increase respiratory distress on non-burn days by matching each burn day with a non-burn day and then comparing the ratio of patients with respiratory distress residing in the path of sugar burn smoke to those residing elsewhere on Maui on burn versus non-burn days. Patients with acute respiratory distress were defined as those with one or more acute respiratory diagnoses at one of the hospitals or emergency departments on Maui. Separately, patients with acute respiratory illness were identified through prescription records from four community pharmacies, specifically defined as those who filled prescriptions for acute respiratory distress.ResultsThere were 1,256 reports of respiratory distress prescriptions and 686 hospital/ED diagnoses of acute respiratory illness. The ratio of cases within to outside of smoke exposure was higher on burn days for both the ED/hospital data and the pharmacy, though not statistically significant. 
In post-hoc analyses of the pharmacy data based on the number of acres burned as a proxy for volume of smoke, there was a dose response trend for acreage burned such that the highest quartile showed a statistically significant higher proportion of acute respiratory distress in the exposed versus non-exposed regions (P = 0.015, OR 2.4, 95 % CI [1.2–4.8]).ConclusionsAfter adjusting for confounders on non-burn days, there was a significantly higher incidence of respiratory distress in smoke-exposed regions when greater amounts of acres were burned. Health officials should consider actions to reduce the negative health outcomes associated with sugar cane burning practices.",
"corpus_id": 18041732,
"title": "Association between sugar cane burning and acute respiratory illness on the island of Maui"
} | {
"abstract": "OBJECTIVE. We sought to evaluate the association between early protein and energy intake and neurodevelopment and growth of extremely low birth weight (<1000 g) infants. STUDY DESIGN. Daily protein and energy intakes were collected by chart review for the first 4 weeks of life on 148 extremely low birth weight survivors. A total of 124 infants (84%) returned for evaluation at 18 months' corrected age. Bivariate analysis tested correlations between weekly protein or energy intakes and Bayley Mental Development Index, Psychomotor Development Index, or growth at 18 months. Separate regression models evaluated contributions of protein (grams per kilogram per day) and energy intake (kilojoules per kilogram per day) to the Mental Development Index, Psychomotor Development Index, and growth, while controlling for known confounders. RESULTS. After adjusting for confounding variables, week 1 energy and protein intakes were each independently associated with the Mental Development Index. During week 1, every 42 kJ (10 kcal)/kg per day were associated with a 4.6-point increase in the Mental Development Index and each gram per kilogram per day in protein intake with an 8.2-point increase in the Mental Development Index; higher protein intake was also associated with lower likelihood of length <10th percentile. CONCLUSIONS. Increased first-week protein and energy intakes are associated with higher Mental Development Index scores and lower likelihood of length growth restrictions at 18 months in extremely low birth weight infants. Emphasis should be placed on providing more optimal protein and energy during this first week.",
"corpus_id": 6529250,
"score": 1,
"title": "First-Week Protein and Energy Intakes Are Associated With 18-Month Developmental Outcomes in Extremely Low Birth Weight Infants"
} |
{
"abstract": "Delayed swayback in five-month-old lambs Review of lead poisoning in cattle Acute bracken poisoning in a cow Nutritional cardiomyopathy in pigs Avian tuberculosis in a buzzard These are among matters discussed in the disease surveillance report for October 2014 from SAC Consulting: Veterinary Services (SAC C VS)",
"corpus_id": 658294,
"title": "Delayed swayback diagnosed in lambs with hindlimb paresis"
} | {
"abstract": "This paper describes a series of primary tumors of the liver parenchyma and biliary tract in cattle, sheep and pigs which were examined during a 12‐mouth survey of neoplasms found in 100 abattoirs throughout Great Britain. In cattle 302 tumors were studied, 36 of which arose in the liver. Cholangiocarcinomas outnumbered liver cell tumors. In sheep 32 of 107 tumors originated in the liver, with liver cell tumors more numerous than cholangiocarcinomas. Primary liver tumors were of a comparatively low incidence in pigs—only six in 139 neoplasms. There were no cholangiocarcinomas in pigs though one adencarcinoma of the gallbladder was found. Secondary tumors outnumbered primary liver tumors in cattle and pigs while the converse was true in sheep. Of the secondary tumors involving liver, lymphosarcoma predominated in the three species. Malignancy of liver cell tumors was difficult to assess in the absence of obvious metastases; however, most of the liver cell tumors were apparently benign while the great majority of the cholangiocarcinomas had metastasized to lymph nodes, peritoneum and lung. There was no significant association between primary hepatic tumours and the presence of any pre‐existing disease process.",
"corpus_id": 5432420,
"title": "Tumors of the liver in cattle, sheep and pigs"
} | {
"abstract": "The finding of a malignant epithelial tumor within the gallbladder of a cow would seem to be unusual enough to merit a description of the case.\n\nThe animal, a four-year old grade cow in good physical condition, had been slaughtered for food and during the usual examination of the carcass by the Federal Veterinary Inspector the gallbladder was found to be greatly distended. On opening the gallbladder a portion of the mucosa was found to be studded with an irregular, nodular, sessile type of growth. The mass, which was reddish, covered an area approximately 5 by 8 cm.; the estimated weight was 90 gm. The mass appeared vascular and contained several cysts, some as large as 0.7 cm. in diameter. The liver was examined carefully but abnormalities were not found. A careful search of the gallbladder and bile ducts failed to demonstrate any evidence of cholelithiasis. Jaundice was not present.",
"corpus_id": 73329571,
"score": 2,
"title": "Adenocarcinoma of the Gallbladder of a Cow"
} |
{
"abstract": "BACKGROUND\nSpinal injuries are the most devastating injuries and affect every aspect of patients' lives. This may cause lifelong disability due to spinal cord injury. Recovery of neurological functions is highly desirable. Early or late surgical intervention is still debatable, but majority recommend early intervention. The result of late surgical intervention in term of neurological recovery is not clear. This study focuses on neurological recovery after late surgical intervention. The objective of this study was to assess neurological recovery in term of ASIA grading in patients with traumatic spinal cord injury.\n\n\nMETHODS\nThis descriptive cross-sectional study was performed from June 2013 to June 2016. All patients treated for spinal trauma with spinal cord injury, operated after 24 hrs of injury were included in the study. Neurology was assessed according to ASIA scale preoperative and at 6 months. Data was analysed with the help of SPSS.\n\n\nRESULTS\nTotal of 149 patients, 32 (21.5%) were female and 117 (78.5%) male were included. mean age was 32±13.11 years. Ninety-six (64.4%) patients presented with fall while 53 (35.6%) presented with motor vehicular accidents (MVA). according to AO comprehensive classification 76 (51.1%) patients were type C, 47 (31.5) were type B and 26 (17.4%) were type A. preoperative neurology was ASIA A 65 (43.6%), B12 (8.1%), C 59 (39.6%) and D 13 (8.7%). Mean delay in surgery was 3.6±1.8 days with minimum of 1 and maximum 14 days. ASIA grading on 6 months was ASIA \"A\" 61 (40.9%), B4 (2.7%), C 26 (17.4%), D 33 (22.1%) and E 25 (16.8%). the overall improvement in neurology was in 67 (45%) of patients. improvement by one grade was documented in 49 (32.9%) patients, by two grades in 17 (11.4%) and by three grades in one patient (.7%).\n\n\nCONCLUSIONS\nfall from height is a major cause of spine injuries in our set up followed by RTA. Preventive measures need to be instituted to lessen the devastating outcome.",
"corpus_id": 3707181,
"title": "Neurological Recovery In Traumatic Spinal Cord Injuries After Surgical Intervention."
} | {
"abstract": "Study Design. Systematic Review. Objective. To determine whether early spinal stabilization in thoracolumbar spine trauma decreases morbidity and mortality. Summary of Background Data. The role of early spinal stabilization through surgical means may have a number of benefits. These include reduced morbidity and mortality because of more rapid mobilization afforded by spinal column stabilization and a reduction in the incidence and severity of sepsis and respiratory failure. There are several potential disadvantages of early surgery. The most strongly debated is the potential that the additional physiologic injury may result in an unintended increase in morbidity and mortality caused by worsening of existing injuries, such as with pulmonary or intracranial trauma. This problem may be compounded by increased hemorrhage and resulting hypotension. Operating in the presence of missed or underestimated associated injuries or under less-than-ideal conditions relative to the complexity of the surgery and resources required is also a potential disadvantage. Methods. A systematic review of the English-language literature was undertaken for articles published between January 1990 and December 2008. Electronic databases and reference lists of key articles were searched to identify published studies examining the timing of thoracolumbar fracture fixation. Two independent reviewers assessed the strength of literature using the Grading of Recommendations Assessment, Development, and Evaluation criteria, assessing quality, quantity, and consistency of results. Disagreements were resolved by consensus. Results. A total of 68 articles were initially screened, and 9 ultimately met the predetermined inclusion criteria. These studies demonstrated that early stabilization ofthoracic fractures reduced the mean number of days on a ventilator, the number of days in intensive care unit and in hospital, and reduced respiratory morbidity compared with late stabilization. 
This effect, other than the length of hospital stay, was not seen with stabilization of lumbar fractures. There is not enough evidence to determine the effect of the timing of stabilization on mortality in thoracolumbar fractures. Conclusion. Ideally, patients with unstable thoracic fractures should undergo early (<72 hours) stabilization of their injury to reduce morbidity and, possibly, mortality.",
"corpus_id": 4236134,
"title": "Does Early Fracture Fixation of Thoracolumbar Spine Fractures Decrease Morbidity or Mortality?"
} | {
"abstract": "Abstract Purpose This study was aimed to gauge the efficacy of primary AGV implantation with concurrent intraoperative intravitreal ranibizumab vs primary AGV implantation alone in the management of neovascular glaucoma (NVG). Methods This retrospective comparative study was carried out based on the data collected in patients of neovascular glaucoma who underwent Ahmed Glaucoma Valve implantation with or without concurrent intravitreal ranibizumab between the period from Feb 2009 to Feb 2015 involving two groups of 40 patients each, having the clinical diagnosis of neovascular glaucoma, having undergone pan-retinal photocoagulation with minimum 03 intravitreal injections of ranibizumab not less than 4 weeks prior to undergoing primary Ahmed glaucoma valve implantation and allotted randomly to either group to receive concurrent administration of intravitreal ranibizumab with Ahmed glaucoma valve (AGV) implant surgery or AGV implant surgery alone. The minimum qualifying follow-up was 3-years. The functional outcome measures included intraoperative and postoperative complications, intraocular pressure (IOP), and the need for antiglaucoma medication, if any, as well as best corrected visual acuity. Results Both the groups showed a significant decrease in IOP (p < 0.05). Sight and IOP threatening postoperative complications were significantly low in the study group. NVI regression was higher in the study group and re-emergence was significantly lesser in the study group (p = 0.002). Mean postop IOP had shown an excellent reduction in IOP up to 14.25 ± 2.05 mm Hg with 1.5 ± 1 antiglaucoma drugs in ranibizumab group and 15.25 ± 2.95 mm Hg with 1.7 ± 0.87 antiglaucoma drugs in the control group at the 3-years follow-up period. Surgical success rates were comparable between the two groups at 1 and 3-year. 
Conclusion Concurrent intravitreal ranibizumab along with primary AGV implantation minimizes postoperative complications, regresses NVI while accelerating stabilization of IOP and visual functions. How to cite this article Kaushik J, Parihar JKS, Shetty R, et al. A Long-term Clinical Study to Evaluate AGV with Concurrent Intravitreal Ranibizumab vs Primary AGV Implantation in Cases of Refractory Neovascular Glaucoma. J Curr Glaucoma Pract 2022;16(1):41–46.",
"corpus_id": 248583515,
"score": 1,
"title": "A Long-term Clinical Study to Evaluate AGV with Concurrent Intravitreal Ranibizumab vs Primary AGV Implantation in Cases of Refractory Neovascular Glaucoma"
} |
{
"abstract": "It has been proposed that bone damageability (i.e. bone's susceptibility to formation of damage) increases with the elevation or suppression of bone turnover. Suppression of turnover via bisphosphonates increases local bone mineralization, which theoretically should increase the susceptibility of bone to microcrack formation. Elevation of bone turnover has also been proposed to increase bone microdamage through an increase in bone intracortical porosity and local stresses and strains. The goal of this paper was to investigate the above proposals, i.e., whether or not increases to mineral content and porosity increase bone in-service damageability. To do this, we measured in vivo diffuse damage area (Df.Dm.Ar, %) and microcrack density (Cr.Dn) (cracks/mm(2)) in the same specimen from human cortical bone of the midshaft of the proximal femur obtained from cadavers with an age range of eight decades and examined their relationships with porosity, mineralization and age. Results of this study showed that Cr.Dn and Df.Dm.Ar increased with a decrease in bulk mineralization. This finding does not appear to support the proposal that damage accumulation increases with low bone turnover that results in increases mineralization. It was proposed however that the negative correlation between damage accumulation and mineralization may be attributed to highly mineralized regions of bone existing with under-mineralized regions resulting in an overall decrease in average bone mineralization. It was also found that microdamage accumulates with increasing porosity which does appear to support the proposal that elevated bone turnover that results in increased porosity can accelerate microdamage accumulation. Finally, it was shown that linear microcracks and Df.Dm.Ar accumulate with age differently, but because they correlate with each other, one may be the precursor for the other.",
"corpus_id": 11829689,
"title": "Age-related changes in porosity and mineralization and in-service damage accumulation."
} | {
"abstract": "Trabecular architecture becomes more rod-like and anisotropic in osteoporotic and aging trabecular bone. In order to address the effects of trabecular type and orientation on trabecular bone damage mechanics, microstructural finite element modeling was used to identify the yielded tissue in ten bovine tibial trabecular bone samples compressed to 1.2% on-axis apparent strain. The yielded tissue was mapped onto individual trabeculae identified by an Individual Trabeculae Segmentation (ITS) technique, and the distribution of the predicted yielding among trabecular types and orientations was compared to the experimentally measured microdamage. Although most of the predicted yielded tissue was found in longitudinal plates (73+/-11%), the measured microcrack density was positively correlated with the proportion of the yielded tissue in longitudinal rods (R(2)=0.52, p=0.02), but not in rods of other directions or plates. The overall fraction of rods and the fractions of rods along the longitudinal and transverse axes were also correlated with the measured microcrack density. In contrast, diffuse damage area did not correlate with any of these quantities. These results agree with the findings that both in vitro and in vivo microcrack densities are correlated with Structure Model Index (SMI), and are also consistent with decreased energy to failure in more rod-like trabecular bone. Together the results suggest that bending or buckling deformations of rod-like trabeculae may make trabecular structures more susceptible to microdamage formation. Moreover, while simple strain-based tissue yield criteria may account for macroscopic yielding, they may not be suitable for identifying damage.",
"corpus_id": 5900843,
"title": "Effects of trabecular type and orientation on microdamage susceptibility in trabecular bone."
} | {
"abstract": "Background Physical functioning can be assessed by different approaches that are characterized by increasing levels of individual appraisal. There is insufficient insight into which approach is the most informative in patients with ankylosing spondylitis (AS) compared with control subjects. Objective The objective of this study was to compare patients with AS and control subjects regarding 3 approaches of functioning: experienced ability to perform activities (Bath Ankylosing Spondylitis Functional Index [BASFI]), self-reported amount of physical activity (PA) (Baecke questionnaire), and the objectively measured amount of PA (triaxial accelerometer). Methods This case-control study included 24 AS patients and 24 control subjects (matched for age, gender, and body mass index). Subjects completed the BASFI and Baecke questionnaire and wore a triaxial accelerometer. Subjects also completed other self-reported measures on disease activity (Bath AS Disease Activity Index), fatigue (Multidimensional Fatigue Inventory), and overall health (EuroQol visual analog scale). Results Both groups included 14 men (58%), and the mean age was 48 years. Patients scored significantly worse on the BASFI (3.9 vs 0.2) than their healthy peers, whereas PA assessed by Baecke and the accelerometer did not differ between groups. Correlations between approaches of physical functioning were low to moderate. Bath Ankylosing Spondylitis Functional Index was associated with disease activity (r = 0.49) and physical fatigue (0.73) and Baecke with physical and activity related fatigue (r = 0.54 and r = 0.54), but total PA assessed by accelerometer was not associated with any of these experience-based health outcomes. Conclusions Different approaches of the concept physical functioning in patients with AS provide different information. Compared with matched control subjects, patients with AS report more difficulties but report and objectively perform the same amount of PA.",
"corpus_id": 30002108,
"score": 0,
"title": "Physical Functioning in Patients With Ankylosing Spondylitis: Comparing Approaches of Experienced Ability With Self-Reported and Objectively Measured Physical Activity"
} |
{
"abstract": "An 80-year-old man was admitted to an outside hospital for a headache, fever, and altered mental status. His admission to neurologic examination was notable for right eye deviation and right gaze preference. Neuroimaging was performed, and showed a ring-enhancing mass with central restricted diffusion in the right occipital lobe consistent with an abscess. During his admission, he became obtunded, and repeat imaging was performed. Repeat contrast-enhanced magnetic resonance imaging showed interval rupture of the right occipital lobe abscess into the adjacent lateral ventricle, with evidence of pyocephalus and ventriculitis. He was transferred to our facility for neurosurgical debridement of his abscess. Pyogenic ventriculitis is a life-threatening complication of cerebral abscesses caused by rupture and decompression of an abscess into the ventricles [1, 2]. Due to differences in blood supply, the wall of a cerebral abscess is usually thinner at its medial margin facing the ventricles, which predisposes rupture at this site. Signs of Intraventricular rupture of an abscess include a direct connection between the abscess and the ventricle, pyogenic debris within the ventricles, and abnormal thickening shown by increased enhancement of the ventricular wall [3]. Diffusion-weighted images are especially helpful, as viscous, pyogenic material is often diffusion-restricting, and will be bright on diffusion-weighted imaging. Delays in detecting ventriculitis can result in death or severe neurologic disability, even after successful treatment of infection [4]. Familiarity with the common imaging appearance of pyogenic ventriculitis can facilitate earlier diagnosis and treatment of this devastating condition, and potentially improve outcomes.",
"corpus_id": 1660064,
"title": "Precipitous neurologic decline following intraventricular rupture of a cerebral abscess: classic imaging findings in ventriculitis and pyocephalus"
} | {
"abstract": "Our retrospective study concerned 35 cases of surgical complications related to bacterial meningitis in 16 adults and 19 children. The mean age was 28 years for adults (15-56 years), and 6 months for children (1-12 months). Portal of entry for meningitis was found in 12 cases (35%): 8 sinusitis and 4 otitis. Delay to appearance of complications was 4.5 days, and to diagnosis confirmation 9 days with CT scan (17 cases), and transfontanellar ultrasonography (19 cases). The complications were: hydrocephalus, 19 cases (54%), brain empyemas, 7 cases (20%), abscesses, 10 cases (28.5%), ventriculitis, 2 cases (6%). Twenty-two bacteria were isolated from the CSF: Streptococcus pneumoniae (15 cases), Haemophilus influenzae (5 cases), Neisseria meningitidis (1 case), and Escherichia coli (1 case). Fourteen patients underwent neurosurgical treatment based on aspiration in case of suppuration and external drainage in case of hydrocephalus. The associated medical treatment was antibiotics combining third-generation cephalosporins, fluoroquinolone, and metronidazol, with a mean duration of 12 days. Recovery rate was 89%, letality 11%, and after effect rate were 33%. Our results confirm the low frequency of neurosurgical complications related to bacterial meningitis, but it emphasizes the role of an early CT-scan for diagnosis and prognosis.",
"corpus_id": 36984024,
"title": "[Neurosurgical complications of purulent meningitis in the tropical zone]."
} | {
"abstract": "BACKGROUND\nRupture of a cystic craniopharyngioma is a rare phenomenon. The rupture of the cyst causes decompression of the adjacent neural structures resulting in spontaneous improvement of the visual symptoms or level of sensorium. The leakage of its contents into the subarachnoid space gives rise to meningismus. We report an extremely rare phenomenon of an intraventricular rupture of a cystic craniopharyngioma, which resulted in acute neurological deterioration and chemical ventriculitis.\n\n\nCASE DESCRIPTION\nA 38-year-old lady presented with a 1-year history of frontal lobe dysfunction and bilateral primary optic atrophy. The CT scan showed a multi-loculated, hyperdense lesion in the region of the third ventricle and suprasellar cistern. She suffered acute deterioration of neurological status; computed tomography (CT) scan showed a hypodense lesion in the suprasellar cistern with persistent hydrocephalus. She was treated with ventricular drainage, steroids and anticonvulsants. Ventricular fluid showed high cholesterol and LDH levels. The diagnosis of craniopharyngioma was subsequently verified histologically. CONCLUSIONS The intraventricular rupture of a cystic craniopharyngioma can result in acute clinical deterioration and morbidity because of chemical ventriculitis. This is unlike the rupture in the subarachnoid space or sphenoid sinus which usually results in symptomatic improvement, although chemical meningitis may occur. This rare phenomenon should be recognized, and prompt ventricular drainage is advised. The literature is reviewed, and management of this condition is discussed.",
"corpus_id": 26100362,
"score": 2,
"title": "Spontaneous intraventricular rupture of craniopharyngioma cyst."
} |
{
"abstract": "The members of electronic communities are often unrelated to each other; they may have never met and have no information on each other's reputation. This kind of information is vital in electronic commerce interactions, where the potential counterpart's reputation can be a significant factor in the negotiation strategy. Two complementary reputation mechanisms are investigated which rely on collaborative rating and personalized evaluation of the various ratings assigned to each user. While these reputation mechanisms are developed in the context of electronic commerce, it is believed that they may have applicability in other types of electronic communities such as chatrooms, newsgroups, mailing lists, etc.",
"corpus_id": 1363540,
"title": "P.: Trust management through reputation mechanisms"
} | {
"abstract": "The purpose of trust and reputation systems is to strengthen the quality of markets and communities by providing an incentive for good behaviour and quality services, and by sanctioning bad behaviour and low quality services. However, trust and reputation systems will only be able to produce this effect when they are sufficiently robust against strategic manipulation or direct attacks. Currently, robustness analysis of TRSs is mostly done through simple simulated scenarios implemented by the TRS designers themselves, and this can not be considered as reliable evidence for how these systems would perform in a realistic environment. In order to set robustness requirements it is important to know how important robustness really is in a particular community or market. This paper discusses research challenges for trust and reputation systems, and proposes a research agenda for developing sound and reliable robustness principles and mechanisms for trust and reputation systems.",
"corpus_id": 14646850,
"title": "Challenges for Robust Trust and Reputation Systems"
} | {
"abstract": "We present a method for multi-target tracking that exploits the persistence in detection of object parts. While the implicit representation and detection of body parts have recently been leveraged for improved human detection, ours is the first method that attempts to temporally constrain the location of human body parts with the express purpose of improving pedestrian tracking. We pose the problem of simultaneous tracking of multiple targets and their parts in a network flow optimization framework and show that parts of this network need to be optimized separately and iteratively, due to inter-dependencies of node and edge costs. Given potential detections of humans and their parts separately, an initial set of pedestrian tracklets is first obtained, followed by explicit tracking of human parts as constrained by initial human tracking. A merging step is then performed whereby we attempt to include part-only detections for which the entire human is not observable. This step employs a selective appearance model, which allows us to skip occluded parts in description of positive training samples. The result is high confidence, robust trajectories of pedestrians as well as their parts, which essentially constrain each other's locations and associations, thus improving human tracking and parts detection. We test our algorithm on multiple real datasets and show that the proposed algorithm is an improvement over the state-of-the-art.",
"corpus_id": 736730,
"score": -1,
"title": "(MP) 2 T: Multiple People Multiple Parts Tracker"
} |
{
"abstract": "PURPOSE\nTo evaluate long-term follow-up results of excimer laser phototherapeutic keratectomy (PTK) in a Japanese population.\n\n\nMETHODS\nTwenty-six patients (31 eyes) with corneal opacity were treated with excimer laser PTK. Preoperative diagnoses included 16 eyes with band keratopathy, 10 with granular dystrophy, and 5 with corneal scar. Mean postoperative follow-up was 27 months.\n\n\nRESULTS\nCorneal opacity was reduced in all patients. At postoperative month 12, best spectacle-corrected visual acuity (BSCVA) improved from the preoperative level in 22 eyes of 28 eyes, did not change in 3 eyes, and declined in 3 eyes. BSCVA at month 24 was better than the preoperative acuity in 17 eyes of 23 eyes, similar in 1 eye, and worse in 5 eyes. Eyes with granular dystrophy showed significantly better BSCVA improvement than those with band keratopathy. A hyperopic shift of +1.0 diopter or more occurred in 14 eyes of 28 eyes at month 12 and in 12 eyes of 23 eyes at month 24. No serious adverse effects were encountered during the 3-year follow-up period.\n\n\nCONCLUSIONS\nExcimer laser PTK is a safe and effective procedure for the treatment of Japanese patients with superficial corneal opacity.",
"corpus_id": 846963,
"title": "Long-term follow-up of excimer laser phototherapeutic keratectomy."
} | {
"abstract": "Purpose: To evaluate refractive error changes after phototherapeutic keratectomy (PTK). Setting: University Eye Hospital, Kiel, and University Eye Hospital, Hulle, Germany. Methods: The MEL 60 excimer laser (Aesculap Meditec) was used in all cases. To even out the peaks and valleys of irregular surfaces, modulating agents were applied. The study included 45 patients with various preoperative corneal diseases: central scars, recurrent erosions, corneal dystrophies, and surface irregularities. Subjective and objective refraction, keratometry, slitlamp photography, and corneal topography were performed preoperatively and postoperatively. The follow‐up was up to 24 months. Results: Twenty‐six patients had stable postoperative refractions. Thirteen patients developed a hyperopic shift; the highest observed amount was +4.0 diopters. In seven patients, the astigmatic error increased, although no significant change in axis was measured. Three patients had a myopic shift. Conclusion: After PTK, all types of refractive change can occur. The greatest risk is that of a hyperopic shift. We saw a correlation between the degree of hyperopia and the ablation depth. Methods for preventing such changes include (1) a large treatment zone, (2) use of a polishing program involving a low viscosity fluid at the end of the laser procedure, (3) a two‐step treatment in selected cases to avoid ablations that are too deep.",
"corpus_id": 2246969,
"title": "Refractive changes after phototherapeutic keratectomy"
} | {
"abstract": "This thesis investigates the hydrodynamics of flow around/and or above an obstacle(s) placed in a fully turbulent developed flow such as flow around lateral bridge constriction, flow over bridge deck and flow over square ribs that are characterized with free surface flow. Also this thesis examines the flow around one-line circular cylinders placed at centre in a single open channel and floodplain edge in a compound, open channel. \n*Hydrodynamics studies of compound channels with vegetated floodplain have been carried out by a number studies of authors in the last three decades. To enrich our understanding of the flow resistance, comprehensive experiments are carried out with two vegetation configurations-wholly vegetated floodplain and one-line vegetation and then compared to smooth unvegetated compound channel. The main result of the flow characteristics in vegetated compound channels is that spanwise velocity profiles exhibit markedly different characters in the one-line and wholly-vegetated configurations. Moreover, flow resistance estimation results are in agreement with other experimental studies. \n*A complementary experimental study was carried out to investigate the water surface response in an open-channel flow through a lateral channel constriction and a bridge opening with overtopping. The flow through the bridge openings is characterized by very strong variation of the water surface including undular hydraulic jumps. The results of simulation that was carried by (Kara et al. 2014, 2015) showed a reasonable agreement between measured and computed water surface profiles for the constriction case and a fairly good was achieved for the overtopping case. \n*Evaluation of the shear layer dynamics in compound channel flows is carried out using infrared thermography technique with two vegetation configurations - wholly vegetated floodplain and one-line vegetation in comparison to non-vegetated floodplains. 
This technique also shows potential as a flow visualization technique, and leaves space for future studies and research. Results highlight that the mixing shear layer at the interface between the main channel and the floodplain is well captured and quantified by this novel approach. \n*Flume experiments of turbulent open channel flows over bed-mounted square bars at low and intermediate submergence are carried out for six cases. Two bar spacings, corresponding to transitional and k-type roughness, and three flow rates are investigated. This experimental study focused on two of the most important aspects of rough shallow channel flows: the water surface profile and the mean streamwise vertical velocity. Results show that the water surface was very complex and turbulent for the large-spacing cases, and comprised a single hydraulic jump between the bars. The streamwise position of the jump varied between the cases, with the distance of the jump from the previous upstream bar increasing with flow rate. The free surface was observed to be less complex in the small-spacing cases, particularly for the two higher flow rates, in which case the flow resembled a classic skimming flow. The Darcy-Weisbach friction factor was calculated for all six cases from a simple momentum balance, and it was shown that for a given flow rate the larger bar spacing produces higher resistance. The results of the simulation carried out by Chua et al. (2016) show good agreement with the experiments in terms of mean free surface position and mean streamwise velocity. \n*Empirical equations for the drag coefficient of vegetation arrays have been proposed by a number of authors. The research aims to assess the suitability of various empirical formulations to predict the drag coefficient of in-line vegetation. Drag coefficient results show that varying the diameter of the rigid emergent vegetation significantly affects flow resistance. 
Good agreement is generally observed with those empirical equations. \nKey Words: Flow Visualization; Infrared Thermography; Shallow Flows; Shear Layer; Image Processing; Experiment; Free Surface; Bridge Hydrodynamics; Bridge Overtopping; Vegetation Roughness; Emergent Vegetation; Drag Coefficient; Blockage; Compound Channel; Lateral Velocity Profiles; Hydraulic Resistance; Hydraulic Jump; Square Bars.",
"corpus_id": 114932732,
"score": 0,
"title": "Hydrodynamics of large-scale roughness in open channels"
} |
{
"abstract": "Mycobacterium avium subsp. hominissuis is an environmental bacterium causing opportunistic infections in swine, resulting in economic losses. Additionally, the zoonotic aspect of such infections is of concern. In the southeastern region of Norway in 2009 and 2010, an increase in condemnation of pig carcasses with tuberculous lesions was seen at the meat inspection. The use of peat as bedding in the herds was suspected to be a common factor, and a project examining pigs and environmental samples from the herds was initiated. Lesions detected at meat inspection in pigs originating from 15 herds were sampled. Environmental samples including peat from six of the herds and from three peat production facilities were additionally collected. Samples were analysed by culture and isolates genotyped by MLVA analysis. Mycobacterium avium subsp. hominissuis was detected in 35 out of 46 pigs, in 16 out of 20 samples of peat, and in one sample of sawdust. MLVA analysis demonstrated identical isolates from peat and pigs within the same farms. Polyclonal infection was demonstrated by analysis of multiple isolates from the same pig. To conclude, the increase in condemnation of porcine carcasses at slaughter due to mycobacteriosis seemed to be related to untreated peat used as bedding.",
"corpus_id": 939878,
"title": "Mycobacterium avium subsp. hominissuis Infection in Swine Associated with Peat Used for Bedding"
} | {
"abstract": "Besides Mycobacterium avium subsp. paratuberculosis (MAP), M. avium subsp. avium (MAA), M. avium subsp. silvaticum (MAS), and 'M. avium subsp. hominissuis' (MAH) are equally important members of M. avium complex, with worldwide distribution and zoonotic potential. Genotypic discrimination is a prerequisite to epidemiological studies which can facilitate disease prevention through revealing infection sources and transmission routes. The primary aim of this study was to identify the genetic diversity within 135 MAA, 62 MAS, and 84 MAH strains isolated from wild and domestic mammals, reptiles and birds. Strains were tested for the presence of large sequence polymorphism LSP(A)17 and were submitted to Mycobacterial interspersed repetitive units-variable-number tandem repeat (MIRU-VNTR) analysis at 8 loci, including MIRU1, 2, 3, and 4, VNTR25, 32, and 259, and MATR9. In 12 strains hsp65 sequence code type was also determined. LSP(A)17 was present only in 19.9% of the strains. All LSP(A)17 positive strains belonged to subspecies MAH. The discriminatory power of the MIRU-VNTR loci set used reached 0.9228. Altogether 54 different genotypes were detected. Within MAH, MAA, and MAS strains 33, 16, and 5 different genotypes were observed. The described genotypes were not restricted to geographic regions or host species, but proved to be subspecies specific. Our knowledge about MAS is limited due to isolation and identification difficulties. This is the first study including a large number of MAS field strains. Our results demonstrate the high diversity of MAH and MAA strains and the relative uniformity of MAS strains.",
"corpus_id": 4966844,
"title": "Molecular analysis and MIRU-VNTR typing of Mycobacterium avium subsp. avium, 'hominissuis' and silvaticum strains of veterinary origin."
} | {
"abstract": "Malignant catarrhal fever (MCF) is a serious, often fatal, disease that affects many species in the family Artiodactyla (even-toed ungulates) including cattle, bison, deer, moose, exotic ruminants and pigs. At least ten MCF viruses have been recognized, including two well-known viruses carried by sheep and wildebeest. Six of these viruses have been linked to disease while the others have been found, to date, only in asymptomatic carriers. Each MCF virus is highly adapted to its usual host, and does not normally cause disease in that species, but can cause fatal infections if transmitted to susceptible animals. Malignant catarrhal fever occurs in many countries worldwide. Sheep-associated MCF is the predominant form outside Africa. It is a particular problem in species such as farmed bison, deer and Bali cattle, although it occasionally affects relatively resistant hosts such as pigs and European breeds of cattle. Wildebeest associated MCF is an important disease among cattle in Africa, while zoos can be affected by either of these two forms, as well as by less common MCF viruses carried in various exotic ruminants. Malignant catarrhal fever is difficult to control, as the infections are widespread and asymptomatic in the reservoir species, and the incubation period can be long in susceptible animals. The only reliable methods of control are to separate susceptible species from carriers or breed virus-free reservoir hosts.",
"corpus_id": 8349749,
"score": 1,
"title": "Malignant Catarrhal Fever Malignant Catarrh, Malignant Head Catarrh, Gangrenous Coryza, Catarrhal"
} |
{
"abstract": "When the eyes pursue a fixation point that sweeps across a moving background pattern, and the fixation point is suddenly made to stop, the ongoing motion of the background pattern seems to accelerate to a higher velocity. Experiment I showed that this acceleration illusion is not caused by the sudden change in (i) the relative velocity between background and fixation point, (ii) the velocity of the retinal image of the background pattern, or (iii) the motion of the retinal image of the rims of the CRT screen on which the experiment was carried out. In experiment II the magnitude of the illusion was quantified. It is strongest when background and eyes move in the same direction. When they move in opposite directions it becomes less pronounced (and may disappear) with higher background velocities. The findings are explained in terms of a model proposed by the first author, in which the perception of object motion and velocity derives from the interaction between retinal slip velocity information and the brain's ‘estimate’ of eye velocity in space. They illustrate that the classic Aubert–Fleischl phenomenon (a stimulus seems to be moving slower when pursued with the eyes than when moving in front of stationary eyes) is a special case of a more general phenomenon: whenever we make a pursuit eye movement we underestimate the velocity of all stimuli in our visual field which happen to move in the same direction as our eyes, or which move slowly in the direction opposite to our eyes.",
"corpus_id": 1575687,
"title": "An Acceleration Illusion Caused by Underestimation of Stimulus Velocity during Pursuit Eye Movements: Aubert–Fleischl Revisited"
} | {
"abstract": "In the present work, we have shown the effect of a vestibular stimulation on the velocity perception of a moving scene. The intensity of this effect is related to the amplitude of the cart acceleration, image velocity, spatial frequency of the visual stimulus, and the angle between the directions of cart and image movement. A simple model has been developed to determine whether the perception of visual movement is due to the geometric projection of the vestibular evaluation on the visual vector, or the inverse.",
"corpus_id": 35893909,
"title": "Linear Acceleration Modifies the Perceived Velocity of a Moving Visual Scene"
} | {
"abstract": "The question of whether an afterimage viewed in a dark field appears to move during eye movement was studied by comparing recordings of eye movements with recordings of reports of perceived movement. The correlation was found to be quite good even under conditions where the eye movements were spontaneous rather than specifically directed. The results were taken to support the hypothesis that the behavior of the retinal image is “interpreted” by taking into account information concerning what the eyes are doing.",
"corpus_id": 143611926,
"score": 2,
"title": "Perceived movement of the afterimage during eye movements"
} |
{
"abstract": "The tumor suppressor p53 is arguably the most important transcription factor that safeguards the genome. Although it is clear that the transcriptional activity of p53 is required for its tumor suppressive function, the underlying mechanisms are still largely unknown. In the past several years, genome-wide approaches have provided novel insights into the tumor suppressive functions of p53. This mini-review summarizes recent progress in studying these functions using genome-wide approaches, and offers some perspectives on this rapidly expanding field. This article is part of a Special Issue entitled: Chromatin in time and space.",
"corpus_id": 2672266,
"title": "Genome-wide studies of the transcriptional regulation by p53."
} | {
"abstract": "TP53 is one of the most frequently-mutated and deleted tumor suppressors in cancer, with a dramatic correlation with dismal prognoses. In addition to genetic inactivation, the p53 protein can be functionally inactivated in cancer, through post-transductional modifications, changes in cellular compartmentalization, and interactions with other proteins. Here, we review the mechanisms of p53 functional inactivation, with a particular emphasis on the interaction between p53 and IκB-α, the NFKBIA gene product.",
"corpus_id": 18586559,
"title": "Mechanisms of p53 Functional De-Regulation: Role of the IκB-α/p53 Complex"
} | {
"abstract": "In this paper, an exact solution approach is described for solving a real-life school bus routing problem (SBRP) for transporting the students of an elementary school throughout central Ankara, Turkey. The problem is modelled as a capacitated and distance constrained open vehicle routing problem and an associated integer linear program is presented. The integer program borrows some well-known inequalities from the vehicle routing problem, which are also shown to be valid for the SBRP under consideration. The optimal solution of the problem is computed using the proposed formulation, resulting in a saving of up to 28.6% in total travelling cost as compared to the current implementation.",
"corpus_id": 491910,
"score": 0,
"title": "Solving school bus routing problems through integer programming"
} |
{
"abstract": "ATP-binding cassette transporter A1 (ABCA1) is an integral cell membrane protein that protects against cardiovascular disease by at least two mechanisms: by export of excess cholesterol from cells and by suppression of inflammation. ABCA1 exports cholesterol and phospholipids from cells by multiple steps that involve forming cell surface lipid domains, binding of apolipoproteins to ABCA1, activating signaling pathways, and solubilizing these lipids by apolipoproteins. ABCA1 executes its anti-inflammatory effect by modifying cell membrane lipid rafts and directly activating signaling pathways. The interaction of apolipoproteins with ABCA1 activates multiple signaling pathways, including Janus kinase 2/signal transducer and activator of transcription 3 (JAK2/STAT3), protein kinase A, Rho family G protein CDC42 and protein kinase C. Activating protein kinase A and Rho family G protein CDC42 regulates ABCA1-mediated lipid efflux, activating PKC stabilizes ABCA1 protein, and activating JAK2/STAT3 regulates both ABCA1-mediated lipid efflux and anti-inflammation. Thus, ABCA1 behaves both as a lipid exporter and a signaling receptor. Targeting the receptor-like property of ABCA1 using agonists for ABCA1 protein could become a promising new therapeutic approach for increasing ABCA1 function and treating cardiovascular disease. This article is part of a Special Issue entitled Advances in High Density Lipoprotein Formation and Metabolism: A Tribute to John F. Oram (1945-2010).",
"corpus_id": 1212113,
"title": "Regulation of ABCA1 functions by signaling pathways."
} | {
"abstract": "Background— Two macrophage ABC transporters, ABCA1 and ABCG1, have a major role in promoting cholesterol efflux from macrophages. Peritoneal macrophages deficient in ABCA1, ABCG1, or both show enhanced expression of inflammatory and chemokine genes. This study was undertaken to elucidate the mechanisms and consequences of enhanced inflammatory gene expression in ABC transporter–deficient macrophages. Methods and Results— Basal and lipopolysaccharide-stimulated thioglycollate-elicited peritoneal macrophages showed increased inflammatory gene expression in the order Abca1−/−Abcg1−/−>Abcg1−/−>Abca1−/−>wild-type. The increased inflammatory gene expression was abolished in macrophages deficient in Toll-like receptor 4 (TLR4) or MyD88/TRIF. TLR4 cell surface concentration was increased in Abca1−/−Abcg1−/−>Abcg1−/−>Abca1−/−>wild-type macrophages. Treatment of transporter-deficient cells with cyclodextrin reduced, and cholesterol-cyclodextrin loading increased, inflammatory gene expression. Abca1−/−Abcg1−/− bone marrow–derived macrophages showed enhanced inflammatory gene responses to TLR2, TLR3, and TLR4 ligands. To assess in vivo relevance, we injected thioglycollate intraperitoneally in Abcg1−/− bone marrow–transplanted, Western diet–fed, Ldlr-deficient mice. This resulted in a profound inflammatory infiltrate in the adventitia and necrotic core region of atherosclerotic lesions, consisting primarily of neutrophils. Conclusions— The results suggest that high-density lipoprotein and apolipoprotein A-1 exert anti-inflammatory effects by promoting cholesterol efflux via ABCG1 and ABCA1 with consequent attenuation of signaling via Toll-like receptors. In response to a peripheral inflammatory stimulus, atherosclerotic lesions containing Abcg1−/− macrophages experience an inflammatory “echo,” suggesting a possible mechanism of plaque destabilization in subjects with low high-density lipoprotein levels.",
"corpus_id": 8482947,
"title": "Increased Inflammatory Gene Expression in ABC Transporter–Deficient Macrophages: Free Cholesterol Accumulation, Increased Signaling via Toll-Like Receptors, and Neutrophil Infiltration of Atherosclerotic Lesions"
} | {
"abstract": "High-intensity focused ultrasound (HIFU)–mediated drug delivery is a relatively novel technique used to deliver drugs to a targeted location in the body. High-intensity focused ultrasound–mediated drug delivery has a broad range of applications, such as tumor therapy, treating central nervous diseases, transsclera drug delivery, and cardiovascular treatments. Targeted treatments prove to be advantageous to systemic treatments due to the reduction in the associated side effects. Thus, this literature review focuses on the various applications of HIFU-mediated drug delivery as well as the mechanism involved. This article is intended to supply the reader with a detailed description of how this technique can be used as well as describe its potential to surpass other treatment methods. Further discussion on the efficiency, limitations, and future of HIFU-mediated drug delivery is addressed. Furthermore, the gaps in the published literature, relative to this topic, are discussed. Ultimately, HIFU-mediated drug delivery is a developing technique that could provide patients with exciting treatment options.",
"corpus_id": 78394517,
"score": 1,
"title": "Using High-Intensity Focused Ultrasound as a Means to Provide Targeted Drug Delivery"
} |
{
"abstract": "The nature and extent of somatostatin-induced inhibition of pancreatic endocrine secretion were studied by administration of a number of stimuli of either glucagon or insulin to overnight-fasted baboons with and without an infusion of linear somatostatin. The stimuli for acute-phase insulin release were intravenous pulses of glucose, tolbutamide, isoproterenol, and secretin. When given 15 min after the start of a somatostatin infusion, these agents were essentially unable to stimulate insulin secretion. Chronic insulin secretion was stimulated by infusions of either glucose or glucagon. Within 10 min of the start of a superimposed infusion of somatostatin, insulin levels fell to less than 40 percent of prestimulus control and remained suppressed for the duration of the somatostatin infusion. Stimulation of glucagon secretion by insulin-induced hypoglycemia was also blocked by somatostatin. Plasma glucose decreased during somatostatin infusions except when superimposed upon an infusion of glucagon. Somatostatin had no effect on glucose production in a rat liver slice preparation. We conclude: (a) Somatostatin is a potent and so far universally effective inhibitor of both acute and chronic phases of stimulated insulin and glucagon secretion; (b) The inhibitory effect is quickly reversible and the pattern of recovery of secretion is appropriate to prevailing signals; (c) Present evidence suggests that the effect of somatostatin on blood glucose is mediated through its effect on blood glucagon; (d) In the overnight-fasted baboon, both in the basal state and 45 min into a 4-mg/kg-min glucose infusion, a somatostatin-induced fall in serum insulin levels appears to be unable to prevent a decrease in hepatic glucose production.",
"corpus_id": 1134184,
"title": "Somatostatin blockade of acute and chronic stimuli of the endocrine pancreas and the consequences of this blockade on glucose homeostasis."
} | {
"abstract": "In 1972, while searching in hypothalamic extracts for the releasing factor for growth hormone (at that time, and still, uncharacterized), we observed that the material from some fractions of the purification sequence would powerfully inhibit the secretion of immunoreactive growth hormone. Krulich & McCann (6) had previously observed such an inhibitory activity, but no attempt at its characterization had been reported. Our test system was composed of sets of identical tissue-culture plates in which anterior pituitary cells from normal rats were attached as a monolayer, four to six days after placing in the dish following dispersion with collagenase and trypsin (7). The major component with the growth hormone release-inhibiting activity was isolated, characterized, reproduced by total synthesis, and named somatostatin. It has the following primary sequence (8-10):",
"corpus_id": 304480,
"title": "Somatostatin: physiological and clinical significance."
} | {
"abstract": "Previous calculations using crystal structure coordinates (Strickland and Mercola [1976], Biochemistry. 15: 3857) have predicted that about 40 percent of the calculated tyrosyl circular dichroism of hexameric insulin is due to one of the four tyrosine residues: viz. the A14-tyrosine interacting with the nearby B1-phenylalanine ring group. We have tested this prediction by measuring the tyrosyl circular dichroism of an isomorphous analogue of insulin, des-B1-phenylalanine-insulin. Contrary to expectation, the resulting circular dichroism was the same as that of insulin. It is concluded that the B1-phenylalanine residue does not in fact make a large contribution to the circular dichroism of A14-tyrosine. This result is probably due to the thermal motion of the B1 and A14 ring groups not taken into account by the calculations. An example of the effects of thermal motion on the calculated circular dichroism is given and improvements that do take into account thermal motion are discussed.",
"corpus_id": 23864637,
"score": 1,
"title": "Side-chain mobility and the calculation of tyrosyl circular dichroism of proteins. Implications of a test with insulin and des-B1-phenylalanine insulin."
} |
{
"abstract": "MOTIVATION\nIn recent years, large-scale studies have been undertaken to describe, at least partially, protein-protein interaction maps, or interactomes, for a number of relevant organisms, including human. However, current interactomes provide a somewhat limited picture of the molecular details involving protein interactions, mostly because essential experimental information, especially structural data, is lacking. Indeed, the gap between structural and interactomic information is enlarging and thus, for most interactions, key experimental information is missing. We elaborate on the observation that many interactions between proteins involve a pair of their constituent domains and, thus, the knowledge of how protein domains interact adds very significant information to any interactomic analysis.\n\n\nRESULTS\nIn this work, we describe a novel use of the neighborhood cohesiveness property to infer interactions between protein domains given a protein interaction network. We have shown that some clustering coefficients can be extended to measure a degree of cohesiveness between two sets of nodes within a network. Specifically, we used the meet/min coefficient to measure the proportion of interacting nodes between two sets of nodes and the fraction of common neighbors. This approach extends previous works where homolog coefficients were first defined around network nodes and later around edges. The proposed approach substantially increases both the number of predicted domain-domain interactions as well as its accuracy as compared with current methods.",
"corpus_id": 2075021,
"title": "Using neighborhood cohesiveness to infer interactions between protein domains"
} | {
"abstract": "Background: Protein binding site prediction by computational means can yield valuable information that complements and guides experimental approaches to determine the structure of protein complexes. Predictions become even more relevant and timely given the current resolution of protein interaction maps, where there is a very large and still expanding gap between the available information on: (i) which proteins interact and (ii) how proteins interact. Proteins interact through exposed residues that present differential physicochemical properties, and these can be exploited to identify protein interfaces.\nResults: Here we present VORFFIP, a novel method for protein binding site prediction. The method makes use of a broad set of heterogeneous data and a definition of residue environment, by means of Voronoi Diagrams, that are integrated by a two-step Random Forest ensemble classifier. Four sets of residue features (structural, energy terms, sequence conservation, and crystallographic B-factors) used in different combinations, together with three definitions of residue environment (Voronoi Diagrams, sequence sliding window, and Euclidean distance), have been analyzed in order to maximize the performance of the method.\nConclusions: The integration of different forms of information, such as structural features, energy terms, evolutionary conservation and crystallographic B-factors, improves the performance of binding site prediction. Including the information of neighbouring residues also improves the prediction of protein interfaces. Among the different approaches that can be used to define the environment of exposed residues, Voronoi Diagrams provide the most accurate description. Finally, VORFFIP compares favourably to other methods reported in the recent literature.",
"corpus_id": 1856700,
"title": "Improving the prediction of protein binding sites by combining heterogeneous data and Voronoi diagrams"
} | {
"abstract": "To alleviate the computational cost associated with on-the-fly ab initio semiclassical calculations of molecular spectra, we propose the single-Hessian thawed Gaussian approximation in which the Hessian of the potential energy at all points along an anharmonic classical trajectory is approximated by a constant matrix. The spectra obtained with this approximation are compared with the exact quantum spectra of a one-dimensional Morse potential and with the experimental spectra of ammonia and quinquethiophene. In all cases, the single-Hessian version performs almost as well as the much more expensive on-the-fly ab initio thawed Gaussian approximation and significantly better than the global harmonic schemes. Remarkably, unlike the thawed Gaussian approximation, the proposed method conserves energy exactly, despite the time dependence of the corresponding effective Hamiltonian, and, in addition, can be mapped to a higher-dimensional time-independent classical Hamiltonian system. We also provide a detailed comparison with several related approximations used for accelerating prefactor calculations in semiclassical simulations.",
"corpus_id": 84844762,
"score": 0,
"title": "Single-Hessian thawed Gaussian approximation."
} |