query
dict
pos
dict
neg
dict
{ "abstract": "Domestic labor plays a central role in daily life, as a result women from Sontecomapan generate diverse strategies that allows participation other productive work. Using a qualitative methodology daily life strategies used by women that work on ecotourism are reviewed, with the purpose of making visible time-space restrictions as a consequence of reproductive work. Main strategies are focused on management of spaces and time to reduce the role of anchor due to reproductive work.", "corpus_id": 152745994, "title": "Ecoturismo y vida cotidiana de las mujeres en sontecomapan (Veracruz, México)" }
{ "abstract": "El proceso de reorientación de las actividades económicas tradicionales en las zonas rurales españolas ha producido cambios en el empleo y la propia estructura sociolaboral, destacando la emergencia del turismo y el papel relevante de las mujeres rurales en su consolidación y en el desarrollo rural en general. No obstante, la incorporación de la mujer al mercado laboral turístico muestra dificultades y diferencias de género a nivel de actividad laboral y de brecha salarial. En este contexto, se han caracterizado éstas a través de un trabajo de campo y una encuesta a trabajadores/as en alojamientos y otros establecimientos turísticos en el medio rural de Andalucía, incluyendo tanto las características empresariales de éstos últimos como las personales y laborales de los encuestados. Finalmente se ha podido cuantificar una brecha salarial del 23% en perjuicio de las retribuciones medias de las mujeres rurales, así como identificar otras desigualdades relativas a la concienciación de la vida familiar y laboral, la promoción profesional y el acceso a puestos directivos y de mayor responsabilidad.", "corpus_id": 158884344, "title": "Turismo, brecha salarial y desigualdades laborales de género en espacios rurales de Andalucía (España)" }
{ "abstract": "Although the great majority of UK farms are run as family businesses, the family dimension of these businesses is frequently neglected. There are nevertheless important inferences to be drawn from studying the farm family, its forms and functions, and the way that the family and the business interact. This calls for a multidisciplinary approach and the review attempts to draw together insights from industrial economics, social anthropology, history and rural sociology as they apply to the farm family business. It then relates these insights to considerations of the family development cycle, processes of inheritance and succession, roles of farmers' wives and multiple-job farming families. The literature review points to opposing tendencies within the population of farm businesses. Family forms of organisation and relationships may have become less relevant to farming at the lower end of the size scale but more relevant to the conduct of a successful large farm business.", "corpus_id": 154564766, "score": 2, "title": "THE FARM AS A FAMILY BUSINESS: A REVIEW" }
{ "abstract": "A case of the bare lymphocyte without apparent immunodeficiency was observed in a 33-year-old woman who had no history of severe infections but suffered from sino-bronchial disease. No HLA-A and -B antigens (class I antigens) were detected at the cell surface of lymphocytes, granulocytes, and platelets, but they were expressed, although at a reduced level, on the cultured B lymphoid cell line. T lymphocytes were normal in number and in the relative proportion of T4/T8 and responded to mitogens but not to PPD and candida. HLA-DR antigens (class II antigens) were present on B lymphocytes and showed intermediate MLR-stimulatory capacity, which made it possible to deduce the patient's HLA genotype. She was found to be homozygous at consanguinity for HLA-A, -B, and -DR antigens. The numbers of B lymphocytes, immunoglobulins, and complements were all in the normal range; there was, however, a low level of IgM. Two-dimensional gel analysis of class I antigens revealed the presence of normally expressed beta-2 microglobulins (B2M) and an apparently single set of class I heavy chains, allowing us to consider two alternative cellular mechanisms in this defect; the presence of one abnormal class I structural gene and the regulatory mechanism that acted in cis were suggested.", "corpus_id": 301232, "title": "Defective expression of HLA class I antigens: A case of the bare lymphocyte without immunodeficiency" }
{ "abstract": "This report compares both the HLA restriction patterns and Ir gene regulation of human in vitro T cell-mediated cytotoxic responses to the trinitrophenyl (TNP) hapten and the type A and B influenza viruses. Comparison of the restriction patterns of these cytotoxic responses indicates that A/HK and B/HK are recognized in conjunction with polymorphic HLA-A and -B self determinants, whereas TNP is recognized in association with a more complex spectrum of self determinants. These self determinants include polymorphic HLA-A and -B determinants, polymorphic non-HLA-A and -B determinants that probably include DR antigens, and non-polymorphic determinants that appear to be species specific. Analysis of the self determinants recognized by human T cells in conjunction with influenza virus demonstrates that (a) the antigens recognized by virus-immune T cells can be distinguished from the serologically defined HLA-A and -B antigenic determinants, and (b) there may be multiple self determinants on individual HLA-A molecules that T cells can recognize in conjunction with virus. The results of family studies indicate that donors' T cells often preferentially respond to virus (and to a lesser extent TNP) in conjunction with products of one parental HLA haplotype (haplotype preference). In the family study, three HLA-identical siblings preferentially recognize paternal HLA antigens in conjunction with A/HK, and maternal HLA antigens in conjunction with B/HK and TNP, which indicates antigen-specific HLA-lined genetic control. Population studies demonstrate virus-specific differences in the ability of donors to respond to selected self HLA-A and -B antigens in conjunction with virus. These differences may be controlled by Ir genes that are distinct from HLA-A and -B, because differences are observed in the response patterns of HLA-A- and -B-matched individuals.", "corpus_id": 22812698, "title": "Human cytotoxic T cell responses to trinitrophenyl hapten and influenza virus. Diversity of restriction antigens and specificity of HLA-linked genetic regulation." }
{ "abstract": "Abstract Cancer immunotherapy has recently undergone rapid advances and has become an integral part of the treatment armamentarium in various malignancies. However, tissue-based biomarker development in this arena has been slow, and valid biomarker identification to guide immunotherapeutic management is desperately needed. “Liquid” or blood-based biopsies potentially offer more convenient and efficient means to judge the immune milieu of individual patients and identify who will benefit most from immunotherapy. The following review highlights the current literature regarding the application of liquid biopsies to cancer immunotherapy.", "corpus_id": 4512799, "score": 1, "title": "Liquid Biopsies and Cancer Immunotherapy" }
{ "abstract": "A large number of continuous human leukemia cell lines have been established over the last three decades. Clearly, leukemia cell lines have become important research tools. Here, we have summarized the immunological, molecular and standard cytogenetic features of a panel of well characterized B cell precursor (BCP)-leukemia cell lines which were derived from patients with acute lymphoblastic/undifferentiated leukemia (ALL/AUL) or chronic myeloid leukemia (CML) in blast crisis. Following the recently proposed immunological EGIL classification, we assigned our panel of 27 BCP-cell lines to one of the following categories: B-I pro-B cell line; B-II common-B cell line; and B-III pre-B cell line. All cell lines express general B-lineage associated surface markers (HLA-DR, CD22, CD79a) being negative for surface immunoglobulin (Ig); the differences between the subgroups reside in expression of CD10 and cytoplasmic Ig. Several BCP-cell lines show the myelomonocytic cell-associated markers CD13 and/or CD33. These immunologically 'biphenotypic' BCP-cell lines are generally TdT+ CD10+ CD13+ CD19+ CD22+ CD34+ and carry the Philadelphia (Ph) translocation. The BCP-cell lines display surface receptors for interferon-gamma (CD119), interleukin-7 (CD127) and FLT-3 ligand (CD135). All BCP-cell lines examined have complex numerical and structural chromosomal alterations including translocations commonly seen in BCP-ALL such as t(4;11), t(9;22), t(11;19), t(12;21), and t(17;19) involving the fusion genes MLL-AF4, BCR-ABL, ENL-MLL, TEL/ETV6-AML1 and E2A-HLF, respectively. Besides the expected rearrangement of the Ig heavy chain receptor gene, several cell lines also have rearrangements of the T cell receptor genes beta, gamma or delta. While some BCP-cell lines express (aberrantly) myeloperoxidase at the mRNA level, most lines are negative in the immunological or cytochemical staining. Several large series documented the difficulty in establishing such BCP cell lines with success rates in the range of 10-20% (on average 15%). Still, since the establishment of the first bonafide BCP-cell line in 1974 (cell line REH), some 150 cell lines have been established of which, however, only a small percentage have been sufficiently well characterized and described. A higher success rate for immortalizing any given leukemia cell might depend on a closer emulation of the physiological in vivo microenvironment. The possibility to grow in vitro leukemia cells at will would represent ideal experimental systems permitting basic research and patient-specific investigations. In summary, the use of well-characterized BCP-cell lines provide unprecedented opportunities for studying a multitude of biological aspects related to normal and neoplastic B-lymphocytes.", "corpus_id": 2394258, "title": "Establishment and characterization of human B cell precursor-leukemia cell lines." }
{ "abstract": "Acute lymphoblastic leukemia (ALL) is one of the most common hematological malignancies at pediatric ages and is characterized by different chromosomal rearrangements and genetic abnormalities involved in the differentiation and proliferation of lymphoid precursor cells. Brusatol is a quassinoid plant extract extensively studied due to its antineoplastic effect through global protein synthesis and nuclear factor erythroid 2-related factor-2 (NRF2) signaling inhibition. NRF2 is the main regulator of cellular antioxidant response and reactive oxygen species (ROS), which plays an important role in oxidative stress regulation. This study aimed to evaluate the effect of brusatol in in vitro models of ALL. KOPN-8 (B-ALL), CEM (T-ALL), and MOLT-4 (T-ALL) cell lines were incubated with increasing concentrations of brusatol, and the metabolic activity was evaluated using the resazurin assay. Flow cytometry was used to evaluate cell death, cell cycle, mitochondrial membrane potential (Δψmit), and to measure ROS and reduced glutathione (GSH) levels. Our results show that brusatol promoted a decrease in metabolic activity in ALL cell lines in a time-, dose-, and cell-line-dependent manner. Brusatol induced a cytostatic effect by cell cycle arrest in G0/G1 in all cell lines; however, cell death mediated by apoptosis was only observed in T-ALL cells. Brusatol leads to an oxidative stress imbalance by the increase in ROS levels, namely, superoxide anion. Redox imbalance and cellular apoptosis induced by brusatol are highly modulated by mitochondria disruption as a decrease in mitochondrial membrane potential is detected. These data suggest that brusatol might represent a new therapeutic approach for acute lymphoblastic leukemia, particularly for ALL T-cell lineage.", "corpus_id": 252132376, "title": "Antitumor Effect of Brusatol in Acute Lymphoblastic Leukemia Models Is Triggered by Reactive Oxygen Species Accumulation" }
{ "abstract": "The measurement of plasma microRNAs (miRNAs) and messenger RNAs (mRNAs) is the most recent effort to identify novel biomarkers in preclinical safety. These genomic markers often display tissue-specific expression, may be released from the tissues into the plasma during toxic events, change early and with high magnitude in tissues and in the blood during specific organ toxicities, and can be measured using multiplex formats. Their validation as biomarkers has been challenged by the technical difficulties. In particular, the concentration of miRNAs in the plasma depends on contamination by miRNAs originating from blood cells and platelets, and the relative fraction of miRNAs in complexes with Argonaute 2, high-density lipoproteins, and in exosomes and microvesicles. In spite of these hurdles, considerable progress has recently been made in assessing the potential value of miRNAs in the clinic, especially in cancer patients and cardiovascular diseases. The future of miRNAs and mRNAs as biomarkers of disease and organ toxicity depends on our ability to characterize their kinetics and to establish robust collection and measurement methods. This review covers the basic biology of miRNAs and the published literature on the use of miRNAs and mRNAs as biomarkers of specific target organ toxicity.", "corpus_id": 2860338, "score": 1, "title": "Frontiers in Preclinical Safety Biomarkers" }
{ "abstract": "Kinematic, kinetic, and electromyography data were collected from the biceps femoris, rectus femoris (RF), gluteus maximus, and erector spinae (ES) during a step and elliptical exercise at a standardized workload with no hand use. Findings depicted 95% greater ankle plantar flexion (p = .01), 29% more knee extension (p = .003), 101% higher peak knee flexor moments (p < .001), 54% greater hip extensor moments (p < .001), 268% greater anterior joint reaction force (p = .009), 37% more RF activation (p < .001), and 200% more ES activation (p < .001) for the elliptical motion. Sixteen percent more hip flexion (p < .001), 42% higher knee extensor moments (p < .001), and 54% greater hip flexor moments (p = .041) occurred during the step motion. Biomechanical differences between motions should be considered when planning an exercise regimen.", "corpus_id": 3132786, "title": "Peak Muscle Activation, Joint Kinematics, and Kinetics During Elliptical and Stepping Movement Pattern on a Precor Adaptive Motion Trainer" }
{ "abstract": "The purpose of this study was to compare lower extremity joint angular position and muscle activity during elliptical exercise using different foot positions and also during exercise on a lateral elliptical trainer. Sixteen men exercised on a lateral elliptical and on a standard elliptical trainer using straight foot position, increased toe-out angle, and a wide step. Motion capture and electromyography systems were used to obtain 3D lower extremity joint kinematics and muscle activity, respectively. The lateral trainer produced greater sagittal and frontal plane knee range of motion (ROM), greater peak knee flexion and extension, and higher vastus medialis activation compared with other conditions (P < .05). Toe-out and wide step produced the greatest and smallest peak knee adduction angles, respectively (P < .05). The lateral trainer produced greater sagittal and frontal plane hip ROM and greater peak hip extension and flexion compared with all other conditions (P < .05). Toe-out angle produced the largest peak hip external rotation angle and lowest gluteus muscle activation (P < .05). Findings from this study indicate that standard elliptical exercise with wide step may place the knee joint in a desirable frontal plane angular position to reduce medial knee loads, and that lateral elliptical exercise could help improve quadriceps strength but could also lead to larger knee contact forces.", "corpus_id": 8956129, "title": "Lower limb joint angular position and muscle activity during elliptical exercise in healthy young men." }
{ "abstract": "Abstract A new method involving enrichment and immuno-polymerase chain reaction is presented for rapid and sensitive detection of Escherichia coli O157:H7 in ground beef. The bacteria in the spiked beef sample were enriched in a non-selective medium for 250 min and the E. coli O157:H7 cells were captured rapidly from the culture medium, using magnetic beads coated with E. coli O157-specific antibody. Partial stretches of sequences encoding Shiga toxins (Stx1 and Stx2) were amplified by the polymerase chain reaction and detected by agarose gel electrophoresis. Detection of a 215/212-base pair amplicon indicated contamination of the test sample with E. coli O157:H7. The method enabled detection of a single colony-forming unit of E. coli O157:H7 conclusively in 8 h. The suitability of the method to real life pathogen monitoring applications was demonstrated by accurate confirmation of E. coli O157:H7 in coded culture-positive samples of naturally contaminated ground beef and hamburger patties.", "corpus_id": 83817844, "score": 1, "title": "Detection of Escherichia coli O157:H7 in ground beef in eight hours" }
{ "abstract": "The occurrence and fate of polycyclic aromatic hydrocarbons (PAH) in nearshore marine sediments of Australia is discussed. Available information indicates that PAH are accumulating in the sediments and organisms of estuaries and harbours with both highly urbanized/industrialized and non-urban catchments. PAH levels in polluted sediments are similar to those of grossly polluted areas of Japan, North America and Europe, however PAH sources cannot be identified from the information available. PAH appear to persist in reducing environments, while in relatively pristine environments that have been previously exposed to PAH, conditions are probably favourable for the aerobic degradation of PAH by microorganisms.", "corpus_id": 6213381, "title": "Polycyclic aromatic hydrocarbons in nearshore marine sediments of Australia." }
{ "abstract": "Polycyclic aromatic hydrocarbons (PAHs) and linear alkylbenzenes (LABs) were used as anthropogenic markers of organic chemical pollution of sediments in the Selangor River, Peninsular Malaysia. This study was conducted on sediment samples from the beginning of the estuary to the upstream river during dry and rainy seasons. The concentrations of ƩPAHs and ƩLABs ranged from 203 to 964 and from 23 to 113 ng g(-1) dry weight (dw), respectively. In particular, the Selangor River was found to have higher sedimentary levels of PAHs and LABs during the wet season than in the dry season, which was primarily associated with the intensity of domestic wastewater discharge and high amounts of urban runoff washing the pollutants from the surrounding area. The concentrations of the toxic contaminants were determined according to the Sediment Quality Guidelines (SQGs). The PAH levels in the Selangor River did not exceed the SQGs, for example, the effects range low (ERL) value, indicating that they cannot exert adverse biological effects.", "corpus_id": 406976, "title": "Anthropogenic waste indicators (AWIs), particularly PAHs and LABs, in Malaysian sediments: Application of aquatic environment for identifying anthropogenic pollution." }
{ "abstract": "Phenanthrene-degrading bacteria were isolated from Chesapeake Bay samples by the use of a solid medium which had been overlaid with an ethanol solution of phenanthrene before inoculation. Eighteen representative strains of phenanthrene-degrading bacteria with 21 type and reference bacteria were examined for 123 characteristics representing physiological, biochemical, and nutritional properties. Relationships between strains were computed with several similarity coefficients. The phenogram constructed by unweighted-pair-group arithmetic average linkage and use of the simple Jaccard (SJ) coefficient was used to identify seven phena. Phenanthrene-degrading bacteria were identified as Vibrio parahaemolyticus and Vibrio fluvialis by their clustering with type and reference strains. Several phenanthrene-degrading bacteria resembled Enterobacteriaceae family members, although some Vibrio-like phenanthrene degraders could not be identified.", "corpus_id": 20935507, "score": 2, "title": "Numerical taxonomy of phenanthrene-degrading bacteria isolated from the Chesapeake Bay" }
{ "abstract": "Our objective was to characterize monoclonal antiphospholipid antibodies (APL) and identify disease-associated antigens in patients with the antiphospholipid syndrome (APS). We used the monoclonal antibody HL-5B, derived from a patient with APS suffering from multiple ischemic events, to screen a 12-mer peptide phage display library (New England Biolabs, London, England). The identified phage clones were sequenced and the derived consensus peptide was synthesized. The peptide was used to perform competitive inhibition experiments for their ability to inhibit the binding of the monoclonal antibody and of serum antibodies to cardiolipin and phosphatidylserine. Additionally patients and control sera were screened for their binding reactivities to this peptide. Using this 12-mer phage display library the peptide APHKHKASLSIY as consensus peptide for the monoclonal antiphospholipid antibody HL-5B could be identified. In competitive inhibition studies we showed that this peptide is able to inhibit the binding of HL-5B to cardiolipin and phosphatidylserine and furthermore another antiphospholipid antibody used as control was also inhibited in its binding to phospholipids. Using 21 sera from APS patients 67% showed a binding to the peptide in a specific ELISA above the cutoff level, generated with sera from 20 healthy controls. Out of the reactive patients' sera we used two exemplarily to perform inhibition studies. Both sera could be inhibited more than 40% in their binding to cardiolipin in a commercially available antiphospholipid antibody assay (Aescu.diagnostics, Wendelsheim, Germany). The identified peptide APHKHKASLSIY simulates the antigenic structure recognized from a subpopulation of serum antiphospholipid antibodies. This might indicate that the diversity of the antiphospholipid antibodies is limited and only few epitopes or few common structures are responsible for the development of those antibodies. Tests using these epitopes will strongly improve laboratory diagnosis of the APS.", "corpus_id": 1546965, "title": "Identification of a peptide mimicking the binding pattern of an antiphospholipid antibody." }
{ "abstract": "Antiphospholipid syndrome (APS) is characterized by recurrent fetal loss, repeated thromboembolic phenomena, and thrombocytopenia. The syndrome is believed to be caused by antiphospholipid beta-2-glycoprotein-I (beta2GPI)-dependent Abs or anti-beta2GPI Abs by themselves. Using a hexapeptide phage display library, we identified three hexapeptides that react specifically with the anti-beta2GPI mAbs ILA-1, ILA-3, and H-3, which cause endothelial cell activation and induce experimental APS. To enhance the binding of the peptides to the corresponding mAbs, the peptides were lengthened to correspond with the site of the beta2GPI epitope being recognized by these mAbs. As a result, the following three peptides were prepared: A, NTLKTPRVGGC, which binds to ILA-1 mAb; B, KDKATFGCHDGC, which binds to ILA-3 mAb; and C, CATLRVYKGG, which binds to H-3 mAb. Peptides A, B, and C specifically inhibit both in vitro and in vivo the biological functions of the corresponding anti-beta2GPI mAbs. Exposure of endothelial cells to anti-beta2GPI mAbs and their corresponding peptides led to the inhibition of endothelial cell activation, as shown by decreased expression of adhesion molecules (E-selectin, ICAM-1, VCAM-1) and monocyte adhesion. In vivo infusion of each of the anti-beta2GPI mAbs into BALB/c mice, followed by administration of the corresponding specific peptides, prevented the peptide-treated mice from developing experimental APS. The use of synthetic peptides that focus on neutralization of pathogenic anti-beta2GPI Abs represents a possible new therapeutic approach to APS.", "corpus_id": 12414327, "title": "Prevention of experimental antiphospholipid syndrome and endothelial cell activation by synthetic peptides." }
{ "abstract": "A powerful technique is described to localize the activities of a range of enzymes in a wide variety of plant tissues. The method is based on the coupling of the enzymatic reaction to the reduction of NAD and subsequent reduction and precipitation of nitroblue tetrazolium. Enzymes that did not reduce NAD could be visualized by coupling their activities to glucose-6-phosphate dehydrogenase activity via one or more intermediary 'coupling' enzymes. The method is shown to be applicable for the detection of the activities of hexokinase, fructokinase, sucrose synthase, uridine 5'-diphospho-glucose pyrophosphorylase, ADP-glucose pyrophosphorylase, phosphoglucomutase, and phosphoglucose isomerase. It could be used for all tissues tested, including green leaves, stems, roots, fruits, and seeds. The method is specific, very sensitive, and has a high spatial resolution, giving information at the cellular and the subcellular level. The localization of sucrose synthase, invertase, and uridine 5'-diphospho-glucose pyrophosphorylase in transgenic potato plants, carrying a cytokinin biosynthesis gene, is studied and compared with wild-type plants.", "corpus_id": 25428533, "score": 1, "title": "In situ staining of activities of enzymes involved in carbohydrate metabolism in plant tissues." }
{ "abstract": "728 Med'mal anatomy, condition, and function. The process relies on the ability to identify unique characteristics in the optical property differences of various tissues of interest. Applications of this technology include characterisation of healthy and pathological tissue, measurement of tissue function, and detection and grading of solid tumour tissue. The potential for portability, lack of ionising radiation, higher sensitivity and specificity, lower cost, and the possibility of real-time anlaysis of in vivo tissue makes optical imaging an attractive complement to the current imaging modalities. There have been three basic approaches to implementing medical optical imaging: time-domain, frequency-domain and continuous-wave (CW) (BENARON et al., 1993). The first two techniques tend to require elaborate radiation sources and sensitive detection devices, and the latter uses simpler illumination and sensing apparatus but relies more heavily on signal processing (BEUTHAN and MOLLER, 1993). CW optical imaging consists of the continuous illumination of a target by a band-limited source and the continuous monitoring of the light ~ansmitted by or reflected from the target. It is usually implemented as a method of time-integrated imaging, requiring temporal resolution in the order of seconds or milliseconds, as opposed to the picosecond or femtosecond resolution needed for time-domain and frequency-domain imaging. Two major applications of CW optical reflectance imaging are turnout imaging and functional imaging. The former uses a light-absorbing dye which specifically sequesters in tumour tissue to locate tumours (I-IOCaMAN et al., 1993; HOCI-LMAN and HAGLUND, 1993). For functional imaging, dyes can be used to translate tissue activity into optical changes. However, it is also possible to measure intrinsic optical changes arising from functional tissue activity without the use of dyes (GRINVALD et al., 1988; HILL, 1949; Lmrd~ et al., 1989). Fig. 1 shows the general data flow for both applications. The first step is to illuminate the subject. While continuously ilhmainating the target, a control target is captured and stored, followed by the application of a mechanism to induce changes in optical reflectance. A dye is injected into the subject for tumour imaging, or a stimulus (e.g. electrical, somatosensory or visual) to cause tissue activity is applied for functional imaging. After acquiring a control image and applying the stimulus , a sequence of data images is captured and stored. The set of data images, together with the control image, forms a complete imaging \"run.' As shown in the 'realtime operation' block in Fig. 1, the data images are aligned", "corpus_id": 6712853, "title": "Microcomputer-based system for real-time optical imaging" }
{ "abstract": "In this book questions regarding neuronal populations were primarily investigated using single- and multi-unit recordings with electrodes. The complexity of the functional organization which is revealed by these studies indicates a need for new methods which could obtain complementary information that is hard or impossible to obtain with microelectrodes. Imaging methods that provide high spatial or temporal resolution maps of the cortical organization of large cortical areas are of particular promise for studies of functional organization.", "corpus_id": 134905311, "title": "Optical Imaging of Neuronal Activity in the Living Brain" }
{ "abstract": "Abstract For the past few years, there has been a rapid rise in the use of ion beam technology for surface nanostructuring of various materials. For LiNbO 3 single crystals, we have observed surface nanostructures after surpassing a potential energy ≈ 35 keV of the incident ion. Here, we introduce plasma expansion approach, based on a dimensionless self-similar parameter, to explain the main features of the mechanism responsible for the created structures. The analysis of the results reveals that the temperature ratio between ions-to-electrons does not change the profile of the expansion at early time. However, increasing the temperature ratio makes the expansion domain longer; nevertheless, a further rising leads to decrease the expansion range. Furthermore, the niobium ions exhibit a dominant effect, in comparison to lithium ions, for the modification of the plasma-expanded region.", "corpus_id": 210525562, "score": 1, "title": "On the formation of nanostructures by inducing confined plasma expansion" }
{ "abstract": "Rhinitis medicamentosa, the syndrome of rebound nasal congestion secondary to prolonged topical intranasal use of vasoconstrictors, is reviewed. In this condition, the nasal airway is very obstructed; atrophic rhinitis is the most serious complication. Management consists of withdrawing the offending nasal spray and alleviating the nasal obstruction by means of any of several treatment modalities.", "corpus_id": 546652, "title": "Rhinitis medicamentosa." }
{ "abstract": "Since adrenalin itself possesses such remarkably definite physiological properties which are shared by the synthetical substance described in the preceding paper, it seemed to be of interest to try and trace some connection between their chemical structure and physiological action and, in particular, to see if the activity was to be ascribed to any particular chemical group or combination of groups.", "corpus_id": 95838176, "title": "On the Physiological Activity of Substances Indirectly Related to Adrenalin" }
{ "abstract": "Seismic data reveal that water level in Lake Malawi, East Africa, was 250 to 500 meters lower before about 25,000 years ago. Water levels in Lake Tanganyika at that time were more than 600 meters below the current lake level. A drier climate appears to have caused these low stands, but tectonic tilting may also have been a contributing factor in Lake Malawi. High-angle discordances associated with shallow sequence boundaries suggest that these low stands probably lasted many tens of thousands of years. Because of its basement topography, the Lake Tanganyika basin had three separate paleolakes, whereas the Lake Malawi basin had only one. The different geographies of these paleolakes may be responsible in part for the differences in the endemic fish populations in these lakes.", "corpus_id": 24695589, "score": 0, "title": "Low Lake Stands in Lakes Malawi and Tanganyika, East Africa, Delineated with Multifold Seismic Data" }
{ "abstract": "Purpose – The aim of this paper is to investigate causal link between social capital and microfinance by testing the role of social capital in explaining the household's access to microcredit under the group-based lending approach. Design/methodology/approach – Household level primary data was collected from a rural district of Pakistan. Principal component analysis (PCA) was applied to construct a social capital index, whereas two logit models were developed to predict the probabilities of access to credit. Besides, few qualitative statements have also been used to supplement the results from main empirical analysis and to understand the impact mechanism of social capital on microfinance participation. Findings – Participation in local organizations, heterogeneity of associations and level of both generalized and institutional trust were identified as the key dimensions of structural and cognitive social capital to influence households' access to credit. On the other hand, when these dimensions were combined in a single social capital index, the result indicated that social capital index has no significant effect on microfinance participation. This result provides support to the argument that grouping all the dimensions of social capital into one index may run the risk of losing the explanatory power of social capital. Practical implications – The results of the study could be encouraging for governments and other development agencies. The existing social capital could be utilized in the design and delivery of microfinance programs as well as other rural development activities. The results of the study also encourage policy makers to invest in the creation of social capital either directly or by providing environment supportive of its creation. Originality/value – The study is a contribution to the limited empirical literature on social capital and microfinance. This study is the first of its kind in Pakistan and hopefully will contribute to the limited knowledge on social capital literature in the country generally and in the context of rural development specifically.", "corpus_id": 152993435, "title": "Investigating causal relationship between social capital and microfinance: Implications for rural development" }
{ "abstract": "This study examines the role of trust and intermediation functions in microfinance and microenterprise development. Fifteen Self-help Groups (SHGs) were selected from three different locations in India for Focus Group Discussions (FGDs) and in-depth personal discussions. The peer trust was found higher than the intermediation trust during the microfinance group formation as well as group operations. The level of intermediation trust was higher than the peer trust during microenterprise development. The entry level trust was cognitive in nature, and transformed to ‘affective peer trust’ and ‘affective intermediation trust’ at the operational level. Trust was found to be the causality of social capital in SHGs. Intermediation trust was higher for early adopters of entrepreneurship than that of the late adopters. In case of microentrepreneur, the cognitive intermediation trust was transformed to affective intermediation trust with the passage of time.RésuméCette étude examine le rôle de la confiance et des fonctions d’intermédiation dans le développement du microfinancement et des microentreprises. Quinze groupes d’entraide ont été sélectionnés dans trois endroits différents d’Inde pour des discussions de groupe ciblées et des discussions personnelles approfondies. Il a été constaté que la confiance entre homologues était supérieure à la confiance d’intermédiation au cours de la formation du groupe de microfinance et les opérations de groupe. Le niveau de la confiance d’intermédiation était supérieur à la confiance entre homologues au cours du développement des microentreprises. La confiance de départ était de nature cognitive et était transformée en « confiance affective entre homologues » et en « confiance d’intermédiation affective » au niveau opérationnel. Il a été constaté que la confiance était la causalité du capital social dans les groupes d’entraide. La confiance d’intermédiation était supérieure pour les adopteurs précoces de l’entrepreneuriat à celle des adopteurs tardifs. Dans le cas du microentrepreneur, la confiance d’intermédiation cognitive était transformée en confiance d’intermédiation affective avec le temps.ZusammenfassungDiese Studie untersucht die Rolle von Vertrauens- und Vermittlungsfunktionen bei der Entwicklung der Mikrofinanzierung und von Mikrounternehmen. Es wurden 15 Selbsthilfegruppen aus drei verschiedenen Orten in Indien für Fokusgruppendiskussionen und ausführliche persönliche Gespräche ausgewählt. Man stellte fest, dass das Peer-Vertrauen bei der Gründung der Mikrofinanzgruppe sowie den Gruppentätigkeiten größer war als das Vermittlungsvertrauen. Während der Entwicklung des Mikrounternehmens war das Vermittlungsvertrauen größer als das Peer-Vertrauen. Das Vertrauen auf der Einstiegsstufe war kognitiver Art und wandelte sich auf der betrieblichen Ebene in „affektives Peer-Vertrauen“und „affektives Vermittlungsvertrauen“. In den Selbsthilfegruppen erwies sich das Vertrauen als die Kausalität des sozialen Kapitals. Das Vermittlungsvertrauen war bei frühen Teilnehmern der Unternehmung größer als das Vertrauen bei späteren Teilnehmern. Bei den Mikrounternehmern wandelte sich das kognitive Vermittlungsvertrauen im Laufe der Zeit in affektives Vermittlungsvertrauen.ResumenEl presente estudio examina el papel de la confianza y de las funciones de intermediación en el desarrollo de las microfinanzas y las microempresas. Se seleccionaron quince Grupos de Autoayuda (Self-help Groups, “SHG”) de tres lugares diferentes en India para Debates de Grupos Focales y debates personales en profundidad. Se encontró que la confianza entre iguales era mayor que la confianza de intermediación durante la formación del grupo de microfinanzas, así como también en las operaciones del grupo. El nivel de confianza de intermediación fue superior que la confianza entre pares durante el desarrollo de microempresas. La confianza en el nivel de entrada era de naturaleza cognitiva, y se transformó en “confianza afectiva entre pares” y “confianza afectiva de intermediación” a nivel operativo. Se encontró que la confianza es la causalidad del capital social en los SHG. La confianza de intermediación fue superior para los primeros en adoptar el espíritu emprendedor que la de los últimos en adoptarlo. En el caso del microempresario, la confianza cognitiva de intermediación se transformó en confianza afectiva de intermediación con el paso del tiempo.摘要本研究检验信任与中介功能在微型金融与微型企业发展中所扮演的角色。从印度三个不同地方选取十五个自助团体(Self-help Groups (SHGs) )进行专题小组讨论(Focus Group Discussions)与深度个人讨论。研究发现,在微型金融团体形成以及团体运营过程中,同伴信任要高于中介信任。 入门层次的信任(entry level trust)是属于认知性的,并在运营层次中被转化为“情感同伴信任(affective peer trust)”与“情感中介信任(affective intermediation trust)”。研究发现,在自助团体中,信任是社会资本的起因。对于较早接受企业家精神的人而言,与较迟接受企业家精神的人相比,其中介信任较高。在微型企业的情况中,随着时间的流逝,认知中介信任(cognitive intermediation trust)被转化为情感中介信任。要約本研究では、マイクロファイナンスと小規模企業開発の信頼および仲介機能の役割を調査する。フォーカス・グループ・ディスカッションと個人の詳細なディスカッションについて、インドにおける3 つの異なる場所からフィフティーン・セルフヘルプ・グループ (SHG) を選択した。ピア信頼は、マイクロファイナンス・グループの形成、集団志向における仲介信頼より高かった。仲介信頼は、マイクロファイナンス開発中のピア・トラストよりレベルが高かった。エントリ・レベルの信頼は自然の中で認知され、「感情ピア信頼」と業務レベルの「感情的仲介信頼」に変換された。信頼は、SHGにおける社会資本の因果関係が明らかになった。仲介信頼は、後者の適用者より早期の起業家の方が高かった。マイクロ起業家の場合、認知的仲介信頼は時間の経過とともに感情的仲介信頼に変換された。ملخصتتناول هذه الدراسة دور مهام الثقة والوساطة في تنمية تمويل المشاريع المتناهية في الصغروالمشاريع الصغيرة. تم اختيار خمسة عشر مجموعات المساعدة الذاتية (SHGs) من ثلاثة مواقع مختلفة في الهند لمناقشات مجموعة تركيز ومناقشات شخصية متعمقة. تم إكتشاف أن ثقة النظير أعلى من ثقة الوساطة خلال تشكيل مجموعة تمويل المشاريع المتناهية في الصغر فضلا” عن تشغيل المجموعة. كان مستوى ثقة الوساطة أعلى من ثقة النظيرخلال تنمية المشروعات المتناهية في الصغر. كانت ثقة المستوى الأولي معرفية في الطبيعة، تحولت إلى “ثقة النظيرذات الصلة بالمشاعر” و “ثقة وساطة النظيرذات الصلة بالمشاعر “ على مستوى التشغيل. تم إيجاد أن الثقة هي العلاقة بين السبب والنتيجة لرأس المال الاجتماعي في مجموعات المساعدة الذاتية.(SHGs) كانت ثقة الوساطة للأوائل الذين يستخدمون تكنولوجيا حديثة للمشاريع أعلى من الذين يستخدمون تكنولوجيا حديثة في وقت متأخر. في حالة الأعمال المتناهية في الصغر، تحولت ثقة الوساطة المعرفية إلى ثقة الوساطة ذات الصلة بالمشاعرمع مرور الوقت.", "corpus_id": 146791864, "title": "Trust, Social Capital, and Intermediation Roles in Microfinance and Microenterprise Development" }
{ "abstract": "The ligislation, social reforms and extensive research programmes hi the Twent ie th Century have benefitted the mentally retarded population. Prior to this, the mentally retarded were institutionalised and received only custodial care. As research progressed, the living conditions for the mentally retarded improved and the concept of the multifaceted approach to the problem of the mental retard was born . This in turn , led to the evolution of teams of experts work n g individually and in collaboration with each other. The Medical Team comprising of the family physician, obstetrician, pediatrician, psychiatrist, clinical and educational psychologist and the nursing staff handle the early detection, clinical management , health care, prevention and relevant research. The social worker's, work within the community structure, as well as clinical and educational settings, and manages the social welfare aspect of the retard and his whole family. The Educational and Rehabilitation Team, comprising of teachers, occupational therapists and workshop trainers, train these children from the earliest years for the development of motor, social i7itellectual and vocational skills to enable them to be as independent as possible.", "corpus_id": 41401315, "score": 0, "title": "MULTIDISCIPLINARY REHABILITATION OF THE MENTAL RETARD" }
{ "abstract": "Ultraviolet (UV) ice photodesorption is an important non-thermal desorption pathway in many interstellar environments that has been invoked to explain observations of cold molecules in disks, clouds, and cloud cores. Systematic laboratory studies of the photodesorption rates, between 7 and 14 eV, from CO:N2 binary ices, have been performed at the DESIRS vacuum UV beamline of the synchrotron facility SOLEIL. The photodesorption spectral analysis demonstrates that the photodesorption process is indirect, i.e., the desorption is induced by a photon absorption in sub-surface molecular layers, while only surface molecules are actually desorbing. The photodesorption spectra of CO and N2 in binary ices therefore depend on the absorption spectra of the dominant species in the sub-surface ice layer, which implies that the photodesorption efficiency and energy dependence are dramatically different for mixed and layered ices compared with pure ices. In particular, a thin (1–2 ML) N2 ice layer on top of CO will effectively quench CO photodesorption, while enhancing N2 photodesorption by a factor of a few (compared with the pure ices) when the ice is exposed to a typical dark cloud UV field, which may help to explain the different distributions of CO and N2H+ in molecular cloud cores. This indirect photodesorption mechanism may also explain observations of small amounts of complex organics in cold interstellar environments.", "corpus_id": 6846835, "title": "INDIRECT ULTRAVIOLET PHOTODESORPTION FROM CO:N2 BINARY ICES — AN EFFICIENT GRAIN-GAS PROCESS" }
{ "abstract": "X-ray photodesorption yields of N215 and CO13 are derived as a function of the incident photon energy near the N (∼400 eV) and O K-edge (∼500 eV) for pure N215 ice and mixed CO13:N215 ices. The photodesorption spectra from the mixed ices reveal an indirect desorption mechanism for which the desorption of N215 and CO13 is triggered by the photoabsorption of CO13 and N215, respectively. This mechanism is confirmed by the x-ray photodesorption of CO13 from a layered CO13/N215 ice irradiated at 401 eV on the N 1s → π* transition of N215. This latter experiment enables us to quantify the relevant depth involved in the indirect desorption process, which is found to be 30-40 monolayers in that case. This value is further related to the energy transport of Auger electrons emitted from the photoabsorbing N215 molecules that scatter toward the ice surface, inducing the desorption of CO13. The photodesorption yields corrected from the energy that can participate in the desorption process (expressed in molecules desorbed by eV deposited) do not depend on the photon energy; hence, they depend neither on the photoabsorbing molecule nor on its state after Auger decay. This demonstrates that x-ray induced electron stimulated desorption, mediated by Auger scattering, is the dominant process explaining the desorption of N215 and CO13 from the ices studied in this work.", "corpus_id": 251088852, "title": "Indirect x-ray photodesorption of N215 and CO13 from mixed and layered ices." }
{ "abstract": "We report the first detection of the N = 111 → 000 and 110 → 000 ground state rotational lines of o-ND2H at 335.5 and 388.7 GHz, obtained in the Lynds 1689N, Barnard 1, and Lynds 1544 molecular clouds using the Caltech Submillimeter Observatory (CSO). The submillimeter ND2H lines have moderate opacities and simple hyperfine patterns, which allow accurate determination of the excitation temperature, H2 volume density, and molecular column density. Both transitions have high critical densities. The 389 GHz line, in particular, traces molecular material with densities above a few × 106 cm-3. The strong 389 GHz ND2H emission in LDN 1689N implies a high fraction of dense gas in this source, ~30%, as compared to ~15% in B1 and LDN 1544. All these regions are sites of strong molecular depletion and heavy deuteration. Nonaccreting molecules, H and its isotopologues, are difficult to study, but in the sources studied here it appears that ammonia and its isotopologues are not completely frozen out, even in the high density gas. In the well-studied case of LDN 1544, the volume probed by the ND2H emission has densities of ~106-107 cm-3, within the range where the \"complete freezeout\" has been predicted to occur. The critical density of the 389 GHz ND2H line is close to that of the 309 GHz ND3 line. Observations of these two transitions thus provide an accurate measure of the [ND3]/[ND2H] fractionation ratio in the very dense gas. The [ND3]/[ND2H] ratio in LDN 1689N (~3%) appears lower than the values measured in B1 and LDN 1544 (~7%-10%), indicating that different chemical processes may be at work in these environments. The submillimeter lines of deuteroammonia are relatively strong and detectable from good sites, such as Mauna Kea or Chajnantor. Interferometric observations of these lines with the Submillimeter Array (SMA), and subsequently the Atacama Large Millimeter Array (ALMA), will provide new opportunities to study the physics and chemistry of cold, dense ISM, where most molecules are depleted onto dust grains.", "corpus_id": 97612724, "score": 2, "title": "Ground State Rotational Lines of Doubly Deuterated Ammonia as Tracers of the Physical Conditions and Chemistry of Cold Interstellar Medium" }
{ "abstract": "Biofilms formed by Escherichia coli O157:H7 on cantaloupe rind were characterized in this study. Cantaloupe rind pieces inoculated with E. coli O157:H7 B6-914 was sampled after 2, 12, and 24h incubation for imaging with cryo-scanning electron microscopy (Cryo-SEM) or treating with lauroyl arginate ethyl (LAE) or sodium hypochlorite (SHC). Cryo-SEM images showed that E. coli O157:H7 formed a biofilm within 12h on the rind surface. For rind samples treated with LAE or SHC, the residual cell counts were significantly different (p<0.05) between 2 and 12h incubation, and between 2 and 24h of incubation. For the 2h incubation samples, E. coli O157:H7 was undetectable (>5-log reduction) after treatment with 2000μg/mL of LAE or SHC. In contrast, for 12h incubation samples, 2000μg/mL of LAE or SHC could only achieve 1.74 or 1.86-log reduction, respectively. The study showed the low efficacy of LAE and SHC on cantaloupe rind surface to reduce the E. coli biofilm, suggesting the needs for cantaloupe cleaning methods beyond washing with conventional antimicrobial agents.", "corpus_id": 7032817, "title": "Biofilm of Escherichia coli O157:H7 on cantaloupe surface is resistant to lauroyl arginate ethyl and sodium hypochlorite." }
{ "abstract": "Hass avocados may become contaminated with Salmonella and Listeria monocytogenes at the farm and the packing facility or later during transportation and at retail. In Mexico, avocados are frequently sold in bulk at retail markets, where they are stored at room temperature for several hours or days and exposed to potential sources of microorganisms. These conditions may favor the entry, adhesion, survival, and biofilm formation of Salmonella and L. monocytogenes. The aim of this study was to determine the occurrence of Salmonella, L. monocytogenes, and other Listeria species and the levels of indicator microorganisms on the surface of avocados sold at retail markets. A total of 450 samples (Persea americana var. Hass) were acquired from retail markets located in Guadalajara, Mexico. One group of 225 samples was evaluated for the presence of Salmonella and for enumeration of aerobic plate counts, yeasts and molds, Enterobacteriaceae, coliforms, and Escherichia coli. The other 225 samples were processed for isolation of L. monocytogenes and other Listeria species. Microbial counts (log CFU per avocado) were 4.3 to 9.0 for aerobic plate counts, 3.3 to 7.1 for yeasts and molds, 3.3 to 8.2 for Enterobacteriaceae, 3.3 to 8.4 for coliforms, and 3.3 to 6.2 for E. coli. Eight samples (3.5%) were positive for Salmonella. Listeria spp. and L. monocytogenes were detected in 31 (13.8%) and 18 (8.0%) of 225 samples, respectively. Listeria innocua, Listeria welshimeri, and Listeria grayi were isolated from 7.6, 1.3, and 0.9% of samples. These results indicate that avocados may carry countable levels of microorganisms and could be a vehicle for transmission of Salmonella and L. monocytogenes.", "corpus_id": 209416268, "title": "Salmonella, Listeria monocytogenes, and Indicator Microorganisms on Hass Avocados Sold at Retail Markets in Guadalajara, Mexico." }
{ "abstract": "The energetics of the δ → δ* transition in quadruply bonded complexes are investigated using a very simple valence-bond formalism, called the isolated δ → δ* manifold (IDDM) model. In this model all electrons except for those that occupy the δ or δ* molecular orbitals are ignored, as are explicit metal-ligand interactions. The resulting equations allow the calculation of transition energies very inexpensively, albeit with poor quantitative agreement: the δ → δ* transition in prototypical quadruple-bond systems is predicted to occur at energies greater than 70,000 cm1. The model incorporates configuration interaction between the two1A1g configurations (|δδ| and |δ*δ*|) to roughly the same extent as do correlated all-electron calculations. The application of the method to systems that involve relative changes in δ → δ* transition energies, such as the torsional twisting of quadruple bonds, is more successful quantitatively.", "corpus_id": 95962421, "score": 0, "title": "A simplified View of δ → δ* Transition Energies in Compounds with multiple metal-metal bonds: The isolated δ — δ* manifold model" }
{ "abstract": "The occurrence of anemia (hemoglobin concentration <13.0 g/dl in men and <12.0 g/dl in women) in patients with chronic heart failure has recently received increased attention. Its prevalence ranges from <10% among patients with mild heart failure to more than 50% for those with advanced disease ([1", "corpus_id": 2690502, "title": "Anemia in heart failure time to rethink its etiology and treatment?" }
{ "abstract": "BACKGROUND\nAnemia has been shown to be a predictor of mortality in patients with heart failure and impaired left ventricular systolic function (ISF). Although heart failure in the setting of preserved systolic function (PSF) is an important clinical problem, the relationship between anemia and outcomes in patients with PSF has not been carefully evaluated.\n\n\nMETHODS\nPatients undergoing diagnostic angiography from 1995 to 2003 with symptomatic heart failure (New York Heart Association class II or greater) were studied (N = 4951). Patients with primary valvular or congenital heart disease were excluded. Patients with ejection fraction < or = 0.40 (N = 1858) were considered the ISF group, and patients with ejection fraction > 0.40 (N = 3093) were classified as the PSF group. Anemia was defined by the World Health Organization criteria (hemoglobin < 13 g/dL for men and < 12 g/dL for women). Multivariable Cox proportional hazards models were used to adjust for baseline differences. The possibility of a differential effect of anemia by systolic function was tested using an interaction term in the multivariable model.\n\n\nRESULTS\nAnemia was independently associated with adverse outcomes across the study cohort (adjusted hazard ratio = 1.53, P < .0001). There was no interaction between anemia and systolic function (ISF vs PSF) in the multivariable model (P = .31 for interaction). The hazard ratio for anemia was 1.61 for PSF patients and 1.45 for ISF patients.\n\n\nCONCLUSIONS\nAnemia is an independent predictor of mortality in heart failure, regardless of whether patients have preserved or impaired systolic function. This is the first report of an association between anemia and increased mortality in patients with heart failure and PSF. Future investigations of therapies for anemia in heart failure should consider including patients with PSF.", "corpus_id": 10930155, "title": "Anemia in patients with heart failure and preserved systolic function." }
{ "abstract": "Erythropoietin (Epo), a growth factor produced by the kidney, is important in heart failure patients to promote oxygen delivery to tissues. Seventy‐two chronic heart failure (CHF) patients at our outpatient clinic were subjected to morning serum Epo‐level measurements and classified according to NYHA criteria.", "corpus_id": 6635632, "score": 2, "title": "Serum erythropoietin in heart failure patients treated with ACE‐inhibitors or AT1 antagonists" }
{ "abstract": "Thienyl di-N-methyliminodiacetic acid (MIDA) boronate esters are readily synthesized by electrophilic C–H borylation producing bench stable crystalline solids in good yield and excellent purity. Optimal conditions for the slow release of the boronic acid using KOH as the base in biphasic THF/water mixtures enables the thienyl MIDA boronate esters to be extremely effective homo-bifunctionalized (AA-type) monomers in Suzuki–Miyaura copolymerizations with dibromo-heteroarenes (BB-type monomers). A single polymerization protocol is applicable for the formation of five alternating thienyl copolymers that are (or are close analogues of) state of the art materials used in organic electronics. The five polymers were produced in excellent yields and with high molecular weights comparable to those produced using Stille copolymerization protocols. Therefore, thienyl di-MIDA boronate esters represent bench stable and low toxicity alternatives to highly toxic di-trimethylstannyl AA-type monomers that are currently ubiquitous in the synthesis of these important alternating copolymers.", "corpus_id": 4968555, "title": "A General Protocol for the Polycondensation of Thienyl N-Methyliminodiacetic Acid Boronate Esters To Form High Molecular Weight Copolymers" }
{ "abstract": "Conjugated polymers have attracted much attention in recent years, as they can combine the best features of metals or inorganic semiconducting materials (excellent electrical and optical properties) with those of synthetic polymers (mechanical flexibility, simple processing, and low-cost production), thereby creating altogether new scientific synergies and technological opportunities. In the search for more efficient synthetic methods for the preparation of conjugated polymers, this Perspective reports advances in the field of direct (hetero)arylation polymerization. This recently developed polymerization method encompasses the formation of carbon-carbon bonds between simple (hetero)arenes and (hetero)aryl halides, reducing both the number of synthetic steps and the production of organometallic byproducts. Along these lines, we describe the most general and adaptable reaction conditions for the preparation of high-molecular-weight, defect-free conjugated polymers. We also discuss the bottleneck presented by the utilization of certain brominated thiophene units and propose some potential solutions. It is, however, firmly believed that this polymerization method will become a versatile tool in the field of conjugated polymers by providing a desirable atom-economical alternative to standard cross-coupling polymerization reactions.", "corpus_id": 33460137, "title": "Direct (Hetero)arylation Polymerization: Trends and Perspectives." }
{ "abstract": "A new method for the catalytic C-H arylation of heteroarenes and arenes that manifests high activity paired with reasonably broad scope was developed. Under the catalytic influence of RhCl(CO){P[OCH(CF3)2]3}2 and Ag2CO3, the direct C-H arylation of heteroarenes/arenes with aryl/heteroaryl iodides took place to afford a range of biaryls in good to excellent yields with high regioselectivity. Thiophenes, furans, pyrroles, indoles, and alkoxybenzenes are applicable in this arylation protocol.", "corpus_id": 5851867, "score": 2, "title": "Direct C-H arylation of (hetero)arenes with aryl iodides via rhodium catalysis." }
{ "abstract": "Purpose – In the context of the negotiations for apportionment of emission reduction post‐Kyoto regime, issues of equity and fairness have emerged. The purpose of this paper is to generate a model for equitable emission reduction apportionment.Design/methodology/approach – The mathematical model has been designed utilizing mitigation capacity (based on gross domestic product (GDP)) and cumulative excess emissions as the criteria for apportionment. Quantitative results have been arrived at, using cumulative γ and parabolic mitigation emission reduction trajectories to demonstrate the impact on stakeholders.Findings – The apportionment outcomes are independent of the specific trajectory fine‐tuned in the feasibility region. Since the apportionment takes into account entitlements and the mitigation capacity, Africa and India have negligible reduction targets in tune with the development goals in these economies. Substantial reduction commitments would fall on the USA and the EU countries. China gets a modera...", "corpus_id": 153824002, "title": "A framework for equitable apportionment of emission reduction commitments to mitigate global warming" }
{ "abstract": "Climate change phenomenon can be seen as a simple but daunting problem. The lack of equity in emission reduction burden sharing regime, will cause a need for a greater sacrifice for poor or less developed countries. Thus, the evaluation of different aspects of equity at a national scale and presenting a top–down model of equity for allocation of GHGs emission (such as GERA) in line with sustainable development is the main objective of this study. In this study, the five equity principles proposed in the literature namely (1) population distribution, (2) GHGs emissions, (3) GDP, (4) trend of economic growth and (5) per capita of carbon productivity as appropriate criteria of equity estimation. Due to the different decision makers' preferences, different weights are allocated to indicators and analyzed. Iran has been considered as a case study, and these criteria were applied at national level to propose an allowance allocation scheme. The result of applying GERA for Iran, at provincial level and under the five equity criteria, determines which provinces have to shoulder higher reduction burdens, and makes a room for less developed provinces for growth. Based on these results, this model demonstrated to be more sensitive to criteria selection rather than to the weight factors. In addition, shifting to low carbon technologies or renewables, careful evaluation of current emission–income pattern, improving of energy intensity and finally, adjustment of secondary industries (manufacturing) based on ecological and natural resources of each region are suggested as the most efficient approaches toward sustainability and green development for the case study.", "corpus_id": 154817420, "title": "Options for sustainable development planning based on “GHGs emissions reduction allocation (GERA)” from a national perspective" }
{ "abstract": "In this paper, we address the issue of financial constraints in agricultural cooperatives. We estimate an augmented Q investment model for a large sample of US agricultural cooperatives and test whether cooperative investment is sensitive to cash flow. Empirical results suggest that agricultural cooperatives are indeed financially constrained when making investment decisions.", "corpus_id": 15887014, "score": 1, "title": "TESTING FOR THE PRESENCE OF FINANCIAL CONSTRAINTS IN AGRICULTURAL COOPERATIVES" }
{ "abstract": "We analyze the impact of online advertisements on effectiveness and user experience.We combine results from online campaign and data from perceptual experiment.We use fuzzy multi-objective modeling for searching for trade-off solutions.We propose balanced approach to advertising resources exploitation. The focus placed on maximizing user engagement in online advertising negatively affects the user experience because of advertising clutter and increasing intrusiveness. An intelligent decision support system providing balance between user experience and profits from online advertising based on the fuzzy multi-objective optimization model is presented in this paper. The generalized mathematical model uses uncertain parameters for content descriptors that are difficult to be precisely defined and measured, such as the level of intrusiveness and the change in performance over time. The search for final decision solutions and the verification of the proposed model are based on experimental results from both perceptual studies, which are evaluating visibility and intrusiveness of marketing content as well as online campaigns providing interaction data for estimation of effectiveness. Surprisingly, the online response to the most noticeable advertisements with highly perceived visibility and intrusiveness was relatively low. During the field study performed in order to compute the model parameters, the best results were achieved for advertising content with moderate visual influence on web users. Simulations with the proposed model revealed that a growing level of persuasion can increase results only to a certain extent. Above a saturation point, a strategy based on extensive visual effects, such as high-frequency flashing, resulted in a very high increase of intrusiveness and a slightly better performance in terms of acquired interactions. Proposed balanced content design with the use of intelligent decision support system creates directions towards sustainable advertising and a friendlier online environment.", "corpus_id": 22498402, "title": "Fuzzy multi-objective modeling of effectiveness and user experience in online advertising" }
{ "abstract": "This paper proposes a new intelligent fashion recommender system to select the most relevant garment design scheme for a specific consumer in order to deliver new personalized garment products. This system integrates emotional fashion themes and human perception on personalized body shapes and professional designers’ knowledge. The corresponding perceptual data are systematically collected from professional using sensory evaluation techniques. The perceptual data of consumers and designers are formalized mathematically using fuzzy sets and fuzzy relations. The complex relation between human body measurements and basic sensory descriptors, provided by designers, is modeled using fuzzy decision trees. The fuzzy decision trees constitute an empirical model based on learning data measured and evaluated on a set of representative samples. The complex relation between basic sensory descriptors and fashion themes, given by consumers, is modeled using fuzzy cognitive maps. The combination of the two models can provide more complete information to the fashion recommender system, making it possible to evaluate if a specific body shape is relevant to a desired emotional fashion theme and which garment design scheme can improve the image of the body shape. The proposed system has been validated in a customized design and mass market selection through the evaluations of target consumers and fashion experts using a method frequently used in marketing study.", "corpus_id": 2805159, "title": "Intelligent Fashion Recommender System: Fuzzy Logic in Personalized Garment Design" }
{ "abstract": "Online streaming feature selection (OSFS) algorithms, producing an approximately optimal subset from so-far-seen features in real time, are capable of addressing feature selection issues in extreme large or even infinite dimensional space. There are several algorithms proposed carrying out in OSFS way. However, some of these algorithms need prior knowledge about the entire feature space which is inaccessible in real OSFS scenario. Besides, results of them are sensitive to the permutations of features. In this paper, we first propose an OSFS framework based on the uncertainty measures in rough sets theory. The framework needs no additional information, except for the given data. Moreover, a sorting mechanism is adopted in the framework, as creates its stability to varying the order of features. Then, specifying the uncertainty measure with conditional information entropy (CIE), we design an algorithm named CIE-OSFS based on the framework. Comprehensive experiments are conducted to verify the effectiveness of our method on several high dimensional benchmark data sets. The experimental results indicate that CIE-OSFS achieves more compactness with the prerequisite of guaranteeing the predictive accuracy and performs more stably to the changing of features' order than other algorithms in most cases.", "corpus_id": 20228446, "score": -1, "title": "Online Streaming Feature Selection Based on Conditional Information Entropy" }
{ "abstract": "In this paper, a comparison among Particle swarm optimization (PSO), Bee Colony Optimization (BCO) and the Bat Algorithm (BA) is presented. In addition, a modification to the main parameters of each algorithm through an interval type-2 fuzzy logic system is presented. The main aim of using interval type-2 fuzzy systems is providing dynamic parameter adaptation to the algorithms. These algorithms (original and modified versions) are compared with the design of fuzzy systems used for controlling the trajectory of an autonomous mobile robot. Simulation results reveal that PSO algorithm outperforms the results of the BCO and BA algorithms.", "corpus_id": 313590, "title": "Comparative Study of Type-2 Fuzzy Particle Swarm, Bee Colony and Bat Algorithms in Optimization of Fuzzy Controllers" }
{ "abstract": "In this paper, we propose a new method for dynamic parameter adaptation in particle swarm optimization (PSO). PSO is an optimization method inspired in social behavior, which has been applied to different optimization problems obtaining good results. In this paper, we propose an improvement to the convergence and diversity of the swarm in PSO using interval type-2 fuzzy logic. Simulation results show that the proposed approach improves the performance of PSO. A comparison of the proposed method using type-2 fuzzy logic with the original PSO approach, and with PSO using type-1 fuzzy logic for dynamic parameter adaptation is presented.", "corpus_id": 19049451, "title": "Dynamic parameter adaptation in particle swarm optimization using interval type-2 fuzzy logic" }
{ "abstract": "At present, automatic discourse analysis is a relevant research topic in the field of NLP. However, discourse is one of the phenomena most difficult to process. Although discourse parsers have been already developed for several languages, this tool does not exist for Catalan. In order to implement this kind of parser, the first step is to develop a discourse segmenter. In this article we present the first discourse segmenter for texts in Catalan. This segmenter is based on Rhetorical Structure Theory (RST) for Spanish, and uses lexical and syntactic information to translate rules valid for Spanish into rules for Catalan. We have evaluated the system by using a gold standard corpus including manually segmented texts and results are promising.", "corpus_id": 2428716, "score": 1, "title": "Extending Automatic Discourse Segmentation for Texts in Spanish to Catalan" }
{ "abstract": "We present a method to combine fluid dynamics and image analysis into a single fast simulation environment. Our target applications are hemodynamic studies. Our method combines an NS solver that relies on the L2 penalty approach pioneered by Caltagirone and co-workers, and a level set method based on the Mumford-Shah energy model. Working in Cartesian coordinates regardless of grid no matter the complexity of the geometry, one can use fast parallel domain decomposition solvers in a fairly robust and consistent way. The input of the simulation tool is a set of JPEG images, and the output can be various flow components as well as shear stress indicators on the vessel or domain wall. In two space dimensions the code runs close to real time.", "corpus_id": 1561569, "title": "Toward a Real Time, Image Based CFD" }
{ "abstract": "We present high resolution numerical simulations of incompressible two-dimensional flows in tube bundles, staggered or inline, as encountered in heat exchangers or chemical reactors. We study the time evolution of several flows in arrays of cylinders or squares, at Reynolds number 200. The numerical scheme is either based on adaptive wavelet or Fourier pseudo-spectral space discretization with adaptive time stepping. A volume penalisation method is used to impose no-slip boundary conditions on the tubes. Lift and drag coefficients for the different geometries of tube bundles are compared and perspectives for fluid–structure interaction are given.", "corpus_id": 53646192, "title": "NUMERICAL SIMULATION OF THE TRANSIENT FLOW BEHAVIOUR IN TUBE BUNDLES USING A VOLUME PENALISATION METHOD" }
{ "abstract": "The paper first describes a fast algorithm for the discrete orthonormal wavelet transform and its inverse without using the scaling function. This approach permits to compute the decomposition of a function into a lacunary wavelet basis, i.e., a basis constituted of a subset of all basis functions up to a certain scale, without modification. The construction is then extended to operator-adapted biorthogonal wavelets. This is relevant for the solution of certain nonlinear evolutionary PDEs where a priori information about the significant coefficients is available. We pursue the approach described in (J. Frohlich and K. Schneider,Europ. J. Mech. B/Fluids13,439, 1994) which is based on the explicit computation of the scalewise contributions of the approximated function to the values at points of hierarchical grids. Here, we present an improved construction employing the cardinal function of the multiresolution. The new method is applied to the Helmholtz equation and illustrated by comparative numerical results. It is then extended for the solution of a nonlinear parabolic PDE with semi-implicit discretization in time and self-adaptive wavelet discretization in space. Results with full adaptivity of the spatial wavelet discretization are presented for a one-dimensional flame front as well as for a two-dimensional problem.", "corpus_id": 122689852, "score": 2, "title": "An Adaptive Wavelet-Vaguelette Algorithm for the Solution of PDEs" }
{ "abstract": "Biochar, a natural sourced carbon-rich material, has been used commonly in particle shape for carbon sequestration, soil fertility and environmental remediation. Here, we report a facile approach to fabricate freestanding biochar composite membranes for the first time. Wood biochars pyrolyzed at 300 °C and 700 °C were blended with polyvinylidene fluoride (PVdF) in three percentages (10%, 30% and 50%) to construct membranes through thermal phase inversion process. The resultant biochar composite membranes possess high mechanical strength and porous structure with uniform distribution of biochar particles throughout the membrane surface and cross-section. The membrane pure water flux was increased with B300 content (4825-5411 ± 21 L m-2 h-1) and B700 content (5823-6895 ± 72 L m-2 h-1). The membranes with B300 were more hydrophilic with higher surface free energy (58.84-60.31 mJ m-2) in comparison to B700 (56.32-51.91 mJ m-2). The biochar composite membranes indicated promising adsorption capacities (47-187 mg g-1) to Rhodamine B (RhB) dye. The biochar membranes also exhibited high retention (74-93%) for E. coli bacterial suspensions through filtration. After simple physical cleaning, both the adsorption and sieving capabilities of the biochar composite membranes could be effectively recovered. Synergistic mechanisms of biochar/PVdF in the composite membrane are proposed to elucidate the high performance of the membrane in pollutant management. The multifunctional biochar composite membrane not only effectively prevent the problems caused by directly using biochar particle as sorbent but also can be produced in large scale, indicating great potential for practical applications.", "corpus_id": 4913300, "title": "Biochar composite membrane for high performance pollutant management: Fabrication, structural characteristics and synergistic mechanisms." }
{ "abstract": "Biofouling is the Achilles Heel of membrane processes. The accumulation of organic foulants and growth of microorganisms on the membrane surface reduce the permeability, shorten the membrane life, and increase the energy consumption. Advancements in novel carbon-based materials (CBMs) present significant opportunities in mitigating biofouling of membrane processes. This article provides a comprehensive review of the recent progress in the application of CBMs in antibiofouling membrane. It starts with a detailed summary of the different antibiofouling mechanisms of CBM-containing membrane systems. Next, developments in membrane modification using CBMs, especially carbon nanotubes and graphene family materials, are critically reviewed. Further, the antibiofouling potential of next-generation carbon-based membranes is surveyed. Finally, the current problems and future opportunities of applying CBMs for antibiofouling membranes are discussed.", "corpus_id": 201653563, "title": "Recent advances in mitigating membrane biofouling using carbon-based materials." }
{ "abstract": "In this paper, practical modelling and control system design methode for CAE (Computer Aided Engineering) systems are presented. The Partial Model Matching method (PMM) is effective in designing conventional PID (Proportional, Integral, Derivative) control systems used widely in industrial processes, and some extended types of PID control systems such as decoupling PID control systems and digital PID control systems. In order to design a control system by the PMM method, a transfer function for process dynamics that has a simple structure is needed. Furthermore the cut-off frejuency characteristics that determine the stabilty and response charscteristics of the control system are essential for control system design. Thus, an effective modelling method is proposed that identifies a transfer function by the response data that shows the cut-off frequency characteristics exactly and reduces it to a simple Structured transfer function. The combination of the modeling method and the PMM method is applicable to various dynamics, such as processes with delay time, dead time, overshooting, reverse shooting and oscillation. A CAE system developed on a laptop personal computer using the methods is outlined.", "corpus_id": 21320974, "score": 1, "title": "Practical Modelling and Control System Design Methods for CAE Systems" }
{ "abstract": null, "corpus_id": 1651644, "title": "Light Field Occlusion Removal" }
{ "abstract": "Image completion involves filling missing parts in images. In this paper we address this problem through the statistics of patch offsets. We observe that if we match similar patches in the image and obtain their offsets (relative positions), the statistics of these offsets are sparsely distributed. We further observe that a few dominant offsets provide reliable information for completing the image. With these offsets we fill the missing region by combining a stack of shifted images via optimization. A variety of experiments show that our method yields generally better results and is faster than existing state-of-the-art methods.", "corpus_id": 7426400, "title": "Statistics of patch offsets for image completion" }
{ "abstract": "Local structures of shadow boundaries as well as complex interactions of image regions remain largely unexploited by previous shadow detection approaches. In this paper, we present a novel learning-based framework for shadow region recovery from a single image. We exploit local structures of shadow edges by using a structured CNN learning framework. We show that using structured label information in classification can improve local consistency over pixel labels and avoid spurious labelling. We further propose and formulate shadow/bright measure to model complex interactions among image regions. The shadow and bright measures of each patch are computed from the shadow edges detected by the proposed CNN. Using the global interaction constraints on patches, we formulate a least-square optimization problem for shadow recovery that can be solved efficiently. Our shadow recovery method achieves state-of-the-art results on major shadow benchmark databases collected under various conditions.", "corpus_id": 2642044, "score": -1, "title": "Shadow optimization from structured deep edge detection" }
{ "abstract": "Pulmonary artery thrombosis is rarely reported in preterm neonates. Although treatment of neonatal thrombosis remains controversial, thrombolytic agents must be considered when the thrombosis is life threatening. We herein present a case of a preterm newborn with pulmonary artery thrombosis accompanied by acute-onset respiratory failure and cyanotic congenital heart disease. The thrombosis was successfully treated using tissue plasminogen activator. In conclusion, the thrombolytic therapy should be considered in treatment of patients in whom the thrombosis completely occludes the pulmonary arteries.", "corpus_id": 1033024, "title": "A Previously Healthy Premature Infant Treated With Thrombolytic Therapy for Life-threatening Pulmonary Artery Thrombosis" }
{ "abstract": "Pulmonary artery thrombosis in neonates is a rare entity. We describe two neonates with this diagnosis; their presentation, evaluation, and management. These cases highlight the importance of this differential diagnosis when evaluating the cyanotic neonate.", "corpus_id": 32878181, "title": "Neonatal pulmonary artery thrombosis" }
{ "abstract": "STUDY OBJECTIVE\nIn patients with proven acute pulmonary embolism (PE), a systematic search for \"residual\" deep vein thrombosis (DVT) using venography or compression duplex ultrasonography (CDUS) of the lower limbs is negative in 20 to 50% of patients. We hypothesized that undetectable pelvic vein thrombosis (from the external iliac vein to the inferior vena cava) could account for a substantial proportion of patients with negative CDUS findings. Using a noninvasive test, magnetic resonance angiography (MRA), the objective of the study was to assess the prevalence of pelvic DVT in patients with acute PEs and normal findings on lower limb CDUS.\n\n\nDESIGN\nProspective study.\n\n\nSETTING\nA 35-bed respiratory unit in a 680-bed Parisian teaching hospital.\n\n\nPATIENTS\nFrom June 1995 to October 1996, 24 patients (mean age, 49 years; age range, 18 to 83 years) with acute PEs and normal findings on lower limb CDUS underwent pelvic MRA.\n\n\nMEASUREMENTS AND RESULTS\nMRA disclosed pelvic DVT in seven patients (29%). The common iliac vein was involved in five patients. Internal iliac vein (hypogastric) thrombosis was imaged in two patients, but no patients had DVT limited to this vein. Three patients underwent subsequent venography studies that confirmed the MRA findings. In three other patients, a new MRA at the end of anticoagulant therapy showed the resolution of the DVT.\n\n\nCONCLUSIONS\nOur data support the view that, among patients with negative findings on CDUS, a substantial proportion of the DVTs that are responsible for PE originates in the pelvic veins. This study provides additional arguments to suggest that MRA might become the reference test for the exploration of pelvic DVT.", "corpus_id": 6604961, "score": 2, "title": "Detection of pelvic vein thrombosis by magnetic resonance angiography in patients with acute pulmonary embolism and normal lower limb compression ultrasonography." }
{ "abstract": "BackgroundType 2 diabetes mellitus is a chronic progressive disease. During the course of the disease intensive treatment is often necessary resulting in multiple interventions including administration of insulin. Although dietary intervention is highly recommended, the clinical results of the widely prescribed diets with low fat content and high carbohydrates are disappointing. In this proof-of-concept study, we tested the effect of dietary carbohydrate-restriction in conjunction with metformin and liraglutide on metabolic control in patients with type 2 diabetes.MethodsForty patients with type 2 diabetes already being treated with two oral anti-diabetic drugs or insulin treatment and who showed deterioration of their glucose metabolism (i.e. HbA1c > 7.5), were treated. A carbohydrate-restricted diet and a combination of metformin and liraglutide were instituted, after stopping either insulin or oral anti-diabetic drugs (excluding metformin). After enrollment, the study patients were scheduled for follow-up visits at one, two, three and six months. Primary outcome was glycemic control, measured by HbA1c at six months. Secondary outcomes were body weight, lipid-profile and treatment satisfaction.ResultsThirty-five (88%) participants completed the study. Nearly all participating patients experienced a drop in HbA1c and body weight during the first three months, an effect which was maintained until the end of the study at six months. Seventy-one percent of the patients reached HbA1c values below 7.0%. The range of body weight at enrollment was extreme, reaching 165 kg as the highest initial value. The average weight loss after 6 months was 10%. Most patients were satisfied with this treatment. During the intervention no significant change of lipids was observed. Most patients who were on insulin could maintain the treatment without insulin with far better metabolic control.ConclusionsCarbohydrate restriction in conjunction with metformin and liraglutide is an effective treatment option for patients with advanced diabetes who are candidates for instituting insulin or who are in need of intensified insulin treatment. This proof-of-principle study showed a significant treatment effect on metabolic control.", "corpus_id": 2380726, "title": "Carbohydrate restricted diet in conjunction with metformin and liraglutide is an effective treatment in patients with deteriorated type 2 diabetes mellitus: Proof-of-concept study" }
{ "abstract": "Background Glycemic control in patients with type 2 diabetes mellitus is a health care challenge. Although there are various recommendations for its prevention and management, no one solution has been identified to effectively prevent and manage its complications. Treatment recommendations for type 2 diabetes mellitus may include a single agent hypoglycemic medication or combinations of oral and injectable hypoglycemic medications with diet and exercise in an attempt to achieve optimal glucose control. Currently, there are no published systematic reviews specific to the intervention of exercise and diet in addition to hypoglycemic medication for the improvement of HbA1C in patients with type 2 diabetes mellitus. The objective of this systematic review was to synthesize evidence related to diet and exercise in adult patients with type 2 diabetes mellitus currently taking hypoglycemic medication to determine if management should include diet and exercise to improve glycemic control. Objectives To identify the best available evidence on the effectiveness of nutrition and exercise in addition to medication on HbA1C in patients with type 2 diabetes mellitus. Inclusion criteria Types of participants Adults ages 18 years and older with type 2 diabetes mellitus, regardless of gender, ethnicity or national origin. Types of intervention(s)/phenomena of interest Exercise and/or nutritional programs for adult participants with type 2 diabetes mellitus treated with antihyperglycemic agents. Types of studies Randomized and pseudo‐randomized control trials. Types of outcomes HbA1C Search strategy To find both published and unpublished studied in the English language from the inception of each database through September of 2013. A primary search of PubMed, CINAHL, EMBASE and the Cochrane Central Register of Controlled Trials were conducted using identified keywords and indexed terms across all included databases. A gray literature search was also performed. Methodological quality Two reviewers evaluated the included studies for methodological quality utilizing standardized critical appraisal instruments from the Joanna Briggs Institute. Data collection and synthesis Data were extracted using standardized data extraction instruments from the Joanna Briggs Institute. Due to clinical heterogeneity between included studies, statistical meta‐analysis was not feasible. The results are presented in a narrative form. Results Four articles, three of which featured interventions utilizing exercise in addition to hypoglycemic medications, and one, a nutritional intervention, were chosen for inclusion in this review. Two of the exercise interventions and one nutritional intervention showed an improvement in HbA1C in patients with type 2 diabetes mellitus. The study with a nutritional intervention showed a reduction of HbA1C after six months compared to the control group (‐0.4, p=0.007). One study that focused on exercise showed an improvement on HbA1C of 6.00 ±0.83, p=0.008). A second study that focused on exercise intervention also demonstrated a small but not significant reduction of HbA1C after six months (8.9 ±1.0 at baseline to 8.7 ±1.1). A third study with an exercise intervention showed no effect in the HbA1C after 2 years (effect size 0, p=0.999). Conclusions A reduction in HbA1C in adults with type 2 diabetes may be seen with the addition of nutritional and/or exercise interventions on top of antihyperglycemic medications. The studies included in this review suggest that a supervised aerobic exercise program three times a week may improve outcomes.", "corpus_id": 72711333, "title": "The effect of nutrition and exercise in addition to hypoglycemic medications on HbA1C in patients with type 2 diabetes mellitus: a systematic review" }
{ "abstract": "Adenosine produces acute inhibition of sinus node and atrioventricular (AV) nodal function. This profound but short lived electrophysiologic effect makes adenosine a suitable agent for treating supraventricular tachycardias (SVT) that incorporate the sinus node or AV node as part of the arrhythmia circuit, or for unmasking atrial tachyarrhythmias or ventricular pre-excitation. Its antiadrenergic properties also make it an effective agent for use with some unique atrial and ventricular tachycardias. Appropriate dosing and rapid bolusing with intravenous administration is required. Recognition of infrequent proarrhythmic risks and potential drug interactions with xanthine derivatives and dipyridamole should maximize its safe and effective use. This review will highlight adenosine's mechanism of action, administration, clinical indications, efficacy, and risks when used in tachyarrhythmic management.", "corpus_id": 1887455, "score": 1, "title": "Adenosine as an antiarrhythmic agent." }
{ "abstract": "A class C of graphs is said to be dually compact closed if, for every infinite G ∈ C, each finite subgraph of G is contained in a finite induced subgraph of G which belongs to C. The class of trees and more generally the one of chordal graphs are dually compact closed. One of the main part of this paper is to settle a question of Hahn, Sands, Sauer and Woodrow by showing that the class of bridged graphs is dually compact closed. To prove this result we use the concept of constructible graph. A (finite or infinite) graph G is constructible if there exists a wellordering ≤ (called constructing ordering) of its vertices such that, for every vertex x which is not the smallest element, there is a vertex y < x which is adjacent to x and to every neighbor z of x with z < x. Finite graphs are constructible if and only if they are dismantlable. The case is different, however, with infinite graphs. A graph G for which every breadth-first search of G produces a particular constructing ordering of its vertices is called a BFS-constructible graph. We show that the class of BFS-constructible graphs is a variety (i.e., it is closed under weak retracts and strong products), that it is a subclass of the class of weakly modular graphs, and that it contains the class of bridged graphs and that of Helly graphs (bridged graphs being very special instances of BFS-constructible graphs). Finally we show that the class of intervalfinite pseudo-median graphs (and thus the one of median graphs) and the class of Helly graphs are dually compact closed, and that moreover every finite subgraph of an interval-finite pseudo-median graph (resp. a Helly graph) G is contained in a finite isometric pseudo-median", "corpus_id": 2652649, "title": "On dually compact closed classes of graphs and BFS-constructible graphs" }
{ "abstract": "Abstract Following a question of Anstee and Farber we investigate the possibility that all bridged graphs are cop-win. We show that infinite chordal graphs, even of diameter two, need not be cop-win and point to some interesting questions, some of which we answer.", "corpus_id": 9657208, "title": "On cop-win graphs" }
{ "abstract": "A graph isbridgedif it contains no isometric cycles of length greater than three. Anstee and Farber established that bridged graphs are cop-win graphs. According to Nowakowski and Winkler and Quilliot, a graph is a cop-win graph if and only if its vertices admit a linear orderingv1, v2, ?, vnsuch that every vertexvi,i>1, is dominated by some neighbourvj,j", "corpus_id": 45741644, "score": 2, "title": "Bridged Graphs Are Cop-Win Graphs: An Algorithmic Proof" }
{ "abstract": "Image segmentation is the premise of object-based image analysis (OBIA), and obtaining an optimal segmentation result has been a desire for many researchers. This article proposes an optimal segmentation method for a high-resolution remote-sensing image that is guided by spatial features of area and boundary. This method achieves an optimal result through stepwise refinement on multi-scale segmentations. First, boundary strength is integrated into the choice for the optimal scale based on an improved unsupervised evaluation. Then, under-segmented objects (USOs) and over-segmented objects (OSOs) at the selected optimal scale are identified using a heterogeneity histogram and a slider-like threshold with the guidance of area and boundary. Finally, the corresponding objects, in a specific finer segmentation, are taken to replace the USOs at the optimal scale, and then the USOs and OSOs are refined by an effective merging mechanism. A heterogeneity-change-based merging criterion considering boundary, shape, spectral, and texture features is constructed for the merging of neighbouring objects. The proposed method is more effective than the unsupervised image segmentation evaluation and refinement (UISER) method as it uses spatial features to guide optimal choice of scale, and USO and OSO identification and refinement. Comparative experiments show that the spatial features used in the proposed method are effective for achieving an enhanced segmentation result.", "corpus_id": 122685523, "title": "Optimal segmentation of a high-resolution remote-sensing image guided by area and boundary" }
{ "abstract": "In this letter, we address the problem of urban-area extraction by using a feature-free image representation concept known as ldquoVisual Words.rdquo This method is based on building a ldquodictionaryrdquo of small patches, some of which appear mainly in urban areas. The proposed algorithm is based on a new pixel-level variant of visual words and is based on three parts: building a visual dictionary, learning urban words from labeled images, and detecting urban regions in a new image. Using normalized patches makes the method more robust to changes in illumination during acquisition time. The improved performance of the method is demonstrated on real satellite images from three different sensors: LANDSAT, SPOT, and IKONOS. To assess the robustness of our method, the learning and testing procedures were carried out on different and independent images.", "corpus_id": 7718467, "title": "Urban-Area Segmentation Using Visual Words" }
{ "abstract": "The authors present a method that combines region growing and edge detection for image segmentation. They start with a split-and-merge algorithm where the parameters have been set up so that an oversegmented image results. Then region boundaries are eliminated or modified on the basis of criteria that integrate contrast with boundary smoothness, variation of the image gradient along the boundary, and a criterion that penalizes for the presence of artifacts reflecting the data structure used during segmentation (quadtree, in this case).<<ETX>>", "corpus_id": 829546, "score": -1, "title": "Integrating region growing and edge detection" }
{ "abstract": "We present 225 cases of routine operative cholangiography, utilizing a new squeeze-locking, split-loop clamp to secure the catheter inside the cystic duct plus an x-ray film technique using two different dye concentrations. No surgical or procedural difficulty was experienced in any of the 225 cases. Additional operating time for the procedure averaged under five minutes, with only one inconclusive x-ray film study. These results suggest that the new catheter clamp and the two-film technique provide a mechanical simplification of the procedure and that many of the causes of inconclusive x-ray film study have been eliminated.", "corpus_id": 1286940, "title": "Operative cholangiography: new cholangiogram catheter clamp and improved technique." }
{ "abstract": "A new S-shaped cystic duct metal cannula was developed and used successfully in 200 cystic duct cholangiograms. It has the advantage of being easy to manipulate because the configuration does not obscure the vision and the resistance is less than that of plastic tubes, facilitating injection with lower pressure. It is much cheaper than the disposable cannulas because it can be reused (autoclaved).", "corpus_id": 44950465, "title": "Improved cannula for operative (cystic duct) cholangiography." }
{ "abstract": "Among the four five-year periods from 1951 through 1970, the overall incidence of common duct calculi was relatively stable at 13% in 3,012 patients undergoing cholecystectomy with or without common duct exploration. The use of operative cholangiography rose progressively from 2.9% in the 1951-1955 period to 93% in the 1966-1970 group. This change was associated with a decrease in number of patients undergoing choledochotomy from 41% to 25% and by a striking increase in the number of positive explorations from 28% to 62% (1966 to 1970). No morbidity or mortality could be attributed directly to cholangiography. The recovery rate for calculi in patients who had choledochotomy for traditional criteria, to the exclusion of cholangiography, remained unchanged during these 20 years.", "corpus_id": 36330877, "score": 2, "title": "Operative cholangiography during routine cholecystectomy: a review of 3,012 cases." }
{ "abstract": "The present review concentrates on the biological aspects of porcine T lymphocytes. Their ontogeny, subpopulations, localization and trafficking, and responses to pathogens are reviewed. The development of porcine T cells begins in the liver during the first trimester of fetal life and continues in the thymus from the second trimester until after birth. Porcine T cells are divided into two lineages, based on their possession of the [@@@]\\rmalpha [@@@]β or γδ T-cell receptor. Porcine [@@@]\\rmalpha [@@@]β T cells recognize antigens in a major histocompatibility complex (MHC)-restricted manner, whereas the γδ T cells recognize antigens in a MHC non-restricted fashion. The CD4+CD8− and CD4+CD8lo T cell subsets of [@@@]\\rmalpha [@@@]β T cells recognize antigens presented in MHC class II molecules, while the CD4−CD8+ T cell subset recognizes antigens presented in MHC class I molecules. Porcine [@@@]\\rmalpha [@@@]β T cells localize mainly in lymphoid tissues, whereas γδ T cells predominate in the blood and intestinal epithelium of pigs. Porcine CD8+ [@@@]\\rmalpha [@@@]β T cells are a prominent T-cell subset during antiviral responses, while porcine CD4+ [@@@]\\rmalpha [@@@]β T cell responses predominantly occur in bacterial and parasitic infections. Porcine γδ T cell responses have been reported in only a few infections. Porcine T cell responses are suppressed by some viruses and bacteria. The mechanisms of T cell suppression are not entirely known but reportedly include the killing of T cells, the inhibition of T cell activation and proliferation, the inhibition of antiviral cytokine production, and the induction of immunosuppressive cytokines.", "corpus_id": 22620866, "title": "Biology of porcine T lymphocytes" }
{ "abstract": null, "corpus_id": 2778575, "title": "Classical swine fever virus induces tumor necrosis factor-α and lymphocyte apoptosis" }
{ "abstract": "We have shown that interleukin-1 (IL-1) and IL-2 control IL-2 receptor alpha (IL-2R alpha) gene transcription in CD4-CD8- murine T lymphocyte precursors. Here we map the cis-acting elements that mediate interleukin responsiveness of the mouse IL-2R alpha gene using a thymic lymphoma-derived hybridoma (PC60). The transcriptional response of the IL-2R alpha gene to stimulation by IL-1 + IL-2 is biphasic. IL-1 induces a rapid, protein synthesis-independent appearance of IL-2R alpha mRNA that is blocked by inhibitors of NF-kappa B activation. It also primes cells to become IL-2 responsive and thereby prepares the second phase, in which IL-2 induces a 100-fold further increase in IL-2R alpha transcripts. Transient transfection experiments show that several elements in the promoter-proximal region of the IL-2R alpha gene contribute to IL-1 responsiveness, most importantly an NF-kappa B site conserved in the human and mouse gene. IL-2 responsiveness, on the other hand, depends on a 78-nucleotide segment 1.3 kilobases upstream of the major transcription start site. This segment functions as an IL-2-inducible enhancer and lies within a region that becomes DNase I hypersensitive in normal T cells in which IL-2R alpha expression has been induced. IL-2 responsiveness requires three distinct elements within the enhancer. Two of these are potential binding sites for STAT proteins.", "corpus_id": 20267573, "score": -1, "title": "Mouse interleukin-2 receptor alpha gene expression. Interleukin-1 and interleukin-2 control transcription via distinct cis-acting elements." }
{ "abstract": "We describe isolated cranial nerve-III palsy as a rare clinical finding in a patient with perimesencephalic subarachnoid hemorrhage. In this unusual case, the patient presented with complete cranial nerve-III palsy including ptosis and pupillary involvement. Initial studies revealed subarachnoid hemorrhage in the perimesencephalic, prepontine, and interpeduncular cisterns. Angiographic studies were negative for an intracranial aneurysm. The patient's neurological deficits improved with no residual deficits on follow-up several months after initial presentation. Our case report supports the notion that patients with perimesencephalic subarachnoid hemorrhage have an excellent prognosis. Our report further adds a case of isolated cranial nerve-III palsy as a rare initial presentation of this type of bleeding, adding to the limited body of the literature.", "corpus_id": 359639, "title": "Isolated Cranial Nerve-III Palsy Secondary to Perimesencephalic Subarachnoid Hemorrhage" }
{ "abstract": "15% of patients with spontaneous subarachnoid haemorrhage have normal cerebral angiograms; they fare better than patients with demonstrated aneurysms, though rebleeding and cerebral ischaemia can still occur. In patients with a normal angiogram and accumulation of blood in the cisterns around the midbrain--\"perimesencephalic nonaneurysmal haemorrhage\"--outcome is excellent. To test the hypothesis that rebleeding and disability in angiogram-negative subarachnoid haemorrhage might be limited to those with other patterns of haemorrhage on initial computed tomography (CT), complications and long-term outcome were studied in 113 patients with angiogram-negative subarachnoid haemorrhage, admitted between January, 1983, and July, 1990. All patients were investigated with third-generation CT scans within 72 h of the event, and with cerebral angiography. The mean follow-up period was 45 (range 6-96) months. None of 77 patients with a perimesencephalic pattern of haemorrhage on CT died or was left disabled as a result of the haemorrhage (0% [95% confidence interval 0-5%]). Among the other 36 patients, who had a blood distribution on CT indistinguishable from that in proven aneurysmal bleeds, 4 had rebleeds and 9 died or were left disabled as result of the haemorrhage (25% [14-43%]). Thus, two distinct subsets of patients with angiogram-negative subarachnoid haemorrhage should be recognised. Patients with a perimesencephalic pattern of haemorrhage have an excellent prognosis. Rebleeding, cerebral ischaemia, and residual disability occur exclusively in patients with aneurysmal patterns of haemorrhage on initial CT. Repeated angiography in search of an occult aneurysm is justified only in the patients with aneurysmal patterns.", "corpus_id": 8328086, "title": "Outcome in patients with subarachnoid haemorrhage and negative angiography according to pattern of haemorrhage on computed tomography" }
{ "abstract": "The mortality rate, risk of rebleeding, relevant subjective and objective symptoms, and daily functional capacity after a verified subarachnoid hemorrhage (SAH) of unknown etiology were evaluated in 44 patients treated during a 5-year period (1978 to 1983). A vascular basis for the SAH had been excluded by bilateral carotid and vertebral angiography and computerized tomography. The patients were interviewed at a follow-up examination from 3 to 64 months (median 36 months) after the bleed. The results revealed a 5% mortality rate and a 7% risk of rebleeding. Persisting headache and fatigue were found in 40% of patients, 29% had mild demential symptoms, and 5% had persisting and severe objective neurological symptoms. None had developed epilepsy. A normal daily functional capacity was enjoyed by 84%, while 14% had a moderate reduction in these functions, but were independent of help from other persons. One patient (2%) was not fully assessed.", "corpus_id": 9788394, "score": 2, "title": "The prognosis in subarachnoid hemorrhage of unknown etiology." }
{ "abstract": "Count Sir Luigi Preziosi (1888-1965) was a famous ophthalmologist from the island Republic of Malta. He received his ophthalmic training in Rome and the United Kingdom. He practiced ophthalmology in Malta for 45 years and was a professor at the University of Malta. Like many physicians in Malta, he was active in the politics and governance of his country, serving as president of the Senate, president of the National Congress to draft a new constitution, and, finally, as president of the National Assembly of Malta. His most important ophthalmologic contribution was the development of the thermal sclerostomy filtering operation for glaucoma, which he first described in 1924. He referred to this operation initially as electro-cautery puncture and later simply as Preziosi's operation. Many surgeons considered this procedure an advance over the other available filtering operations such as sclerectomy, iridencleisis, and trephination. The operation was then further developed in 1957 by Harold G. Scheie of the University of Pennsylvania. Scheie referred to his procedure as peripheral iridectomy with scleral cautery, and it was a standard filtering operation for glaucoma for many years until the development of trabeculectomy.", "corpus_id": 649489, "title": "Count sir Luigi Preziosi and his glaucoma operation: the development of early glaucoma filtering surgery." }
{ "abstract": "A SIMPLE glaucoma operation which can be consistently successful is still needed. The nearest approach to this from the point of view of ocular tension would appear to be the anterior flap sclerotomy with basal iridencleisis (Stallard, 1953), but this is a technically difficult operation requiring very great skill. After various trials with measures on the ciliary body, attention is returning to simple drainage as in thetrephine but using other methods of making the filtration hole, e.g. diathermy. The Preziosi cautery method (Preziosi, 1924, 1950) appears to have been introduced originally as a safe method from the point of view of sepsis, but its general use was discontinued so long ago that it is rarely mentioned in the current text-books. The series reported here began in 1951, and the results have been so impressive that the operation has replaced nearly all previously used procedures in the writer's practice. The method is not fraught with the obvious danger of thermal cataract that immediately springs to mind, and appears to give rise to no serious complications. The anterior chamber takes some time to reform, but no massage is required, a good bleb forms, and the ocular tension remains at 25 mm. Hg or less in over 90 per cent. of cases of chronic simple glaucoma.", "corpus_id": 25932607, "title": "SOME RESULTS OF THE PREZIOSI OPERATION*" }
{ "abstract": "A period of great progress in the diagnosis and treatment of glaucoma began in the 1920's with the development of gonioscopy apparatus and the slitlamp, and the first application of epinephrine in the treatment of the disease. With the development of precise methods for examination, differentiation among the various forms of glaucoma became possible, knowledge and theories regarding the nature and mechanisms of the disease became available, and medical and surgical approaches to treatment were developed with varying degrees of success. Major events and concepts in the management of glaucoma during the last 50 years are reviewed, including literature reports and the personal experiences of the authors and his colleagues.", "corpus_id": 2209936, "score": 2, "title": "Progress in the treatment of glaucoma in my lifetime." }
{ "abstract": "Activated factor XIIa (FXIIa) is a serine protease that has received a great deal of interest in recent years as a potential target for the development of new antithrombotics. Despite the strong interest in obtaining structural information, only the structure of the FXIIa catalytic domain in its zymogen conformation is available. In this work, reproducible experimental conditions found for the crystallization of human plasma β-FXIIa and crystal growth optimization have led to determination of the first structure of the active form of the enzyme. Two crystal structures of human plasma β-FXIIa complexed with small molecule inhibitors are presented herein. The first is the noncovalent inhibitor benzamidine. The second is an aminoisoquinoline containing a boronic acid-reactive group that targets the catalytic serine. Both benzamidine and the aminoisoquinoline bind in a canonical fashion typical of synthetic serine protease inhibitors, and the protease domain adopts a typical chymotrypsin-like serine protease active conformation. This novel structural data explains the basis of the FXII activation, provides insights into the enzymatic properties of β-FXIIa, and is a great aid toward the further design of protease inhibitors for human FXIIa.", "corpus_id": 3786478, "title": "Structures of human plasma β-factor XIIa cocrystallized with potent inhibitors." }
{ "abstract": "In recent years, the contact pathway of coagulation has received increased attention as it appears to be more relevant to pathologic thrombosis than to normal hemostasis. In this pathway, factor XII (FXII) becomes activated upon binding to a negatively charged surface [1] and then activates factor XI (FXI) [2, 3], which in turn activates factor IX [4]. In an additional positive feedback loop, activated FXII (FXIIa) activates prekallikrein (PK) [5], which activates additional FXIIa [6]. This article is protected by copyright. All rights reserved.", "corpus_id": 85532738, "title": "Studies into prekallikrein activation pave the way for new avenues of antithrombotic research" }
{ "abstract": "The carbocyclic analogue of (E)-5-(2-bromovinyl)-2'-deoxyuridine, C-BVDU, is a very potent and selective anti-herpes-virus compound. In order to synthesize and study the properties of a DNA that contains C-BVDU, the 5'-triphosphate, C-BVDUTP was prepared and evaluated as a potential substrate of the E. coli Klenow DNA polymerase enzyme. Although C-BVDUTP proved to be a very poor substrate also of this enzyme, it could be incorporated up to 3.6% into the synthetic DNA, poly(dA-dT, C-BVDU). This level of substitution decreased significantly the template activity for DNA and RNA polymerases, as compared to that of poly(dA-dT).", "corpus_id": 23699121, "score": 1, "title": "Incorporation of the carbocyclic analogue of (E)-5-(2-bromovinyl)-2'-deoxyuridine 5'-triphosphate into a synthetic DNA." }
{ "abstract": "There is strong interest in realizing genomic molecular diagnostic platforms that are label-free, electronic, and single-molecule. One attractive transducer for such efforts is the single-molecule field-effect transistor (smFET), capable of detecting a single electronic charge and realized with a point-functionalized exposed-gate one-dimensional carbon nanotube field-effect device. In this work, smFETs are integrated directly onto a custom complementary metal-oxide-semiconductor chip, which results in an array of up to 6000 devices delivering a measurement bandwidth of 1 MHz. In a first exploitation of these high-bandwidth measurement capabilities, point functionalization through electrochemical oxidation of the devices is observed with microsecond temporal resolution, which reveals complex reaction pathways with resolvable scattering signatures. High-rate random telegraph noise is detected in certain oxidized devices, further illustrating the measurement capabilities of the platform.", "corpus_id": 536345, "title": "Complementary Metal-Oxide-Semiconductor Integrated Carbon Nanotube Arrays: Toward Wide-Bandwidth Single-Molecule Sensing Systems." }
{ "abstract": "Observing Protein Dynamics Following the dynamics of protein conformational changes over the relatively long periods of time typical of enzyme kinetics can be challenging. Choi et al. (p. 319; see the Perspective by Lu) were able to observe changes in lysozyme conformation, which changes its electrostatic potential, by using a carbon-nanotube field-effect transistor. Slower hydrolysis steps were compared with faster, but unproductive, hinge motion, and changes in lysozyme activity that occur with pH were shown to arise from differences in the relative amount of time spent in processive versus nonprocessive states. Changes in protein conformation can be detected via changes in electrostatic potential with a carbon nanotube transistor. Tethering a single lysozyme molecule to a carbon nanotube field-effect transistor produced a stable, high-bandwidth transducer for protein motion. Electronic monitoring during 10-minute periods extended well beyond the limitations of fluorescence techniques to uncover dynamic disorder within a single molecule and establish lysozyme as a processive enzyme. On average, 100 chemical bonds are processively hydrolyzed, at 15-hertz rates, before lysozyme returns to its nonproductive, 330-hertz hinge motion. Statistical analysis differentiated single-step hinge closure from enzyme opening, which requires two steps. Seven independent time scales governing lysozyme’s activity were observed. The pH dependence of lysozyme activity arises not from changes to its processive kinetics but rather from increasing time spent in either nonproductive rapid motions or an inactive, closed conformation.", "corpus_id": 28089734, "title": "Single-Molecule Lysozyme Dynamics Monitored by an Electronic Circuit" }
{ "abstract": "A color-center laser was used at 0.95 μm to determine the activation energy for OH diffusion in the cores of current multimode fibers. The temperature range of 600 to 800°C was investigated and the activation energy determined to be 19,700 ± 1600 cal/mole in agreement with previous values reported for bulk silica. Based on this value, some 23,000 years would be necessary at 90°C to produce a 3-percent change in the 0.95-μm OH absorption at the core center. Apparently, OH diffusion over a service life of optical fibers should not be a problem in presently envisioned lightwave applications.", "corpus_id": 32434224, "score": 1, "title": "Measurements of OH diffusion in optical-fiber cores" }
{ "abstract": "Lacking the presence of human and social elements is claimed one major weakness that is hindering the growth of e-commerce. The emergence of social commerce might help ameliorate this situation. Social commerce is a new evolution of e-commerce that combines the commercial and social activities by deploying social technologies into e-commerce sites. Social commerce reintroduces the social aspect of shopping to e-commerce, increasing the degree of social presences in online environment. Drawing upon the social presence theory, this study theorizes the nature of social aspect in online SC marketplace by proposing a set of three social presence variables. These variables are then hypothesized to have positive impacts on trusting beliefs which in turn result in online purchase behaviors. The research model is examined via data collected from a typical e-commerce site in China. Our findings suggest that social presence factors grounded in social technologies contribute significantly to the building of the trustworthy online exchanging relationships. In doing so, this paper confirms the positive role of social aspect in shaping online purchase behaviors, providing a theoretical evidence for the fusion of social and commercial activities. Finally, this paper introduces a new perspective of e-commerce and calls more attention to this new phenomenon of social commerce. Social commerce increases the degree of social presences in online environment.This study proposes a multi-dimensional model of social presence.The social presence factors are found to have positive impacts on trust in sellers.The model delineates a full picture of online buyer behaviors in social commerce.", "corpus_id": 11040607, "title": "Social Presence, Trust, and Social Commerce Purchase Intention: an Empirical Research." }
{ "abstract": "The social commerce represents a new form of electronic commerce mediated by social networking sites. It provides companies with competitive tools for online promotion, and it also assists consumers to make better-informed purchasing decisions based on the sharing of experiences from other consumers. Trust is important in social commerce environment as it serves as a foundation for consumers to evaluate product information from companies as well as from other consumers. However, extant literature still lacks clear understanding of the nature of trust in social commerce. This study sets out to understand trust development in social commerce websites. Specifically, based on trust transference theory, we develop a research model to examine how consumer trust in social commerce impacts their trust in the company and their electronic word of mouth intention. In addition, we also examine how customers’ prior transaction experience with a company could impact their social commerce trust development and serve as a mediator in the trust transfer process. The research model is empirically examined using a survey method consisting of 375 users of a social commerce website. This study contributes to the conceptual and empirical understanding of trust in social commerce. The academic and practical implications of this study are also discussed.", "corpus_id": 16376845, "title": "UNDERSTANDING CONSUMER TRUST IN SOCIAL COMMERCE WEBSITES" }
{ "abstract": "Pervasive socio-technical networks bring new conceptual and technological challenges to developers and users alike. A central research theme is evaluation of the intensity of relations linking users and how they facilitate communication and the spread of information. These aspects of human relationships have been studied extensively in the social sciences under the framework of the\"strength of weak ties\"theory proposed by Mark Granovetter.13 Some research has considered whether that theory can be extended to online social networks like Facebook, suggesting interaction data can be used to predict the strength of ties. The approaches being used require handling user-generated data that is often not publicly available due to privacy concerns. Here, we propose an alternative definition of weak and strong ties that requires knowledge of only the topology of the social network (such as who is a friend of whom on Facebook), relying on the fact that online social networks, or OSNs, tend to fragment into communities. We thus suggest classifying as weak ties those edges linking individuals belonging to different communities and strong ties as those connecting users in the same community. We tested this definition on a large network representing part of the Facebook social graph and studied how weak and strong ties affect the information-diffusion process. Our findings suggest individuals in OSNs self-organize to create well-connected communities, while weak ties yield cohesion and optimize the coverage of information spread.", "corpus_id": 16805499, "score": -1, "title": "The role of strong and weak ties in Facebook : a community structure perspective" }
{ "abstract": "The antiradical properties of essential oils and extracts from coriander seeds Coriandrum sativum L., cardamom fruits Elettaria cardamomum L., fruits of white and black pepper Piper nigrum L., and pods of red cayenne and green chili pepper Capsicum frutescens L. were studied in model reactions with the stable free 2,2-diphenyl-1-picrylhydrazyl radical. The essential oils consisted of monoand sesquiterpene hydrocarbons, alcohols, oxides and esters as the main components. Spice extracts contained flavonoids, diand triterpenoids, phenolic acids, alkaloids and carotenoids. The values of antiradical efficiency were low and decreased in the following order: black pepper extract > cayenne pepper extract > cardamom essential oil > chili pepper extract > cardamom extract > white pepper extract > coriander extract > black pepper essential oil > white pepper essential oil > coriander essential oil.", "corpus_id": 2545710, "title": "Antiradical properties of essential oils and extracts from coriander, cardamom, white, red, and black peppers" }
{ "abstract": "Globally, colorectal cancer is the third commonest cancer in men since 1975.The present study focuses on the preventive strategies aimed at reducing the incidences and mortality of large bowel cancer. Chemoprevention of colon cancer appears to be a very realistic possibility because various intermediate stages have been identified preceding the development of malignant colonic tumors. Several studies have demonstrated that generous consumption of vegetables reduces the risk of colon cancer. This idea has prompted the present investigation to search for some novel plant products, which may have possible anticarcinogenic activity. It has already been proved from various experiments that chemopreventive agents, by virtue of their anti-oxidant, anti-inflammatory, anti-proliferative, apoptosis-inducing activity, act at various levels including molecular, cellular, tissue and organ levels to interfere with carcinogens. Previous studies from our laboratory have already reported the inhibitory effect of cinnamon and cardamom on azoxymethane induced colon carcinogenesis by virtue of their anti-inflammatory, anti-proliferative and pro-apoptotic activity. This particular experiment was carried out to assess the anti-oxidative potential of these spices. Aqueous suspensions of cinnamon and cardamom have been shown to enhance the level of detoxifying enzyme (GST activity) with simultaneous decrease in lipid peroxidation levels in the treatment groups when compared to that of the carcinogen control group.", "corpus_id": 13721348, "title": "Inhibition of lipid peroxidation and enhancement of GST activity by cardamom and cinnamon during chemically induced colon carcinogenesis in Swiss albino mice." }
{ "abstract": "Data on greenhouse gas emission levels associated with fertilization applied in smallholder paddy rice farms in Ghana are scanty. The current study investigated fertilization types to determine their eco-friendliness on yield, Global Warming Potential (GWP) and Greenhouse Gas Intensity (GHGI) in a major rice season in the forest zone of Ghana. In total, five treatments were studied viz Farmer Practice (BAU); Biochar + Farmer Practice (BAU + BIO); Poultry Manure + Farmer Practice (BAU + M); Biochar + Poultry Manure + Farmer Practice (BAU + BIO + M); and Control (CT). Fluxes of methane (CH4) and nitrous oxide (N2O) were measured using a static chamber-gas chromatography method. N2O emissions at the end of the growing season were significantly different across treatments. BAU + BIO + M had highest N2O flux mean of 0.38 kgNha−1day−1 (±0.18). BAU + M had the second highest N2O flux of 0.27 kgNha−1day−1 (±0.08), but was not significantly different from BAU at p > 0.05. BAU+BIO recorded 0.20 kgNha−1day−1 (±0.12), lower and significantly different from BAU, BAU + M and BAU + BIO + M. CH4 emissions across treatments were not significantly different. However, highest CH4 flux was recorded in BAU+BIO at 4.76 kgCH4ha−1day−1 (±4.87). GWP based on seasonal cumulative GHG emissions among treatments ranged from 5099.16 (±6878.43) to 20894.58 (±19645.04) for CH4 and 756.28 (±763.44) to 27201.54 (±9223.51) kgCO2eq ha−1Season−1 for N2O. The treatment with significantly higher yield and low emissions was BAU + M with a GHGI of 4.38 (±1.90) kgCO2eqkg−1.", "corpus_id": 230526087, "score": 1, "title": "Eco-Friendly Yield and Greenhouse Gas Emissions as Affected by Fertilization Type in a Tropical Smallholder Rice System, Ghana" }
{ "abstract": "An extreme electric field on the order of 1010 V m−1 was applied to the free surface of an ionic liquid to cause electric-field-induced evaporation of molecular ions from the liquid. The point of ion emission was observed in situ using a TEM. The resulting electrospray emission process was observed to create nanoscale high-aspect-ratio dendritic features that were aligned with the direction of the electric field. Upon removal of the stressing field the features were seen to remain, indicating that the ionic liquid residue was solidified or gelled. Similar electrospray experiments performed in a field-emission scanning electron microscope revealed that the features are created when the high-energy electron beam damages the molecular structure of the ionic liquid. While the electric field does not play a direct role in the fluid modification, the electric stress was critical in detecting the liquid property change. It is only because the electric stress mechanically elongated the fluid during the electrospray process and these obviously non-liquid structures persisted when the field was removed that the damage was evident. This evidence of ionic liquid radiation damage may have significant bearing on electrospray devices where it is possible to produce high-energy secondary electrons through surface impacts of emitted ions downstream of the emitter. Any such impacts that are in close proximity could see reflected secondary electrons impact the emitter causing gelling of the ionic liquid.", "corpus_id": 3519545, "title": "Radiation-induced solidification of ionic liquid under extreme electric field" }
{ "abstract": "Water dissolved in ionic liquids garners particular attention in electrochemistry, as represented by the case where water molecules cannot be completely removed from ionic liquids. Nevertheless, the effects of such polarizable polar molecules on the energy efficiency of electrochemical devices remain elusive. Thus, we highlight the effects of the spatially varying dielectric response of water and ionic liquid near charged plates using a coarse-grained mean-field theory that simultaneously accounts for both the permanent and induced dipole moments of the species and the strong electrostatic correlation. We show that water can be adsorbed onto electrodes primarily because of the dielectric contrast between the species. Our results predict that linear-dielectric theory is inadequate to account for the correlation between the capacitance and dielectric contrast, which may be fitted by exponential functions. Electronic polarizability can enhance the capacitance. A higher-dielectric component preferentially solvates electrodes, but this effect competes with that of the charge screening. Our theory shows that strong electrostatic correlation causes the dipolar cation and anion to form alternating layers, which in turn yields substantial increases in the capacitance. Our results compare favorably with previous molecular dynamics simulations.", "corpus_id": 103609768, "title": "Water dissolution in ionic liquids between charged surfaces: effects of electric polarization and electrostatic correlation" }
{ "abstract": "Abstract Alginates were irradiated as solids or in aqueous solution with Co 60 gamma rays in the dose range of 20 to 500 kGy to investigate the effect of radiation on alginates. Degradation was observed both in the solid state and solution. The degradation in solution was remarkably greater than that in the solid. For example, the molecular weight of alginate in 1% (w/v) solution decreased from 6×10 −5 for 0 kGy to 8×10 −3 for 20 kGy irradiation while the equivalent degradation by solid irradiation required 500 kGy. Degradation G-values were 1.9 for solid and 55 for solution, respectively. The free radicals from irradiated water must be responsible for the degradation in solution. The degradation was also accompanied by a color change to deep brown for highly degraded alginate. Little color change was observed on irradiation in the presence of oxygen. UV spectra showed a distinct absorption peak at 265 nm for colored alginates, increasing with dose. The fact that discoloration of colored alginate was caused on exposure to ozone suggests a formation of double bond in the pyranose-ring.", "corpus_id": 96135084, "score": 2, "title": "Radiation-induced degradation of sodium alginate" }
{ "abstract": "This paper presents a new approach for verifying confidentiality for programs, based on abstract interpretation. The framework is formally developed and proved correct in the theorem prover PVS. We use dynamic labeling functions to abstractly interpret a simple programming language via modification of security levels of variables. Our approach is sound and compositional and results in an algorithm for statically checking confidentiality.", "corpus_id": 2241642, "title": "Statically checking confidentiality via dynamic labels" }
{ "abstract": "What is the best way to build programs that compute with data sources controlled by multiple principals, while ensuring compliance with the security policies of the principals involved? The objective of this project is to devise methods for building manifestly secure applications for an information grid consisting of multiple data sources controlled by multiple principals. This is achieved by using techniques from mathematical logic, programming language semantics, and mechanized reasoning to ensure security of application code, while permitting convenient expression of complex computations with data sources on the information grid. The project will design and implement a programming language whose type system ensures compliance with security policies through the use of proofs in a formal logic of authorization during both the static and dynamic phases of processing. The project will use automated reasoning tools such as theorem provers and logical frameworks to prove formally and rigorously the security properties of the programming language. As a result, every application written in the language enjoys the guarantees afforded by the language as a whole. The intellectual merit of the project consists of scientific and engineering techniques for building practical programs for computing with multiple data sources that are manifestly secure. Manifest security means that the trust relationships, access control and information flow policies, and proofs of compliance with these policies are made manifest in the framework through the use of formal logical methods for specifying and verifying them. These properties will be formally verified against precise specifications written in a novel logic of authorization and information flow using mechanized theorem provers and logical frameworks so that there is a direct link between the theoretical analysis and the executable code. This ensures that running applications are manifestly in compliance with the security policies of the principals on the information grid to an extent not previously achievable in practical systems. The project will build a secure information grid and associated applications to demonstrate the effectiveness of its approach and provide a means for comparison with competing methods. The broader impacts of the project include the development of fundamental technology to ensure privacy while permitting flexible access to disparate, and independently controlled, data sources. Making security policies themselves, and proofs of application compliance with them, readily available in machine-checkable form is a technical cornerstone for ensuring privacy without unduly limiting the legitimate use of these data sources. The project will also significantly increase collaboration between two major research universities within the Commonwealth of Pennsylvania. The participants have an established record of fostering education in the field through writing textbooks, developing new classes and course materials at their universities, and organizing summer schools for students throughout the world. The project will also employ undergraduate researchers through direct funding and the NSF Research Experience for Undergraduates program. Both participating departments have vibrant organizations supporting and promoting women in computer science, and we will work toward involving women in our project at both undergraduate and graduate level.", "corpus_id": 2808170, "title": "Manifest Security for Distributed Information" }
{ "abstract": "Exploration games are games where agents (or robots) need to search resources and retrieve these resources. In principle, performance in such games can be improved either by adding more agents or by exchanging more messages. However, both measures are not free of cost and it is important to be able to assess the trade-off between these costs and the potential performance gain. The focus of this paper is on improving our understanding of the performance gain that can be achieved either by adding more agents or by increasing the communication load. Performance gain moreover is studied by taking several other important factors into account such as environment topology and size, resource-redundancy, and task size. Our results suggest that there does not exist a decision function that dominates all other decision functions, i.e. is optimal for all conditions. Instead we find that (i) for different team sizes and communication strategies different agent decision functions perform optimal, and that (ii) optimality of decision functions also depends on environment and task parameters. We also find that it pays off to optimize for environment topologies. Copyright © 2018 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved. Institute for Systems and Technologies of Information, Control and Communication (INSTICC)", "corpus_id": 4380096, "score": 1, "title": "On the Effects of Team Size and Communication Load on the Performance in Exploration Games" }
{ "abstract": "AUTHOR: CLAIRE M. LESSIAU TITLE: NUMERICAL ANALYSIS OF AN AIRFOIL RESPONSE TO AN IMPINGING GUST INSTITUTION: EMBRY RIDDLE AERONAUTICAL UNIVERSITY DEGREE: MASTER OF SCIENCE IN AEROSPACE ENGINEERING YEAR: 2003 The BASS code, a nonlinear high-order prefactored compact code is validated on a benchmark problem. The nonlinear response of a loaded airfoil to an impinging vortical gust is investigated in the parametric space of gust intensity and frequency. Computational resources, involving a Linux cluster, were set up and maintained. The code was corrected and adapted to this particular problem. Results are compared with linear solution from the GUST3D solver.", "corpus_id": 4685009, "title": "Numerical Analysis of an Airfoil Response to an Impinging Gust" }
{ "abstract": "Using a mathematical framework originally developed for the development of PML schemes in computational electromagnetics, we develop a set of strongly well-posed PML equations for the absorption of acoustic and vorticity waves in two-dimensional convective acoustics under the assumption of a spatially constant mean flow. A central piece in this development is the development of a variable transformation that conserves the dispersion relation of the physical space equations. The PML equations are given for layers being perpendicular to the direction of the mean flow as well as for layers being parallel to the mean flow. The efficacy of the PML scheme is illustrated by solving the equations of acoustics using a 4th order scheme, confirming the accuracy as well as stability of the proposed scheme.", "corpus_id": 121900807, "title": "Regular Article: Well-posed Perfectly Matched Layers for Advective Acoustics" }
{ "abstract": "The knowledge representation of an embodied, intelligent, cognitive agent typically relies on symbols denoting objects of the world on the top level and perceptual, structured data on the bottom level. The process of determining and maintaining the correct connection between a symbolic object identifier and its perceptual image, both referring to the same physical object, is called symbol anchoring.The dissertation presented here suggests a formal and general approach to the symbol anchoring problem, which enhances previous approaches in terms of generality and expressiveness.", "corpus_id": 15271516, "score": 0, "title": "Anchoring Symbols to Percepts in the Fluent Calculus" }
{ "abstract": "Abstract Given a set L of n non-intersecting line segments in the plane, we show that it is possible to choose a set S of at most ⌞n/2⌟ segments such that for each segment l of L there exists a point p l on one of the segments in S which sees every point of l . That is, for any point p on segment l the segment p l p does not intersect the interior of any line segment other than those containing p and p l . This bound is also shown to be tight. Thus, by imagining that each segment of S contains an edge guard, we conclude that ⌞n/2⌟ edge guards are sometimes necessary and always sufficient to guard any set of n segments in the plane.", "corpus_id": 305646, "title": "An art gallery theorem for line segments in the plane" }
{ "abstract": "Let P be a polygon with n vertices. We say that two points of P see each other if the line segment connecting them lies inside (the closure of) P. In this paper we present efficient approximation algorithms for finding the smallest set G of points of P so that each point of P is seen by at least one point of G, and the points of G are constrained to be belong to the set of vertices of an arbitrarily dense grind. We also present similar algorithms for terrains and polygons with holes.", "corpus_id": 2719718, "title": "Guarding galleries and terrains" }
{ "abstract": "A general equation for scattering processes; applicable to neutron transfer, circuit theory, and probability, is developed. Equations for these specialized disciplines are shown to be special cases of this generalized equation. (L.N.N.)", "corpus_id": 117754200, "score": 1, "title": "On the Relation of Transmission-Line Theory to Scattering and Transfer†" }
{ "abstract": "Striking increase in reactive oxygen species (ROS) such as hydrogen peroxide (H(2)O(2)) has been demonstrated to occur in plants in response to pathogen attack. The aim of this study was to investigate the biochemical aspects of ROS generation, antioxidative mechanism and cell wall reinforcement as responses of tomato cultivars Arka Meghali (AM; susceptible) and BT-10 (BT; resistant) against Ralstonia solanacearum (Ralsol). While the oxidative burst was characterized by a single phase ROS increase in AM, there was a clear bi-phasic ROS generation in BT. The first significant increase of H(2)O(2) production was noticed at 12 h post-inoculation (hpi) followed by a sharp increase in H(2)O(2) generation after 36 hpi. Lipid peroxidation was more in roots of AM than that of BT after pathogen inoculation. Superoxide dismutase and catalase activities were continuously at very high level in Ralsol-inoculated BT plants, whereas activities of the enzymes were observed to decrease at later stage in Ralsol-inoculated AM plants. Guaiacol peroxidase activity was high in Ralsol-inoculated roots of both cultivars, but BT recorded much higher activity than AM. Higher activity of ascorbate peroxidase in inoculated BT might be an indication of better scavenging activity of the enzyme. Total phenolic content and lignin deposition were significantly higher in Ralsol-inoculated BT compared to inoculated AM. Our results indicate that increased level of ROS production coupled with more efficient antioxidative system, lower rate of lipid peroxidation and high lignin deposition in cell wall may contribute to the resistance of tomato plants to Ralsol.", "corpus_id": 406353, "title": "Differential occurrence of oxidative burst and antioxidative mechanism in compatible and incompatible interactions of Solanum lycopersicum and Ralstonia solanacearum." }
{ "abstract": "Fusarium is the most common flax pathogen causing serious plant diseases and in most cases leading to plant death. To protect itself, the plant activates a number of genes and metabolic pathways, both to counteract the effects of the pathogen, and to eliminate the threat. The identification of the plant genes which respond to infection is the approach, that has been used in this study. Forty-seven flax genes have been identified by means of cDNA subtraction method as those, which respond to pathogen infection. Subtracted genes were classified into several classes and the prevalence of the genes involved in the broad spectrum of antioxidants biosynthesis has been noticed. By means of semi-quantitative RT-PCR and metabolite profiling, the involvement of subtracted genes controlling phenylpropanoid pathway in flax upon infection was positively verified. We identified the key genes of the synthesis of these compounds. At the same time we determined the level of the metabolites produced in the phenylpropanoid pathway (flavonoids, phenolic acids) in early response to Fusarium attack by means of GC-MS technique. To the best of our knowledge this is the first report to describe genes and metabolites of early flax response to pathogens studied in a comprehensive way.", "corpus_id": 8212972, "title": "Genes of phenylpropanoid pathway are activated in early response to Fusarium attack in flax plants." }
{ "abstract": "Thermospheric wind data obtained from the Atmosphere Explorer E and Dynamics Explorer 2 satellites have been used to generate an empirical wind model for the upper thermosphere, analogous to the MSIS model for temperature and density, using a limited set of vector spherical harmonics. The model is limited to above approximately 220 km where the data coverage is best and wind variations with height are reduced by viscosity. The data base is not adequate to detect solar cycle (F10.7) effects at this time but does include magnetic activity effects. Mid- and low-latitude data are reproduced quite well by the model and compare favorably with published ground-based results. The polar vortices are present, but not to full detail.", "corpus_id": 119572806, "score": 0, "title": "Empirical global model of upper thermosphere winds based on atmosphere and dynamics explorer satellite data" }
{ "abstract": "The description of protein 3D structures can be performed through a library of 3D fragments, named a structural alphabet. Our structural alphabet is composed of 16 small protein fragments of 5 Cα in length, called protein blocks (PBs). It allows an efficient approximation of the 3D protein structures and a correct prediction of the local structure. The 72 most frequent series of 5 consecutive PBs, called structural words (SWs) are able to cover more than 90% of the 3D structures. PBs are highly conditioned by the presence of a limited number of transitions between them. In this study, we propose a new method called “pinning strategy” that used this specific feature to predict long protein fragments. Its goal is to define highly probable successions of PBs. It starts from the most probable SW and is then extended with overlapping SWs. Starting from an initial prediction rate of 34.4%, the use of the SWs instead of the PBs allows a gain of 4.5%. The pinning strategy simply applied to the SWs increases the prediction accuracy to 39.9%. In a second step, the sequence-structure relationship is optimized, the prediction accuracy reaches 43.6%.", "corpus_id": 341129, "title": "“Pinning strategy”: a novel approach for predicting the backbone structure in terms of protein blocks from sequence" }
{ "abstract": "We describe some of the aspects of Swiss-Prot that make it unique, explain what are the developments we believe to be necessary for the database to continue to play its role as a focal point of protein knowledge, and provide advice pertinent to the development of high-quality knowledge resources on one aspect or the other of the life sciences.", "corpus_id": 3647019, "title": "Swiss-Prot: Juggling between evolution and stability" }
{ "abstract": "We have applied the TOUCHSTONE structure prediction algorithm that spans the range from homology modeling to ab initio folding to all protein targets in CASP5. Using our threading algorithm PROSPECTOR that does not utilize input from metaservers, one threads against a representative set of PDB templates. If a template is significantly hit, Generalized Comparative Modeling designed to span the range from closely to distantly related proteins from the template is done. This involves freezing the aligned regions and relaxing the remaining structure to accommodate insertions or deletions with respect to the template. For all targets, consensus predicted side chain contacts from at least weakly threading templates are pooled and incorporated into ab initio folding. Often, TOUCHSTONE performs well in the CM to FR categories, with PROSPECTOR showing significant ability to identify analogous templates. When ab initio folding is done, frequently the best models are closer to the native state than the initial template. Among the particularly good predictions are T0130 in the CM/FR category, T0138 in the FR(H) category, T0135 in the FR(A) category, T0170 in the FR/NF category and T0181 in the NF category. Improvements in the approach are needed in the FR/NF and NF categories. Nevertheless, TOUCHSTONE was one of the best performing algorithms over all categories in CASP5. Proteins 2003;53:469–479. © 2003 Wiley‐Liss, Inc.", "corpus_id": 9902948, "score": 2, "title": "TOUCHSTONE: A unified approach to protein structure prediction" }
{ "abstract": "Mobile ad hoc networks (MANET) present a challenging area for application development. The combination of mobile nodes and wireless communication can create highly dynamic networks with frequent disconnections and unpredictable availability. Several language paradigms have been applied to MANETs, but there has been no quantitative comparison of alternative approaches. This paper presents the first quantitative evaluation of three common communication paradigms (publish/subscribe, RPC, and tuple spaces) compared within realistic MANET environments using real applications. We investigate the application-level performance of the paradigms and present a summary of their relative strengths and weaknesses. We also demonstrate the impact of wireless and mobility on application-level metrics, the most dramatic being delivery rates dropping to nearly 25% and round trip times increasing up to 2000% in a mobile scenario.", "corpus_id": 1135672, "title": "A Quantitative Comparison of Communication Paradigms for MANETs" }
{ "abstract": "LIME (Linda in a Mobile Environment) is a middleware supporting the development of applications that exhibit physical mobility of hosts, logical mobility of agents, or both. LIME adopts a coordination perspective inspired by work on the Linda model. The context for computation, represented in Linda by a globally accessible, persistent tuple space, is refined in LIME to transient sharing of identically-named tuple spaces carried by individual mobile units. Tuple spaces are also extended with a notion of location and programs are given the ability to react to specified states. The resulting model provides a minimalist set of abstractions that promise to facilitate rapid and dependable development of mobile applications. In this paper, we illustrate the model underlying LIME, provide a formal semantic characterization for the operations it makes available to the application developer, present its current design and implementation, and discuss lessons learned in developing applications that involve physical mobility.", "corpus_id": 18161602, "title": "LIME: A Coordination Middleware Supporting Mobility of Hosts and Agents" }
{ "abstract": "Lime is a middleware communication infrastructure for mobile computation that addresses physical mobility of devices and logical mobility of software components through a rich set of local and remote primitives. The system's key innovation is the concept of transiently shared tuple spaces. In Lime, mobile programs are equipped with tuple spaces that move whenever the program moves and are transparently shared with tuple spaces of other co-located programs. The Lime specification is surprisingly complex and tricky to implement. In this paper, we start by deconstructing the Lime model to identify its core components, then we attempt to reconstruct a simpler model, which we call CoreLime, that supports fine-grained access control and can better scale to large configurations.", "corpus_id": 59673983, "score": 2, "title": "Lime revisited : Reverse engineering an agent communication model" }
{ "abstract": "Abstract Various carbonate apatite formulae contribute to discrete cuticle structures participating in protective functions of the American lobster, Homarus americanus, integument. Canal walls use their lowest Calcium : Phosphate (Ca : P) ratios to protect exposed surfaces of gland and neuronal canals. The linings insulate more soluble calcium carbonate from attack by acid secreting micro-organisms. A trabecular bone-like structure, called here ‘trabeculae’ in analogy to vertebrates, utilizes high Ca : P in the inner exocuticle demonstrating an efficient use of environmentally scarce phosphate and provides the hardness layer of the cuticle that protects the superficial calcite layer from flexure. Strength is derived from phosphatic trabeculae being embedded in a phenolicly hardened inner exocuticle layer. A third location and use of carbonate apatite is at the interface of the calcite and inner exocuticle where it may cap calcite layer development. A fourth phosphatic localization is seen in cuticular nipples that accompany the site of organule canal entry at the epidermal face of the cuticle. This high Ca : P localization may be associated with accumulation of Ca and P by canal forming cells for use in the nascent canal wall construction. A schematic model of the cuticle emphasizes regional diversity of a composite cuticle suggesting mineral function. An outer calcite crystalline layer provides a dense barrier that dissolves slowly through an intact epicuticle providing an external alkaline unstirred layer inhibitory to bacterial physiology. Superficial injury to the epicuticle and calcite layer provides a stronger flush of alkalinity from bared calcite or deeper rapidly dissolving amorphous calcium carbonate, generating a concerted general immune response by increasing alkalinity of the unstirred layer.", "corpus_id": 1534571, "title": "Carbonate apatite formulation in cuticle structure adds resistance to microbial attack for American lobster" }
{ "abstract": "Recently, we proposed a hierarchical model for the elastic properties of mineralized lobster cuticle using (i) ab initio calculations for the chitin properties and (ii) hierarchical homogenization performed in a bottom-up order through all length scales. It has been found that the cuticle possesses nearly extremal, excellent mechanical properties in terms of stiffness that strongly depend on the overall mineral content and the specific microstructure of the mineral-protein matrix. In this study, we investigated how the overall cuticle properties changed when there are significant variations in the properties of the constituents (chitin, amorphous calcium carbonate (ACC), proteins), and the volume fractions of key structural elements such as chitin-protein fibers. It was found that the cuticle performance is very robust with respect to variations in the elastic properties of chitin and fiber proteins at a lower hierarchy level. At higher structural levels, variations of design parameters such as the volume fraction of the chitin-protein fibers have a significant influence on the cuticle performance. Furthermore, we observed that among the possible variations in the cuticle ingredients and volume fractions, the experimental data reflect an optimal use of the structural variations regarding the best possible performance for a given composition due to the smart hierarchical organization of the cuticle design.", "corpus_id": 10087353, "title": "Robustness and optimal use of design principles of arthropod exoskeletons studied by ab initio-based multiscale simulations." }
{ "abstract": "Most Cambrian arthropods employed simple feeding mechanisms requiring only low degrees of appendage differentiation. In contrast, post-Cambrian crustaceans exhibit a wide diversity of feeding specializations and possess a vast ecological repertoire. Crustaceans are evident in the Cambrian fossil record, but have hitherto been known exclusively from small individuals with limited appendage differentiation. Here we describe a sophisticated feeding apparatus from an Early Cambrian arthropod that had a body length of several centimetres. Details of the mouthparts resolve this taxon as a probable crown-group (pan)crustacean, while its feeding style, which allowed it to generate and handle fine food particles, significantly expands the known ecological capabilities of Cambrian arthropods. This Early Cambrian record predates the major expansions of large-bodied, particle-handling crustaceans by at least one hundred million years, emphasizing the importance of ecological context in driving adaptive radiations.", "corpus_id": 4373816, "score": 2, "title": "Sophisticated particle-feeding in a large Early Cambrian crustacean" }
{ "abstract": "As it is impossible to cover the whole field of enzyme mechanisms in a comparatively brief presentation, it is proposed to illustrate the present state of knowledge by reference in detail to one particular enzyme, namely, bovine pancreatic ribonuclease. There is now an enormous amount of information available about this enzyme, including the complete three-dimensional structure of the protein (Kartha, Bello, and Harker, 1967) and a modified derivative (Richards and Wyckoff, 1968). There is also a great deal of chemical and kinetic information which sheds light on the mechanism of its catalytic action and it is now possible to suggest tentatively the nature of the reaction pathway. The physical basis of the rate enhancement factors, which are of the order of magnitude of 10,10 is still problematical and will not be discussed. The enzyme consists of a single chain of 124 amino acid residues; in general, the molecule is kidneyshaped containing a depression, and there is good reason to believe that the active site is in the depression. Several of the amino acid residues in the region of the active site have been implicated inthecatalytic process. Whilst histidines 12 and 119 are the most important, both lysine 41 (Murdock, Grist, and Hirs, 1966) and aspartate 121 (Anfinsen, 1956) are also essential. Lysine 41 is implicated because the effect of fluorodinitrobenzene, which reacts rapidly with the lysine residue and inactivates the enzyme, is prevented by competitive inhibitors; aspartate 121 is implicated because, whereas removal of the end three amino acids from the C-terminus has no effect on catalytic activity, removal of the next one, ie, aspartate 121, results in complete loss of catalysis. The exact function of these two residues is unknown. By far the most important residues have been shown by experiments with haloacetic acids to be two histidine residues, namely 12 and 119 (see Rabin and Mathias, 1963 for review). Negatively charged alkylating reagents, such as iodoacetic acid and bromoacetic acid, inhibit ribonuclease, but this does not occur with neutral alkylating agents such as iodoacetamide, despite the fact that the latter are generally much more reactive than the former. The reaction of the enzyme with the haloacetic acids is extraordinary, as either one of the two histidines will react with the reagent but never both in the same molecule. Moreover the rate of this reaction is several orders of magnitude greater than that of haloacetic acid with a simple imidazole in aqueous solution. If the rate of alkylation of ribonuclease by iodoacetic acid is measured as a function of pH, a typical bell-shaped curve, resembling an idealized pH profile for enzyme activity, is obtained. The reaction of a simple imidazole with iodoacetic acid does not vary withpH in the same way, but follows a simple titration curve inflecting about thepK of the reacting group. There is obviously an ancillary acid group required for the reaction of the enzyme with iodoacetic acid. As a result of experiments of this sort the concept emerged that in the enzyme these two histidines must be located close together three-dimensionally, in such a way that one of them in the acid form can promote the reactivity of the other towards alkylating reagents. One of the histidines, in the positively charged form, could attract and bind the negative end of the alkylating reagent and juxtapose the reactive carbon atom of the latter to the nitrogen of the other histidine thus promoting its alkylation. Clearly, one imidazole acts as a base and the other as an acid; their pKs are in the region of 6 so that in this pH range there will be an equilibrium mixture of acid and base forms. Which histidine is alkylated would depend amongst other things on the distribution of the charges. This general picture would explain why either of these two histidines, but never both in the same molecule, is alkylated by iodoacetic acid. Competitive inhibitors, which presumably sit on the active site, protect these histidine residues against the action of the alkylating reagents. There is also other evidence which implicates these two histidines in the catalytic activity of the enzyme. That they are indeed not many nanometers apart has been confirmed more recently by x-ray crystallographic studies (Richards and Wyckoff, 1968). The reaction catalysed by ribonuclease is shown in Figure 1. RNA is hydrolysed in two stages 1 coright.", "corpus_id": 1149595, "title": "The mechanism of enzyme action." }
{ "abstract": "The response of inbred mouse strains to two polypeptides derived from multichain polyprolines, (T,G)-Pro--L and (Phe,G)-Pro--L, is different from the response of the same mouse strains to a similar series of polymers built on multi-poly-D,L-alanyl--poly-L-lysine, although the same short sequences of amino acids are attached to the side chains of the polypeptides in the two series. These results indicate that a portion of the side chain (e.g. polyalanine or polyproline) participates in the antigenic determinant. This was confirmed by studying the response of different mouse strains to two kinds of polypeptides: (T,G)-Pro-A--L 717 and 718 and (T,G)-A-Pro--L 719 and 721. Antibody assay of antisera to (Phe,G)-Pro--L with the cross-reacting antigens (T,G)-Pro--L and (Phe,G)-A-L indicates that different inbred mouse strains make antibodies specific for different parts of the same polypeptide. Thus, antibody from DBA/1 mice reacts almost exclusively with the (Phe,G) sequence, while SJL antisera bind only (T,G)-Pro--L and fail to bind (Phe,G)-A-L. The immune responses to the same amino acids on two different polypeptides (i.e. A--L and Pro--L) appear to be under separate genetic control.", "corpus_id": 999745, "title": "THE NATURE OF THE ANTIGENIC DETERMINANT IN A GENETIC CONTROL OF THE ANTIBODY RESPONSE" }
{ "abstract": "The access of enzymes and lipid transfer proteins to neutral lipids located predominantly in the core compartment of lipoproteins may be determined to some degree by the solubility of the neutral lipids in the surface monolayer of phospholipid. This report concerns the hypothesis that unesterfied cholesterol can affect the partition of a cholesteryl ester between the surface monolayer of a lipid emulsion and the internal core compartment, thus controlling the degree to which the cholesteryl ester is presented at the emulsion surface. For microemulsions composed of dimyristoyl phosphatidylcholine and cholesteryl oleate, the addition of unesterified cholesterol results in an increase in the particle size from about 170 nm diameter to 210 nm diameter at 13.5 mol% unesterified cholesterol. Fluorescent quenching methods were devised to determine the apparent partition of a fluorescent cholesteryl ester (cholesteryl anthracene-9-carboxylate) between surface and core compartments. The addition of unesterified cholesterol resulted in the movement of the fluorescent cholesteryl ester from the surface monolayer to the core compartment. The apparent partition coefficient, defined as the ratio of the concentration of probe in the monolayer to that in the core, decreased from 1.03 in the absence of unesterfied cholesterol to 0.54 at 28 mol% unesterified cholesterol in the emulsion. In this process, the fluorescent cholesteryl ester becomes less accessible to a quencher (5-doxyl stearate) located in the surface monolayer. The decrease in the surface curvature resulting from incorporation of unesterified cholesterol into the particle does not influence this quenching process. We conclude that the presence of unesterified cholesterol in the emulsion causes the fluorescent cholesteryl ester to become less soluble in the surface monolayer.", "corpus_id": 11064451, "score": 1, "title": "Effect of unesterified cholesterol on the compartmentation of a fluorescent cholesteryl ester in a lipoprotein-like lipid microemulsion." }
{ "abstract": "Soft errors, a form of transient errors that cause bit flips in memory and other hardware components, are a growing concern for embedded systems as technology scales down. While hardware-based approaches to detect/correct soft errors are important, software-based techniques can be much more flexible. One simple software-based strategy would be full duplication of computations and data, and comparing the results of the corresponding original and duplicate computations. However, while the performance overhead of this strategy can be hidden during execution if there are idle hardware resources, the memory demand increase due to data duplication can be dramatic, particularly for array-based applications that process large amounts of data. \n \nFocusing on array-based embedded computing, this paper presents a memory space conscious loop iteration duplication approach that can reduce memory requirements of full duplication (of array data), without decreasing the level of reliability the latter provides. Our “in-place duplication” approach reuses the memory locations from the same array to store the duplicates of the elements of a given array. Consequently, the memory overhead brought by the duplicates can be reduced. Further, we extend this approach to incorporate “global duplication”, which reuses memory locations from other arrays to store duplicates of the elements of a given array. This paper also discusses how our approach operates under a memory size constraint. The experimental results from our implementation show that the proposed approach is successful in reducing memory requirements of the full duplication scheme for twelve array-based applications.", "corpus_id": 525351, "title": "Memory Space Conscious Loop Iteration Duplication for Reliable Execution" }
{ "abstract": "This paper highlights two shortcomings in the current design process of embedded systems of avionics. First, the current software design process does not adequately verify and validate worst-case timing scenarios that have to be guaranteed in order to meet deadlines. Consider the RTCA DO178B standard requiring coverage testing. An additional requirement, namely predictable timing behavior, is essential real-time embedded systems. Airbus requires their suppliers to provide verifiable bounds on worst-case execution time of software for planes under development, Boeing is considering it (e.g., for Airbus 380, Boeing 787 and military aircraft). The automotive industry, among others, is evaluating similar requirements. We provide an analysis of this problem that outlines directions for future research and tool development in this area. Second, the correctness of embedded systems is currently jeopardized by soft errors that may render control systems inoperable. In general, transient faults are increasingly a problem due to (a) smaller fabrication sizes and (b) deployment in harsh environments. In commercial aviation, the next-generation planes (Airbus 380 and Boeing 787) will deploy off-the-shelf embedded processors without hardware protection against soft errors. Since these planes are designed to fly over the North Pole with an order of magnitude higher radiation (due to a thinner atmosphere), system developers have been asked to consider the effect of single-event upsets (SEUs), i.e., infrequent single bit-flips, in their software design. Current developers do not know how to address this problem. We outline much needed research in this area. 1. Verification and Validation of Worst-Case Execution Times Current software design for safety-critical embedded systems requires stringent compliance with coding standards to ensure safety and reliability. One example is avionics where the RTCA DO-178B standard requires coverage testing (for statements, branches and conditionals). A very important additional requirement for realtime embedded systems is predictable timing behavior of software components. In particular, so-called hard real-time embedded systems have timing constraints that must be met or the system is may malfunction. Airbus (and likely also Boeing in the near future), e.g., requires their suppliers to provide verifiable bounds on worst-case execution time (WCET) for software to be deployed on planes currently under development (Airbus 380 and Boeing 787). The automotive industry is currently considering similar requirements, and others are likely to follow. Determining bounds on the WCET of embedded software is a critically important problem for next-generation embedded realtime systems [1]. Currently, practitioners resort to testing methods to determine execution times of real-time tasks. However, testing alone cannot provide a verifiable (safe) upper bound on WCET. Exhaustive testing of inputs is generally infeasible, even for moderately complex input spaces due to its exponential complexity. In contrast to dynamic testing, static timing analysis can provide safe upper bounds on the WCET of code sections, real-time tasks or entire applications. Hence, static timing analysis provides a safer and more efficient alternative to testing [2]. It yields verifiable bounds on the WCET of tasks regardless of program input by simulating execution along the control-flow paths within the program structure while considering architectural details, such as pipelining and caching [3]. These WCET bounds should also be tight to support high utilizations when determining if tasks can meet their deadlines via schedulability analysis. Tight bounds, however, can only be obtained if the behavior of hardware components is predicated accurately, yet conservatively with respect to its worst-case behavior. Static timing analysis techniques are constantly trailing behind the innovation curve in hardware. It is becoming increasingly difficult to provide tight and safe bounds in the presence of out-oforder execution, dynamic branch prediction and speculative execution. Simulation of hardware components is also prone to inaccuracy due to lack of information about subtle details of processors. We advocate research on new approaches to bounding the WCET. Most importantly, a realistic hybrid approach is needed that combines formal static timing analysis with concrete micro-timings observations of actual architectures. First, a formal approach guarantees correctness. Second, dynamic timings on actual processors for small code sections will allow advanced embedded processor designs to be used in such timecritical systems, even in the presence of dynamic and unpredictable execution features. Third, any architectural modifications in support of such a paradigm have to be realistic in that they should reuse existing infrastructure both on the architecture side and the methodology for static timing analysis. There is an immediate need to develop software tools that can provide verifiable execution times to allow validation of task schedules within time-critical embedded systems. 2. Protection Against Soft Errors Transient faults are becoming an increasing concern of system design for two reasons. First, smaller fabrication sizes have resulted in lower signal/noise ratio that more frequently leads to bit flips in CMOS circuits [4]. Second, embedded systems are increasingly deployed in harsh environments causing soft errors due to lack of protection on the hardware side [5]. The former reason affects computing at large while the latter is predominantly of concern for critical infrastructure. For example, the automotive industry has used temperature-hardened processors for control tasks around the engine block while space missions use radiation-hardened processors to avoid damage from solar radiation. Current trends indicate an increasing rate of transient faults (i.e., soft errors), not only due to smaller fabs but also because embedded systems are deployed in harsh environments they were not designed for. In commercial aviation, the next-generation planes (Airbus 380 and Boeing 787) will deploy off-the-shelf embedded processors without hardware protection against soft errors. Even though these planes are specifically designed to fly over the North Pole where radiation from space is more intensive due to a thinner atmosphere, target processors lack error detecting/correcting capabilities. Hence, system developers have been asked to consider the effect of single-event upsets (SEUs), i.e., infrequent single bit flips, in their software design. In practice, future systems may have to sustain transient faults due to any of the above causes. There exists a significant amount of work on detection of and protection against transient faults. Hardware can protect and even correct transient faults at the cost of redundant circuits [6–14]. Software approaches can also protect/correct these faults, e.g., by instruction duplication or algorithmic design [15–21]. Recent work focuses at a hybrid solution of both hardware and software support to counter transient faults [22–24]. Such hybrid solutions aim at a reduced cost of protection, i.e., cost in terms of extra die size, performance penalty and increased code size. We advocate novel research to address the problem of soft errors. Of interested are (1) software solutions and (2) hybrid hardware/software solutions. While a number of hardware solutions exist, commodity hardware is being deployed in systems subject to high rates of transient errors. In the complete absence of hardware support, a software methodology to address soft errors needs to be developed that retains performance. Current software schemes (e.g., [16]) reduce the performance of systems considerably, if not prohibitively, and are not supported by tools. Further research is required to reduce this overhead to developing novel schemes to tolerate faults in software. Hybrid solutions offer another promising avenue to address this problem. Minor architectural modifications that can be adopted within existing architectures should be accompanied by software solutions allowing soft errors to be detected at low overhead. Early results [22, 23] outline the potential of such an approach but leave many facets for improvement open. Protection at the level of code and different data sections of programs can be specialized by tool support to significantly reduce overhead even further. There is an immediate need to pursue innovative lines of research for soft error protection that have potentially high yields in performance while providing low error rates.", "corpus_id": 11792318, "title": "Two Shortcomings in Software Design for Avionics : Timing Analysis and Soft Error Protection" }
{ "abstract": "In this article, we report the results of gas-phase IR spectroscopy of neutral glycylglycine (Gly-Gly) in the 700-1850 cm-1 frequency range. A combination of laser desorption, jet-cooling, and IR multiple-photon dissociation vacuum-ultraviolet (IRMPD-VUV) action spectroscopy is employed, together with extensive quantum chemical calculations that assist in the analysis of the experimental data. As a result, we determined that the most favorable conformer in the low-temperature environment of the supersonic jet is the nearly planar structure with two C5 hydrogen-bonding interactions. Calculations clearly show that this conformer is favored because of its flexibility (considerable entropy stabilization) as well as efficient conformer relaxation processes in the jet. To gain more understanding into the relative stability of the lowest-energy Gly-Gly conformers, the relative strength of hydrogen bonding and steric interactions is analyzed using the noncovalent interactions (NCI) approach.", "corpus_id": 58537197, "score": 0, "title": "Conformational Preferences of Isolated Glycylglycine (Gly-Gly) Investigated with IRMPD-VUV Action Spectroscopy and Advanced Computational Approaches." }
{ "abstract": "This paper examines the goals of current family policy proposals from a feminist perspective. It reveals the fundamental pronatalist values that are inherent in such proposals. It reviews recent research that raises questions regarding the actual impact of Scandinavian family policies (which are often used as a model), in terms of actually achieving the stated objective of enhancing equality between the sexes. It briefly explores the family policy that already exists in the United States, having been judicially enacted by the Supreme Court, and finally, it shows how most current family based policy proposals serve to maintain inequality rather than to promote equality, both in society and the home. Copyright 1989 by The Policy Studies Organization.", "corpus_id": 154618288, "title": "A FEMINIST CASE AGAINST NATIONAL FAMILY POLICY: VIEW TO THE FUTURE" }
{ "abstract": "Western culture continues to present motherhood as a positive, happy, and thankful time for women and their families; ignoring the feelings of anger, sadness, anxiety and shock women may experience in their transition into motherhood, and upon the realisation that the realities of mothering do not always meet our societal and cultural ideals. Based on my autoethnographic research, my thesis will present the therapeutic and empowering potential of using journalling as an added coping mechanism to the diverse stresses and traumas women may experience in the highly gendered role of mothering. Previous studies on journalling have demonstrated that disclosure through personal writing may produce long term improvements in mood and an overall sense of well being, as well as allow individuals to create a coherent explanation of their situation, restore self e cacy, and find meaning to their particular situation. While these studies have examined a broad range of stressful events such as terminal illness, divorce, or job loss, little research has been conducted on applying methods of journalling or expressive writing to the often di cult, ambiguous and stressful transitions of motherhood. My research will therefore illustrate that journalling has the potential to provide women with a space to voice and process their experiences, opinions and feelings of mothering, as well as challenge societal and cultural ideals regarding the institution of motherhood.", "corpus_id": 148672913, "title": "Journalling through Motherhood:a Personal Exploration of the Therapeutic and Empowering Potential of Journalling" }
{ "abstract": "In an eftbrt to determine reasons for differential scientific productivity between similarly trained husband and wife professional pairs, responses by 200 psychologist couples to a survey question asking them to delineate problem areas were content analyzed. Although sexual discrimination accounted for a small portion of the problems, the larger number of problems cited by subjects were due to the fact that women were willing to place their careers secondary to (a) the needs qf their families and (b) the needs of their husband's careers.", "corpus_id": 144100031, "score": 2, "title": "Problems of Professional Couples: A Content Analysis." }
{ "abstract": "Soybean mosaic virus (SMV) is a devastating plant virus classified in the family Potyviridae, and known to infect cultivated soybeans (Glycine max). In this study, seven new SMVs were isolated from wild soybean samples and analyzed by whole-genome sequencing. An updated SMV phylogeny was built with the seven new and 83 known SMV genomic sequences. Results showed that three northeastern SMV isolates were distributed in clade III and IV, while four southern SMVs were grouped together in clade II and all contained a recombinant BCMV fragment (~900 bp) in the upstream part of the genome. This work revealed that wild soybeans in China also act as important SMV hosts and play a role in the transmission and diversity of SMVs.", "corpus_id": 199316, "title": "Erratum to: Complete nucleotide sequences of seven soybean mosaic viruses (SMV), isolated from wild soybeans (Glycine soja) in China" }
{ "abstract": "A new Soybean mosaic virus (SMV) strain was isolated in Korea and designated as G7H. Its virulence on eight differentials and 42 Korean soybean cultivars was compared with existing SMV strains. G7H caused the same symptoms as G7 did on the eight differential cultivars. However, it caused different symptoms on the G7-immune Korean soybean cultivars; G7H caused necrosis in Suwon 97 (Hwangkeumkong) and Suwon 181 (Daewonkong), and a mosaic symptom in Miryang 41 (Duyoukong), while G7 caused only local lesions on those varieties. The nucleotide sequence of the cylindrical inclusion region of G7H was determined and compared with other SMV strains. G7H shared 96.3 and 91.3% nucleotide similarities with G2 and G7, respectively; whereas G7 shared 95.6% nucleotide similarity with G5H.", "corpus_id": 73498674, "title": "G7H, a New Soybean mosaic virus Strain: Its Virulence and Nucleotide Sequence of CI Gene." }
{ "abstract": "Animal waste from concentrated swine farms is widely considered to be a source of environmental pollution, and the introduction of veterinary antibiotics in animal manure to ecosystems is rapidly becoming a major public health concern. A housefly larvae (Musca domestica) vermireactor has been increasingly adopted for swine manure value-added bioconversion and pollution control, but few studies have investigated its efficiency on antibiotic attenuation during manure vermicomposting. In this study we explored the capacity and related attenuation mechanisms of antibiotic degradation and its linkage with waste reduction by field sampling during a typical cycle (6 days) of full-scale larvae manure vermicomposting. Nine antibiotics were dramatically removed during the 6-day vermicomposting process, including tetracyclines, sulfonamides, and fluoroquinolones. Of these, oxytetracycline and ciprofloxacin exhibited the greater reduction rate of 23.8 and 32.9 mg m−2, respectively. Environmental temperature, pH, and total phosphorus were negatively linked to the level of residual antibiotics, while organic matter, total Kjeldahl nitrogen, microbial respiration intensity, and moisture exhibited a positive effect. Pyrosequencing data revealed that the dominant phyla related to Firmicutes, Bacteroidetes, and Proteobacteria accelerated manure biodegradation likely through enzyme catalytic reactions, which may enhance antibiotic attenuation during vermicomposting.", "corpus_id": 17312781, "score": 1, "title": "Attenuation of veterinary antibiotics in full-scale vermicomposting of swine manure via the housefly larvae (Musca domestica)" }
{ "abstract": "The case notes of 12 children with congenital H-type tracheo-oesophageal fistulae diagnosed at the Hospital for Sick Children, Great Ormond Street, who presented between 1980 and 1986 were reviewed. All patients presented early in the neonatal period with recurrent chest infections; abnormal chest radiographs were found in eight. Ten of a total of 19 contrast studies were negative. Tube oesophagograms were more likely to demonstrate a fistula than conventional contrast studies. Any delay in surgery was due to delay in diagnosis rather than to delay in presentation. The results suggest that tube oesophagograms should be performed early where there is clinical suspicion of an H-type fistula, and that other investigations (for example bronchoscopy) should be considered if the tube oesophagogram does not demonstrate a fistula.", "corpus_id": 562881, "title": "Difficulties in diagnosis of congenital H-type tracheo-oesophageal fistulae." }
{ "abstract": "The total prevalence rate of tracheo-oesophageal fistula and oesophageal atresia in 15 EUROCAT registries covering 1,546,889 births during 1980-8 was 2.86 per 10,000. There was a decreasing prevalence rate over time (3.5 per 10,000 in 1980-2, 2.7 in 1983-5, 2.5 in 1986-8). Ten per cent of cases were associated with chromosomal anomalies and of the remaining cases, half were multiply malformed. Sixty two per cent of cases were males. There was a significantly increased risk for mothers of less than 20 years of age (odds ratio compared with mothers of 25-29 = 1.82, 95% confidence interval 1.23 to 2.67). There were no apparent epidemiological differences between isolated and multiply malformed cases in secular trend, sex ratio, or maternal age. Both isolated and multiply malformed cases tended to be premature and small for gestational age. There was variation between centres in survival of affected liveborn children up to 1 year of age.", "corpus_id": 5794674, "title": "The epidemiology of tracheo-oesophageal fistula and oesophageal atresia in Europe. EUROCAT Working Group." }
{ "abstract": "Summary During the past six years, among 108 hospital admissions for esophageal atresia, 31 (29 per cent) have had one of the four unusual varieties, i.e., Type A, B, D, or E (“H”). In five patients with an initial diagnosis of atresia without fistula (Type A), primary esophageal anastomosis was performed following a planned delay of from four to ten weeks. This was accompanied in three infants by an elongation procedure to lengthen the proximal esophageal segment. This regimen of delayed primary anastomosis, following proximal pouch elongation, has replaced colon interposition in the management of Type A atresia in this center; and is probably the procedure of choice in premature infants with Type C atresia. Recognition of the upper tracheoesophageal fistula has proved difficult in five infants admitted with double fistula (Type D); and anastomotic leak has resulted in a high morbidity and mortality. When lying high in the neck, division of the upper fistula through a cervical incision as a separate procedure may be preferable to complete repair via thoracotomy. Major diagnostic problems are presented by the tracheoesophageal fistula without atresia, Type E (“H”). When this anomaly is seen in newborn infants with respiratory distress, the diagnosis is most readily established by the use of contrast studies of the upper esophagus. Reduced motility in the distal esophagus is a suggestive sign. In older infants and children presenting with intestinal (or abdominal) symptoms, in contrast, endoscopic studies are most helpful. All such fistulae have been divided through cervical incisions.", "corpus_id": 38838776, "score": 2, "title": "Esophageal atresia and tracheoesophageal fistula: management of the uncommon types." }
{ "abstract": "In the paper we present our Range-IT prototype, which is a 3D depth camera based electronic travel aid (ETA) to assist visually impaired people in finding out detailed information of surrounding objects. In addition to detecting indoor obstacles and identifying several objects of interest (e.g., walls, open doors and stairs) up to 7 meters, the Range-IT system employs a multimodal audio-vibrotactile user interface to present this spatial information.", "corpus_id": 2069012, "title": "Range-IT: detection and multimodal presentation of indoor objects for visually impaired people" }
{ "abstract": "Indoor navigation systems for users who are visually impaired typically rely upon expensive physical augmentation of the environment or expensive sensing equipment; consequently few systems have been implemented. We present an indoor navigation system called Navatar that allows for localization and navigation by exploiting the physical characteristics of indoor environments, taking advantage of the unique sensing abilities of users with visual impairments, and minimalistic sensing achievable with low cost accelerometers available in smartphones. Particle filters are used to estimate the user's location based on the accelerometer data as well as the user confirming the presence of anticipated tactile landmarks along the provided path. Navatar has a high possibility of large-scale deployment, as it only requires an annotated virtual representation of an indoor environment. A user study with six blind users determines the accuracy of the approach, collects qualitative experiences and identifies areas for improvement.", "corpus_id": 13359047, "title": "The user as a sensor: navigating users with visual impairments in indoor spaces using tactile landmarks" }
{ "abstract": "This paper develops a general framework for learning interpretable data representation via Long Short-Term Memory (LSTM) recurrent neural networks over hierarchal graph structures. Instead of learning LSTM models over the pre-fixed structures, we propose to further learn the intermediate interpretable multi-level graph structures in a progressive and stochastic way from data during the LSTM network optimization. We thus call this model the structure-evolving LSTM. In particular, starting with an initial element-level graph representation where each node is a small data element, the structure-evolving LSTM gradually evolves the multi-level graph representations by stochastically merging the graph nodes with high compatibilities along the stacked LSTM layers. In each LSTM layer, we estimate the compatibility of two connected nodes from their corresponding LSTM gate outputs, which is used to generate a merging probability. The candidate graph structures are accordingly generated where the nodes are grouped into cliques with their merging probabilities. We then produce the new graph structure with a Metropolis-Hasting algorithm, which alleviates the risk of getting stuck in local optimums by stochastic sampling with an acceptance probability. Once a graph structure is accepted, a higher-level graph is then constructed by taking the partitioned cliques as its nodes. During the evolving process, representation becomes more abstracted in higher-levels where redundant information is filtered out, allowing more efficient propagation of long-range data dependencies. We evaluate the effectiveness of structure-evolving LSTM in the application of semantic object parsing and demonstrate its advantage over state-of-the-art LSTM models on standard benchmarks.", "corpus_id": 6843706, "score": -1, "title": "Interpretable Structure-Evolving LSTM" }
{ "abstract": "To do this, people usually study mtDNA (or the Y chromosomes). The reason is simple: unlike in the autosomal chromosomes, there is no recombination, implying that each DNA sequence has a unique parent. If it were also true that there were no recurrent mutations, known efficient algorithms for character based phylogenies could reconstruct the tree uniquely. Recurrent mutations could make the problem harder, but not insurmountable. However, as more individuals are genotyped, the complexity of these algorithms will grow. Complete mtDNA of over 1500 humans is available. (EX:http://www.bch.umontreal.ca/ogmp/projects/other/mt list.html).", "corpus_id": 1231905, "title": "CSE 280 Class Projects ( Suggested ) Vineet Bafna January 22 , 2008 Projects 1 Population history" }
{ "abstract": "DNA sequence variation in a 1410-bp region including the Cu,Zn Sod locus was examined in 41 homozygous lines of Drosophila melanogaster. Fourteen lines were from Barcelona, Spain, 25 were from California populations and the other two were from laboratory stocks. Two common electromorphs, SODS and SODF, are segregating in the populations. Our sample of 41 lines included 19 SodS and 22 SodF alleles (henceforward referred to as Slow and Fast alleles). All 19 Slow alleles were identical in sequence. Of the 22 Fast alleles sequenced, nine were identical in sequence and are referred to as the Fast A haplotypes. The Slow allele sequence differed from the Fast A haplotype at a single nucleotide site, the site that accounts for the amino acid difference between SODS and SODF. There were nine other haplotypes among the remaining 13 Fast alleles sequenced. The overall level of nucleotide diversity (pi) in this sample is not greatly different than that found at other loci in D. melanogaster. It is concluded that the Slow/Fast polymorphism is a recently arisen polymorphism, not an old balanced polymorphism. The large group of nearly identical haplotypes suggests that a recent mutation, at the Sod locus or tightly linked to it, has increased rapidly in frequency to around 50%, both in California and Spain. The application of a new statistical test demonstrates that the occurrence of such large numbers of haplotypes with so little variation among them is very unlikely under the usual equilibrium neutral model. We suggest that the high frequency of some haplotypes is due to natural selection at the Sod locus or at a tightly linked locus.", "corpus_id": 7697062, "title": "Evidence for positive selection in the superoxide dismutase (Sod) region of Drosophila melanogaster." }
{ "abstract": "Recent neurocognitive studies show that perception and execution of actions are intimately linked. The mere observation of an action seems to evoke a tendency to execute that action. Since such imitative response tendencies are not adaptive in many everyday situations imitative response tendencies usually have to be inhibited. These inhibitory processes have never been investigated using brain imaging techniques. Former work on response inhibition and interference control has focused on paradigms such as the Stroop task or the go/no-go task. We have carried out an event-related functional magnetic resonance imaging study in order to investigate the cortical mechanisms underlying the inhibition of imitative responses. The experiment employs a simple response task in which subjects were instructed to execute predefined finger movements (tapping or lifting of the index finger) in response to an observed congruent or incongruent finger movement (tapping or lifting). A comparison of brain activation in incongruent and congruent trials revealed strong activation in the dorsolateral prefrontal cortex (middle frontal gyrus) and activation in the right frontopolar cortex and the right anterior parietal cortex, as well as in the precuneus. These results support the assumption of prefrontal involvement in response inhibition and extend this assumption to a \"new\" class of prepotent responses, namely, to imitative actions.", "corpus_id": 31496088, "score": 1, "title": "The Inhibition of Imitative Response Tendencies" }
{ "abstract": "Hot aqueous extraction of the basidiocarps of the mushroom Pleurotus sajor-caju provided a cold water-soluble, gel-like glucan, which was characterized chemically, and its effects on RAW 264.7 cell line (mouse leukaemic monocyte macrophage) activation were determined. NMR spectroscopy, HPSEC, methylation analysis, and a controlled Smith degradation showed it to have a branched structure with a (1→3)-linked β-Glcp main-chain, substituted at O-6 by single-unit β-Glcp side-chains, on the average of two to every third residues of the backbone, with a molar mass of 9.75 × 10(5) g mol(-1). In macrophage cell culture, the β-glucan induced production of NO and the cytokines TNF-α, IL-1β, these effects being very similar as those of Escherichia coli serotype 0111:B4 Sigma-Aldrich lipopolysaccharide (LPS), although not modifying the response of LPS-activated macrophages. The results suggest that the (1→3), (1→6)-linked β-glucan from P. sajor-caju may have potential for immunological activities, although additional experiments are necessary for a better understanding of the mechanisms involved.", "corpus_id": 5264762, "title": "Chemical and biological properties of a highly branched β-glucan from edible mushroom Pleurotus sajor-caju." }
{ "abstract": "Dietary fiber chemical and physical structures may be critical to the comprehension of how they may modulate gut bacterial composition. We purified insoluble polymers from Cookeina speciosa, and investigated its fermentation profile in an in vitro human fecal fermentation model. Two glucans, characterized as a (1 → 3),(1 → 6)-linked and a (1→3)-linked β-D-glucans were obtained. Both glucans were highly butyrogenic and propiogenic, with low gas production, during in vitro fecal fermentation and led to distinct bacterial shifts if compared to fructooligosaccharides. Specific increases in Bacteroides uniformis and genera from the Clostridium cluster XIVa, such as butyrogenic Anaerostipes and Roseburia were observed. The (1 → 3)-linked β-D-glucan presented a faster fermentation profile compared to the branched (1 → 3),(1 → 6)-linked β-D-glucan. Our findings support the view that depending on its fine chemical structure, and likely its insoluble nature, these dietary fibers can be utilized to direct a targeted promotion of the intestinal microbiota to butyrogenic Clostridium cluster XIVa bacteria.", "corpus_id": 5486177, "title": "In vitro fermentation of Cookeina speciosa glucans stimulates the growth of the butyrogenic Clostridium cluster XIVa in a targeted way." }
{ "abstract": "A qualitative and quantitative comparison of the neuropathological and neurobehavioral effects of early methylmercury (MeHg) exposure is presented. The focus of the qualitative comparison is the examination of how specific end-points (and categories of behavioral functions) compare across species. The focus of the quantitative comparison is the investigation of the relationship between MeHg exposure, target-organ dose and effects in humans and animals. The results of the comparisons are discussed in the context of the adequacy of the proposed EPA neurotoxicity battery to characterize the risk of MeHg to humans. The comparisons reveal several qualitative and quantitative similarities in the neuropathological effects of MeHg on humans and animals at high levels of exposure. Reports of neuropathological effects at lower levels are available for animals only, precluding any comparison. At high levels of exposure, specific neurobehavioral end-points affected across species are also similar. Effects at lower levels of exposure are similar if categories of neurobehavioral functioning are compared. Changes in the EPA test battery consistent with the results of the comparisons are discussed.", "corpus_id": 21907221, "score": 0, "title": "Methylmercury developmental neurotoxicity: a comparison of effects in humans and animals." }
{ "abstract": "Successful penetration and colonization of plant tissues by most fungal pathogens requires differentiation of specialised cell types or infection structures, e.g. germ ,tubes, appressoria, penetration hyphae, infection hyphae and haustoria. Each cell type is adapted to a particular role in the infection process, e.g. adhesion, contact-sensing, penetration and nutrient uptake [22,32]. Molecular genetic techniques, such as differential or subtractive hybridization and mutational analysis, are being used to identify genes involved in the morphogenesis and function of these infection structures. For example, many genes that are specifically expressed or up-regulated during the formation of appressoria by Colletotrichum, Magnaporthe and rust fungi have now been cloned [7,8,23,24,27,29,55,69,76]. Some of these genes have been sequenced and disrupted to determine their role in the infection process [24,69]. These approaches have so far been restricted to infection structures that can be obtained in vitro, such as appressoria. Identification of genes expressed by infection structures formed following host penetration is more difficult due to contamination with host mRNAs, although recent advances in the isolation of such structures from infected tissue may alleviate this problem [19,30,50]. . An alternative approach is to use monoclonal antibodies (MAbs) to identify differentiation-related proteins and carbohydrates. MAbs can be raised against previously uncharacterised molecules that may only be minor components of a complex mixture [16J. Thus, following immunization with whole cells or crude cell extracts, MAbs binding to molecules of interest can be selected using suitable screening assays. This approach has been used to study cell surface components of the zoospores and cysts of Phytophthora and Pythium spp. [15,20J, and the intracellular infection structures of Erysiphe pisi [30J and Rhizobium spp. [l0]. We have used MAbs to study infection structures formed by the anthracnose fungus, Colletotrichum lindemuthianum, on tissues of Phaseolus vulgaris. In this chapter, we", "corpus_id": 332647, "title": "USE OF MONOCLONAL ANTIBODIES TO STUDY DIFFERENTIATION OF COLLETOTRICHUM INFECTION STRUCTURES" }
{ "abstract": "SummaryThe ultrastructure and composition of the extracellular matrices (ECMs) associated with germ tubes and appressoria ofColletotrichum lindemuthianum have been examined. Flexuous fibres (fimbriae), up to 6 μm long and 4–30 nm in diameter, protruded from the surface of germ tubes and appressoria. Anionic colloidal gold and lectin cytochemistry showed that ECMs of germ tubes and appressoria contain basic proteins, α-D-mannose and α-D-galactose residues. A monoclonal antibody, UB26, was raised to infection structures isolated from leaves ofPhaseolus vulgaris infected withC. lindemuthianum. UB26 recognised a protein epitope on two glycoproteins (Mr 133,000 and 146,000). Reductions in the Mr of these proteins after treatment with peptide-N-glycosidase and trifluoromethane sulphonic acid suggest that they carry N- and O-linked side-chains. Immunofluorescence and EM-immunogold labelling showed that glycoproteins recognised by UB26 were restricted to the ECMs around germ tubes and appressoria but fimbriae were not labelled. Unlike appressorial germ tubes formed in vitro, intracellular infection hyphae were not labelled, suggesting that the glycoproteins recognised by UB26 are not present on fungal structures formed within host cells. In liquid culture, these glycoproteins were not released into the medium, suggesting they are physically linked to the cell wall. Also, the glycoproteins were not removed from glass surfaces by ultrasonication. These results suggest that glycoproteins recognised by UB26 may be involved in the adhesion of germ tubes and appressoria to substrata. Our results show that the ECMs of germ tubes and appressoria differ markedly in structure and composition from those of conidia and intracellular hyphae, and that extracellular glycoproteins are associated with specific regions of the fungal cell surface.", "corpus_id": 44772469, "title": "Composition and organisation of extracellular matrices around germ tubes and appressoria ofColletotrichum lindemuthianum" }
{ "abstract": "Ammonia-treated bagasse with 80%(w/w) moisture content was subjected to mixed-culture solid-substrate fermentation (SSF) with Trichoderma reesei LM-UC4 and Aspergillus phoenicis QM 329, in flask or pot fermenters, for cellulase production. Significantly higher activities of all the enzymes of the cellulase complex were achieved in 4 days of mixed-culture SSF than in single-culture (T. reesei) SSF. The highest filter-paper-cellulase and β-glucosidase activities seen in mixed-culture SSF were 18.7 and 38.6 IU/g dry wt, respectively, representing approx. 3- and 6-fold increases over the activities attained in single-culture SSF. The mixed-culture SSF process also converted about 46% of the cellulose and hemicellulose to reducing sugars and enriched the product with 13% fungal protein. The biomass productivity, 0.29 gl-1.h, and enzyme productivity, 28.0 IU I-1.h, were about twice as high in the mixed-culture than in the single-culture.", "corpus_id": 36668433, "score": 1, "title": "Cellulase production by mixed fungi in solid-substrate fermentation of bagasse" }
{ "abstract": "OBJECTIVES:Although aggressive fluid therapy during the first days of hospitalization is recommended by most guidelines and reviews on acute pancreatitis (AP), this recommendation is not supported by any direct evidence. We aimed to evaluate the association between the amount of fluid administered during the initial 24 h of hospitalization and the incidence of organ failure (OF), local complications, and mortality.METHODS:This was a prospective cohort study. We included consecutive adult patients admitted with AP. Local complications and OF were defined according to the Atlanta Classification. Persistent OF was defined as OF of >48-h duration. Patients were divided into three groups according to the amount of fluid administered during the initial 24 h: group A: <3.1 l (less than the first quartile), group B: 3.1–4.1 l (between the first and third quartiles), and group C: >4.1 l (more than the third quartile).RESULTS:A total of 247 patients were analyzed. Administration of >4.1 l during the initial 24 h was significantly and independently associated with persistent OF, acute collections, respiratory insufficiency, and renal insufficiency. Administration of <3.1 l during the initial 24 h was not associated with OF, local complications, or mortality. Patients who received between 3.1 and 4.1 l during the initial 24 h had an excellent outcome.CONCLUSIONS:In our study, administration of a small amount of fluid during the initial 24 h was not associated with a poor outcome. The need for a great amount of fluid during the initial 24 h was associated with a poor outcome; therefore, this group of patients must be carefully monitored.", "corpus_id": 2540297, "title": "Influence of Fluid Therapy on the Prognosis of Acute Pancreatitis: A Prospective Cohort Study" }
{ "abstract": "Background/Aims: In previous studies, we have demonstrated that hemoconcentration was an early marker for necrotizing pancreatitis.The aim of the present study was to determine whether fluid resuscitation could prevent pancreatic necrosis among patients with hemoconcentration at the time of admission. Methods: Data was pooled from the prior two studies of all patients with necrotizing pancreatitis and interstitial pancreatitis with a hematocrit of ≧44 on admission. Hematocrit values in necrotizing pancreatitis and interstitial pancreatitis were compared at admission and at 24 h. Statistical analyses were performed using the Wilcoxon rank-sum test. Results: A total of 39 patients satisfied our inclusion criteria, 28 with necrotizing pancreatitis and 11 with interstitial pancreatitis. Patients with necrotizing pancreatitis presented earlier than patients with interstitial pancreatitis (median 18 vs. 38 h, respectively) (p = 0.005). There was no significant difference between the intergroup median hematocrits on admission and at 24 h. All patients with hematocrits that failed to decrease at 24 h developed necrotizing pancreatitis (12/28 with necrotizing pancreatitis vs. 0/11 with interstitial pancreatitis) (p = 0.009). There was no significant difference at 24 h in rehydration among the three groups: 4.0 liters among the 12 patients with necrotizing pancreatitis whose hematocrits increased and 4.5 liters among the 16 whose hematocrits decreased at 24 h, and 4.1 liters among the 11 patients with interstitial pancreatitis (p = 0.81). Conclusion: Patients who presented early were more likely to have necrotizing pancreatitis than interstitial pancreatitis. While fluid resuscitation was not shown to prevent pancreatic necrosis, all patients with inadequate fluid resuscitation as evidenced by persistence of hemoconcentration at 24 h developed necrotizing pancreatitis.", "corpus_id": 27122205, "title": "Can Fluid Resuscitation Prevent Pancreatic Necrosis in Severe Acute Pancreatitis?" }
{ "abstract": "Cuprous oxide (Cu2O) thin films were prepared by using electrodeposition technique at different applied potentials (−0.1, −0.3, −0.5, −0.7, and −0.9 V) and were annealed in vacuum at a temperature of 100°C for 1 h. Microstructure and optical properties of these films have been investigated by X-ray diffractometer (XRD), field-emission scanning electron microscope (SEM), UV-visible (vis) spectrophotometer, and fluorescence spectrophotometer. The morphology of these films varies obviously at different applied potentials. Analyses from these characterizations have confirmed that these films are composed of regular, well-faceted, polyhedral crystallites. UV–vis absorption spectra measurements have shown apparent shift in optical band gap from 1.69 to 2.03 eV as the applied potential becomes more cathodic. The emission of FL spectra at 603 nm may be assigned as the near band-edge emission.", "corpus_id": 11953657, "score": 0, "title": "Microstructure and optical properties of nanocrystalline Cu2O thin films prepared by electrodeposition" }
{ "abstract": "We demonstrate an approach to solving the coagulation equation that involves using a finite number of moments of the particle size distribution. This approach is particularly useful when only general properties of the distribution, and their time evolution, are needed. The numerical solution to the integrodifferential Smoluchowski coagulation equation at every time step, for every particle size, and at every spatial location is computationally expensive and serves as the primary bottleneck in running evolutionary models over long periods of time. The advantage of using the moments method comes in the computational time savings gained from only tracking the time rate of change of the moments, as opposed to tracking the entire mass histogram which can contain hundreds or thousands of bins depending on the desired accuracy. The collision kernels of the coagulation equation contain all the necessary information about particle relative velocities, cross sections, and sticking coefficients. We show how arbitrary collision kernels may be treated. We discuss particle relative velocities in both turbulent and nonturbulent regimes. We present examples of this approach that utilize different collision kernels and find good agreement between the moment solutions and the moments as calculated from direct integration of the coagulation equation. As practical applications, we demonstrate how the moments method can be used to track the evolving opacity and also indicate how one may incorporate porous particles.", "corpus_id": 3008328, "title": "Solving the Coagulation Equation by the Moments Method" }
{ "abstract": "Dust growth and settling considerably affect the spectral energy distributions (SEDs) of protoplanetary disks. We investigated dust growth and settling in protoplanetary disks through numerical simulations to examine time evolution of the disk optical thickness and SEDs. In this paper we considered laminar disks as the first step in a series of papers. As a result of dust growth and settling, a dust layer forms around the midplane of a gaseous disk. After the formation of the dust layer, small dust grains remain floating above the layer. Although the surface density of the floating small grains is much less than that of the dust layer, they govern the disk optical thickness and the emission. Size distributions of the floating grains obtained from numerical simulations are well described by a universal power-law distribution, which is independent of the disk temperature, the disk surface density, the radial position in the disk, etc. The floating small grains settle onto the dust layer in a long timescale compared with the formation of the dust layer. Typically, it takes 106 yr for micron-sized grains. Rapid grain growth in the inner part of disks makes the radial distribution of the disk optical thickness less steep than that of the disk surface density, Σ. For disks with Σ ∝ R-3/2, the radial distribution of the optical thickness is almost flat for all wavelengths at t ≲ 106 yr. At t > 106 yr, the optical thickness of the inner disk (≲a few AU) almost vanishes, which may correspond to disk inner holes observed by Spitzer Space Telescope. Furthermore, we examined time evolution of disk SEDs, using our numerical results and the two-layer model. The grain growth and settling decrease the magnitude of the SEDs, especially at λ ≥ 100 μm. Our results indicate that grain growth and settling can explain the decrease in observed energy fluxes at millimeter/submillimeter wavelengths with timescales of 106-107 yr without depletion of the disks.", "corpus_id": 16760357, "title": "Dust Growth and Settling in Protoplanetary Disks and Disk Spectral Energy Distributions. I. Laminar Disks" }
{ "abstract": "Primordial and episodic theories for the origin of comets are discussed. The implications of the former type for the origin of the solar system are considered. Candidate sites for the formation of comets are compared. The possible existence of a massive inner Oort cloud is discussed.", "corpus_id": 117751959, "score": 2, "title": "The origin of comets - Implications for planetary formation" }
{ "abstract": "This paper considers a practical structure-borne sound source characterization for mechanical installations, which are connected to plate-like structures. It describes a laboratory-based measurement procedure, which will yield single values of source strength in a form transferable to a prediction of the structure-borne sound power generated in the installed condition. It is confirmed that two source quantities are required, corresponding to the source activity and mobility. For the source activity, a high-mobility reception plate method is proposed which yields a single value in the form of the sum of the squared free velocities, over the contact points. A low-mobility reception plate method also is proposed which, in conjunction with the above, yields the source mobility in the form of the average magnitude of the effective mobility, again over the contact points. Experimental case studies are described and the applicability of the laboratory data for prediction and limitations of the approach are discussed.", "corpus_id": 1783024, "title": "Vibration activity and mobility of structure-borne sound sources by a reception plate method." }
{ "abstract": "A laboratory-based experiment procedure of reception plate method for structure-borne sound source characterisation is reported in this paper. The method uses the assumption that the input power from the source installed on the plate is equal to the power dissipated by the plate. In this experiment, rectangular plates having high and low mobility relative to that of the source were used as the reception plates and a small electric fan motor was acting as the structure-borne source. The data representing the source characteristics, namely, the free velocity and the source mobility, were obtained and compared with those from direct measurement. Assumptions and constraints employing this method are discussed.", "corpus_id": 13394508, "title": "Characterisation of Structure-Borne Sound Source Using Reception Plate Method" }
{ "abstract": "INTRODUCTION The American Association of Endodontists (AAE) and the American Academy of Oral and Maxillofacial Radiology (AAOMR) have jointly developed this position statement. It is intended to provide scientifically based guidance to clinicians regarding the use of cone beam computed tomography (CBCT) in endodontic treatment as an adjunct to planar imaging. This document will be periodically revised to reflect new evidence. Endodontic disease adversely affects quality of life and can produce significant morbidity in afflicted patients. Radiography is essential for the successful diagnosis of odontogenic and non-odontogenic pathoses, treatment of the pulp chamber and canals of a compromised tooth, biomechanical instrumentation, evaluation of final canal obturation, and assessment of healing. Until recently, radiographic assessments in endodontic treatment have been limited to intraoral and panoramic radiography. These radiographic technologies provide two-dimensional representations of three-dimensional tissues. If any element of the geometric configuration is compromised, the image can demonstrate errors. In more complex cases, radiographic projections with different beam angulations can allow", "corpus_id": 38209681, "score": 1, "title": "Use of cone-beam computed tomography in endodontics Joint Position Statement of the American Association of Endodontists and the American Academy of Oral and Maxillofacial Radiology." }
{ "abstract": "Abstract Polyglucose newly synthesized by phosphorylase in endothelial cells of rabbit aorta was studied electron microscopically by Takeuchi and Sasaki's method to confirm the presence of phosphorylase activity in endothelial cells of aortic wall. The polyglucose stained irregularly with lead, forming irregular particles 200–500 A in diameter. It was found in both endothelial cells and smooth muscle of the aorta. Native glycogen was not recognized in endothelial cells, but was present in the smooth muscle of the media. On the contrary, both polyglucose and native glycogen were observed in the intermyofibrillar sarcoplasm of heart muscle, which was simultaneously studied as a control.", "corpus_id": 2239750, "title": "Glycogen in endothelial cells. Electronmicroscopic studies of polyglucose synthesized by phosphorylase in endothelial cells of aorta and heart muscle of rabbits." }
{ "abstract": "Il fut démontré que les changements précoces d'activité de la phosphorylase au niveau du myocarde sont essentiellement différents selon qu'il s'agit d'une cardiomyopathie primaire (non-occlusive) ou secondaire (occlusive).", "corpus_id": 33620816, "title": "Histochemically demonstrable phosphorylase as an early index of anoxic myocardial damage" }
{ "abstract": "Haughton Astrobleme is a major extraterrestrial impact structure located on Devon Island in the Canadian Arctic Archipelago, Northwest Territories. Apatite grains separated from shocked Precambrian gneiss contained in a polymict breccia from the center of the astrobleme yielded a fission-track date of 22.4 million � 1.4 million years before the present or early Miocene (Aquitanian). This provides a date for the impact event and an upper limit on the age of crater-filling lake sediments and a flora and vertebrate fauna occurring in them. A geologically precise date for these fossils provides an important biostratigraphic reference point for interpreting the biotic evolution of the Arctic.", "corpus_id": 38008174, "score": 0, "title": "Fission-Track Dating of Haughton Astrobleme and Included Biota, Devon Island, Canada" }
{ "abstract": "MobileInsight is a software tool that collects, analyzes and exploits runtime, fine-grained cellular network information over commodity phones. It is our first step to help developers and researchers understand the closed, large-scale cellular network system. It exposes the below-IP protocol messages to the user space, provides protocol analysis, and offers APIs for mobile applications to obtain low-level network information. We have built showcases to illustrate how MobileInsight can be applied to cellular network research.", "corpus_id": 1604028, "title": "MobileInsight: Analyzing Cellular Network Information on Smartphones" }
{ "abstract": "Mobility management is a prominent feature in cellular networks. In this paper, we examine the (in)stability of mobility management. We disclose that handoff may never converge in some real-world cases. We focus on persistent handoff oscillations, rather than those transient ones caused by dynamic networking environment and user mobility (e.g., moving back and force between two base stations). Our study reveals that persistent handoff loops indeed exist in operational cellular networks. They not only violate their design goals, but also incur excessive signaling overhead and data performance degradation. To detect and validate instability in mobility management, we devise MMDIAG, an in-device diagnosis tool for cellular network operations. The core of MMDIAG is to build a handoff decision automata based on 3GPP standards, and detect possible loops by checking the structural property of stability. We first leverage device-network signaling exchanges to retrieve mobility management policies and configurations, and then feed them into MMDIAG, along with runtime measurements. MMDIAG further emulates various handoff scenarios and identifies possible violations (i.e., loops) caused by the used policies and configurations. Finally, we validate the identified problems through real measurements over operational networks. Our preliminary results with a top-tier US carrier demonstrate that, unstable mobility management indeed occurs in reality and hurts both carriers and users. The proposed methodology is effective to identify persistent instabilities and pinpoint their root causes in problematic configurations and policy conflicts.", "corpus_id": 1496847, "title": "A First Look at Unstable Mobility Management in Cellular Networks" }
{ "abstract": "A mew generalized block-edge impairment metric (GBIM) is presented in this paper as a quantitative distortion measure for blocking artifacts in digital video and image coding. This distortion measure does not require the original image sequence as a comparative reference, and is found to be consistent with subjective evaluation.", "corpus_id": 14615558, "score": 1, "title": "A generalized block-edge impairment metric for video coding" }
{ "abstract": "This paper presents the development of humanoid robotics platform - 4 (or HRP-4 for short). The high-density implementation used for HRP-4C, the cybernetic human developed by AIST, is also applied to HRP-4. HRP-4 has a total of 34 degrees of freedom, including 7 degrees of freedom for each arm to facilitate object handling and has a slim, lightweight body with a height of 151 [cm] and weight 39 [kg]. The software platform OpenRTM-aist and a Linux kernel with the RT-Preempt patch are used in the HRP-4 software system. Design concepts and mechanisms are presented with its basic specification in this paper.", "corpus_id": 12182615, "title": "Humanoid robot HRP-4 - Humanoid robotics platform with lightweight and slim body" }
{ "abstract": "Honda has been doing research on robotics since 1986 with a focus upon bipedal walking technology. The research started with straight and static walking of the first prototype two-legged robot. Now, the continuous transition from walking in a straight line to making a turn has been achieved with the latest humanoid robot ASIMO. ASIMO is the most advanced robot of Honda so far in the mechanism and the control system. ASIMO9s configuration allows it to operate freely in the human living space. It could be of practical help to humans with its ability of five-finger arms as well as its walking function. The target of further development of ASIMO is to develop a robot to improve life in human society. Much development work will be continued both mechanically and electronically, staying true to Honda9s ‘challenging spirit’.", "corpus_id": 2547994, "title": "Honda humanoid robots development" }
{ "abstract": "This paper presents the novel structural analysis of ball-on-sphere system using bond graph technique. The structural analyses carried out are controllability and observability of the system. To achieve these analyses, the system was modelled using bond graph modelling technique. In the modelling procedures, the various subsystems, storage elements, junction structures, transformer elements with appropriate causality assignments and energy exchange that make up the ball-on-sphere system were identified and modelled. The structural controllability and observability properties of the system were carried out on the developed causal bond graph model of the system based on bond graph rules. From the structural analyses, it was established that the developed model was controllable and observable.", "corpus_id": 20393734, "score": -1, "title": "Structual analysis of ball-on-sphere system using bond graph technique" }
{ "abstract": "Significance The Sulfolobus islandicus rod-shaped virus 2 (SIRV2) has developed unique mechanisms to penetrate the plasma membrane and S-layer of its host Sulfolobus islandicus in order to leave the cell after replication. SIRV2 encodes the 10-kDa protein PVAP, which assembles into sevenfold symmetric virus-associated pyramids (VAPs) in the host cell plasma membrane. Toward the end of the viral replication cycle, these VAPs open to form pores through the plasma membrane and S-layer, allowing viral egress. Here we show that PVAP inserts spontaneously and forms VAPs in any kind of biological membrane. By electron cryotomography we have obtained a 3D map of the VAP and present a model describing the assembly of PVAP into VAPs. Our findings open new avenues for a large variety of biotechnological applications. Viruses have developed a wide range of strategies to escape from the host cells in which they replicate. For egress some archaeal viruses use a pyramidal structure with sevenfold rotational symmetry. Virus-associated pyramids (VAPs) assemble in the host cell membrane from the virus-encoded protein PVAP and open at the end of the infection cycle. We characterize this unusual supramolecular assembly using a combination of genetic, biochemical, and electron microscopic techniques. By whole-cell electron cryotomography, we monitored morphological changes in virus-infected host cells. Subtomogram averaging reveals the VAP structure. By heterologous expression of PVAP in cells from all three domains of life, we demonstrate that the protein integrates indiscriminately into virtually any biological membrane, where it forms sevenfold pyramids. We identify the protein domains essential for VAP formation in PVAP truncation mutants by their ability to remodel the cell membrane. Self-assembly of PVAP into pyramids requires at least two different, in-plane and out-of-plane, protein interactions. Our findings allow us to propose a model describing how PVAP arranges to form sevenfold pyramids and suggest how this small, robust protein may be used as a general membrane-remodeling system.", "corpus_id": 1052889, "title": "Self-assembly of the general membrane-remodeling protein PVAP into sevenfold virus-associated pyramids" }
{ "abstract": "BackgroundMembrane proteins are estimated to represent about 25% of open reading frames in fully sequenced genomes. However, the experimental study of proteins remains difficult. Considerable efforts have thus been made to develop prediction methods. Most of these were conceived to detect transmembrane helices in polytopic proteins. Alternatively, a membrane protein can be monotopic and anchored via an amphipathic helix inserted in a parallel way to the membrane interface, so-called in-plane membrane (IPM) anchors. This type of membrane anchor is still poorly understood and no suitable prediction method is currently available.ResultsWe report here the \"AmphipaSeeK\" method developed to predict IPM anchors. It uses a set of 21 reported examples of IPM anchored proteins. The method is based on a pattern recognition Support Vector Machine with a dedicated kernel.ConclusionAmphipaSeeK was shown to be highly specific, in contrast with classically used methods (e.g. hydrophobic moment). Additionally, it has been able to retrieve IPM anchors in naively tested sets of transmembrane proteins (e.g. PagP). AmphipaSeek and the list of the 21 IPM anchored proteins is available on NPS@, our protein sequence analysis server.", "corpus_id": 2675367, "title": "Prediction of amphipathic in-plane membrane anchors in monotopic proteins using a SVM classifier" }
{ "abstract": "This study investigated the neuromuscular mechanisms underlying the initial stage of adaptation to novel dynamics. A destabilizing velocity‐dependent force field (VF) was introduced for sets of three consecutive trials. Between sets a random number of 4–8 null field trials were interposed, where the VF was inactivated. This prevented subjects from learning the novel dynamics, making it possible to repeatedly recreate the initial adaptive response. We were able to investigate detailed changes in neural control between the first, second and third VF trials. We identified two feedforward control mechanisms, which were initiated on the second VF trial and resulted in a 50% reduction in the hand path error. Responses to disturbances encountered on the first VF trial were feedback in nature, i.e. reflexes and voluntary correction of errors. However, on the second VF trial, muscle activation patterns were modified in anticipation of the effects of the force field. Feedforward cocontraction of all muscles was used to increase the viscoelastic impedance of the arm. While stiffening the arm, subjects also exerted a lateral force to counteract the perturbing effect of the force field. These anticipatory actions indicate that the central nervous system responds rapidly to counteract hitherto unfamiliar disturbances by a combination of increased viscoelastic impedance and formation of a crude internal dynamics model.", "corpus_id": 13060398, "score": 1, "title": "Impedance control and internal model use during the initial stage of adaptation to novel dynamics in humans" }
{ "abstract": "The threading of 'U' shaped bent axles having diverse functionalities (Axle1-Axle10) is investigated by using a heteroditopic amido-amine macrocyclic (MC) wheel via NiII or CuII metal ion templation. These bent shaped axles are the derivatives of 4,4'-substituted 2,2'-bipyridine, which are composed of various terminal groups like alkene, alkyne, bromide, hydroxyl and azide. Such metallo [2]pseudorotaxanes are well characterised by ESI-MS, EPR and FT-IR spectroscopic studies, UV-Vis absorption studies, elemental analysis and single-crystal X-ray diffraction studies wherever possible. Experimental evidence supports 1 : 1 : 1 ternary complexation between MC, the metal ion and axle. The single crystal X-ray structures of three CuII templated ternary complexes (PR1', PR3' and PR7') show the penta-coordination arrangement around the templating metal ion. Interestingly, judicious selection of chemical functionalities in the complementary wheel and axle components enables to show the existence of various covalent and non-covalent interactions.", "corpus_id": 3372334, "title": "Threading of various 'U' shaped bidentate axles into a heteroditopic macrocyclic wheel via NiII/CuII templation." }
{ "abstract": "A new naphthalene containing macrocycle, NaphMC, and a new fluorophoric bidentate linear axle derivative of 5,5'-dimethyl-2,2'-bipyridine (L3) along with two other ligands 1,10-phenanthroline (L1) and 5,5'-dimethyl-2,2'-bipyridine (L2) are explored towards the synthesis of Cu(ii) templated [2]pseudorotaxanes. All ternary complexes are well characterized by ESI-MS, UV/Vis, EPR spectroscopy, elemental analysis and emission spectroscopic studies. Single crystal X-ray diffraction studies confirm the geometry around the Cu(ii) center as a distorted trigonal bipyramid via the contribution of [3 + 2] orthogonal motifs of the wheel (NaphMC) and the bidentate chelating ligands L1 and L2 in the cases of pseudorotaxanes, CuPR1 and CuPR2, respectively. Furthermore, the fluorescence \"OFF\" state of the fluorophoric axle L3 is achieved via threading it to the Cu(ii) complex of NaphMC, whereas fluorescence switching \"ON\" is demonstrated by the substitution of L3 of CuPR3 with a stronger chelating ligand L1.", "corpus_id": 52158123, "title": "Naphthalene containing amino-ether macrocycle based Cu(ii) templated [2]pseudorotaxanes and OFF/ON fluorescence switching via axle substitution." }
{ "abstract": "We have successfully achieved the molecular ordering of semiconducting small molecules comprising a newly designed A(D–A′–D)2 system. The A(D–A′–D)2 small molecules have two different acceptors based on isoindigo and diketopyrrolopyrrole along with tert-butoxycarbonyl (t-Boc) groups and hexyl chains, which improve the solubility of the materials. After simple thermal annealing, the t-Boc groups were removed to allow strong hydrogen bonds between N–H⋯CO to form, and this resulted in improved molecular ordering of the organic semiconductors. The crystalline morphology was confirmed by X-ray diffraction coupled with high-voltage electron microscopy, and the resulting materials showed improved hole mobilities. In this work, the effect of the incorporation of t-Boc groups onto A(D–A′–D)2-based organic semiconductors on their morphological and electrical properties was evaluated.", "corpus_id": 103080543, "score": 1, "title": "Molecular ordering of A(D-A’-D)2-based organic semiconductors through hydrogen bonding after simple cleavage of tert-butyloxycarbonyl protecting groups" }
{ "abstract": "The weak electrostatic interaction between nitro and carbonyl moieties has been observed by means of variable-temperature NMR spectroscopy. Its energetic contribution was evaluated to be about 3 kcal mol(-1) by DFT calculations, and confirmed by the measurement of internal energy barriers to the rotation of suitable nitroaryl rings.", "corpus_id": 11434177, "title": "The experimental observation of the intramolecular NO₂/CO interaction in solution." }
{ "abstract": "The stereolability of chiral Hoveyda-Grubbs II type ruthenium complexes bearing N-heterocyclic carbene (NHC) ligands with Syn-phenyl groups on the backbone and Syn- or Anti-oriented o-tolyl N-substituents was studied by resorting to dynamic high-performance liquid chromatography (D-HPLC). A complete chromatographic picture of the involved stereoisomers (four for Anti- and two for Syn-complexes) was achieved at very low temperatures (-53°C and -40°C respectively), at which the NHC-Ru bond rotations were frozen out. Inspection of the chromatographic profiles recorded at higher temperatures revealed the presence of plateau zones between the couples of either Syn or Anti stereoisomers, attesting to the active interconversion between the eluted species. Such dynamic chromatograms were successfully simulated through procedures based on both theoretical plate and classical stochastic models. The good superimposition achieved between experimental and simulated chromatographic profiles allowed determination of the related isomerization energy barriers (ΔGisom (#) ), all derived by rotation around the NHC-Ru bond. The obtained diastereomerization barriers between the Anti isomers were found in very good agreement with those previously measured by experimental nuclear magnetic resonance (NMR) and assessed through Density Functional Theory (DFT) calculations. With the same approach, for the first time we also determined the enantiomerization barrier of the Syn isomer. Focused changes to the structure of complex Syn, studied by a molecular modeling approach, were found suitable to strongly reduce the stereolability arising from rotation around the NHC-Ru bond.", "corpus_id": 21236147, "title": "Stereolability of chiral ruthenium catalysts with frozen NHC ligand conformations investigated by dynamic-HPLC." }
{ "abstract": "This paper reports on a molecular simulation study of the thermodynamics, structure and dynamics of water confined at ambient temperature in hydroxylated silica nanopores of a width H = 10 and 20 Å. The adsorption isotherms for water in these nanopores resemble those observed for experimental samples; the adsorbed amount increases continuously in the multilayer adsorption regime until a jump occurs due to capillary condensation of the fluid within the pore. Strong layering of water in the vicinity of the silica surfaces is observed as marked density oscillations are observed up to 8 Å from the surface in the density profiles for confined water. Our results indicate that water molecules within the first adsorbed layer tend to adopt a H-down orientation with respect to the silica substrate. For all pore sizes and adsorbed amounts, the self-diffusivity of confined water is lower than the bulk, due to the hydrophilic interaction between the water molecules and the hydroxylated silica surface. Our results also suggest that the self-diffusivity of confined water is sensitive to the adsorbed amount.", "corpus_id": 33447149, "score": 1, "title": "Molecular simulation of water confined in nanoporous silica" }
{ "abstract": "Although porphinatoiron complexes have been used extensively as biomimetic catalysts for oxidation of aliphatic and olefinic hydrocarbons, few oxidations of polycyclic aromatic hydrocarbons (PAH) have been reported. In all cases, heterogeneous iodosobenzene/tetraphenylporphinatoiron(III) systems were employed, oxidations were inefficient and control experiments demonstrating the requirement for catalyst were not described. The current study investigates the oxidation of pyrene, benzo[a]pyrene and benzanthracene in a homogeneous m-chloroperoxybenzoic acid/bifacially hindered porphinatoiron system in which the peroxyacid was shown to be unreactive in the absence of catalyst. Pyrene and benzo[a]pyrene were oxidized efficiently, with pyrene yielding mixtures of 1.6- and 1.8-quinones and benzo[a]pyrene yielding mixtures of phenols and quinones. Benzanthracene was oxidized less efficiently, primarily at the meso positions, to give 7.12-quinone. Initial oxidation of meso carbons of benzo[a]pyrene (confirmed by the presence of the 6-hydroxy derivative as a product) and benzanthracene indicates that PAH-to-catalyst charge transfer may be an important oxidation pathway. Oxidation of pyrene was performed by addition of pyrene to observable oxo iron(V) species as well as in a catalytic reaction where excess peroxyacid was added to a solution of pyrene and catalyst and oxo iron(V) is not generated as an observable intermediate. Yields (based on oxidant consumed), were identical under both conditions, strongly supporting oxo iron(V) as a common intermediate.", "corpus_id": 2575472, "title": "Porphinatoiron-mediated oxidation of polycyclic aromatic hydrocarbons." }
{ "abstract": "The isolation and identification of pyrene metabolites formed from pyrene by the fungus Cunninghamella elegans is described. C. elegans was incubated with pyrene for 24 h. Six metabolites were isolated by reversed-phase high-performance liquid (HPLC) and thin-layer chromatography (TLC) and characterized by the application of UV absorption, 1H-NMR and mass spectral techniques. C. elegans hydroxylated pyrene predominantly at the 1,6- and 1,8-positions with subsequent glucosylation to form glucoside conjugates of 1-hydroxypyrene, 1,6- and 1,8-dihydroxypyrene. In addition, 1,6- and 1,8-pyrenequinones and 1-hydroxypyrene were identified as metabolites. Experiments with [4-14C]pyrene indicated that over a 24-h period, 41% of pyrene was metabolized to ethyl acetate-soluble metabolites. The glucoside conjugates of 1-hydroxypyrene, 1,6- and 1,8-dihydroxypyrene accounted for 26%, 7% and 14% of the pyrene metabolized, respectively. Pyrenequinones accounted for 22%. The results indicate that the fungus C. elegans metabolized pyrene to non-toxic metabolites (glucoside conjugates) as well as to compounds (pyrenequinones) which have been suggested to be biologically active in higher organisms. In addition, there was no metabolism at the K-region of the molecule which is a major site of enzymatic attack in mammalian systems.", "corpus_id": 2715659, "title": "Microbial metabolism of pyrene." }
{ "abstract": "The responses to supplementing the diet of Single Comb White Leghorn (SCWL) cockerels with ethoxyquin were tested on two parameters: 1) tissue peroxidation and 2) immune response. In the first experiment, three concentrations of supplemental ethoxyquin (0, 500, and 1,000 ppm) were added to a basal diet and fed to SCWL cockerels for 6 wk. Tissue peroxidation was assessed by measuring the thiobarbituric acid reactive substances (TBARS) concentration in the liver, kidney, heart, and spleen. The TBARS concentration in response to 500 ppm dietary ethoxyquin was significantly lower in the liver and spleen tissues, whereas in the kidneys, 1,000 ppm ethoxyquin significantly lowered TBARS. In a second experiment, four concentrations of ethoxyquin (0, 125, 500, and 1,000 ppm) were added to a basal diet and fed to SCWL cockerels for 8 wk. The primary and secondary immune response were assessed by determining antibody titers to the Newcastle disease virus using hemagglutination inhibition (HI) and ELISA. The HI and ELISA titers for the primary and secondary immune response were not significantly different from the control. Analysis of body weight, feed conversion, and organ weight revealed no statistically significant differences between treatments, although in the second experiment the dietary treatment of 1,000 ppm ethoxyquin resulted in significantly higher relative liver weight.", "corpus_id": 3627347, "score": 0, "title": "The effect of ethoxyquin on tissue peroxidation and immune status of single comb White Leghorn cockerels." }
{ "abstract": "Described as the greatest health crisis to face mankind in the modern era, the threat of HIV and AIDS in the workforce provides important challenges for consumer businesses that seek to balance divergent public opinions with the need to provide adequately for the health care requirements of their workforce. This research, based upon a survey that included 42 of the UK's largest retailers, identifies the many issues surrounding the development of a credible business response to this health issue. The research findings suggest that many retailers have taken cognisance of customer feeling in their development of policies in this area and that in some instances the development of HIV/AIDS policies in respect of staff has had more to do with politically correct posturizing than a commitment to the welfare of staff.", "corpus_id": 154263483, "title": "We will not make a drama out of a health care crisis: an examination of UK retailers' HIV/AIDS policies" }
{ "abstract": "Examines UK retailers’ attitudes towards employee welfare in general, and the development and implementation of policies in respect of staff with HIV/AIDS in particular. From the research results, provides a valuable insight into retailer motivation in respect of staff welfare provision and the methods adopted by retailers in the development of welfare policy; identifies the need to ensure that a credible support strategy exists in order that welfare policies, such as those related to HIV/AIDS, can be implemented effectively. Indicates that the management of HIV/AIDS within the retail sector is complicated further by the threat of criminal attack on retail staff and the potential for negative consumer reaction to policy decisions, and concludes by suggesting that the development of a considered response to HIV/AIDS is one which UK retailers cannot afford to ignore.", "corpus_id": 167547589, "title": "UK retailers and AIDS ‐ an exploratory study" }
{ "abstract": "Abstract. Histologic studies were performed on capsular tissue resected from 21 patients who were implanted with smooth silicone prostheses filled with gel. The results disclosed a non-uniform response to the implants. The granulomatous reaction to the silicone showed important variations along the same surface of the implants, between the plane and the concave surfaces, between equivalent points at the right and left sides, and among the patients. Also, a significant difference was observed between reactions and capsules in early and late stages. The author believes these variations of the capsular inflammatory reaction promote different sites of contraction between cell-to-cell, or cell-to-collagen-to-cell. These adding forces result in vectors of different intensities and directions around the implants which explains the various clinical grades of capsular contracture.\n", "corpus_id": 25577846, "score": 1, "title": "Inflammatory Reaction and Capsular Contracture Around Smooth Silicone Implants\n" }
{ "abstract": "The present study examined the prevalence of disordered gambling behaviours in a community-based sample of adolescents (N = 532) living in eastern central Ontario. Of particular interest was examining the hypothesis that adolescents with learning disorders are at elevated risk for disordered gambling. Rates of disordered gambling in male adolescents with learning disorders were found to be significantly higher than adolescents without learning problems, even after controlling for negative affectivity and ADHD symptomatology. The implications for treatment and intervention of gambling problems in adolescence are discussed.", "corpus_id": 1977311, "title": "Gambling Behaviour in Adolescents with Learning Disorders" }
{ "abstract": "The study examined the relationship between emotional intelligence (EI) and several addiction-related behaviours (gambling problems, Internet abuse, and computer gaming misuse) in two adolescent samples: 270 clinical outpatients (180 males and 90 females) and 256 special needs students (160 males and 96 females). Gambling problems, Internet abuse, and computer gaming misuse were positively inter-correlated in both samples; approximately half of the variability in these addiction-related behaviours could be accounted for by a common dysfunctional preoccupation latent variable. Latent variable path analysis found emotional intelligence to be a moderate predictor of dysfunctional preoccupation in both adolescent samples.", "corpus_id": 145217233, "title": "Problem gambling, gaming and Internet use in adolescents: Relationships with emotional intelligence in clinical and special needs samples" }
{ "abstract": "Ischemic Stroke (IS) is a severe and complex disorder of high morbidity and mortality rates associated with clinical, environmental, and genetic predisposing factors. Despite previous studies have associated genetic variants to stroke, inconsistent results from different populations pointed to the genetic heterogeneity for IS. Therefore, we may hypothesize that an interaction effect among genetic variants could contribute to IS occurrence rather than genetic variants independently. In this context, we investigated the association and interaction between genetic variants and large-artery atherosclerosis IS (LAAS-IS) and cardioembolic IS (CE-IS). We genotyped 435 patients (195 LAAS-IS; 240 CE-IS) and 535 controls from a population of Joinville, Santa Catarina, Brazil. Association and interaction analysis were performed by chi-square test and Multifactor-dimensionality Reduction test. We found an association between rs2383207*A allele, nearby CDKN2B-AS1, and LAAS-IS [OR 2.35 (95% CI = 1.79-3.08); p = 4.66 × 10-10]. We found an interaction among rs2910829, rs966221 and rs152312, with an accuracy of 0.62 (p = 4.3 × 10-5) demonstrating the interaction effect among variants from different genes can contribute to CE-IS risk. Further prediction analysis confirmed that clinical information, such as hypertension and dyslipidemia, presented high accuracy to predict LAAS-IS (86.47%) and CE-IS (90.47%); however, the inclusion of genetic variant information did not increase the accuracy.", "corpus_id": 73452450, "score": 1, "title": "Association and interaction of genetic variants with occurrence of ischemic stroke among Brazilian patients." }
{ "abstract": "Abstract The Internet of Things (IoT) refers to an infrastructure that integrates things over standard wired/wireless networks and allows them to exchange information with each other. The IoT is a very complex heterogeneous network, enabling seamless integration of these things is a huge challenge. A publish/subscribe method of integration can be formulated to solve the problems of interconnecting billions of heterogeneous things. In our work, an IoT framework that uses an abstraction layer that decouples an application from the service calls and network interfaces is required to send and receive messages on a particular thing. This paper provides definitions and classifications for heterogeneous data/events/services according to the properties of the things in order to integrate them into a framework for description. Based on these definitions and classifications, heterogeneous data/events/services in the IoT were integrated via topic description through the Data Distribution Service (DDS) middleware standard for real-time publish/subscribe. This paper also concludes with general remarks and a discussion of future work.", "corpus_id": 2762100, "title": "Description and classification for facilitating interoperability of heterogeneous data/events/services in the Internet of Things" }
{ "abstract": "Internet of Things (IOT) visualizes a future of anything anywhere by anyone at any time. The Information and communication technologies help in creating a revolution in digital technology. IOT are known for interconnecting various physical devices with the networks. In IOT, various physical devices are embedded with different types of sensors and other devices to exchange data between them. An embedded system is a combination of software and hardware where they are programmed for functioning specific functions. These data can be accessed from any parts of the world by making use of cloud. This can be used for creating a digital world, smart homes, healthcare systems and real life data exchange like smart banking. Though Internet of things has emerged long back, it is now becoming popular and gaining attention lately. In healthcare industry, some of the hospitals started using sensors implemented in the bed to get the status of patient's movement and other activities. This paper contains various IOT applications and the role of IOT in the healthcare system, challenges in the healthcare system using IOT. Also, introduced a secured surveillance monitoring system for reading and storing patient's details using low power for transmitting the data.", "corpus_id": 5041075, "title": "An IOT based human healthcare system using Arduino uno board" }
{ "abstract": "With the rapid development of information storage and networking technologies, quintillion bytes of data are generated every day from social networks, business transactions, sensors, and many other domains. The increasing data volumes impose significant challenges to traditional data analysis tools in storing, processing, and analyzing these extremely large-scale data. For decades, hashing has been one of the most effective tools commonly used to compress data for fast access and analysis, as well as information integrity verification. Hashing techniques have also evolved from simple randomization approaches to advanced adaptive methods considering locality, structure, label information, and data security, for effective hashing. This survey reviews and categorizes existing hashing techniques as a taxonomy, in order to provide a comprehensive view of mainstream hashing techniques for different types of data and applications. The taxonomy also studies the uniqueness of each method and therefore can serve as technique references in understanding the niche of different hashing mechanisms for future development.", "corpus_id": 5321851, "score": -1, "title": "Hashing Techniques: A Survey and Taxonomy" }
{ "abstract": "Subthreshold Gm-C filters offer the low power and wide tunable range required for use in fully implantable bionic ears. The major design challenge that must be met is increasing the linear range. A capacitive-attenuation technique is presented and refined to allow the construction of wide-linear-range bandpass filters with greater than 1 V/sub pp/ swings. For a 100-200 Hz fully differential filter with second-order roll off slopes and greater than 60 dB dynamic range, experimental results from a 1.5-/spl mu/m, 2.8-V BiCMOS chip yield only 0.23 /spl mu/W power consumption; for a 5-10 kHz filter with the same specifications the power only increased to 6.36 /spl mu/W. Fully differential filters with first-order slopes had a dynamic range of 66 dB and power consumptions of 0.12 and 3.36 /spl mu/W in the 100-200 Hz and 5-10 kHz cases, respectively. We show that our experimental results of noise and linear range are in good accord with theoretical estimates of these quantities.", "corpus_id": 10498273, "title": "A practical micropower programmable bandpass filter for use in bionic ears" }
{ "abstract": "A new digital programmable CMOS analog front-end (AFE) IC for measuring electroencephalograph or electrocardiogram signals in a portable instrumentation design approach is presented. This includes a new high-performance rail-to-rail instrumentation amplifier (IA) dedicated to the low-power AFE IC. The measurement results have shown that the proposed biomedical AFE IC, with a die size of 4.81 mm/sup 2/, achieves a maximum stable ac gain of 10 000 V/V, input-referred noise of 0.86 /spl mu/ V/sub rms/ (0.3 Hz-150 Hz), common-mode rejection ratio of at least 115 dB (0-1 kHz), input-referred dc offset of less than 60 /spl mu/V, input common mode range from -1.5 V to 1.3 V, and current drain of 485 /spl mu/A (excluding the power dissipation of external clock oscillator) at a /spl plusmn/1.5-V supply using a standard 0.5-/spl mu/m CMOS process technology.", "corpus_id": 206646161, "title": "A CMOS analog front-end IC for portable EEG/ECG monitoring applications" }
{ "abstract": "In this paper, a single phase effective closed loop control for solar inverter is proposed. As solar irradiance level changes with atmospheric conditions, output of the inverter varies. To maintain the output voltage of the inverter constant a close loop is implemented using PWM technique. A PWM signal is generated by comparing the output of PI controller with two identical triangular signals having a phase difference of 180°. The output voltage is constant within ±0.5% of reference value. The circuit is validated using PSIM software. The simulated AC output waveform has THD within 2%.", "corpus_id": 42186101, "score": -1, "title": "A simple and effective control of single phase solar inverter" }
{ "abstract": "International ownership alters the role of multilateral trade institutions by redefining pecuniary externalities among countries. Regardless of the underlying cause - whether foreign direct investment, international portfolio diversification, cross-country mergers, or multinational firms -- international ownership can mitigate incentives that lead large countries to set inefficiently high tariffs. At the same time, however, foreign ownership introduces the potential for expropriation by investment-host countries, which can extract rent from foreign owners by manipulating local prices. The basic principle of reciprocity continues to serve as an important guide to efficiency, though its application must account for the pattern of international ownership in addition to traditional measures of market access.", "corpus_id": 56387511, "title": "Reevaluating the Role of Trade Agreements: Does Investment Globalization Make the WTO Obsolete?" }
{ "abstract": null, "corpus_id": 153396374, "title": "Tariffs, foreign capital and national welfare with sector-specific factors" }
{ "abstract": "Proprietors are an important group of stockholders and non-diversifiable entrepreneurial risk could therefore help explain time-varying risk premia on the aggregate stock market. This paper suggests an entrepreneurial distress factor that is highly correlated with the aggregate consumption-wealth ratio and that has considerable forecasting power for U.S. stock returns. I call this factor the cpy -residual because it can be be represented as a cointegrating relationship between consumption (c) and income from proprietary (p) and non-proprietary (y) wealth. My interpretation of cpy as an entrepreneurial risk factor is based on a number of empirical observations: first, cpy mainly reflects cyclical fluctuations in proprietary income and secondly it is highly correlated with cross-sectional measures of idiosyncratic entrepreneurial risk. Furthermore, and in line with the theoretical mechanism, its predictive power has started to decline since the beginning of the 1980s as stock market participation has widened with the advent of tax-deferable employer-sponsored pension plans and as proprietary income risk has become more easily diversifiable in the wake of state level bank deregulation.", "corpus_id": 56098396, "score": -1, "title": "Proprietary Income, Entrepreneurial Risk, and the Predictability of U.S. Stock Returns" }
{ "abstract": "We describe an extensible query optimizer for objectbase management systems. Since these systems are expected to serve data management needs of a wide range of application domains with possibly different query optimization requirements, extensibtity is essential. Our work is conducted within the context of TIGUKAT, which is a uniform behavioral system that models every system component as a first-class object. Consistent with this philosophy, we model ever y component of the optimizer M a first-class object, providing ultimate extensibfity. We describe the optimizer architecture and how the optimizer components are modeled as extensions of a uniform type system.", "corpus_id": 811451, "title": "An extensible query optimizer for an objectbase management system" }
{ "abstract": "XML is an emerging standard for data representation and exchange on the World-Wide Web. Due to the nature of information on the Web and the inherent flexibility of XML, we expect that much of the data encoded in XML will be semistructured: the data may be irregular or incomplete, and its structure may change rapidly or unpredictably. This paper describes the query processor of Lore, a DBMS for XML-based data supporting an expressive query language. We focus primarily on Lore's cost-based query optimizer. While all of the usual problems associated with cost-based query optimization apply to XML-based query languages, a number of additional problems arise, such as new kinds of indexing, more complicated notions of database statistics, and vastly different query execution strategies for different databases. We define appropriate logical and physical query plans, database statistics, and a cost model, and we describe plan enumeration including heuristics for reducing the large search space. Our optimizer is fully implemented in Lore and preliminary performance results are reported. This is a short version of the paper Query Optimization for Semistructured Data which is available at: http://www-db.stanford.edu/~mchughj/publications/qo.ps", "corpus_id": 6628824, "title": "Query Optimization for XML" }
{ "abstract": "AL is a high-level programming system for specification of manipulatory tasks such as assembly of an object from parts. AL includes an ALGOL-like source language, a translator for converting programs into runnable code, and a runtime system for controlling manipulators and other devices. The system includes advanced features for describing individual motions of manipulators, for using sensory information, and for describing assembly algorithms in terms of common domain-specific primitives. This document describes the design of AL, which is currently being implemented as a successor to the Stanford WAVE system.", "corpus_id": 58790015, "score": 1, "title": "AL, a programming system for automation." }
{ "abstract": "Sparse-view Reconstruction can be used to provide accelerated low dose CT imaging with both accelerated scan and reduced projection/back-projection calculation. Despite the rapid developments, image noise and artifacts still remain a major issue in the low dose protocol. In this paper, a deep learning based method named Improved GoogLeNet is proposed to remove streak artifacts due to projection missing in sparse-view CT reconstruction. Residual learning is used in GoogLeNet to study the artifacts of sparse-view CT reconstruction, and then subtracts the artifacts obtained by learning from the sparse reconstructed images, finally recovers a clear correction image. The intensity of reconstruction using the proposed method is very close to the full-view projective reconstructed image. The results indicate that the proposed method is practical and effective for reducing the artifacts and preserving the quality of the reconstructed image.", "corpus_id": 13991951, "title": "Artifact Removal using Improved GoogLeNet for Sparse-view CT Reconstruction" }
{ "abstract": "Model-based optimization methods and discriminative learning methods have been the two dominant strategies for solving various inverse problems in low-level vision. Typically, those two kinds of methods have their respective merits and drawbacks, e.g., model-based optimization methods are flexible for handling different inverse problems but are usually time-consuming with sophisticated priors for the purpose of good performance, in the meanwhile, discriminative learning methods have fast testing speed but their application range is greatly restricted by the specialized task. Recent works have revealed that, with the aid of variable splitting techniques, denoiser prior can be plugged in as a modular part of model-based optimization methods to solve other inverse problems (e.g., deblurring). Such an integration induces considerable advantage when the denoiser is obtained via discriminative learning. However, the study of integration with fast discriminative denoiser prior is still lacking. To this end, this paper aims to train a set of fast and effective CNN (convolutional neural network) denoisers and integrate them into model-based optimization method to solve other inverse problems. Experimental results demonstrate that the learned set of denoisers can not only achieve promising Gaussian denoising results but also can be used as prior to deliver good performance for various low-level vision applications.", "corpus_id": 1900475, "title": "Learning Deep CNN Denoiser Prior for Image Restoration" }
{ "abstract": "This seminar paper focusses on convolutional neural networks and a visualization technique allowing further insights into their internal operation. After giving a brief introduction to neural networks and the multilayer perceptron, we review both supervised and unsupervised training of neural networks in detail. In addition, we discuss several approaches to regularization. The second section introduces the different types of layers present in recent convolutional neural networks. Based on these basic building blocks, we discuss the architecture of the traditional convolutional neural network as proposed by LeCun et al. [LBD+89] as well as the architecture of recent implementations. The third section focusses on a technique to visualize feature activations of higher layers by backprojecting them to the image plane. This allows to get deeper insights into the internal working of convolutional neural networks such that recent architectures can be evaluated and improved even further.", "corpus_id": 16626430, "score": -1, "title": "Understanding Convolutional Neural Networks" }
{ "abstract": "Abstract We derive the equilibrium distribution at pre-arrival and arbitrary epochs and the waiting time distribution in a GI/M/1 queueing system with dependence between the service time of each customer and the subsequent interarrival times. In addition, the server takes exponentially distributed vacations when there are no customers left to serve in the queue.", "corpus_id": 816204, "title": "GI/M/1 Queues with Server Vacations and Dependent Interarrival and Service Times" }
{ "abstract": "The burstiness of the total arrival process has been previously characterized in packet network performance models by the dependence among successive interarrival times. It is shown that associated dependence among successive service times and between service times and interarrival times also can be important for packet queues involving variable packet lengths. These dependence effects are demonstrated analytically by considering a multiclass single-server queue with batch-Poisson arrival processes. For this model and more realistic models of packet queues, insight is gained from heavy-traffic limit theorems. This study indicates that all three kinds of dependence should be considered in the analysis and measurement of packet queues involving variables packet lengths. Specific measurements are proposed for real systems and simulations. This study also indicates how to predict expected packet delays under heavy loads. Finally, this study is important for understanding the limitations of procedures such as the queuing network analyzer (QNA) for approximately describing the performance of queuing networks using the techniques of aggregation and decomposition. >", "corpus_id": 9610706, "title": "Dependence in packet queues" }
{ "abstract": "This paper provides new information about the interrelated issues of teacher turnover (both within and across school districts and inside and outside of teaching) and the importance of nonpecuniary school characteristics, such as race and poverty, using new administrative data on Georgia teachers and the elementary schools in which they teach. Simple descriptive statistics indicate that teachers are more likely to change schools if they begin their teaching careers in schools with lower student test scores, schools with lower income students, or schools that have higher proportions of minority students. A linear probability and a competing risks model of transitions out of first teaching jobs allow us to separate the importance of these highly correlated school characteristics. The estimates from the model imply that teachers are much more likely to exit schools with large proportions of minority students, and that the other univariate statistical relationships associated with student test scores and poverty rates are driven to a large extent by the correlations of these variables with the minority variable. Thus we find that, while the common notion that teachers are more likely to leave high poverty schools is correct, it occurs because teachers are more likely to leave a particular type of poor school - that which has a large proportion of minority students.", "corpus_id": 153328935, "score": 0, "title": "Race, Poverty, and Teacher Mobility" }
{ "abstract": "Biological, chemical, and physical attributes of aquatic ecosystems are often strongly influenced by groundwater sources. Nonetheless, widespread access to predictions of subsurface contributions to rivers, lakes, and wetlands at a scale useful to environmental managers is generally lacking. In this paper, we describe a “neighborhood analysis” approach for estimating topographic constraints on spatial patterns of recharge and discharge and discuss how this index has proven useful in research, management, and conservation contexts. The Michigan Rivers Inventory subsurface flux model (MRI-DARCY) used digital elevation and hydraulic conductivity inferred from mapped surficial geology to estimate spatial patterns of hydraulic potential. Model predictions were calculated in units of specific discharge (meters per day) for a 30-m2-cell raster map and interpreted as an index of potential subsurface water flux (shallow groundwater and event through-flow). The model was evaluated by comparison with measurements of groundwater-related attributes at watershed, stream segment, and local spatial scales throughout Lower Michigan (USA). Map-based predictions using MRI-DARCY accounted for 85% of the observed variation in base flow from 128 USGS gauges, 69% of the observed variation in discharge accrual from 48 river segments, and 29% of the residual variation in local groundwater flux from 33 locations as measured by hyporheic temperature profiles after factoring out the effects of climate. Although it does not incorporate any information about the actual water table surface, by quantifying spatial variation of key constraints on groundwater-related attributes, the model provides strata for more intensive study, as well as a useful spatial tool for regional and local conservation planning, fisheries management, wetland characterization, and stream assessment.", "corpus_id": 26530, "title": "\nA GIS Model of Subsurface Water Potential for Aquatic Resource Inventory, Assessment, and Environmental Management\n" }
{ "abstract": "With the groundwater resources management in Poyang Lake catchment as an example, groundwater resources management system based on geographic information system (GIS)-MapInfo software was development using Matlab software, and visual basic language in this paper, and the main functions were also introduced. The user interface was developed using visual basic language, which can provide the running background work environment of MapInfo. Matlab was used to accomplish numerical calculation. The MapInfo’s functions of spatial analysis and display were used in this system by visual basic program activating MapInfo. Many functions could be accomplished including the spatial and attribute data displaying, editing and querying, spatial statistics and analysis, thematic map compiling, and groundwater quality visual evaluation. Groundwater resources effective management and groundwater quality visual evaluation could be realized by development of this system. The development of this system will provide assistant decision supports for reasonable exploiting groundwater resources in Poyang lake catchment.", "corpus_id": 14764559, "title": "Development of Groundwater Resources Management System Based on GIS in Poyang Lake Catchment" }
{ "abstract": "The mussel Mytilus californianus is a competitive dominant on wave—swept rocky intertidal shores. Mussel beds may exist as extensive monocultures; more often they are an everchanging mosaic of many species which inhabit wave—generated patches or gaps. This paper describes observations and experiments designed to measure the critical parameters of a model of patch birth and death, and to use the model to predict the spatial structure of mussel beds. Most measurements were made at Tatoosh Island, Washington, USA, from 1970—1979. Patch size ranged at birth from a single mussel to 38 m2; the distribution of patch sizes approximates the lognormal. Birth rates varied seasonally and regionally. At Tatoosh the rate of patch formation varied during six winters from 0.4—5.4% of the mussels removed per month. The disturbance regime during the summer and at two mainland sites was 5—10 times less. Annual disturbance patterns tended to be synchronous within 11 sites on one face of Tatoosh over a 10—yr interval, and over larger distances (16 km) along the coastline. The pattern was asynchronous, however, among four Tatoosh localities. Patch birth rate, and mean and maximum size at birth can be used as adequate indices of disturbance. Patch disappearance (death) occurs by three mechanisms. Very small patches disappear almost immediately due to a leaning response of the border mussels (0.2 cm/d). Intermediate—sized patches (<3.0 m2) are eventually obliterated by lateral movement of the peripheral mussels: estimates based on 94 experimental patches yield a mean shrinking rate of 0.05 cm/d from each of two principal dimensions. Depth of the adjacent mussel bed accounts for much of the local variation in closing rate. In very large patches, mussels must recruit as larvae from the plankton. Recovery begins at an average patch age of 26 mo; rate of space occupation, primarily due to individual growth, is 2.0—2.5%/mo. Winter birth rates suggest a mean turnover time (rotation period) for mussel beds varying from 8.1—34.7 yr, depending on the location. The minimal value is in close agreement with both observed and calculated minimal recovery times. Projections of total patch area, based on the model, are accurate to within 5% of the observed. Using a method for determining the age of patches, based on a growth curve of the barnacle Balanus cariosus, the model permits predictions of the age—size structure of the patch population. The model predicts with excellent resolution the distribution of patch area in relation to time since last disturbance. The most detailed models which include size structure within age categories are inconclusive due to small sample size. Predictions are food for large patches, the major determinants of environmental patterns, but cannot deal adequately with smaller patches because of stochastic effects. Colonization data are given in relation to patch age, size and intertidal position. We suggest that the reproductive season of certain long—lived, patch—dependent species is moulded by the disturbance regime. The necessary and vital connection between disturbance which generates spatial pattern and species richness in communities open to invasion is discussed.", "corpus_id": 84193451, "score": 2, "title": "Intertidal Landscapes: Disturbance and the Dynamics of Pattern" }
{ "abstract": "Activin and the Nodal-related proteins induce mesendodermal tissues during Xenopus development. These signals act through specific receptors to cause the phosphorylation, at their carboxyl termini, of Smad2 and Smad3. The phosphorylated Smad proteins form heteromeric complexes with Smad4 and translocate into the nucleus to activate the transcription, after the midblastula transition, of target genes such as Xbra and goosecoid (gsc). In this paper we use bimolecular fluorescence complementation (BiFC) to study complex formation between Smad proteins both in vivo and in response to exogenous proteins. The technique has allowed us to detect Smad2-Smad4 heteromeric interactions during normal Xenopus development and Smad2 and Smad4 homo- and heteromers in isolated Xenopus blastomeres. Smad2-Smad2 and Smad2-Smad4 complexes accumulate rapidly in the nuclei of responding cells following Activin treatment, whereas Smad4 homomeric complexes remain cytoplasmic. When cells divide, Smad2-Smad4 complexes associate with chromatin, even in the absence of ligand. Our observation that Smad2-Smad4 complexes accumulate in the nucleus only after the midblastula transition, irrespective of the stage at which cells were treated with Activin, may shed light on the mechanisms of developmental timing.", "corpus_id": 753286, "title": "Nuclear accumulation of Smad complexes occurs only after the midblastula transition in Xenopus" }
{ "abstract": "Transforming growth factor beta (TGFbeta) superfamily signaling has been implicated in patterning of the early Xenopus embryo. Upon ligand stimulation, TGFbeta receptors phosphorylate Smad proteins at carboxy-terminal SS(V/M)S consensus motifs. Smads 1/5/8, activated by bone morphogenetic protein (BMP) signaling, induce ventral mesoderm whereas Smad2, activated by activin-like ligands, induces dorsal mesoderm. Although ectopic expression studies are consistent with roles for TGFbeta signals in early Xenopus embryogenesis, when and where BMP and activin-like signaling pathways are active endogenously has not been directly examined. In this study, we investigate the temporal and spatial activation of TGFbeta superfamily signaling in early Xenopus development by using antibodies specific for the type I receptor-phosphorylated forms of Smad1/5/8 and Smad2. We find that Smad1/5/8 and two distinct isoforms of Smad2, full-length Smad2 and Smad2(delta)exon3, are phosphorylated in early embryos. Both Smad1/5/8 and Smad2/Smad2(delta)exon3 are activated after, but not before, the mid-blastula transition (MBT). Endogenous activation of Smad2/Smad2(delta)exon3 requires zygotic transcription, while Smad1/5/8 activation at MBT appears to involve transcription-independent regulation. We also find that the competence of embryonic cells to respond to TGF(delta) superfamily ligands is temporally regulated and may be a determinant of early patterning. Levels of phospho-Smad1/5/8 and of phospho-Smad2/Smad2(delta)exon3 are asymmetrically distributed across both the animal-vegetal and dorsoventral axes. The timing of the development of these asymmetries differs for phospho-Smad1/5/8 and for phospho-Smad2/Smad2(delta)exon3, and the spatial distribution of phosphorylation of each Smad changes dramatically as gastrulation begins. We discuss the implications of our results for endogenous functions of BMP and activin-like signals as candidate morphogens regulating primary germ layer formation and dorsoventral patterning of the early Xenopus embryo.", "corpus_id": 18631846, "title": "Endogenous patterns of TGFbeta superfamily signaling during early Xenopus development." }
{ "abstract": "In the present paper, the leakage flow in the clearance gap between stationary walls was studied experimentally, theoretically and numerically by the computational fluid dynamics (CFD) in order to find the relationship between leakage flow, pressure difference and clearance gap. The experimental set-up of the clearance gap between two stationary walls is the simplification of the gap between the guide vane faces and facing plates in Francis turbines. This model was built in the Waterpower laboratory at Norwegian University of Science and Technology (NTNU). The empirical formula for calculating the leakage flow rate between the two stationary walls was derived from the empirical study. The experimental model is simulated by computational fluid dynamics employing the ANSYS CFX commercial software in order to study the flow structure. Both numerical simulation results and empirical formula results are in good agreement with the experimental results. The correction of the empirical formula is verified by experimental data and has been proven to be very useful in terms of quickly predicting the leakage flow rate in the guide vanes for hydraulic turbines.", "corpus_id": 108515440, "score": 0, "title": "Study on the leakage flow through a clearance gap between two stationary walls" }
{ "abstract": "The application of geoagent in RS image have been studied in the paper. Samples of carbonate rocks were scanned into rock images. By analysing these samples of carbonate rocks, a new arithmetic model of geoagent was chosed and a standard curve of carbonate rocks by the arithmetic model can be gotten. Rs images were divided into grids. There are curves by the arithmetic in grids. The standard curve of carbonate rocks and curves in grids were compared. If both of curves look very similar, the grid is carbonate rocks areas. (The paper was supported by the research fund of doctoral program of southwest university, No:2008055,104220-20710913)", "corpus_id": 6304093, "title": "The Application of Geoagent in RS Image" }
{ "abstract": "Snow water equivalent (SWE) is one of the key parameters for many applications in climatology, hydrology, and water resource planning and management. Satellite-based passive microwave sensors have provided global, long-term observations that are sensitive to SWE. However, the complexity of the snowpack makes modeling the microwave emission and inversion of a model to retrieve SWE difficult, with the consequence that retrievals are sometimes incorrect. Here we develop a parameterized dry snow emission model for analyzing passive microwave data, including those from the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) at 10.65 GHz, 18.7 GHz, and 36.5 GHz for SWE estimation. We first evaluate a multiple-scattering microwave emission model that consists of a single snow layer over a rough surface by comparing model calculations with data from two field measurements, from the Cold Land Process Experiment (CLPX) in 2003 and from Switzerland in 1995. This model uses the matrix doubling approach to include incoherent multiple-scattering in the snow, and the model combines the Dense Media Radiative Transfer Model (DMRT) for snow volume scattering and emission with the Advanced Integral Equation Model (AIEM) for the randomly rough snow/ground interface to calculate dry snow emission signals. The combined model agrees well with experimental measurements. With this confirmation, we develop a parameterized emission model, much faster computationally, using a database that the more physical multiple-scattering model generates. For a wide range of snow and soil properties, this parameterized model's results are within 0.013 of those from the multiple-scattering model. This simplified model can be applied to the simulation of the microwave emission signal and to developing algorithms for SWE retrieval.", "corpus_id": 14330686, "title": "A parameterized multiple-scattering model for microwave emission from dry snow" }
{ "abstract": "Interdependence of regolith density, moisture, and chemistry parameters, as well as their influence on natural gamma ray emissions, led us to investigate systematically whether statistical models could be inferred between gamma spectrometric data and regolith parameters. A series of soil sample parameters were analyzed vs. airborne gamma ray data through multiple linear regressions. Among the approximately 20 regolith parameters modeled (chemical, textural, and mineralogical), about 50% (equally distributed in each category) were found to be predictable, with acceptable error (Radj2 > 0.5 and p value < 5%). With an independent set of texture analyses, we validated two of the predicted parameters (sand and clay contents) with fairly low residuals, with standard deviations of 22 and 16%, respectively. Further statistical investigations revealed why a large number of soil parameters could successfully be modeled in this sedimentary environment. We showed that the main gamma emitters are hosted in weathering products and leached detrital materials. Two main mineral assemblages correlate with gamma variables: (i) fine‐grained weathering clays (with correlated Al, Fe, Mn, Mg, Pb, and V elements), and (ii) residual K‐rich minerals, interpreted as feldspar and/or muscovite (correlated with Na and Sr). Additionally, two chemical elements, Si and Ca, have specific behaviors and can scarcely be characterized by the gamma data: they apparently mitigate Ɣ signatures.", "corpus_id": 128895438, "score": 1, "title": "Regional Regolith Parameter Prediction Using the Proxy of Airborne Gamma Ray Spectrometry" }
{ "abstract": "This paper presents a generalized Gaussian quadrature method for numerical integration over regions with parabolic edges. Any region represented by R 1={(x, y)| a≤x≤b, f(x)≤y≤g(x)} or R 2={(x, y)| a≤y≤b, f(y)≤x≤g(y)}, where f(x), g(x), f(y) and g(y) are quadratic functions, is a region bounded by two parabolic arcs or a triangular or a rectangular region with two parabolic edges. Using transformation of variables, a general formula for integration over the above-mentioned regions is provided. A numerical method is also illustrated to show how to apply this formula for other regions with more number of linear and parabolic sides. The method can be used to integrate a wide class of functions including smooth functions and functions with end-point singularities, over any two-dimensional region, bounded by linear and parabolic edges. Finally, the computational efficiency of the derived formulae is demonstrated through several numerical examples.", "corpus_id": 3075703, "title": "Generalized Gaussian quadrature rules over regions with parabolic edges" }
{ "abstract": "In this paper, we introduce a Gauss Legendre quadrature method for numerical integration over a parabolic region; R = {(x, y) / 0 ≤ x ≤ 1, 0 ≤ y ≤ x 2 }, using transformation of variables a general formulae for numerical integration over the region R are derived which can be directly used for integrating arbitrary function over such region the performances of the method is illustrated with several numerical examples.", "corpus_id": 118247156, "title": "Gauss Legendre quadrature over a parabolic region" }
{ "abstract": "This paper summarizes two new results on the solution of linear rational expectations models arising from optimizing behavior. The first result concerns the development of conditions under which the general solution of the Euler equation associated with the Linear Rational Expectations (LRE) model can be obtained by finding the eigenvalues and the eigenvectors of its characteristic matrix polynomial. 1 The second result concerns the development of conditions under which the general solution of the Euler equation associated with the LRE model becomes the globally asymptotically stable particular solution of that equation. The conditions of these two results take the form of restrictions on the eigenvalues and the eigenvectors of the characteristic matrix, polynomial associated with the Euler equation of the LRE model. The usefulness of these results stems primarily from two facts. First, these results may be combined with the generaiized Wiener-Kolmogorov prediction formulae to obtain a non-recursive 'as dosed as possible to a closed form' solution to the LRE model. 2'3 Second, this solution is established under less restrictive conditions than the recursive solution of Hansen and. Sargent", "corpus_id": 153488727, "score": 1, "title": "A non-recursive solution for the linear rational expectations model" }
{ "abstract": "Despite benefits and uses of social networking sites (SNSs) users are not always satisfied with their behaviors on the sites. These desires for behavior change both provide insight into users' perceptions of how SNSs impact their lives (positively or negatively) and can inform tools for helping users achieve desired behavior changes. We use a 604-participant online survey to explore SNS users' behavior-change goals for Facebook, Instagram, and Twitter. While some participants want to reduce site use, others want to improve their use or increase a range of behaviors. These desired changes differ by SNS, and, for Twitter, by participants' levels of site use. Participants also expect a range of benefits from these goals, including increased time, contact with others, intrinsic benefits, better security/privacy, and improved self presentation. Based on these results we provide insights both into how participants perceive different SNSs, as well as potential designs for behavior-change mechanisms to target SNS behaviors.", "corpus_id": 893496, "title": "I Would Like To..., I Shouldn't..., I Wish I...: Exploring Behavior-Change Goals for Social Networking Sites" }
{ "abstract": "Guided by the underlying question of how--if at all--the self-disclosure process varies online, the present study explores the self-disclosure practices of 26 American graduate students on Facebook through in-depth interviews. Building on work by Derlega and Grzelak [12] on self- disclosure goals and focusing on the affordances of the site, findings reveal both commonalities with and extensions to existing communication research on self-disclosure, as users saw both benefits and drawbacks to the high visibility and persistence of content shared through the site. Furthermore, users employed a wide spectrum of strategies to help them achieve their disclosure goals while decreasing perceived risks associated with making disclosures in a public forum. Importantly, these strategies generally sought to recreate the offline boundaries blurred or removed by the technical structure of the site and allow users to engage in a more strategic disclosure process with their network.", "corpus_id": 1714070, "title": "\"You can't block people offline\": examining how facebook's affordances shape the disclosure process" }
{ "abstract": "This work examines the adsorption regime and the properties of microgel/enzyme thin films deposited onto conductive graphite-based substrates. The films were formed via two-step sequential adsorption. A temperature- and pH-sensitive poly(N-isopropylacrylamide)-co-(3-(N,N-dimethylamino)propylmethacrylamide) microgel (poly(NIPAM-co-DMAPMA microgel) was adsorbed first, followed by its interaction with the enzymes, choline oxidase (ChO), butyrylcholinesterase (BChE), or mixtures thereof. By temperature-induced stimulating both (i) poly(NIPAM-co-DMAPMA) microgel adsorption at T > VPTT followed by short washing and drying and then (ii) enzyme loading at T < VPTT, we can effectively control the amount of the microgel adsorbed on a hydrophobic interface as well as the amount and the spatial localization of the enzyme interacted with the microgel film. Depending on the biomolecule size, enzyme molecules can (in the case for ChO) or cannot (in the case for BChE) penetrate into the microgel interior and be localized inside/outside the microgel particles. Different spatial localization, however, does not affect the specific enzymatic responses of ChO or BChE and does not prevent cascade enzymatic reaction involving both BChE and ChO as well. This was shown by the methods of electrochemical impedance spectroscopy (EIS), atomic force microscopy (AFM), and amperometric analysis of enzymatic responses of immobilized enzymes. Thus, a novel simple and fast strategy for physical entrapment of biomolecules by the polymeric matrix was proposed, which can be used for engineering systems with spatially separated enzymes of different types.", "corpus_id": 19050604, "score": 0, "title": "Engineering Systems with Spatially Separated Enzymes via Dual-Stimuli-Sensitive Properties of Microgels." }
{ "abstract": "International trade of agricultural goods impacts local water scarcity. By quantifying the effect of trade on crop production on grid-cell level and combining it with cell- and crop-specific virtual water contents, we are able to determine green and blue water consumption and savings. Connecting the information on trade-related blue water usage to water shadow prices gives us the possibility to value the impact of international food crop trade on local blue water resources. To determine the trade-related value of the blue water usage, we employ two models: first, an economic land- and water-use model, simulating agricultural trade, production and water-shadow prices and second, a global vegetation and agricultural model, modeling the blue and green virtual water content of the traded crops. Our study found that globally, the international trade of food crops saves blue water worth 2.4 billion US$. This net saving occurs despite the fact that Europe exports virtual blue water in food crops worth 3.1 billion US$. Countries in the Middle East and South Asia profit from trade by importing water intensive crops, countries in Southern Europe on the other hand export water intensive agricultural goods from water scarce sites, deteriorating local water scarcity.", "corpus_id": 153419217, "title": "Valuing the impact of trade on local blue water" }
{ "abstract": "The development of approaches to tackle the European Union (EU) water-related challenges and shift towards sustainable water management and use is one of the main objectives of Horizon 2020, the EU strategy to lead a smart, sustainable and inclusive growth. The EU is an increasingly water challenged area and is a major agricultural trader. As agricultural trade entails an exchange of water embodied in goods as a factor of production, this study investigates the region's water-food-trade nexus by analysing intra-regional virtual water trade (VWT) in agricultural products. The analysed period (1993-2011) comprises the enactment of the Water Framework Directive (WFD) in the year 2000. Aspects of the VWT that are relevant for the WFD are explored. The EU is a net importer of virtual water (VW) from the rest of the world, but intra-regional VWT represents 46% of total imports and 75% of total exports. Five countries account for 60% of total VW imports (Germany, France, Italy, The Netherlands, Belgium) and 65% of total VW exports (The Netherlands, France, Germany, Belgium and Spain). Intra-EU VWT more than doubled over the period considered, while trade with extra-EU countries did not show such a marked trend. In the same period, blue VWT increased significantly within the region and net import from the rest of the world slightly decreased. Water scarce countries, such as Spain and Italy, are major exporters of blue water in the region. The traded volumes of VW have been increasing almost monotonically over the years, and with a substantial increase after 2000. The overall trend in changes in VWT does not seem to be in accordance with the WFD goals. This study demonstrated that VWT analyses can help evaluate intertwining effects of water, agriculture and trade policies which are often made separately in respective sectors.", "corpus_id": 3862044, "title": "Intra-EU agricultural trade, virtual water flows and policy implications." }
{ "abstract": "THis PAPER derives decision rules for corporate capital budgeting when cash flows resulting from the acceptance of projects are risky. The decision rules are in terms of a project's discounted value and rate of return, the two familiar measures of investment profitability under certainty.' The analysis of capital projects assumes throughout that stock values are determined in perfect capital markets and also ignores complications which arise due to taxation of corporations and individuals. The results are therefore of theoretical rather than direct practical relevance. A capital project may be defined as profitable, and should be accepted by a shareholder's wealth maximizing corporation if, and only if, acceptance of it adds to the stock value of the corporation more than it costs the shareholders. The prerequisite for developing capital budgeting decision rules is, therefore, a theory of stock valuation. The questions answered in this paper include the following. What is the stock value of the corporation in terms of its future, risky cash flow, earnings, and dividends? What determines the cost of capital to a corporation? How can the relevant or significant risk of a capital project be measured? Given measures of the relevant risk of a project's cash flows, how can the project be evaluated? Sharpe [14] and Lintner [5, 6] have shown convincingly that the stock value of a corporation cannot be determined from information regarding the size and risk of the corporation's dividend stream alone. Some risk attaching to returns on a corporation's stock can be avoided by shareholders holding a portfolio of many stocks. Risk that cannot be avoided by holding a well diversified portfolio of stocks is 'significant' risk and is relevant to valuation. The implication of portfolio analysis is clear. Stock values of individual corporations can only be determined when stocks are viewed as part of a portfolio. Conventional analyses of stock value in terms of the size and risk of corporate earnings or dividends such as found in Solomon [16] are inadequate. Section II of this paper develops a theory of stock valuation, using", "corpus_id": 154805631, "score": 1, "title": "PORTFOLIO ANALYSIS, STOCK VALUATION AND CAPITAL BUDGETING DECISION RULES FOR RISKY PROJECTS" }
{ "abstract": "Air quality is a critical matter of concern in terms of the impact on public health and well-being. Although the consequences of poor air quality are more severe in developing countries, they also have a critical impact in developed countries. Healthcare costs due to air pollution reach $150 billion in the USA, whereas particulate matter causes 412,000 premature deaths in Europe, every year. According to the Environmental Protection Agency (EPA), indoor air pollutant levels can be up to 100 times higher in comparison to outdoor air quality. Indoor air quality (IAQ) is in the top five environmental risks to global health and well-being. The research community explored the scope of artificial intelligence (AI) in the past years to deal with this problem. The IAQ prediction systems contribute to smart environments where advanced sensing technologies can create healthy living conditions for building occupants. This paper reviews the applications and potential of AI for the prediction of IAQ to enhance building environment and public health. The results show that most of the studies analyzed incorporate neural networksbased models and the preferred evaluation metrics are RMSE, R2 score and error rate. Furthermore, 66.6% of the studies include CO2 sensors for IAQ assessment. Temperature and humidity parameters are also included in 90.47% and 85.71% of the proposed methods, respectively. This study also presents some limitations of the current research activities associated with the evaluation of the impact of different pollutants based on different geographical conditions and living environments. Moreover, the use of reliable and calibrated sensor networks for real-time data collection is also a significant challenge.", "corpus_id": 222094522, "title": "Indoor air quality prediction systems for smart environments: A systematic review" }
{ "abstract": "Indoor air quality analysis is of interest to understand the abnormal atmospheric phenomena and external factors that affect air quality. By recording and analyzing quality measurements, we are able to observe patterns in the measurements and predict the air quality of near future. We designed a microchip made out of sensors that is capable of periodically recording measurements, and proposed a model that estimates atmospheric changes using deep learning. In addition, we developed an efficient algorithm to determine the optimal observation period for accurate air quality prediction. Experimental results with real-world data demonstrate the feasibility of our approach.", "corpus_id": 3581734, "title": "Indoor Air Quality Analysis Using Deep Learning with Sensor Data" }
{ "abstract": "When building ventilation is reduced, energy is saved because it is not necessary to heat or cool as much outside air. Reduced ventilation can result in higher levels of carbon dioxide, which may cause building occupants to experience symptoms. Heating or cooling for ventilation air can be enhanced by a DCV system, which can save energy while providing a comfortable environment. Carbon dioxide concentrations within a building are often used to indicate whether adequate fresh air is being supplied to the building. These DCV systems use carbon dioxide sensors in each space or in the return air and adjust the ventilation based on carbon dioxide concentration; the higher the concentration, the more people occupy the space relative to the ventilation rate. With a carbon dioxide sensor DCV system, the fresh air ventilation rate varies based on the number ofpeople in the space, saving energy while maintaining a safe and comfortable environment.", "corpus_id": 1804545, "score": -1, "title": "Carbon Dioxide Detection and Indoor Air Quality Control." }
{ "abstract": "Since the identification of HIV in 1984, the search for a safe and effective vaccine has been relentless. While investigator-initiated research has provided substantial information regarding HIV disease and pathogenesis, and over two dozen drugs are licensed in the USA to treat HIV, the global epidemic continues unabated. Early in HIV vaccine research, the pharmaceutical industry took the initiative to produce products for clinical testing. As the likelihood of a quick success decreased, private investment waned. The public sector responded with novel mechanisms to engage industry while continuing to support academic investigators. HIV vaccine research continues to rely on the creativity of individual investigators, as well as collaborations that vary in size and complexity and offer opportunities for the efficient use of resources and accelerated progress.", "corpus_id": 123059, "title": "A new era in HIV vaccine development" }
{ "abstract": "Background: Experimental observations suggest that human cancer cells actively interact with normal host cells and this cross-talk results, in most instances, in an increased potential of cancer cells to survive. On the other hand, it is also well documented that on rare occasions tumors can be dramatically destroyed by the host's immune response. Objective: In this review, we argue that understanding the mechanisms that bring about the immune response and lead to cancer destruction is of paramount importance for the design of future rational therapies. Methods: Here we summarize the present understanding of the phenomenology leading to cancer regression in humans and propose novel strategies for a more efficient study of human cancer under natural conditions and during therapy. Conclusion: The understanding of tumor/host interactions within the tumor microenvironment is a key component of the study of tumor immunology in humans, much can be learned by a dynamic study of such interactions at time points related to the natural history of the disease or its response to therapy. Such understanding will eventually lead to novel and more effective therapies.", "corpus_id": 72891783, "title": "Spontaneous and treatment-induced cancer rejection in humans" }
{ "abstract": "In this paper we propose a novel method for the automatic computation and digital fabrication of artistic string images. String art is a technique used by artists for the creation of abstracted images which are composed of straight lines of strings tensioned between pins distributed on a frame. Together the strings fuse to a perceptible image. Traditionally, artists craft such images manually in a highly sophisticated and tedious design process. To achieve this goal fully automatically we propose a computational setup driven by a discrete optimization algorithm which takes an ordinary picture as input and converts it into a connected graph of strings that tries to reassemble the input image best possibly. Furthermore, we propose a hardware setup for automatic digital fabrication of these images using an industrial robot that spans the strings. Finally, we demonstrate the applicability of our approach by generating and fabricating a set of real string art images.", "corpus_id": 1413983, "score": 0, "title": "String Art: Towards Computational Fabrication of String Images" }
{ "abstract": "High energy particle radiations induce severe microstructural damage in metallic materials. Nanoporous materials with a giant surface-to-volume ratio may alleviate radiation damage in irradiated metallic materials as free surface are defect sinks. Here we show, by using in situ Kr ion irradiation in a transmission electron microscope at room temperature, that nanoporous Au indeed has significantly improved radiation tolerance comparing with coarse-grained, fully dense Au. In situ studies show that nanopores can absorb and eliminate a large number of radiation-induced defect clusters. Meanwhile, nanopores shrink (self-heal) during radiation, and their shrinkage rate is pore size dependent. Furthermore, the in situ studies show dose-rate-dependent diffusivity of defect clusters. This study sheds light on the design of radiation-tolerant nanoporous metallic materials for advanced nuclear reactor applications.", "corpus_id": 3245861, "title": "In situ heavy ion irradiation studies of nanopore shrinkage and enhanced radiation tolerance of nanoporous Au" }
{ "abstract": "The key to perfect radiation endurance is perfect recovery. Since surfaces are perfect sinks for defects, a porous material with a high surface to volume ratio has the potential to be extremely radiation tolerant, provided it is morphologically stable in a radiation environment. Experiments and computer simulations on nanoscale gold foams reported here show the existence of a window in the parameter space where foams are radiation tolerant. We analyze these results in terms of a model for the irradiation response that quantitatively locates such window that appears to be the consequence of the combined effect of two length scales dependent on the irradiation conditions: (i) foams with ligament diameters below a minimum value display ligament melting and breaking, together with compaction increasing with dose (this value is typically ∼5 nm for primary knock on atoms (PKA) of ∼15 keV in Au), while (ii) foams with ligament diameters above a maximum value show bulk behavior, that is, damage accumulation (few hundred nanometers for the PKA's energy and dose rate used in this study). In between these dimensions, (i.e., ∼100 nm in Au), defect migration to the ligament surface happens faster than the time between cascades, ensuring radiation resistance for a given dose-rate. We conclude that foams can be tailored to become radiation tolerant.", "corpus_id": 5484272, "title": "Are nanoporous materials radiation resistant?" }
{ "abstract": "Abstract Using atomistic modeling, we show that restructuring of the network of interconnected ligaments causes coarsening in a model of nanoporous gold. The restructuring arises from the collapse of some ligaments onto neighboring ones and is enabled by localized plasticity at ligaments and nodes. This mechanism may explain the occurrence of enclosed voids and reduction in volume in nanoporous metals during their synthesis. An expression is developed for the critical ligament radius below which coarsening by network restructuring may occur spontaneously, setting a lower limit to the ligament dimensions of nanofoams.", "corpus_id": 53981910, "score": 2, "title": "Coarsening by network restructuring in model nanoporous gold" }
{ "abstract": "Abstract The reinforcement of premating barriers due to reduced hybrid fitness in sympatry may cause secondary sexual isolation within a species as a by-product. Consistent with this, in the fly Drosophila subquinaria, females that are sympatric with D. recens mate at very low rates not only with D. recens, but also with conspecific D. subquinaria males from allopatry. Here, we ask if these effects of reinforcement cascade more broadly to affect sexual isolation with other closely related species. We assay reproductive isolation of these species with D. transversa and find that choosy D. subquinaria females from the region sympatric with D. recens discriminate strongly against male D. transversa, whereas D. subquinaria from the allopatric region do not. This increased sexual isolation cannot be explained by natural selection to avoid mating with this species, as they are allopatric in geographic range and we do not identify any intrinsic postzygotic isolation between D. subquinaria and D. transversa. Variation in epicuticular hydrocarbons, which are used as mating signals in D. subquinaria, follow patterns of premating isolation: D. transversa and allopatric D. subquinaria are most similar to each other and differ from sympatric D. subquinaria, and those of D. recens are distinct from the other two species. We suggest that the secondary effects of reinforcement may cascade to strengthen reproductive isolation with other species that were not a target of selection. These effects may enhance the divergence that occurs in allopatry to help explain why some species are already sexually isolated upon secondary contact.", "corpus_id": 3550267, "title": "Patterns of reproductive isolation in the Drosophila subquinaria complex: can reinforced premating isolation cascade to other species?" }
{ "abstract": "BackgroundEvolutionary novelties, be they morphological or biochemical, fascinate both scientists and non-scientists alike. These types of adaptations can significantly impact the biodiversity of the organisms in which they occur. While much work has been invested in the evolution of novel morphological traits, substantially less is known about the evolution of biochemical adaptations.MethodsIn this review, we present the results of literature searches relating to one such biochemical adaptation: α-amanitin tolerance/resistance in the genus Drosophila.ResultsAmatoxins, including α-amanitin, are one of several toxin classes found in Amanita mushrooms. They act by binding to RNA polymerase II and inhibiting RNA transcription. Although these toxins are lethal to most eukaryotic organisms, 17 mushroom-feeding Drosophila species are tolerant of natural concentrations of amatoxins and can develop in toxic mushrooms. The use of toxic mushrooms allows these species to avoid infection by parasitic nematodes and lowers competition. Their amatoxin tolerance is not due to mutations that would inhibit α-amanitin from binding to RNA polymerase II. Furthermore, the mushroom-feeding flies are able to detoxify the other toxin classes that occur in their mushroom hosts. In addition, resistance has evolved independently in several D. melanogaster strains. Only one of the strains exhibits resistance due to mutations in the target of the toxin.ConclusionsGiven our current understanding of the evolutionary relationships among the mushroom-feeding flies, it appears that amatoxin tolerance evolved multiple times. Furthermore, independent lines of evidence suggest that multiple mechanisms confer α-amanitin tolerance/resistance in Drosophila.", "corpus_id": 47019693, "title": "Drosophila, destroying angels, and deathcaps! Oh my! A review of mycotoxin tolerance in the genus Drosophila" }
{ "abstract": "This study was aimed to investigate the in vitro permeation potential of hydrogel formulations containing the isoflavones formononetin and biochanin A and cyclodextrins in different combinations.", "corpus_id": 4805077, "score": 1, "title": "Hydroxypropyl‐β‐cyclodextrin‐containing hydrogel enhances skin formononetin permeation/retention" }
{ "abstract": "In order to determine whether there is a relationship between acquired free protein S deficiency and increased thrombin generation, we performed a cross-sectional study of patients with systemic lupus erythematosus (SLE). Plasma samples were assayed for free protein S and were correlated to levels of prothrombin fragments (F1 + 2); an elevated level of F1 + 2 was used as a surrogate marker for a prothrombotic state. Assays for anticardiolipin antibodies (ACA) and lupus anticoagulant (LA) were performed on two separate blood samples taken at least 3 months apart in order to detect the presence of antiphospholipid antibodies. Of the 36 subjects, 9 had reduced free protein S levels compared to 0 of 21 controls (P = 0.01) and the mean free protein S level was significantly lower in the SLE population than in controls (0.30 +/- 0.08 U/mL versus 0.43 +/- 0.10 U/mL, P < 0.001). Of the 24 subjects with antiphospholipid antibodies, 9 had reduced free protein S levels, compared to 0 of 12 subjects without antiphospholipid antibodies (P = .01). The mean F1 + 2 level was significantly higher in study subjects with reduced free protein S levels than in those with normal free protein S levels (1.22 +/- 0.50 nmol/L versus 0.78 +/- 0.27 nmol/L, P = 0.05). This study confirms an association between antiphospholipid antibodies and reduced free protein S levels and demonstrates that patients with SLE and acquired free protein S deficiency generate more thrombin than patients with SLE and normal free protein S levels. Further studies are needed to determine whether the thrombotic diathesis associated with the presence of antiphospholipid antibodies is directly caused by the concomitant presence of acquired free protein S deficiency.", "corpus_id": 1470077, "title": "Acquired free protein S deficiency is associated with antiphospholipid antibodies and increased thrombin generation in patients with systemic lupus erythematosus." }
{ "abstract": "PURPOSE\nTo determine if abnormalities in the protein C/protein S anticoagulant system exist in patients with phospholipid antibodies who had the primary clinical complaint of fetal wastage.\n\n\nPATIENTS AND METHODS\nEleven patients with fetal wastage and phospholipid antibodies were selected for study. Some patients also gave a history of previous thrombotic events related to oral contraceptives and/or pregnancy, but patients were not selected because of a history of clinical thrombosis. The levels of protein C (chromogenic assay), protein S (both free and bound) (Laurell rocket), and C4b-binding protein (Laurell rocket) were measured, and assays for the presence of antibodies against protein S or protein C were performed.\n\n\nRESULTS\nSeven of the 11 patients were found to have low levels of free protein S. Total protein S and protein C levels were within the normal range in all patients. Antibodies to protein C and protein S were not found in any patient. These findings suggest that free protein S levels may be abnormally low in some patients with phospholipid antibodies.\n\n\nCONCLUSION\nFree protein S levels are abnormally low in some patients with phospholipid antibodies, and this abnormality may be a factor contributing to the thrombotic diathesis associated with phospholipid antibodies.", "corpus_id": 20687396, "title": "The thrombotic diathesis associated with the presence of phospholipid antibodies may be due to low levels of free protein S." }
{ "abstract": "There is some controversy in Australia over the role of regional universities in the economic development of their regions. This paper assumes that regional universities can be valuable additions to regional development. To avoid the Grattan ‘taxpayer-money-recycled’ critiques, this paper examines students who provide other people’s money, notably international education students in the Northern Territory (NT) of Australia. The case is made that international education exports are a valuable part of the suite of the NT’s exports. It is posited that over the next decade the Territory’s international education exports can triple and the sector become the Territory’s fifth largest exporter and the second largest services exporter.", "corpus_id": 158978449, "score": 0, "title": "A Test of the Role of Universities in Regional Development: The Case of International Education Students in the Northern Territory" }
{ "abstract": "The parameters affecting the absolute radiochemical yield of the isotopic exchange reaction between radioiodine (125I-) and iodohippuric acid isomers on molten ammonium acetate as a medium exchange at 120 degrees C without any carrier added (radioiodine, 125I-) was determined. The isotopic exchange reactions of radioiodine as 125I- for iodine-127 of o- and p-iodohippuric acid isomers occur more rapidly than m-iodohippuric acid isomer. These reactions proceed by nucleophilic second order substitution reaction. The kinetics and thermodynamic parameters of these isotopic exchange reactions were determined. The absolute radiochemical yield and radio pharmaceutical purity were determined by HPLC and TLC techniques.", "corpus_id": 1500834, "title": "Chemical and thermodynamic characteristics of the isotopic exchange reaction between radioiodine and iodohippuric acid isomers." }
{ "abstract": "This study describe the organic synthesis of 2-iodobenzamido (2-N-nitrobenzen-5, 6, 7, 8 tetrahydrobenzothieno [2, 3-d]) pyrimidine -4-(3H) one as an example for some pyrimidine derivative used a new series of as potential cancer chemotherapeutic agents. The precursor derivative is ?-(2-iodobenzamido)-? - (4-nitrophenyl)-N- [3-ethoxy-carbonyl-4, 5, 6, 7-tetrahydrobenzothiophen-2-l] acrylic acid amide which react with hydrazine hydrate. The purification process was done via crystallization using solvent ethanol. The overall yield 78% the structure of the synthesized compound was confirmed by correct analytical and spectral data .Also, The synthesized compound was labeled with radioactive iodine -125 via nucleophilic substitution reaction ,in the presence cuprous chloride, the labeling process was carried out at 95oC for 60 min. the radiochemical yield was determined by using thin layer chromatography and the yield is equal to 80%.Preliminary in-vivo study was examined in normal mice were performed after intravenous injection through the tail vein and the data show the labeling compound was cleared quickly from most body organs. The radioiodinated compound showed high brain uptake .The results in this study suggest that radioiodinated pyrimidine derivative may be useful as cancer chemotherapeutic agents. Keywords : Pyrimidine Derivatives / Iodine -125/ Tissues Distribution.", "corpus_id": 40196377, "title": "Optimization of labeled 125I- pyrimidine derivative and its biological evaluation" }
{ "abstract": "clear transformation under selected experimental and preparative conditions are discussed. A 123Xe-*123I kit that could be supplied to hospitals is described. Sodd, et al (2-7) have provided an exhaustive evaluation of the accelerator production of 123I.Fur ther exploratory studies by Lambrecht, et al (1,8) have verified the suggestion that the production of I23I by the 123Xe generator coupled with rigorous chemical scrubbing (1,8) results in ^99.8% radionuclidic purity as 123I.The only radiohalogen con taminant is ^0.2% 125I.The 12:'Xedecays by posi tron emission and electron capture with a 2.1-hr half-life to 123I (T1/2 = 13.3 hr). The 125I (T1/2 = 60 days) contaminant results from the decay of 125Xe (T1/2 = 16.8 hr) which is produced simultaneously with the 123Xe. For the cyclotron production of 123I we recommend the 122Te(4He,3n)123Xe and 123Xe(/î+,EC)123I nuclear reactions with E4jje ~ 45-36 MeV. The alternate but as yet unexplored possibilities are the 12-Te(d,n)123Iand I24Te(p,2n)123I nuclear reactions which might be feasible if ultra-high purity 122Te or 124Te were commercially available. Extensive calculations (9,10) reported in 1969 and preliminary (11) experiments have indicated that multicurie quantities of 123Xe can be produced in high yield and purity with the I27I(p,5n)123Xe nu clear reaction. Subsequently, Fusco, et al (12) have verified the radiochemical purity and yield obtainable with the spallation reaction. The production of 123Xe", "corpus_id": 70657805, "score": 2, "title": "Kit for Carrier-Free 123I-Sodium Iodide. VIII" }
{ "abstract": "PURPOSE\nInvestigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities.\n\n\nMETHODS\nThree patients had three independent dose reduced CCTA performed and reconstructed with 30% ASIR (CTDIvol at 6.7 mGy), 60% ASIR (CTDIvol 4.3 mGy) and Veo (CTDIvol at 1.9 mGy). Coronary plaque analysis was performed for each measured CCTA volumes, plaque burden and intensities.\n\n\nRESULTS\nPlaque volume and plaque burden show a decreasing tendency from ASIR to Veo as median volume for ASIR is 314 mm3 and 337 mm3-252 mm3 for Veo and plaque burden is 42% and 44% for ASIR to 39% for Veo. The lumen and vessel volume decrease slightly from 30% ASIR to 60% ASIR with 498 mm3-391 mm3 for lumen volume and vessel volume from 939 mm3 to 830 mm3. The intensities did not change overall between the different reconstructions for either lumen or plaque.\n\n\nCONCLUSION\nWe found a tendency of decreasing plaque volumes and plaque burden but no change in intensities with the use of low dose Veo CCTA (1.9 mGy) compared to dose reduced ASIR CCTA (6.7 mGy & 4.3 mGy), although more studies are warranted.", "corpus_id": 2188240, "title": "First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography." }
{ "abstract": "This manuscript has been written as a follow-up to the \"AI/ML great debate\" featured at the 2021 Society of Cardiovascular Computed Tomography (SCCT) Annual Scientific Meeting. In debate style, we highlighti the need for expectation management of AI/ML, debunking the hype around current AI techniques, and countering the argument that in its current day format AI/ML is the \"silver bullet\" for the interpretation of daily clinical CCTA practice.", "corpus_id": 250975356, "title": "Great debates in cardiac computed tomography: OPINION: \"Artificial intelligence and the future of cardiovascular CT - Managing expectation and challenging hype\"." }
{ "abstract": "Using data from a longitudinal study of the recently retired we attempt to separate the moral hazard effect of Medicare supplementary (Medigap) insurance on health care expenditures from the adverse selection effect of poor health on Medigap coverage. We find evidence of adverse selection, but its magnitude is unlikely to create serious efficiency problems. Taking adverse selection into account reduces the estimate of the moral hazard effect. In addition, we find a strong positive wealth effect on the demand for supplementary insurance.", "corpus_id": 4179070, "score": 1, "title": "Adverse selection, moral hazard, and wealth effects in the Medigap insurance market." }
{ "abstract": "We extend the methodology of a two-step profit function to obtain area and yield elasticities. We then estimate the effects of price and technology on crop output of France, Germany, and the UK. Area elasticities were obtained by adding area shadow price equations to the standard dual model of output and input equations. Change in output is dominated by technology in the UK and mixed in France and Germany. The results indicate policies affecting price will have diverse responses across countries and crops.", "corpus_id": 153155742, "title": "SUPPLY RESPONSE IN FRANCE, GERMANY, AND THE UK: TECHNOLOGY AND PRICE" }
{ "abstract": "We estimate yield-price elasticities by blending information from market-based datasets with experimental production data using a Bayesian procedure. Yield-price elasticities are dictated by features of the underlying production technology; therefore, data on crop response to relevant inputs provide extra information about parameters of interest. Bayesian econometrics allows for the joint and simultaneous estimation of all model parameters. The procedure is advocated in situations where field trail or experimental data are available to provide additional information helping recovering production technology parameters with higher precision.", "corpus_id": 158573536, "title": "Crop yield responses to prices: a Bayesian approach to blend experimental and market data" }
{ "abstract": "International donors invest billions of dollars to conserve ecosystems in low-income nations. The most common investments aim to encourage commercial activities, such as ecotourism, that indirectly generate ecosystem protection as a joint product. We demonstrate that paying for ecosystem protection directly can be far more cost-effective. Although direct-payment initiatives have imposing institutional requirements, we argue that all conservation initiatives face similar challenges. Thus conservation practitioners would be well advised to implement the first-best direct-payment approach, rather than a second-best policy option. An empirical example illustrates the spectacular cost savings that can be realized by direct-payment initiatives. (JEL H21, Q28)", "corpus_id": 1266110, "score": 1, "title": "The Cost-Effectiveness of Conservation Payments" }
{ "abstract": "Banerjee, Dolado and Mestre (J. Time Ser. Anal. 19 (1998) 267-283) introduce an error-correction test for the null hypothesis of no cointegration. The present paper supplements their work. They provide critical values for regressions with and without detrending. Here it is shown that the latter are not appropriate if the series display linear trends. This does not mean that detrending is required. Correct percentiles are suggested for the case that series follow linear time trends but tests are based on regressions without detrending. They are readily available from the literature.", "corpus_id": 153527645, "title": "Cointegration Testing in Single Error-Correction Equations in the Presence of Linear Time Trends" }
{ "abstract": "Estimation and inference in cointegrated models is examined in the presence of deterministic trends in the data. It is suggested that trends be excluded in the levels regression for maximal efficiency. Fully modified test statistics are asymptotically chi-square. A chi-square test for the validity of trend exclusion is presented. The asymptotic distributions of standard cointegration test statistics are shown to depend both upon regressor trends and estimation detrending methods.", "corpus_id": 8278686, "title": "Efficient estimation and testing of cointegrating vectors in the presence of deterministic trends" }
{ "abstract": "This is a presentation of recent work on quantum permutation groups. Contains: a short introduction to operator algebras and Hopf algebras; quantum permutation groups, and their basic properties; diagrams, integration formulae, asymptotic laws, matrix models; the hyperoctahedral quantum group, free wreath products, quantum automorphism groups of finite graphs, graphs having no quantum symmetry; complex Hadamard matrices, cocycle twists of the symmetric group, quantum groups acting on 4 points; remarks and comments.", "corpus_id": 14482973, "score": 1, "title": "Quantum permutation groups: a survey" }
{ "abstract": "ABSTRACT This study compared the performance of a commercial chromogenic medium, CHROMagarECC (CECC), and CECC supplemented with sodium pyruvate (CECCP) with the membrane filtration lauryl sulfate-based medium (mLSA) for enumeration of Escherichia coli and non-E. coli thermotolerant coliforms (KEC). To establish that we could recover the maximum KEC and E. coli population, we compared two incubation temperature regimens, 41 and 44.5°C. Statistical analysis by the Fisher test of data did not demonstrate any statistically significant differences (P = 0.05) in the enumeration of E. coli for the different media (CECC and CECCP) and incubation temperatures. Variance analysis of data performed on KEC counts showed significant differences (P = 0.01) between KEC counts at 41 and 44.5°C on both CECC and CECCP. Analysis of variance demonstrated statistically significant differences (P = 0.05) in the enumeration of total thermotolerant coliforms (TTCs) on CECC and CECCP compared with mLSA. Target colonies were confirmed to be E. coli at a rate of 91.5% and KEC of likely fecal origin at a rate of 77.4% when using CECCP incubated at 41°C. The results of this study showed that CECCP agar incubated at 41°C is efficient for the simultaneous enumeration of E. coli and KEC from river and marine waters.", "corpus_id": 2578245, "title": "Comparison and Recovery of Escherichia coli and Thermotolerant Coliforms in Water with a Chromogenic Medium Incubated at 41 and 44.5°C" }
{ "abstract": "The Colilert (CL) and Coliquik (CQ) systems were compared in a presence-absence format against the Standard Methods membrane filtration (MF) technique to determine whether differences existed in total coliform detection. Approximately 750 water samples were collected from distribution systems, covered and uncovered storage reservoirs, well sites, and the influent to drinking water treatment plants. Samples were analyzed for total coliforms and heterotrophic bacteria with MF, CL, and CQ. The agreements between CL and MF and between CQ and MF were both greater than 94.8%, which indicates that both may be acceptable methods for total coliform detection. Disagreement between the CL and CQ methods was primarily due to false-negative results. Furthermore, laboratory and field inoculation methods were compared for CL, more than 98% agreement was obtained. This finding indicates that sampling and immediate field inoculation may be an alternative to the traditional laboratory inoculation.", "corpus_id": 9061716, "title": "Total coliform detection in drinking water: comparison of membrane filtration with Colilert and Coliquik" }
{ "abstract": "IntroductionWe examined the functional relationship between seed size and seedling performance in the valley oak (Quercus lobata Née) by means of a 13-year common garden experiment.Materials and methodsAcorns were collected from five localities throughout the range of valley oak in autumn 1997, weighed and measured, and planted at Sedgwick Reserve, Santa Barbara County, California, USA.ResultsIn the short term, larger acorns produced larger seedlings that had lower survival than seedlings from smaller acorns. In the longer term, large seeds correlated positively with both seedling size and survival, with path analyses indicating that the latter effect was primarily indirect via initial seedling size. The longer-term relative growth rate was only weakly related to seed size, being a combination of a slight positive direct influence of seed size on relative growth rate and a comparable negative indirect effect via larger initial seedling size.DiscussionThese results generally matched the predictions of the “seedling size effect hypothesis” (larger seeds yield larger seedlings with greater competitive abilities), the only one of the three hypotheses we examined that predicts an inverse relationship between seed size and initial survival and a positive relationship between seed size and longer-term relative growth rate. The factors influencing the relationships between seed size and seedling performance are complex and may involve both direct effects of seed size and indirect effects mediated through initial seedling size. Although the seedling size effect was the most important in our study, other factors may be important under different environmental conditions and/or at different growth stages.", "corpus_id": 9503899, "score": 1, "title": "Fitness consequences of seed size in the valley oak Quercus lobata Née (Fagaceae)" }
{ "abstract": "One of the most challenging problems in online signature verification is to select the best features to model the signatures. A widely used technique to address this problem is to combine different feature sets selected by different criteria. In this paper, the combination of three different feature sets, viz., an automatically selected feature set, a feature set relevant to Forensic Handwriting Experts (FHEs), and a global feature set, on the basis of a score level fusion scheme, is proposed. In order to address the problem of conflicting results appearing when several classifiers are being used, the proposed combination is performed within the framework of the Belief Function Theory (BFT). Two different models, namely, the Denoeux and the Appriou models, are used to embed the problem within this framework, where the fusion is performed resorting to two well-known combination rules, namely, the Dempster-Shafer (DS) and the Proportional Conflict Redistribution (PCR5) one. Experimental results on a publicly available database, prove that the proposed fusion scheme allows the system to have a very good trade-off between verification results and reliability.", "corpus_id": 944551, "title": "Feature combination based on Belief Function Theory for online signature verification" }
{ "abstract": "The objective of this work is to present a signature verification system based on combination of off-line and online systems for managing conflict provided by the Support Vector Machine (SVM) classifiers. This system is basically divided into three parts: i) off-line verification stage, ii) on-line verification stage and iii) combination module using Dempster-Shafer theory (DST). The proposed framework allows combining the normalized SVM outputs and uses an estimation technique based on the dissonant model of Appriou to compute the belief assignments. Combination is performed using Dempster-Shafer (DS) rule followed by the likelihood ratio based decision making. Experiments are conducted on the well know NISDCC signature collection using false rejection and false acceptance criteria. The obtained results show that the proposed combination framework using DST yields the best verification accuracy compared to the sum rule even when individual off-line and on-line classifications provide conflicting results.", "corpus_id": 12191867, "title": "Combination of off-line and on-line signature verification systems based on SVM and DST" }
{ "abstract": "Estimating and evaluating confidence has become a key aspect of the speaker recognition problem because of the increased use of this technology in forensic applications. We discuss evaluation measures for speaker recognition and some of their properties. We then propose a framework for confidence estimation based upon scores and meta-information, such as utterance duration, channel type, and SNR. The framework uses regression techniques with multilayer perceptrons to estimate confidence with a data-driven methodology. As an application, we show the use of the framework in a speaker comparison task drawn from the NIST 2000 evaluation. A relative comparison of different types of meta-information is given. We demonstrate that the new framework can give substantial improvements over standard distribution methods of estimating confidence.", "corpus_id": 18544893, "score": 2, "title": "Estimating and evaluating confidence for forensic speaker recognition" }
{ "abstract": "Primary ocular posttransplantation lymphoproliferative disorder is rare. Epstein-Barr virus is implicated as the cause as a result of systemic immunosuppression after transplant surgery. We studied a patient who developed ocular posttransplantation lymphoproliferative disorder after orthotopic liver transplantation. Slitlamp and light microscopic photographs confirmed the diagnosis.", "corpus_id": 1906768, "title": "Posttransplantation lymphoproliferative disorder initially seen as iris mass and uveitis." }
{ "abstract": "Abstract:  Post‐transplant lymphoproliferative disorder (PTLD) is a complication of transplantation resulting from impaired immune surveillance because of pharmacologic immunosuppression. We present two cases of central nervous system (CNS) PTLD in children on calcineurin‐inhibitor free immunosuppression with dramatically different presentations and outcomes. One patient had brain and spinal cord lymphoma with a rapid and fatal course. The other patient had brain and ocular PTLD that responded to multimodal therapy with reduction of immunosuppression, high‐dose steroids, and rituximab given in a dose‐escalation protocol. This protocol may have enhanced the penetration of rituximab into the CNS. We review the literature on CNS and ocular PTLD and elaborate on the treatments available for both diseases.", "corpus_id": 10522583, "title": "Central nervous system lymphoproliferative disorder in pediatric kidney transplant recipients" }
{ "abstract": "We reported previously that patients with anterior open bite had tongue tip protrusion, slower movement of the rear part of the dorsal tongue, and earlier closure of the nasopharynx during deglutition. In the present study, the relationship between this characteristic tongue movement and maxillofacial morphology in patients with anterior open bite was investigated. The subjects were 10 female patients with anterior open bites and 10 women with normal overbites as controls. Maxillofacial morphology was measured by cephalometric radiography, and tongue movement during deglutition was analyzed by cineradiography. The relationship between each value obtained by cephalometric radiography and cineradiography was evaluated by simple correlation analysis. In the patients with anterior open bite, there were significant correlations between mandibular plane angle, ramus height of the mandible, or anteroposterior dimension of the maxilla and movement of the front part of the dorsal tongue during deglutition. Furthermore, there were also significant correlations in these patients between mandibular plane angle, gonial angle, or ramus height of the mandible and the change in the contact between tongue and palate during deglutition. The controls did not have the correlations like these. Our study suggests that characteristic tongue movements during deglutition in patients with anterior open bites are closely related to their morphological features.", "corpus_id": 30152890, "score": 1, "title": "Relationship between maxillofacial morphology and deglutitive tongue movement in patients with anterior open bite." }
{ "abstract": "One of the fundamental properties of bacterial spores is the ability to survive physical and chemical treatments which destroy the vegetative cells from which they arise. Of the resistance to deleterious agents that of heat survival is of great practical and theoretical interest. The relatively high heat resistance of the spores and the toxin production by this organism make the survival of Clostridium botulinum spores in heat processed foods a subject of considerable concern to the food technologists. The present study is concerned with the factors which may influence the heat resistance of the spores of C. botulinum. The primary considerations were (1) the relationship between the environmental conditions during sporulation and the heat tolerance of the spores and (2) the effect of some treatments of the formed spores on their heat survival.", "corpus_id": 7156407, "title": "STUDIES ON FACTORS AFFECTING THE HEAT RESISTANCE OF SPORES OF CLOSTRIDIUM BOTULINUM" }
{ "abstract": "The fine localization of mineral matter in spores of Bacillus megaterium and Bacillus cereus was studied by the technique of microincineration adapted for use with the electron microscope. The specimens, which included intact and thin-sectioned spores as well as shed spore coats, were burned either in the conventional way at high temperature or by a new technique using electrically excited oxygen at nearly room temperature. The ash residues were examined by bright field, dark field, and diffraction in the electron microscope and also with the phase contrast microscope. In some cases, the specimen was previewed in both microscopes before incineration. The results do not support a previous report that the mineral elements of the spore are confined to a peripheral layer, but rather indicate that the spore core as well as the coat are mineral-rich. The cortex may be deficient in minerals, but the possibility of artifact prevents a clear decision on this point. Incinerated B. megaterium spores show a highly ordered fine structure displaying 100 A periodicity in the ash of the middle layer of the coat. The nature of this structure is discussed, as is the technique which demonstrated it. The fine definition of the ash patterns, particularly those obtained with the low-temperature, excited-oxygen technique, suggests that microincineration may be generally useful in the study of fine structure. I N T R O D U C T I O N Bacterial spores contain metals and other mineral elements in quite appreciable amounts; in spores of Bacillus megaterium, for example, the incombustible residue is 11 to 12 per cent of the dry weight (39). The major mineral constituents are potassium, calcium, manganese, magnesium, copper, and phosphorus, with calcium frequently the most abundant (6, 22). Numerous studies have shown that the minerals play an important role not only in the development of the spore, but also in its unique tolerance to high temperatures. Manganese seems to be an absolute nutritional requirement for spore formation (5, 35), while calcium is highly correlated with the spore's heat resistance (1, 15, 35, 37). These considerations lead one to ask in what structures--e.g, coat, cortex, or core-of the spore the mineral elements are localized. Most of the evidence on this question is circumstantial. For example, Mayall and Robinow (23) showed that the cortex disappears during early germination at the same time that most of the calcium is released into the medium (27). They thus speculate that calcium is localized in the cortex. Consistent with 113 on A uust 1, 2017 jcb.rress.org D ow nladed fom this idea, calcium is incorporated into the developing spore during the time that the cortex is forming (48). There is also some evidence on mineral localization from analysis of isolated spore coats. This includes figures for total ash (4, 36, 47), phosphorus (11, 32, 36, 43, 47), calcium (21), magnesium (21) and manganese (21, 46) content. The results are difficult to evaluate, however, owing to the probable morphological heterogeneity of the preparations, and possible losses of mineral. A third kind of evidence was recently provided by Knaysi (17, 18) who applied the technique of microincineration (16, 25, 34). This long established method (first applied to bacterial spores in 1932 by Scott (33)) consists simply of burning the specimen on a microscope slide to remove all organic material, and then examining the pattern of ash which remains. The technique cannot localize individual metallic elements, but it can indicate whether a particular structure is rich or poor in mineral elements as a whole. Knaysi observed a ring-like residue of ash from the spores and concluded that the mineral elements must be largely concentrated in a peripheral layer, perhaps the cortex, surrounding a minerally poor core. His observations, however, were severely limited by the resolving power of the phase contrast microscope. It seemed desirable, therefore, to repeat and extend this work using the electron microscope. There have been a few previous reports of microincineration used with this instrument (8, 9, 26, 42, 45). To adapt the classical technique requires only that a suitably thin, heat-resistant specimen support be found. Thin films of aluminum and beryllium (9) or silicon monoxide on stainless steel grids (45) have been used successfully. Alternately, a thin film replica can be made of the ash deposit on a glass slide (26). The electron microscope specimen has usually been burned by high temperature, either in a furnace (26, 45) or by the electron beam within the microscope (8). Turkevich and Streznewsky, however, carried out the incineration at room temperature with a stream of electrically excited oxygen (42). Their specimens were simply carbon particles, but Turkevich suggested using the same technique on biological material (41). Independently, a similar excited oxygen technique for ashing larger carbonaceous specimens was developed by Gleit and Holland (13, 14). The present investigation used both high-temperature ashing and low-temperature, excited oxygen incineration following Gleit and Holland. The techniques were applied to intact, whole spores, as used by Knaysi, and also to thin-sectioned spores and germinated spore coats. The latter two preparations could not give reliable information on the over-all distribution of total mineral in spores, since they undoubtedly lost some material during processing, but they could show in more detail the fine localization of structurebound minerals which were retained. Bacillus megaterium spores were used for most of the experiments, but, to allow direct comparison with Knaysi's results, spores of Bacillus cereus were also employed. The work has been concerned with exploration of the technique as well as with the structure of spores, and is still in its initial phase. Results obtained so far seem to justify publication at this time, however. A brief, preliminary account of some of this work has been published elsewhere (38). M A T E R I A L S A N D M E T H O D S Devices for Incineration High-temperature incineration was performed either in an electrically heated muffle furnace or on a simple heating stage in a Kinney SC-3 vacuum evaporator. The heating stage permitted more flexible temperature control than the furnace and also allowed specimens to be heated under vacuum. For accurate determination of specimen temperature in the muffle furnace, a stainless steel block with a thermocouple junction embedded in it was placed on the floor of the furnace, and the electron microscope specimen grids, or milligram samples of spores, were placed in a small crucible resting directly on the block. The vacuum evaporator heating stage consisted of a platinum ribbon 0.13 x 6 x 54 ram, mounted horizontally in the standard filament holders (see Fig. 1). A thermocouple junction of 28-gauge chromel and alumel wires was spot welded to the under surface at the center of the ribbon. The specimen grid was placed on the upper surface directly over the junction. The thermocouple was calibrated by the melting points of various salts, a few crystals of which were placed on the ribbon in place of the specimen grid. Low-temperature, excited-oxygen incineration was carried out in a prototype version of the Tracerlab Low Temperature Asher, model LTA 500, described by Gleit and Holland (13, 14). The device consists essentially of a glass tube through which passes a 114 THE JOURNAL OF CELL BIOLOGY • VOLUME ~3, 1964 on A uust 1, 2017 jcb.rress.org D ow nladed fom s t ream of oxygen at a pressure of about 1 m m and a flow rate of 50 c c / m i n u t e (S.T.P.) . At the u p s t r e a m end, a coil su r round ing the tube imposes a h igh voltage, radio f requency electromagnet ic field on the gas, p roduc ing an electrodeless r ing discharge. This is seen as a purple glow, which extends a considerable dis tance d o w n s t r e a m f rom the coil. T h e spec imen to be incinera ted is placed in the glow at the downs t r eam end of the tube where the t empera ture is only slightly above ambient . T h e atomic oxygen and other metas table species formed by the discharge have sufficient energy to complete ly oxidize all organic c o m p o u n d s (14) but the react ion rate is found to vary greatly wi th different kinds of specimen. Surfaceto-volume ratio", "corpus_id": 250018611, "title": "Whole Spore Preparations" }
{ "abstract": "During the past 8 or 10 years bacterial vaccines have been used extensively, but, on the whole, with very unsatisfactory results. Were it not for the excellent theoretical foundation on which vaccine therapy rests, its use would probably have been relegated to the past. Bacterial vaccines, as used today, judged by the practical results obtained, are of little or no value aside from typhoid vaccine as used prophylactically and the undisputed value of staphylococcic and B. acne vaccines, in certain types of cases. Protective immunization experiments with killed cultures on laboratory animals have likewise not been crowned with marked success. Facts such as these may readily be gleaned from a survey of the literature. The conclusion would therefore seem justified that the dead organisms do not offer a suitable antigen for immunization processes. During the past year while conducting some perfusion experiments on rabbits, we were much impressed by the rapidity with which bacteria were taken up by phagocytes. The particular experiment in question concerned the perfusion of the liver of rabbits with emulsions of staphylococci. By inserting cannulae in the portal vein and superior vena cava the liver could readily be perfused with Locke's fluid, containing large quantities of staphylococci in suspension. It was found that a fluid containing 9,000,000 organisms per cubic centimeter could be sterilized in a few minutes by being passed through this organ. The endothelial cells of the liver, on section, were found to be literally packed with bacteria following this operation. This observation suggested that vaccines injected into an individual for the purpose of", "corpus_id": 72405103, "score": 2, "title": "The Effect of High Pressures on Bacteria" }
{ "abstract": "Hybrid Analog-Digital transceivers are employed with the view to reduce the hardware complexity and the energy consumption in millimeter wave/large antenna array systems by reducing the number of their Radio Frequency (RF) chains. However, the analog processing network requires power for its operation and it further introduces power losses, dependent on the number of the transceiver antennas and RF chains, that have to be compensated. Thus, the reduction in the power consumption is usually much less than it is expected and given that the hybrid solutions present in general inferior spectral efficiency than a fully digital one, it is possible for the former to be less energy efficient than the latter in several cases. Existing approaches propose hybrid solutions that maximize the spectral efficiency of the system without providing any insight on their actual energy requirements/efficiency. To that end, in this paper, a novel algorithmic framework is developed based on which energy efficient hybrid transceiver designs are developed and their performance is examined with respect to the employed number of RF chains. Solutions are proposed for fully and partially connected hybrid architectures. Numerical results provide insight on when a hybrid transceiver is the most energy efficient solution or not.", "corpus_id": 2275607, "title": "On the energy-efficiency of hybrid analog-digital transceivers for large antenna array systems" }
{ "abstract": "Energy-efficiency, high data rates and secure communications are essential requirements of the future wireless networks. In this paper, optimizing the secrecy energy efficiency is considered. The optimal beamformer is designed for a MISO system with and without considering the minimum required secrecy rate. Further, the optimal power control in a SISO system is carried out using an efficient iterative method, and this is followed by analyzing the trade-off between the secrecy energy efficiency and the secrecy rate for both MISO and SISO systems.", "corpus_id": 6089041, "title": "Secrecy energy efficiency optimization for MISO and SISO communication networks" }
{ "abstract": "Massive MIMO systems promise high spectrum efficiency by deploying M ≫ 1 antennas at the base station (BS). However, to achieve the full gain provided by massive MIMO, the BS requires M radio frequency (RF) chains, which are expensive. This motivates us to consider RF-chain limited massive MIMO systems with M antennas but only S ≪ M RF chains. We propose a two-stage precoding scheme to efficiently exploit the large spatial degree of freedom (DoF) gain in massive MIMO systems with limited RF chains and reduced channel state information (CSI) signaling overhead. In this scheme, the MIMO precoder is partitioned into a high-dimensional phase only RF precoder followed by a low-dimensional baseband precoder. The RF precoder is adaptive to the spatial correlation matrices for inter-cluster interference mitigation. The baseband precoder is adaptive to the reduced dimensional “effective” CSI for intra-cluster spatial multiplexing. We formulate the two stage precoding problem such that the minimum (weighted) average data rate of users is maximized under the phase only constraint on the RF precoder and the limited RF chain constraint. This is a combinatorial optimization problem which is in general NP-hard. We propose a low complexity solution based on a novel bi-convex approximation approach. Simulations show that the proposed design has significant gain over various baselines.", "corpus_id": 3729519, "score": 2, "title": "Phase Only RF Precoding for Massive MIMO Systems With Limited RF Chains" }
{ "abstract": "We suggest a new approach to Artin's constant that leads to its representation as an infinite sum divided by another infinite sum. The same approach works well for Stephens' constant and higher rank Artin's constants. The main results are theoretical but there are interesting experimental and computational aspects.", "corpus_id": 5351288, "title": "A note on Artin's constant" }
{ "abstract": "We humbly and briefly offer corrections and supplements to Mathematical Constants (2003) and Mathematical Constants II (2019), both published by Cambridge University Press. Comments are always welcome.", "corpus_id": 17856771, "title": "Errata and Addenda to Mathematical Constants" }
{ "abstract": "We assume the generalized Riemann hypothesis and prove an asymptotic formula for the number of primes for which F p * can be generated by r given multiplicatively independent numbers. In the case when the r given numbers are primes, we express the density as an Euler product and apply this to a conjecture of Brown-Zassenhaus (J. Number Theory 3 (1971), 306-309). Finally, in some examples, we compare the densities approximated with the natural densities calculated with primes up to 9 10 4 .", "corpus_id": 600157, "score": 2, "title": "On the r-rank Artin Conjecture" }
{ "abstract": "BackgroundAtlantic halibut (Hippoglossus hippoglossus) is a high-value, niche market species for cold-water marine aquaculture. Production of monosex female stocks is desirable in commercial production since females grow faster and mature later than males. Understanding the sex determination mechanism and developing sex-associated markers will shorten the time for the development of monosex female production, thus decreasing the costs of farming.ResultsHalibut juveniles were masculinised with 17 α-methyldihydrotestosterone (MDHT) and grown to maturity. Progeny groups from four treated males were reared and sexed. Two of these groups (n = 26 and 70) consisted of only females, while the other two (n = 30 and 71) contained balanced sex ratios (50% and 48% females respectively). DNA from parents and offspring from the two mixed-sex families were used as a template for Restriction-site Associated DNA (RAD) sequencing. The 648 million raw reads produced 90,105 unique RAD-tags. A linkage map was constructed based on 5703 Single Nucleotide Polymorphism (SNP) markers and 7 microsatellites consisting of 24 linkage groups, which corresponds to the number of chromosome pairs in this species. A major sex determining locus was mapped to linkage group 13 in both families. Assays for 10 SNPs with significant association with phenotypic sex were tested in both population data and in 3 additional families. Using a variety of machine-learning algorithms 97% correct classification could be obtained with the 3% of errors being phenotypic males predicted to be females.ConclusionAltogether our findings support the hypothesis that the Atlantic halibut has an XX/XY sex determination system. Assays are described for sex-associated DNA markers developed from the RAD sequencing analysis to fast track progeny testing and implement monosex female halibut production for an immediate improvement in productivity. These should also help to speed up the inclusion of neomales derived from many families to maintain a larger effective population size and ensure long-term improvement through selective breeding.", "corpus_id": 635493, "title": "Mapping the sex determination locus in the Atlantic halibut (Hippoglossus hippoglossus) using RAD sequencing" }
{ "abstract": "In the wake of the worst financial crisis since 1930, many countries face the challenge of creating a new economy model that relies less on financial services but more on production. Science therefore plays an increasingly important role to generate the knowledge that will eventually give rise to these new products. This coincides with the grand challenges of the 21 st century: dealing with the effects of global climate change, energy security and producing enough food for a growing human population. By 2050, nine billion people are expected to live on this planet, who will need access to food and clean water while the area of fertile land for agricultural is decreasing owing to overgrazing, salinisation, desertification and urban development.", "corpus_id": 1091692, "title": "Omics and the bioeconomy" }
{ "abstract": "Regenerative therapies, including cell injection and bioengineered tissue transplantation, have the potential to treat severe heart failure. Direct implantation of isolated skeletal myoblasts and bone-marrow-derived cells has already been clinically performed and research on fabricating three-dimensional (3-D) cardiac grafts using tissue engineering technologies has also now been initiated. In contrast to conventional scaffold-based methods, we have proposed cell sheet-based tissue engineering, which involves stacking confluently cultured cell sheets to construct 3-D cell-dense tissues. Upon layering, individual cardiac cell sheets integrate to form a single, continuous, cell-dense tissue that resembles native cardiac tissue. The transplantation of layered cardiac cell sheets is able to repair damaged hearts. As the next step, we have attempted to promote neovascularization within bioengineered myocardial tissues to overcome the longstanding limitations of engineered tissue thickness. Finally, as a possible advanced therapy, we are now trying to fabricate functional myocardial tubes that may have a potential for circulatory support. Cell sheet-based tissue engineering technologies therefore show an enormous promise as a novel approach in the field of myocardial tissue engineering.", "corpus_id": 12932388, "score": 1, "title": "Myocardial tissue engineering: toward a bioartificial pump" }
{ "abstract": "The ISO/IEC 9126 international standard for software product quality is a widely accepted reference for terminology regarding the multi-faceted concept of software product quality. Based on this standard, the Software Improvement Group has developed a pragmatic approach for measuring technical quality of software products. This quality model introduces another level below the hierarchy defined by ISO/IEC 9126, which consists of system properties such as volume, duplication, unit complexity and others. A mapping between system properties and ISO/IEC 9126 characteristics is defined in a binary fashion: a property either influences a characteristic or not. This mapping embodies consensus among three experts based, in an informal way, on their experience in software quality assessment. We have conducted a survey-based experiment to study the mapping between system properties and quality characteristics. We used the Analytic Hierarchy Process as a formally structured method to elicit the relative importance of system properties and quality characteristics from a group of 22 software quality experts. We analyzed the results of the experiment with two objectives: (i) to validate the original binary mapping and (ii) to refine the mapping using the elicited relative weights.", "corpus_id": 18788286, "title": "A survey-based study of the mapping of system properties to ISO/IEC 9126 maintainability characteristics" }
{ "abstract": "Selection and prioritization of software requirements represents an area of interest in Search-Based Software Engineering SBSE and its main focus is finding and selecting a set of requirements that may be part of a software release. This paper uses a systematic review to investigate which SBSE approaches have been proposed to address software requirement selection and prioritization problems. The search strategy identified 30 articles in this area and they were analyzed for 18 previously established quality criteria. The results of this systematic review show which aspects of the requirements selection and prioritization problems were addressed by researchers, the methods approaches and search techniques currently adopted to address these problems, and the strengths and weaknesses of each of these techniques. The review provides a map showing the gaps and trends in the field, which can be useful to guide further research.", "corpus_id": 35455620, "title": "A Systematic Review of Software Requirements Selection and Prioritization Using SBSE Approaches" }
{ "abstract": "Coding principles are central to understanding the organization of brain circuitry. Sparse coding offers several advantages, but a near-consensus has developed that it only has beneficial properties, and these are partially unique to sparse coding. We find that these advantages come at the cost of several trade-offs, with the lower capacity for generalization being especially problematic, and the value of sparse coding as a measure and its experimental support are both questionable. Furthermore, silent synapses and inhibitory interneurons can permit learning speed and memory capacity that was previously ascribed to sparse coding only. Combining these properties without exaggerated sparse coding improves the capacity for generalization and facilitates learning of models of a complex and high-dimensional reality.", "corpus_id": 205405156, "score": -1, "title": "Questioning the role of sparse coding in the brain" }
{ "abstract": "A new method using a double-sensor difference based algorithm for analyzing human segment rotational angles in two directions for segmental orientation analysis in the three-dimensional (3D) space was presented. A wearable sensor system based only on triaxial accelerometers was developed to obtain the pitch and yaw angles of thigh segment with an accelerometer approximating translational acceleration of the hip joint and two accelerometers measuring the actual accelerations on the thigh. To evaluate the method, the system was first tested on a 2 degrees of freedom mechanical arm assembled out of rigid segments and encoders. Then, to estimate the human segmental orientation, the wearable sensor system was tested on the thighs of eight volunteer subjects, who walked in a straight forward line in the work space of an optical motion analysis system at three self-selected speeds: slow, normal and fast. In the experiment, the subject was assumed to walk in a straight forward way with very little trunk sway, skin artifacts and no significant internal/external rotation of the leg. The root mean square (RMS) errors of the thigh segment orientation measurement were between 2.4 degrees and 4.9 degrees during normal gait that had a 45 degrees flexion/extension range of motion. Measurement error was observed to increase with increasing walking speed probably because of the result of increased trunk sway, axial rotation and skin artifacts. The results show that, without integration and switching between different sensors, using only one kind of sensor, the wearable sensor system is suitable for ambulatory analysis of normal gait orientation of thigh and shank in two directions of the segment-fixed local coordinate system in 3D space. It can then be applied to assess spatio-temporal gait parameters and monitoring the gait function of patients in clinical settings.", "corpus_id": 895070, "title": "Novel approach to ambulatory assessment of human segmental orientation on a wearable sensor system." }
{ "abstract": "Traditionally, human movement has been captured primarily by motion capture systems. These systems are costly, require fixed cameras in a controlled environment, and suffer from occlusion. Recently, the availability of low-cost wearable inertial sensors containing accelerometers, gyroscopes, and magnetometers have provided an alternative means to overcome the limitations of motion capture systems. Wearable inertial sensors can be used anywhere, cannot be occluded, and are low cost. Several groups have described algorithms for tracking human joint angles. We previously described a novel approach based on a kinematic arm model and the Unscented Kalman Filter (UKF). Our proposed method used a minimal sensor configuration with one sensor on each segment. This paper reports significant improvements in both the algorithm and the assessment. The new model incorporates gyroscope and accelerometer random drift models, imposes physical constraints on the range of motion for each joint, and uses zero-velocity updates to mitigate the effect of sensor drift. A highprecision industrial robot arm precisely quantifies the performance of the tracker during slow, normal, and fast movements over continuous 15-min recording durations. The agreement between the estimated angles from our algorithm and the high-precision robot arm reference was excellent. On average, the tracker attained an RMS angle error of about 3° for all six angles. The UKF performed slightly better than the more common Extended Kalman Filter.", "corpus_id": 3058855, "title": "Human Joint Angle Estimation with Inertial Sensors and Validation with A Robot Arm" }
{ "abstract": "Abstract In the present paper, graphene oxide (GO) was used to modify isobutyltriethoxysilane. GO/isobutyltriethoxysilane composite emulsion was then prepared by the sol–gel method. The properties of the obtained composite emulsion were characterized by Fourier Transform Infrared (FT-IR), X-ray Photoelectron Spectroscopy (XPS), Scanning Electron Microscope (SEM) and Energy Dispersive Spectrometer (EDS). The waterproof performance of GO/isobutyltriethoxysilane composite emulsion was studied by capillary water absorption and water contact angle experiments. The FT-IR results showed that the number of Si–O bonds in the composite emulsion were superior to that in silane emulsion while carboxyl groups were found to be less. This indicated that GO was successfully grafted to the isobutyltriethoxysilane monomer. The XPS data showed that amounts of C–OH in the composite emulsion were lower than those in silane emulsion, suggesting that part of C–OH in the composite emulsion participated in formation of Si–O–C covalent bond between GO and silane. SEM and EDS revealed that the composite emulsion and silane emulsion could form a dense hydrophobic layer on the concrete surface, thereby achieving waterproof effect. Water contact angle and capillary water absorption tests of concrete indicated composite emulsion with significantly improved waterproof performances when compared to silane emulsion.", "corpus_id": 139912273, "score": 1, "title": "Preparation and mechanism of graphene oxide/isobutyltriethoxysilane composite emulsion and its effects on waterproof performance of concrete" }
{ "abstract": "Stable isotopes were used to examine differential effects of fish farm waste on the water column and sediments. To achieve this objective, we chose 3 marine fish farms located along the coast of Sicily (Mediterranean Sea) as point-source disturbances, and a control area. The hypothesis that carbon and nitrogen isotope composition of particulate (POM) and sedimentary (SOM) organic matter varied with increasing distance (from cages to 1000 m) was tested at 3 levels of hydrodynam- ics: low (mean velocity of current (MVC) ~12 cm s -1 ), intermediate (MVC ~22 cm s -1 ), and high (MVC ~40 cm s -1 ). Different isotopic signals from allochthonous (fish waste) over natural (phytoplankton, terrigenous, and sand microflora) inputs allowed identification of the 'spatial effect regime' of fish farming. The increasing water current velocities seem to proportionally enlarge the relative area of influence of the cages, particularly on sediments. At low hydrodynamics, an increasing contribution of terrigenous signals was inferred: POM and SOM showing a depleted gradient of C (ranging from -22.0 to -24.0‰) and N (from 5.0 to 2.0‰). At an intermediate hydrodynamic level, C and N showed a slight increase in waste contribution, particularly in POM (δ 15 N from 2.6 to ~4.0‰). At high hydro- dynamics, an enriching isotopic gradient (δ 15 NPOM-SOM from 1.8 to 4.6‰) suggested a notable contribution of fish waste. Accordingly, the dispersal of waste from the cages seemed to be related to movements at the bottom of the water column, confirming the recently identified role played by resuspension movements.", "corpus_id": 4676846, "title": "Use of stable isotopes to investigate dispersal of waste from fish farms as a function of hydrodynamics" }
{ "abstract": "Measurement of C-N magnitude and C/N ratio from particulate matter is used to explain the source of terrestrial and sea particulates. Therefore, this study aimed at using C/N ratio in assessing land-based material in the west coast of Spermonde area, Indonesia on suspended matter. Samples of SPM were collected in two seasons (transition and dry seasons), in coastal waters of Tallo, Maros, and Pangkep estuaries. The results of research showed that Ctot was more abundant than was Ntot in particulates from river rather than from sea region, reflecting most of the terrestrial organic matter stored before meeting with sea. C/N ratio on the west coast of South Sulawesi was in the range of 7-19.7, showing that organic matter in Tallo estuary in transition season was dominantly autochthonous, while in dry season it was found to be dominantly terrigenous organic matter that gave an indication that land factor was significant in waste supply. The same thing was found in Maros estuary and Pangkep estuary in transition season and dry season; at all points of observation there were findings of particulates coming from terrigenous organic matter. Percentage of nutrient absorbed in particulate was low and could become a eutrophication stressor, where SPM found only ranged from 9.60 to 55.1 mgL-1 with maximum average in dry season and minimum in transition season. On the contrary, POM was maximum in transition season and minimum in dry season with dominant particulate organic matter source from the sea itself.", "corpus_id": 825117, "title": "The Use of C/N Ratio in Assessing the Influence of Land-Based Material in Coastal Water of South Sulawesi and Spermonde Archipelago, Indonesia" }
{ "abstract": "Relative importance of the factors like total salinity, sodium adsorption ratio (SAR), residual sodium carbonate (RSC) of the irrigation water, size of the gypsum fragments and flow velocity on the dissolution of gypsum placed in the irrigation channel was studied. Based on these factors, dimensions of a gypsum bed in the water course were calculated for the reclamation of sodic waters. The calculations showed only negligible effect of the electrolyte concentration and SAR of the irrigation water on the dimensions of the bed. A good agreement was found between the theoretically predicted and the experimentally determined solubility of gypsum using a bed of mixed size gypsum fragments. The RSC of water decreased from 5.75 to 0.75 m.e.l−1 after passing water through the gypsum bed, CO3−2+HCO3− concentration remained unchanged, showing no precipitation of CaCO3 in the gypsum bed.", "corpus_id": 101596644, "score": 1, "title": "Dimensions of Gypsum Bed in Relation to Residual Sodium Carbonate of Irrigation Water, Size of Gypsum Fragments and Flow Velocity" }
{ "abstract": "How different levels of biological organization interact to shape each other's function is a central question in biology. One particularly important topic in this context is how individuals' variation in behaviour shapes group-level characteristics. We investigated how fish that express different locomotory behaviour in an asocial context move collectively when in groups. First, we established that individual fish have characteristic, repeatable locomotion behaviours (i.e. median speeds, variance in speeds and median turning speeds) when tested on their own. When tested in groups of two, four or eight fish, we found individuals partly maintained their asocial median speed and median turning speed preferences, while their variance in speed preference was lost. The strength of this individuality decreased as group size increased, with individuals conforming to the speed of the group, while also decreasing the variability in their own speed. Further, individuals adopted movement characteristics that were dependent on what group size they were in. This study therefore shows the influence of social context on individual behaviour. If the results found here can be generalized across species and contexts, then although individuality is not entirely lost in groups, social conformity and group-size-dependent effects drive how individuals will adjust their behaviour in groups.", "corpus_id": 1209564, "title": "The role of individuality in collective group movement" }
{ "abstract": "Animals form groups for many reasons, but there are costs and benefits associated with group formation. One of the benefits is collective memory. In groups on the move, social interactions play a crucial role in the cohesion and the ability to make consensus decisions. When migrating from spawning to feeding areas, fish schools need to retain a collective memory of the destination site over thousands of kilometres, and changes in group formation or individual preference can produce sudden changes in migration pathways. We propose a modelling framework, based on stochastic adaptive networks, that can reproduce this collective behaviour. We assume that three factors control group formation and school migration behaviour: the intensity of social interaction, the relative number of informed individuals and the strength of preference that informed individuals have for a particular migration area. We treat these factors independently and relate the individuals’ preferences to the experience and memory for certain migration sites. We demonstrate that removal of knowledgeable individuals or alteration of individual preference can produce rapid changes in group formation and collective behaviour. For example, intensive fishing targeting the migratory species and also their preferred prey can reduce both terms to a point at which migration to the destination sites is suddenly stopped. The conceptual approaches represented by our modelling framework may therefore be able to explain large-scale changes in fish migration and spatial distribution.", "corpus_id": 693630, "title": "Fishing out collective memory of migratory schools" }
{ "abstract": "Abstract The spontaneous generation of inertia–gravity waves by balanced motion is investigated in the limit of small Rossby number ϵ ≪ 1. Particular (sheared disturbance) solutions of the three-dimensional Boussinesq equations are considered. For these solutions, there is a strict separation between balanced motion and inertia–gravity waves for large times. This makes it possible to estimate the amplitude of the inertia–gravity waves that are generated spontaneously from perfectly balanced initial conditions. It is shown analytically using exponential asymptotics, and confirmed numerically, that this amplitude is proportional to ϵ−1/2 exp(−α/ϵ), with a constant α > 0 and a proportionality constant that are given in closed form. This result demonstrates the inevitability of inertia–gravity wave generation and hence the nonexistence of an invariant slow manifold; it also exemplifies the remarkable, exponential, smallness of the wave generation for ϵ ≪ 1. The importance of the singularity structure of the b...", "corpus_id": 8321397, "score": 0, "title": "Exponentially Small Inertia–Gravity Waves and the Breakdown of Quasigeostrophic Balance" }
{ "abstract": "Objective:  The objective of this study is to present a novel approach for the treatment of severe, chronic knee joint pain following total knee arthroplasty utilizing peripheral subcutaneous field stimulation and discuss the role of this treatment modality in patients with symptoms that are refractory to conventional pharmacologic, surgical, and physical therapies.", "corpus_id": 1128960, "title": "Novel approach for peripheral subcutaneous field stimulation for the treatment of severe, chronic knee joint pain after total knee arthroplasty" }
{ "abstract": "▪ Abstract:  Supraorbital neuralgia has been identified as an infrequent cause of headache that may prove very difficult to control pharmacologically. Peripheral nerve stimulation using electrodes to stimulate the nerve segmentally responsible for the zone of pain may constitute a management alternative in such cases. We present the case of a patient with headache because of posttraumatic supraorbital neuralgia, refractory to medical treatment, with good analgesic control after peripheral nerve stimulation.", "corpus_id": 6078850, "title": "Peripheral Neurostimulation in Supraorbital Neuralgia Refractory to Conventional Therapy" }
{ "abstract": "There is a renewed interest in the use of PNS for the control of intractable pain caused by peripheral mononeuropathies and sympathetically mediated chronic pain syndromes. Technical advances in neurostimulation hardware, specifically lead design and surgical advancements with percutaneous and subcutaneous techniques, fuel this interest in part. The use of multipolar electrode arrays placed percutaneously in the region of peripheral nerves or in their dermatomal distribution without the need for extensive surgical dissection should help to support the use of PNS as a reasonable alternative to potentially destructive surgical procedures for chronic pain control.", "corpus_id": 32216667, "score": 2, "title": "Peripheral nerve neurostimulation." }
{ "abstract": "Background: The literature provides no guidelines for antibiotic use in palatoplasty. The authors sought to ascertain practice patterns; review a large, single-surgeon experience, and propose guidelines for antibiotic use in primary palatoplasty. Methods: A six-question survey was e-mailed to all surgeons of the American Cleft Palate-Craniofacial Association. A retrospective study was also conducted of the senior author’s 10-year primary palatoplasty series, and two groups were studied. Group 1 received no antibiotics. Group 2 received preoperative and/or postoperative antibiotics. Results: Three hundred twelve of 1115 surgeons (28 percent) responded to the survey. Eighty-five percent administered prophylactic antibiotics, including 26 percent who used a single preoperative dose. A further 23 percent gave 24 hours of postoperative therapy; 12 percent used 25 to 72 hours, 16 percent used 4 to 5 days, and 12 percent used 6 to 10 days. Five percent of surgeons administered penicillin, 64 percent administered a first-generation cephalosporin, 13 percent administered ampicillin/sulbactam, and 8 percent gave clindamycin. The authors reviewed 311 patients; 173 receive antibiotics and 138 did not. Delayed healing and fistula rates did not differ between groups: 16.8 percent versus 15.2 percent (p = 0.71) and 2.9 percent versus 1.4 percent (p = 0.47), respectively. A single patient treated without antibiotics developed a postoperative bacteremia. This case did not meet the Centers for Disease Control definition of a surgical site infection, but the patient developed a palatal fistula. Conclusions: Antibiotic use in primary palatoplasty varies widely. The authors’ data support a clinician’s choice to forego antibiotic use; however, given the significance of palatal fistulae and the single case of postoperative streptococcal bacteremia, the study group recommends a single preoperative dose of ampicillin/sulbactam. Current evidence cannot justify the use of protracted antibiotic regimens. CLINICAL QUESTION/LEVEL OF EVIDENCE: Therapeutic, III.", "corpus_id": 1086313, "title": "Antibiotic Use in Primary Palatoplasty: A Survey of Practice Patterns, Assessment of Efficacy, and Proposed Guidelines for Use" }
{ "abstract": "AbstractThe aim of the study was to determine the prevalence and bacteriology of bacteremia associated with cleft lip and palate (CLP) surgery. Three venous blood samples were obtained from 90 eligible subjects who presented for CLP surgery: before surgical incision, 1 minute after placement of the last suture, and 15 minutes thereafter. The samples were injected into an Oxoid Signal blood culture and transported to the laboratory for gram-positive/negative and aerobic/anaerobic bacteria analysis. Prevalence of bacteremia associated with cleft surgery was 38.1%. Prevalence rates of bacteremia in cleft lip surgery, cleft palate surgery, and alveoloplasty were 40.9%, 33.3%, and 50%, respectively. There was no significant difference in prevalence rate of positive blood culture in cleft lip surgery, cleft palate surgery, and alveoloplasty (P = 0.69). Positive blood culture was detected most frequently (47%) 1 minute after placement of the last suture. Of the 23 subjects who had positive blood culture at 1 minute, bacteremia persisted in 8 (35%) of them after 15 minutes. The most common bacteria isolated were coagulase-negative staphylococcus, Acinetobacter lwoffii, and coagulase-positive Staphylococcus aureus. Sex and age of the subjects, duration of surgery, blood loss, and type of cleft surgery were not significantly associated with positive blood culture. Bacteremia associated with CLP surgery is polymicrobial and persisted for at least 15 minutes after surgery in 35% of cases. This may reinforce the need for prophylactic antibiotics to protect at-risk patients from developing focal infection of the heart by oral flora.", "corpus_id": 34095730, "title": "Prevalence and Bacteriology of Bacteremia Associated With Cleft Lip and Palate Surgery" }
{ "abstract": "Abstract This is an application of the strict human capital model in accounting for income inequality in an LDC. Using individual characteristics of 1600 male Moroccan full-time employees, differences in schooling and experience explain about 70 percent of relative earnings dispersion. This result is based on the existence of an 'overtaking year of experience' occuring within the first decade of the working life of the individual. Furthermore, an attempt is made to isolate the rate of return to training from the returns to schooling by analysing the earnings of illiterate manual workers differentiated by the level of their skill. The results regarding the relationship between the returns to schooling versus training, the overtaking point, and the explanatory power of human capital variables are remarkably similar to those obtained in advanced countries.", "corpus_id": 153993371, "score": 0, "title": "Schooling, experience and earnings: The case of an LDC" }
{ "abstract": "Background: Surgical capacity assessments in low-income countries have demonstrated critical deficiencies. Though vital for planning capacity improvements, these assessments are resource intensive and impractical during the planning phase of a humanitarian crisis. This study aimed to determine cesarean sections to total operations performed (CSR) and emergency herniorrhaphies to all herniorrhaphies performed (EHR) ratios from Médecins Sans Frontières Operations Centre Brussels (MSF-OCB) projects and examine if these established metrics are useful proxies for surgical capacity in low-income countries affected by crisis. Methods: All procedures performed in MSF-OCB operating theatres from July 2008 through June 2014 were reviewed. Projects providing only specialty care, not fully operational or not offering elective surgeries were excluded. Annual CSRs and EHRs were calculated for each project. Their relationship was assessed with linear regression. Results: After applying the exclusion criteria, there were 47,472 cases performed at 13 sites in 8 countries. There were 13,939 CS performed (29% of total cases). Of the 4,632 herniorrhaphies performed (10% of total cases), 30% were emergency procedures. CSRs ranged from 0.06 to 0.65 and EHRs ranged from 0.03 to 1.0. Linear regression of annual ratios at each project did not demonstrate statistical evidence for the CSR to predict EHR [F(2,30)=2.34, p=0.11, R2=0.11]. The regression equation was: EHR = 0.25 + 0.52(CSR) + 0.10(reason for MSF-OCB assistance). Conclusion: Surgical humanitarian assistance projects operate in areas with critical surgical capacity deficiencies that are further disrupted by crisis. Rapid, accurate assessments of surgical capacity are necessary to plan cost- and clinically-effective humanitarian responses to baseline and acute unmet surgical needs in LICs affected by crisis. Though CSR and EHR may meet these criteria in ‘steady-state’ healthcare systems, they may not be useful during humanitarian emergencies. Further study of the relationship between direct surgical capacity improvements and these ratios is necessary to document their role in humanitarian settings.", "corpus_id": 1565322, "title": "An Analysis of Cesarean Section and Emergency Hernia Ratios as Markers of Surgical Capacity in Low-Income Countries Affected by Humanitarian Emergencies from 2008 – 2014 at Médecins sans Frontières Operations Centre Brussels Projects" }
{ "abstract": "BackgroundThe World Health Organization has a standardized tool to assess surgical capacity in low- and middle-income countries (LMICs), but it is often resource- and time-intensive. There currently exists no simple, evidence-based measure of surgical capacity in these settings. The proportion of cesarean deliveries in regard to the total operations (C/O ratio) has been suggested as a way to assess quickly the capacity for emergency and essential surgery in LMICs. This ratio has been estimated to be between 23.3 and 41.5 % in LMICs, but the tool’s utility has not been replicated.MethodsWe reviewed operative logbooks for the Partners In Health/Zanmi Lasante hospital in Cange, Haiti. We recorded data on all consecutive surgical patients from July 2008 to 2010 and calculated the C/O ratio by dividing the number of cesarean deliveries by the total number of operations performed. We also analyzed surgical data by surgeon nationality to provide additional information about local surgical capacity.ResultsA total of 3,641 operations were performed between 2008 and 2010. The C/O ratio decreased significantly between 2008–2009 and 2009–2010 (13.4 vs. 10.7 %, p = 0.001) as the surgical volume and resources increased. Nationality analysis demonstrated that Haitian surgeons were able to provide a spectrum of general and specialist surgical care.ConclusionsIn its inherent relation to essential surgical procedures and to the overall rate of cesarean deliveries in the region, the C/O ratio can provide an accessible assessment of regional surgical resources. In Haiti, the change in the C/O ratio demonstrated a relative increase in surgical capacity from 2008 to 2010. An additional analysis of surgeon nationality ensured that C/O ratio estimates more accurately reflect local practitioner activity, but deficiencies in the regional capacity to address the local burden of surgical disease may still exist.", "corpus_id": 6011996, "title": "Ratio of Cesarean Deliveries to Total Operations and Surgeon Nationality Are Potential Proxies for Surgical Capacity in Central Haiti" }
{ "abstract": "Polyaniline (PANI), an attractive conductive polymer, has been successfully applied in fabricating various types of enzyme-based biosensors. In this study, we have employed mesoporous silica SBA-15 to stably entrap horseradish peroxidase (HRP), and then deposited the loaded SBA-15 on the PANI modified platinum electrode to construct a GA/SBA-15(HRP)/PANI/Pt biosensor. The mesoporous structures and morphologies of SBA-15 with or without HRP were characterized. Enzymatic protein assays were employed to evaluate HRP immobilization efficiency. Our results demonstrated that the constructed biosensor displayed a fine linear correlation between cathodic response and H2O2 concentration in the range of 0.02 to 18.5 mM, with enhanced sensitivity. In particular, the current approach provided the PANI modified biosensor with improved stability for multiple measurements.", "corpus_id": 11053693, "score": 0, "title": "Immobilization of HRP in Mesoporous Silica and Its Application for the Construction of Polyaniline Modified Hydrogen Peroxide Biosensor" }
{ "abstract": "The goal of this paper is twofold. On one hand, our work revisits the minimization of the robust compliance in shape optimization, with a more natural and more general approach than what has been done before. On the other hand, following a more recent viewpoint on robust optimization, we study the maximization of the so-called stability radius for a fixed maximal compliance. We provide theorical as well as numerical results.", "corpus_id": 1956206, "title": "A Notion of Compliance Robustness in Topology Optimization" }
{ "abstract": "The framework of asymptotic analysis in singularly perturbed geometrical domains presented in the first part of this series of review papers can be employed to produce two-term asymptotic expansions for a class of shape functionals. In Part II (Novotny et al. in J Optim Theory Appl 180(3):1–30, 2019), one-term expansions of functionals are required for algorithms of shape-topological optimization. Such an approach corresponds to the simple gradient method in shape optimization. The Newton method of shape optimization can be replaced, for shape-topology optimization, by two-term expansions of shape functionals. Thus, the resulting approximations are more precise and the associated numerical methods are much more complex compared to one-term expansion topological derivative algorithms. In particular, numerical algorithms associated with first-order topological derivatives of shape functionals have been presented in Part II (Novotny et al. 2019), together with an account of their applications currently found in the literature, with emphasis on shape and topology optimization. In this last part of the review, second-order topological derivatives are introduced. Second-order algorithms of shape-topological optimization are used for numerical solution of representative examples of inverse reconstruction problems. The main feature of these algorithms is that the method is non-iterative and thus very robust with respect to noisy data as well as independent of initial guesses.", "corpus_id": 77393646, "title": "Topological Derivatives of Shape Functionals. Part III: Second-Order Method and Applications" }
{ "abstract": "Wireless distance measurement techniques based on portable embedded platforms are expected to play a key role in several industrial and domestic applications. In this paper the ranging accuracy of a commercial Chirp Spread Spectrum (CSS) kit is evaluated experimentally in a real-world context. The proposed analysis provides more precise and exhaustive information than what it is usually reported in the technical literature. In fact, this paper is specifically focused on performance evaluation and it deals with the case of short-range indoor scenarios in both Line-of-Sight (LOS) and Non-Line-of-Sight (NLOS) repeatable conditions. The resulting analysis represents the first step towards the design of a custom indoor embedded navigation system for a smart rollator assisting impaired people to move safely in an indoor public environment.", "corpus_id": 34085, "score": 1, "title": "Performance evaluation of Chirp Spread Spectrum ranging for indoor embedded navigation systems" }
{ "abstract": "For over 140 years, medical scientists have searched for ways to identify women at high risk of breast cancer from those not at elevated risk ([1][1]). Recently, we have seen this search come to a logical end point with a proof of principle that risk reduction is in fact possible through selective", "corpus_id": 1301439, "title": "Epidemiology and Prevention of Breast Cancer" }
{ "abstract": null, "corpus_id": 1404734, "title": "Trends in the incidence rate and risk factors for breast cancer in Japan" }
{ "abstract": "CONTEXT\nRaloxifene hydrochloride is a selective estrogen receptor modulator that has antiestrogenic effects on breast and endometrial tissue and estrogenic effects on bone, lipid metabolism, and blood clotting.\n\n\nOBJECTIVE\nTo determine whether women taking raloxifene have a lower risk of invasive breast cancer.\n\n\nDESIGN AND SETTING\nThe Multiple Outcomes of Raloxifene Evaluation (MORE), a multicenter, randomized, double-blind trial, in which women taking raloxifene or placebo were followed up for a median of 40 months (SD, 3 years), from 1994 through 1998, at 180 clinical centers composed of community settings and medical practices in 25 countries, mainly in the United States and Europe.\n\n\nPARTICIPANTS\nA total of 7705 postmenopausal women, younger than 81 (mean age, 66.5) years, with osteoporosis, defined by the presence of vertebral fractures or a femoral neck or spine T-score of at least 2.5 SDs below the mean for young healthy women. Almost all participants (96%) were white. Women who had a history of breast cancer or who were taking estrogen were excluded.\n\n\nINTERVENTION\nRaloxifene, 60 mg, 2 tablets daily; or raloxifene, 60 mg, 1 tablet daily and 1 placebo tablet; or 2 placebo tablets.\n\n\nMAIN OUTCOME MEASURES\nNew cases of breast cancer, confirmed by histopathology. Transvaginal ultrasonography was used to assess the endometrial effects of raloxifene in 1781 women. Deep vein thrombosis or pulmonary embolism were determined by chart review.\n\n\nRESULTS\nThirteen cases of breast cancer were confirmed among the 5129 women assigned to raloxifene vs 27 among the 2576 women assigned to placebo (relative risk [RR], 0.24; 95% confidence interval [CI], 0.13-0.44; P<.001). To prevent 1 case of breast cancer, 126 women would need to be treated. Raloxifene decreased the risk of estrogen receptor-positive breast cancer by 90% (RR, 0.10; 95% CI, 0.04-0.24), but not estrogen receptor-negative invasive breast cancer (RR, 0.88; 95% CI, 0.26-3.0). Raloxifene increased the risk of venous thromboembolic disease (RR, 3.1; 95% CI, 1.5-6.2), but did not increase the risk of endometrial cancer (RR, 0.8; 95% CI, 0.2-2.7).\n\n\nCONCLUSION\nAmong postmenopausal women with osteoporosis, the risk of invasive breast cancer was decreased by 76% during 3 years of treatment with raloxifene.", "corpus_id": 31717815, "score": -1, "title": "The effect of raloxifene on risk of breast cancer in postmenopausal women: results from the MORE randomized trial. Multiple Outcomes of Raloxifene Evaluation." }
{ "abstract": "The molecular dimensions of proteins such as green fluorescent protein (GFP) are large as compared to the ones of solvents like water or glycerol. The microscopic viscosity, which determines the resistance to diffusion of, e.g. GFP, is then the same as that determined from the resistance of the solvent to flow, which is known as macroscopic viscosity. GFP in water/glycerol mixtures senses this macroscopic viscosity, because the translational and rotational diffusion coefficients are proportional to the reciprocal value of the viscosity as predicted by the Stokes–Einstein equations. To test this hypothesis, we have performed time-resolved fluorescence anisotropy (reporting on rotational diffusion) and fluorescence correlation spectroscopy (reporting on translational diffusion) experiments of GFP in water/glycerol mixtures. When the solvent also contains macromolecules of similar or larger dimensions as GFP, the microscopic and macroscopic viscosities can be markedly different and the Stokes–Einstein relations must be adapted. It was established from previous dynamic fluorescence spectroscopy observations of diffusing proteins with dextran polysaccharides as co-solvents (Lavalette et al 2006 Eur. Biophys. J. 35 517–22), that rotation and translation sense a different microscopic viscosity, in which the one arising from rotation is always less than that from translation. A microscopic viscosity parameter is defined that depends on scaling factors between GFP and its immediate environment. The direct consequence is discussed for two reported diffusion coefficients of GFP in living cells. PAPER", "corpus_id": 38440543, "title": "GFP as potential cellular viscosimeter" }
{ "abstract": null, "corpus_id": 969931, "title": "Global analysis of fluorescence fluctuation data" }
{ "abstract": "We have implemented scanning fluorescence correlation spectroscopy (sFCS) for precise determination of diffusion coefficients of fluorescent molecules in solution. The measurement volume where the molecules are excited, and from which the fluorescence is detected, was scanned in a circle with radius comparable to its size at frequencies 0.5-2 kHz. The scan radius R, determined with high accuracy by careful calibration, provides the spatial measure required for the determination of the diffusion coefficient D, without the need to know the exact size of the measurement volume. The difficulties in the determination of the measurement volume size have limited the application of standard FCS with fixed measurement volume to relative measurements, where the diffusion coefficient is determined by comparison with a standard. We demonstrate, on examples of several common fluorescent dyes, that sFCS can be used to measure D with high precision without a need for a standard. The correct value of D can be determined in the presence of weak photobleaching, and when the measurement volume size is modified, indicating the robustness of the method. The applicability of the presented implementation of sFCS to biological systems in demonstrated on the measurement of the diffusion coefficient of eGFP in the cytoplasm of HeLa cells. With the help of simulations, we find the optimal value of the scan radius R for the experiment.", "corpus_id": 42086770, "score": -1, "title": "Precise measurement of diffusion coefficients using scanning fluorescence correlation spectroscopy." }
{ "abstract": "OBJECTIVE\nTo analyze the prognostic value of malnutrition in children with idiopathic dilated cardiomyopathy.\n\n\nMETHODS\nThis is a retrospective study of 165 patients with idiopathic dilated cardiomyopathy, diagnosed from September 1979 to March 2003. It analyzed the following variables: gender, age, previous viral illness in the preceding 3 months, functional class according to the New York Heart Association (NYHA), evaluation of nutritional status (normal vs. malnutrition), percentile and standard deviation (z index) of weight. Weight was measured 744 times during the first 72 months, 93 during the first month. Statistical analysis was performed by Chi Squared, Student t test and analysis of variance for repeated measures (ANOVA). Ninety-five percent confidence intervals (CI95) and odds ratios (OR) were calculated. An alpha value of 0.05 and beta of 0.80 were used.\n\n\nRESULTS\nMean age at presentation was 2.2+/-3.2 years with higher incidence in those younger than 2 years (75.8%-CI95 = 68.5% to 82.1%) (p < 0.0001). NYHA classes III and IV were observed in 81.2% (CI95 = 74.4% to 86.9%) (p < 0.0001) and all 40 deaths were this group (p = 0.0008). At presentation, myocarditis occurred in 39.4% (CI95 = 31.9% to 47.3%) (p = 0.0001) and a high level of association between myocarditis and previous viral illness was observed (p = 0.0005) (OR = 3.15-CI95 = 1.55 to 6.44). Malnutrition at presentation did not influence death (p = 0.10), however progressive malnutrition was a marker for death (p = 0.02) (OR = 3.21-CI95 = 1.04 to 9.95). No significant differences weight percentiles (p = 0.15) or in z scores (p = 0.14) were observed. Observed mean weight percentiles (34.9+/-32.6 vs. 8.6+/-16.0) (p < 0.0001) and z scores (-0.62+/-1.43 vs. -2.02+/-1.12) (p < 0.0001) during the study period were greater among survivors. ANOVA demonstrated significant differences in weight percentile progression (p = 0.0417) and z scores (p = 0.0005) from the first month onwards.\n\n\nCONCLUSION\nThe evaluation of nutritional status is easy to perform, it does not imply additional costs and should become routine for children with chronic heart failure.", "corpus_id": 1230323, "title": "[The impact of malnutrition on idiopathic dilated cardiomyopathy in children]." }
{ "abstract": "Previous studies in adults with dilated cardiomyopathy suggest that the presence of arrhythmia, especially ventricular tachycardia, correlates with increased mortality. We performed a retrospective analysis of 63 children with idiopathic dilated cardiomyopathy to determine the prognostic significance of arrhythmias and other findings with respect to mortality. The mean age at diagnosis of the cardiomyopathy was 4.96 +/- 5.3 years. The overall mortality rate was 16% over a 10 year follow-up period. Persistent congestive heart failure and ST-T wave changes correlated with increased mortality (p less than 0.05). No other variables affected outcome. Arrhythmias were found in 46% of the patients; of the arrhythmias, 48% were atrial arrhythmias. Ventricular tachycardia was present in six patients. Death occurred in 4 (14%) of 29 patients with known arrhythmia; 1 of the 5 died suddenly. The remaining 6 deaths in the series occurred in the 34 patients without a documented arrhythmia. It is concluded that 1) arrhythmias are frequently seen in children with dilated cardiomyopathy but are not predictive of outcome; 2) sudden death in children with this disease is rare; and 3) persistent congestive heart failure portends a poor prognosis.", "corpus_id": 32355017, "title": "Clinical course of idiopathic dilated cardiomyopathy in children." }
{ "abstract": "Une etude clinique et histopathologique de la myocardite du syndrome de Kawasaki avec les consequences therapeutiques", "corpus_id": 1886610, "score": 2, "title": "Myocarditis in Kawasaki syndrome. A minor villain?" }
{ "abstract": "Recently, many concerns are paid for dual action drugs such as ACE/NEP dual inhibitors which have two different biological activities. To identify multiple active drugs by supervised learning approach, a multi-label classification technique is required. In the present work, we investigated the classification of antihypertensive drugs including ACE/NEP dual inhibitors using support vector machines (SVMs). Biological activity data of the drugs were taken from the MDDR database and they were employed for the computational trial for the training of the SVM classifiers. Structural feature representation of each drug molecule was based on topological fragment spectra (TFS) method. The obtained classifiers were tested for finding ACE/NEP dual inhibitors. The result suggests that the TFS-based SVM classifiers are useful for finding multiple active drugs such as ACE/NEP dual inhibitors.", "corpus_id": 2045236, "title": "Identification of the Dual Action Antihypertensive Drugs Using TFS-Based Support Vector Machines" }
{ "abstract": "Multi-label classification extends the standard multi-class classification paradigm by dropping the assumption that classes have to be mutually exclusive, i.e., the same data item might belong to more than one class. Multi-label classification has many important applications in e.g. signal processing, medicine, biology and information security, but the analysis and understanding of the inference methods based on data with multiple labels are still underdeveloped. In this paper, we formulate a general generative process for multi-label data, i.e. we associate each label (or class) with a source. To generate multi-label data items, the emissions of all sources in the label set are combined. In the training phase, only the probability distributions of these (single label) sources need to be learned. Inference on multi-label data requires solving an inverse problem, models of the data generation process therefore require additional assumptions to guarantee well-posedness of the inference procedure. Similarly, in the prediction (test) phase, the distributions of all single-label sources in the label set are combined using the combination function to determine the probability of a label set. We formally describe several previously presented inference methods and introduce a novel, general-purpose approach, where the combination function is determined based on the data and/or on a priori knowledge of the data generation mechanism. This framework includes cross-training and new source training (also named label power set method) as special cases. We derive an asymptotic theory for estimators based on multi-label data and investigate the consistency and efficiency of estimators obtained by several state-of-the-art inference techniques. Several experiments confirm these findings and emphasize the importance of a sufficiently complex generative model for real-world applications.", "corpus_id": 15315680, "title": "Asymptotic analysis of estimators on multi-label data" }
{ "abstract": "1. The excretion of phenylmercapturic acid by rabbits receiving benzene has been studied. 2. The presence of phenylmercapturic acid in urine after administration of benzene has been shown by developing a specific turbidimetric method of estimation, based on the formation of phenylmercuric mercaptide from phenylmercapturic acid by alkaline hydrolysis. 3. The iodometric method for the determination of mercapturic acid has been shown to be influenced by diet and by hydroxyquinol, a metabolite of benzene, and may give high results when small amounts of mercapturic acids have to be estimated in urine. 4. By the turbidimetric method about 1 % of the benzene fed at a dose level of 0 5 g./kg. and about 0X8 % at 1 g./kg. was excreted as phenylmercapturic acid, which was found in the urine for 2-3 days after dosing. 5. Some arylcysteines and mercapturic acids have been synthesized and their ultraviolet absorption spectra recorded. The expenses ofthis work were in part defrayed by a grant from the Medical Research Council.", "corpus_id": 10828027, "score": 1, "title": "Esterification of the carboxyl groups in wool." }