query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "Nuclear patch-clamp experiments can be performed with intact nuclei or with nuclei from which the outer nuclear membrane has been removed. This protocol presents procedures for harvesting different types of cultured cells, isolating nuclei, and exposing the inner nuclear membrane by agitating in the presence of sodium citrate. Particulars about obtaining and maintaining the cells of interest in culture are not described here. However, care should be taken not to allow the cells to grow beyond a density of 2-3 × 10(6) cells/mL because this may decrease both the cell viability and the success rate of detecting active inositol 1,4,5-trisphosphate receptor (InsP3R) channels in nuclear patches.",
"corpus_id": 11948754,
"title": "Isolating nuclei from cultured cells for patch-clamp electrophysiology of intracellular Ca(2+) channels."
} | {
"abstract": "The ubiquitous inositol 1,4,5-trisphosphate (InsP3) receptor (InsP3R) channel, localized primarily in the endoplasmic reticulum (ER) membrane, releases Ca2+ into the cytoplasm upon binding InsP3, generating and modulating intracellular Ca2+ signals that regulate numerous physiological processes. Together with the number of channels activated and the open probability of the active channels, the size of the unitary Ca2+ current (iCa) passing through an open InsP3R channel determines the amount of Ca2+ released from the ER store, and thus the amplitude and the spatial and temporal nature of Ca2+ signals generated in response to extracellular stimuli. Despite its significance, iCa for InsP3R channels in physiological ionic conditions has not been directly measured. Here, we report the first measurement of iCa through an InsP3R channel in its native membrane environment under physiological ionic conditions. Nuclear patch clamp electrophysiology with rapid perfusion solution exchanges was used to study the conductance properties of recombinant homotetrameric rat type 3 InsP3R channels. Within physiological ranges of free Ca2+ concentrations in the ER lumen ([Ca2+]ER), free cytoplasmic [Ca2+] ([Ca2+]i), and symmetric free [Mg2+] ([Mg2+]f), the iCa–[Ca2+]ER relation was linear, with no detectable dependence on [Mg2+]f. iCa was 0.15 ± 0.01 pA for a filled ER store with 500 µM [Ca2+]ER. The iCa–[Ca2+]ER relation suggests that Ca2+ released by an InsP3R channel raises [Ca2+]i near the open channel to ∼13–70 µM, depending on [Ca2+]ER. These measurements have implications for the activities of nearby InsP3-liganded InsP3R channels, and they confirm that Ca2+ released by an open InsP3R channel is sufficient to activate neighboring channels at appropriate distances away, promoting Ca2+-induced Ca2+ release.",
"corpus_id": 2750737,
"title": "Unitary Ca2+ current through recombinant type 3 InsP3 receptor channels under physiological ionic conditions"
} | {
"abstract": "Simple approximations to some limiting cases of Ca++ signalling provide insight into the complex problems of buffered diffusion and of Ca++ homeostasis in the presence of buffers. Three cases are presented, where the influence of Ca++ buffers can readily be understood in the limit of small signals: the return of global cellular [Ca++] following a short stimulus in a 'Single Compartment', buffered diffusion along a cylindrical axon in the 'Rapid Buffer Approximation', and nonequilibrium microdomains of elevated [Ca++] in the immediate vicinity of open Ca++ channels.",
"corpus_id": 4156761,
"score": -1,
"title": "Usefulness and limitations of linear approximations to the understanding of Ca++ signals."
} |
{
"abstract": "\n Background: This study aimed to translate the English version of the supportive care needs scale of head and neck cancer patients (SCNS-HNC) questionnaire into Mandarin and to test its reliability and validity.Methods: The Mandarin version of the Supportive Care Needs Survey Short-Form (SCNS-SF34) and SCNS-HNC scales were used to assess 206 patients with head and neck cancer in Chengdu, China. Among them, 51 patients were re-tested 2 or 3 days after the first survey. The internal consistency of the scale was evaluated by Cronbach's alpha coefficient, the retest reliability of the scale was evaluated by retest correlation coefficient r, the structural validity of the scale was evaluated by exploratory factor analysis, and the ceiling and floor effects of the scale were evaluated.Results: The Mandarin version of the SCNS-HNC had Cronbach's alpha coefficients greater than 0.700 (0.737 ≤ 0.962) for all of the domains. Except for the psychological demand dimension (r=0.674) of the SCNS-SF34 scale, the retest reliability of the other domains was greater than 0.8. Three common factors were extracted by exploratory factor analysis, and the cumulative variance contribution rate was 64.39%. Conclusions: The Mandarin version of the SCNS-HNC demonstrated satisfactory reliability and validity and is able to measure the supportive care needs of Chinese patients with head and neck cancer.Clinical registration number: ChiCTR1900026635",
"corpus_id": 235018062,
"title": "Reliability and validity of the Mandarin version of the Head and Neck Cancer-specific Supportive Care Needs (SCNS-HNC) scale"
} | {
"abstract": "PURPOSE ::: The purpose of this study is to assess the psychometric properties of the Dutch version of the 34-item Short-Form Supportive Care Needs Survey (SCNS-SF34) and the newly developed module for head and neck cancer (HNC) patients (SCNS-HNC). ::: ::: ::: METHODS ::: HNC patients were included from two cross-sectional studies. Content validity of the SCNS-HNC was analysed by examining redundancy and completeness of items. Factor structure was assessed using confirmatory and exploratory factor analyses. Cronbach's alpha, Spearman's correlation, Mann-Whitney U test, Kruskall-Wallis and intraclass correlation coefficients (ICC) were used to assess internal consistency, construct validity and test-retest reliability. ::: ::: ::: RESULTS ::: Content validity of the SCNS-HNC was good, although some HNC topics were missing. For the SCNS-SF34, a four-factor structure was found, namely physical and daily living, psychological, sexuality and health system and information and patient support (alpha = .79 to .95). For the SCNS-HNC, a two-factor structure was found, namely HNC-specific functioning and lifestyle (alpha = .89 and .60). Respectively, 96 and 89 % of the hypothesised correlations between the SCNS-SF34 or SCNS-HNC and other patient-reported outcome measures were found; 57 and 67 % also showed the hypothesised magnitude of correlation. The SCNS-SF34 domains discriminated between treatment procedure (physical and daily living p = .02 and psychological p = .01) and time since treatment (health system, information and patient support p = .02). Test-retest reliability of SCNS-SF34 domains and HNC-specific functioning domain was above .70 (ICC = .74 to .83), and ICC = .67 for the lifestyle domain. Floor effects ranged from 21.1 to 70.9 %. ::: ::: ::: CONCLUSIONS ::: The SCNS-SF34 and SCNS-HNC are valid and reliable instruments to evaluate the need for supportive care among (Dutch) HNC patients.",
"corpus_id": 3788494,
"title": "The need for supportive care among head and neck cancer patients: psychometric assessment of the Dutch version of the Supportive Care Needs Survey Short-Form (SCNS-SF34) and the newly developed head and neck cancer module (SCNS-HNC)"
} | {
"abstract": "Final analysis of a phase II study of modified FOLFIRINOX in locally advanced and metastatic pancreatic cancer",
"corpus_id": 18347786,
"score": -1,
"title": "Final analysis of a phase II study of modified FOLFIRINOX in locally advanced and metastatic pancreatic cancer"
} |
{
"abstract": "We develop two ab initio quantum approaches to thin-film x-ray cavity quantum electrodynamics with spectrally narrow x-ray resonances, such as those provided by Mossbauer nuclei. The first method is based on a few-mode description of the cavity, and promotes and extends existing phenomenological few-mode models to an ab initio theory. The second approach uses analytically-known Green's functions to model the system. The two approaches not only enable one to ab initio derive the effective few-level scheme representing the cavity and the nuclei in the low-excitation regime, but also provide a direct avenue for studies at higher excitation, involving non-linear or quantum phenomena. The ab initio character of our approaches further enables direct optimizations of the cavity structure and thus of the photonic environment of the nuclei, to tailor the effective quantum optical level scheme towards particular applications. To illustrate the power of the ab initio approaches, we extend the established quantum optical modeling to resonant cavity layers of arbitrary thickness, which is essential to achieve quantitative agreement for cavities used in recent experiments. Further, we consider multi-layer cavities featuring electromagnetically induced transparency, derive their quantum optical few-level systems ab initio, and identify the origin of discrepancies in the modeling found previously using phenomenological approaches as arising from cavity field gradients across the resonant layers.",
"corpus_id": 214727821,
"title": "Ab initio\n quantum models for thin-film x-ray cavity QED"
} | {
"abstract": "Cooperative phenomena arising due to the coupling of individual atoms via the radiation field are a cornerstone of modern quantum and optical physics. Recent experiments on x-ray quantum optics added a new twist to this line of research by exploiting superradiance in order to construct artificial quantum systems. However, so far, systematic approaches to deliberately design superradiance properties are lacking, impeding the desired implementation of more advanced quantum optical schemes. Here, we develop an analytical framework for the engineering of single-photon superradiance in extended media applicable across the entire electromagnetic spectrum, and show how it can be used to tailor the properties of an artificial quantum system. This “reverse engineering” of superradiance not only provides an avenue towards non-linear and quantum mechanical phenomena at x-ray energies, but also leads to a unified view on and a better understanding of superradiance across different physical systems.",
"corpus_id": 4774551,
"title": "Tailoring superradiance to design artificial quantum systems"
} | {
"abstract": "Published photometric observations of several OH bands are analyzed with the aid of available transition probabilities. The rate of excitation of the vibrational levels with υ≤9 by the excitation mechanism seems to be nearly independent of υ. The relative populations of the vibrational levels are computed, and the predicted absolute intensities of all the OH bands are given.",
"corpus_id": 128948301,
"score": -1,
"title": "On the excitation rates and intensities of OH in the airglow"
} |
{
"abstract": "To analyze the association between neutrophil‐to‐lymphocyte ratio (NLR) and intravesical prostatic protrusion (IPP) in men with benign prostatic hyperplasia.",
"corpus_id": 202582032,
"title": "Association between the neutrophil‐to‐lymphocyte ratio and intravesical prostatic protrusion in men with benign prostatic hyperplasia"
} | {
"abstract": "PURPOSE\nThe aim of this study was to evaluate inflammation parameters and assess the utility of the neutrophil- lymphocyte ratio (NLR) as a simple and readily available predictor for clinical disease activity in patients with nenign prostate hyperplasia BPH. We also aimed to investigate the relationship between inflammatory parameters with α-blocker therapy response, and evaluate the potential association between NLR and the progression of benign prostatic hyperplasia (BPH).\n\n\nMATERIALS AND METHODS\nWe examined 320 consecutive patients (July 2013-December 2013) admitted to our outpatient clinic with symptoms of the lower urinary tract at Bozok University. The mean age was 60 (range, 51-75) years. Complete blood count (CBC), prostate-specific antigen (PSA), erythrocyte sedimentation rate (ESR), and C-reactive protein (CRP) were assessed. Correlations between PSA, CRP, ESR, prostate volume, International Prostate Symptom Score (IPPS), maximum urinary flow rate (Qmax), and NLR were assessed statistically. Patients were divided into two groups: high and low risk of progression.\n\n\nRESULTS\nNLR was positively correlated with IPSS (p=0.001, r=0.265), PSA (p=0.001, r=0.194), and negatively correlated with Qmax (p<0.001, r=-0.236). High-risk patients a had a higher NLR compared with low-risk patients, based on IPSS (p<0.001), PSA (p=0.013), and Qmax (p<0.001); however, there were no significant differences between the groups in terms of age (p>0.05), and prostate volume (p>0.05).\n\n\nCONCLUSIONS\nNLR can predict BPH progression. We propose that increased inflammation is negatively associated with clinical status in BPH patients and suggest that NLR can give information along with LUTS severity which may be used as a readikly accessible marker for patient follow-up.",
"corpus_id": 2369522,
"title": "Is the neutrophil-lymphocyte ratio an indicator of progression in patients with benign prostatic hyperplasia?"
} | {
"abstract": "A crossover study was conducted to identify the best α1-adrenoceptor (α1AR) antagonist for individual patients with lower urinary tract symptoms (LUTS) associated with benign prostatic hyperplasia (BPH). One hundred thirteen patients (mean age 70.8 years) were enrolled. All patients met BPH clinical study guidelines. Seven agents were utilized:tamsulosin 0.2mg, silodosin 8mg, urapidil 60mg, naftopidil 50mg, prazosin 1mg, terazosin 2mg, and doxazosin 1mg. Patients were initially prescribed tamsulosin or silodosin for a week and then urapidil for a week. Two weeks later, they were prescribed the better of the 2 agents for a week and a new agent for the next week. This cycle was repeated until all 7 agents were tested. Efficacy was evaluated with the International Prostate Symptom Score. The agent rankings were doxazosin (25 [22%]), silodosin (22 [19%]), urapidil (19 [17%]), naftopidil (17 [15%]), terazosin (12 [11%]), tamsulosin (11 [10%]), prazosin (7 [6%]). Only 12 patients (11%) changed agents after the crossover study was completed. The major reason was adverse events (83%). We found that each of the 7 α1AR antagonists has its own supporters. Further, the one-week crossover study was useful in identifying the best agent for the treatment of each individual with LUTS.",
"corpus_id": 13320679,
"score": -1,
"title": "Comparison of 7 α(1)-adrenoceptor antagonists in patients with lower urinary tract symptoms associated with benign prostatic hyperplasia:a short-term crossover study."
} |
{
"abstract": "In the last few years, many reports have been describing promising biocompatible and biodegradable materials that can mimic in a certain extent the multidimensional hierarchical structure of bone, while are also capable of releasing bioactive agents or drugs in a controlled manner. Despite these great advances, new developments in the design and fabrication technologies are required to address the need to engineer suitable biomimetic materials in order tune cells functions, i.e. enhance cell-biomaterial interactions, and promote cell adhesion, proliferation, and differentiation ability. Scaffolds, hydrogels, fibres and composite materials are the most commonly used as biomimetics for bone tissue engineering. Dynamic systems such as bioreactors have also been attracting great deal of attention as it allows developing a wide range of novel in vitro strategies for the homogeneous coating of scaffolds and prosthesis with ceramics, and production of biomimetic constructs, prior its implantation in the body. Herein, it is overviewed the biomimetic strategies for bone tissue engineering, recent developments and future trends. Conventional and more recent processing methodologies are also described.",
"corpus_id": 53382129,
"title": "Chapter X Biomimetic Strategies to Engineer Mineralised Human Tissues"
} | {
"abstract": "Apatite layers were grown on the surface of newly developed starch/polycaprolactone (SPCL)-based scaffolds by a 3D plotting technology. To produce the biomimetic coatings, a sodium silicate gel was used as nucleating agent, followed by immersion in a simulated body fluid (SBF) solution. After growing a stable apatite layer for 7 days, the scaffolds were placed in SBF under static, agitated (80 strokes min(-1)) and circulating flow perfusion (Q=4 ml min(-1); t(R)=15s) for up to 14 days. The materials were characterized by scanning electron microscopy/energy dispersive X-ray spectroscopy, Fourier transform infrared spectroscopy and thin-film X-ray diffraction. Cross-sections were obtained and the coating thickness was measured. The elemental composition of solution and coatings was monitored by inductively coupled plasma spectroscopy. After only 6 h of immersion in SBF it was possible to observe the formation of small nuclei of an amorphous calcium phosphate (ACP) layer. After subsequent SBF immersion from 7 to 14 days under static, agitated and circulating flow perfusion conditions, these layers grew into bone-like nanocrystalline carbonated apatites covering each scaffold fiber without compromising its initial morphology. No differences in the apatite composition/chemical structure were detectable between the coating conditions. In case of flow perfusion, the coating thickness was significantly higher. This condition, besides mimicking better the biological milieu, allowed for the coating of complex architectures at higher rates, which can greatly reduce the coating step.",
"corpus_id": 1049500,
"title": "Nucleation and growth of biomimetic apatite layers on 3D plotted biodegradable polymeric scaffolds: effect of static and dynamic coating conditions."
} | {
"abstract": "Polymer scientists, working closely with those in the device and medical fields, have made tremendous advances over the past 30 years in the use of synthetic materials in the body. In this article we will focus on properties of biodegradable polymers which make them ideally suited for orthopedic applications where a permanent implant is not desired. The materials with the greatest history of use are the poly(lactides) and poly(glycolides), and these will be covered in specific detail. The chemistry of the polymers, including synthesis and degradation, the tailoring of properties by proper synthetic controls such as copolymer composition, special requirements for processing and handling, and mechanisms of biodegradation will be covered. An overview of biocompatibility and approved devices of particular interest in orthopedics are also covered.",
"corpus_id": 19290778,
"score": -1,
"title": "Synthetic biodegradable polymers as orthopedic devices."
} |
{
"abstract": "For decades, computer scientists have worked to develop an artificial intelligence for the game of Go intelligent enough to beat skilled human players. In 2016, Google accomplished just that with their program, AlphaGo. AlphaGo was a huge leap forward in artificial intelligence, but required quite a lot of computational power to run. The goal of our project was to take some of the techniques that make AlphaGo so powerful, and integrate them with a less resource intensive artificial intelligence. Specifically, we expanded on the work of last year’s MQP of integrating a neural network into an existing Go AI, Pachi. We rigorously tested the resultant program’s performance. We also used SPSA training to determine an adaptive value function so as to make the best use of the neural network.",
"corpus_id": 182092655,
"title": "Adaptive Neural Network Usage in Computer Go"
} | {
"abstract": "Abstract: The game of Go is more challenging than other board games, due to the difficulty of constructing a position or move evaluation function. In this paper we investigate whether deep convolutional networks can be used to directly represent and learn this knowledge. We train a large 12-layer convolutional neural network by supervised learning from a database of human professional games. The network correctly predicts the expert move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GnuGo in 97% of games, and matched the performance of a state-of-the-art Monte-Carlo tree search that simulates a million positions per move.",
"corpus_id": 7355762,
"title": "Move Evaluation in Go Using Deep Convolutional Neural Networks"
} | {
"abstract": "Simulation Balancing is an optimization algorithm to automatically tune the parameters of a playout policy used inside a Monte Carlo Tree Search. The algorithm fits a policy so that the expected result of a policy matches given target values of the training set. Up to now it has been successfully applied to Computer Go on small 9 × 9 boards but failed for larger board sizes like 19 × 19. On these large boards apprenticeship learning, which fits a policy so that it closely follows an expert, continues to be the algorithm of choice. In this paper we introduce several improvements to the original simulation balancing algorithm and test their effectiveness in Computer Go. The proposed additions remove the necessity to generate target values by deep searches, optimize faster and make the algorithm less prone to overfitting. The experiments show that simulation balancing improves the playing strength of a Go program using apprenticeship learning by more than 200 ELO on the large board size 19 × 19.",
"corpus_id": 11641766,
"score": -1,
"title": "Monte-Carlo simulation balancing revisited"
} |
{
"abstract": "Abstract The school type can greatly affect the posture and psychological state of adolescent girls by controlling many factors, such as school bag load and way of carriage, furniture-anthropometry degree of matching, and availability of resources. This study aimed to compare postural changes and psychological aspects of adolescent Egyptian girls, attending two different school types: public versus private schools. An observational study was conducted on 200 adolescent girls, whose ages ranged from 14 to 17 years, with a body mass index of 19–25 kg/m2, selected from two different school types (two public and another two private) at El-Sharkia government, Egypt. They were assigned into two equal groups: group (A), including 100 public schoolgirls, and group (B) involving 100 international schoolgirls. Postural and mechanical changes were assessed at the head, rib cage, shoulders, hips, and knees in both groups by the Posture Screen® Mobile app and a photographic method from lateral and frontal views while standing. Assessing anxiety, depression, and stress signals was done using depression anxiety stress scales-21. The girls of public and international schools had different schoolbag characteristics but assumed the same poor posture. The results indicated more significant changes in the public schoolgirls of group (A) (p ˂ 0.05) in the head shift, shoulder tilting, hip shift, hip tilting, total shift, and total tilting, compared to the international girls of the group (B), while there were more changes in head weight, and effective head weight, indicating more forward head in the international girls of the group (B). For psychological aspects, the public-school girls had statistically significantly higher stress and anxiety scores (p ˂ 0.05) than the international schoolgirls. Thus, it could be concluded that the school type may affect the posture and psychology of Egyptian adolescent girls, as the girls at the public schools had more coronal postural changes together with more stress and anxiety than the girls enrolled in the private schools, while girls of the private schools had more forward head posture.",
"corpus_id": 253545313,
"title": "Impact of school type on the posture and psychological state of Egyptian adolescent girls: An observational study"
} | {
"abstract": "[Purpose] Measurement of posture is important for those with a clinical diagnosis as well as researchers aiming to understand the impact of faulty postures on the development of musculoskeletal disorders. A reliable, cost-effective and low tech posture measure may be beneficial for research and clinical applications. The purpose of this study was to determine rater reliability and construct validity of a posture screening mobile application in healthy young adults. [Subjects and Methods] Pictures of subjects were taken in three standing positions. Two raters independently digitized the static standing posture image twice. The app calculated posture variables, including sagittal and coronal plane translations and angulations. Intra- and inter-rater reliability were calculated using the appropriate ICC models for complete agreement. Construct validity was determined through comparison of known groups using repeated measures ANOVA. [Results] Intra-rater reliability ranged from 0.71 to 0.99. Inter-rater reliability was good to excellent for all translations. ICCs were stronger for translations versus angulations. The construct validity analysis found that the app was able to detect the change in the four variables selected. [Conclusion] The posture mobile application has demonstrated strong rater reliability and preliminary evidence of construct validity. This application may have utility in clinical and research settings.",
"corpus_id": 46770741,
"title": "Rater reliability and construct validity of a mobile application for posture analysis"
} | {
"abstract": "Question Is neck/shoulder pain in adolescents related to their sitting spinal posture, taking account of gender? Design Cross-sectional survey and direct observation. Participants 1597 adolescents from the ‘Raine' birth cohort study (781 females, 816 males) with a mean age of 14.1 years (SD 0.2). Outcome measures Neck/shoulder pain prevalence and gender was measured by survey. Spinal posture (7 angles) during sitting was measured from photographs. Results Life, month, and point prevalence for neck/shoulder pain among adolescents were 47%, 29%, and 5% respectively. Life prevalence was 10% higher in females than in males and month prevalence was 12% higher. When looking straight ahead, females sat with 2 degrees (95% CI 1 to 3) less neck flexion, 2 degrees (95% CI 0 to 3) less craniocervical angle, 7 degrees (95% CI 6 to 8) less cervicothoracic angle, 13 degrees (95% CI 12 to 14) less trunk angle, 10 degrees (95% CI 8 to 12) less lumbar angle, and 9 degrees (95% CI 7 to 11) more anterior pelvic tilt than males. Adolescents with neck/shoulder pain sat with 2 degrees (95% CI 1 to 3) less trunk angle, and 1 degree (95% CI 0 to 2) less cervicothoracic angle than those without pain. After controlling for gender, OR for neck/shoulder pain ever predicted by any angle ranged from 0.99 to 1.00 (range of 95% CI 0.98 to 1.01). Conclusion Neck/shoulder pain is highly prevalent in Australian adolescents. Sitting spinal posture differs between males and females and differs slightly between those with and without neck/shoulder pain. However, posture was not predictive of neck/shoulder pain ever after controlling for gender.",
"corpus_id": 6240085,
"score": -1,
"title": "Sitting spinal posture in adolescents differs between genders, but is not clearly related to neck/shoulder pain: an observational study."
} |
{
"abstract": "The data that have been collected from different resources might be redundant and duplicate. These data need to be cleaned in order for it to be used for other processing. The data should undergo detection process for any occurrence of duplication in the datasets. Two strategies are used to identify duplicates which are windowing or blocking. The aims of this paper are to review, to analyze and to compare algorithms in order to find the most efficient in terms of better accuracy and less number of comparisons. A comparison was made with the five most popular algorithms: DYSNI, PSNM, Dedup, InnWin and DCS++. Two benchmark datasets were used for the experiment, which are Restaurant and Cora. The results reveal that the DYNSI algorithm using both datasets gives high accuracy with respect to the number of comparisons. It is hoped that the results obtained from this study able to give the best review and comparison among the existing algorithms in producing high quality data and serve as a guidance to implement a better initiative for data storage system.",
"corpus_id": 197675619,
"title": "AN EFFICIENT ALGORITHM FOR DATA CLEANSING 1"
} | {
"abstract": "Duplicate detection is the process of identifying multiple representations of same real world entities. Today, duplicate detection methods need to process ever larger datasets in ever shorter time: maintaining the quality of a dataset becomes increasingly difficult. We present two novel, progressive duplicate detection algorithms that significantly increase the efficiency of finding duplicates if the execution time is limited: They maximize the gain of the overall process within the time available by reporting most results much earlier than traditional approaches. Comprehensive experiments show that our progressive algorithms can double the efficiency over time of traditional duplicate detection and significantly improve upon related work.",
"corpus_id": 8509218,
"title": "Progressive Duplicate Detection"
} | {
"abstract": "Entity matching that finds records referring to the same entity is an important operation in data cleaning and integration. Existing studies usually use a given similarity function to quantify the similarity of records, and focus on devising index structures and algorithms for efficient entity matching. However it is a big challenge to define \"how similar is similar\" for real applications, since it is rather hard to automatically select appropriate similarity functions. In this paper we attempt to address this problem. As there are a large number of similarity functions, and even worse thresholds may have infinite values, it is rather expensive to find appropriate similarity functions and thresholds. Fortunately, we have an observation that different similarity functions and thresholds have redundancy, and we have an opportunity to prune inappropriate similarity functions. To this end, we propose effective optimization techniques to eliminate such redundancy, and devise efficient algorithms to find the best similarity functions. The experimental results on both real and synthetic datasets show that our method achieves high accuracy and outperforms the baseline algorithms.",
"corpus_id": 18536078,
"score": -1,
"title": "Entity Matching: How Similar Is Similar"
} |
{
"abstract": "This work addresses the direction-of-arrival (DOA) estimation issue with multiple noncoherent subarrays. We use a maximum likelihood approach to derive a weighted MUSIC (w-MUSIC) algorithm for such arrays, which obtains the overall spatial spectrum via combining the weighted MUSIC spectrum of the subarrays. Theoretical analysis and numerical examples demonstrate that the w-MUSIC algorithm has a better performance compared to a previously introduced MUSIC algorithm for noncoherent subarrays.",
"corpus_id": 16126533,
"title": "Improved MUSIC Algorithm for Multiple Noncoherent Subarrays"
} | {
"abstract": "In this paper, a new direction-of-arrival (DOA) estimation technique applicable to partly-calibrated arrays (PCAs) composed of arbitrary subarrays with unknown subarray displacements is developed. The new method is not restricted to any specific array geometry and allows joint estimation of the DOAs and calibration of the entire sensor array. Computer simulations show that the proposed approach substantially outperforms the known DOA estimation methods applicable to such PCAs.",
"corpus_id": 2934315,
"title": "Direction-of-arrival estimation and array calibration for partly-calibrated arrays"
} | {
"abstract": "A sparse recovery approach for direction finding in partly calibrated arrays composed of subarrays with unknown displacements is introduced. The proposed method is based on mixed nuclear norm and $\\ell _1$ norm minimization and exploits block-sparsity and low-rank structure in the signal model. For efficient implementation a compact equivalent problem reformulation is presented. The new technique is applicable to subarrays of arbitrary topologies and grid-based sampling of the subarray manifolds. In the special case of subarrays with a common baseline our new technique admits extension to a gridless implementation. As shown by simulations, our new block- and rank-sparse direction finding technique for partly calibrated arrays outperforms the state of the art method RARE in difficult scenarios of low sample numbers, low signal-to-noise ratio, or correlated signals.",
"corpus_id": 3318196,
"score": -1,
"title": "Block- and Rank-Sparse Recovery for Direction Finding in Partly Calibrated Arrays"
} |
{
"abstract": "While the endogenous fatty acid amide oleamide has hypnotic properties, neither the breadth of its behavioral actions nor the mechanism(s) by which these behaviors may be mediated has been elucidated. Therefore, the effects of oleamide on the performance of rats in tests of motor function, analgesia, and anxiety were investigated. Oleamide reduced the distance traveled in the open field (ED50 = 14, 10-19 mg/kg, mean, 95% confidence interval), induced analgesia and hypothermia, but did not cause catalepsy. Moreover, a dose of oleamide without effect on motor function was anxiolytic in the social interaction test and elevated plus-maze. These actions of a single dose of oleamide lasted for 30 to 60 min. While rats became tolerant to oleamide following 8 days of repeated administration, oleamide is a poor inducer of physical dependence. Pretreatment with antagonists of the serotonin (5HT)1A, 5HT2C, and vanilloid receptors did not modify oleamide's effects. However, the cannabinoid receptor antagonist SR 141716A inhibited oleamide-induced analgesia in the tail-flick assay, the gamma-aminobutyric acid (GABA)A receptor antagonist bicuculline reversed the analgesia and hypothermia, and the dopamine D2 receptor antagonist L 741626 blocked oleamide's locomotor and analgesic actions. Interestingly, oleamide analogs resistant to hydrolysis by fatty acid amide hydrolase (FAAH) maintained but did not show increased behavioral potency or duration of action, whereas two FAAH inhibitors produced analogous behavioral effects. Thus, oleamide induces behaviors reminiscent of the actions of endogenous cannabinoids, but the involvement of GABAergic and dopaminergic systems, either directly or indirectly, in the actions of oleamide cannot be ruled out.",
"corpus_id": 1566942,
"title": "Behavioral evidence for the interaction of oleamide with multiple neurotransmitter systems."
} | {
"abstract": "Abstract Rationale: There is evidence that cannabinoids cause tolerance and physical dependence in humans and animals. Objectives: The aim of this work was to study whether the endogenous ligand for the cannabinoid receptor, arachidonylethanolamide (anandamide), induced behavioral tolerance and physical dependence in rats. Methods: Rats were injected with anandamide (20 mg/kg IP) daily for 2 weeks. To assess tolerance, on days 1, 8 and 15 of treatment rats were observed and behavior was tested. Two common methods were employed to assess physical dependence: interruption of anandamide dosing and vehicle substitution or administration of the cannabinoid CB1 receptor antagonist SR141716A (3 mg/kg IP). Results: Full or partial tolerance developed to the classical behavioral effects elicited by the cannabinoids: hypothermia, catalepsy, hypomotility, decrease in stereotypic activity (rearing and grooming) and hindlimb splaying. No tolerance to anandamide was observed for reduced defecation. An abstinence syndrome appeared after abrupt cessation of cannabinoid intake and after withdrawal precipitated by SR141716A; the withdrawal signs were scratching, licking and biting, eating of feces, ptosis, arched back, wet dog shakes, head shakes, myoclonic spasms, writhing, forepaw fluttering, teeth chattering and piloerection. Conclusions: These findings indicate that the endogenous cannabinoid ligand, administered exogenously, induces both tolerance and physical dependence in rats.\n",
"corpus_id": 544941,
"title": "Precipitated and spontaneous withdrawal in rats tolerant to anandamide"
} | {
"abstract": "Cannabis is one of the most widely used drugs throughout the world. The psychoactive constituent of cannabis, delta 9-tetrahydrocannabinol (delta 9-THC), produces a myriad of pharmacological effects in animals and humans. For many decades, the mechanism of action of cannabinoids, compounds which are structurally similar to delta 9-THC, was unknown. Tremendous progress has been made recently in characterizing cannabinoid receptors both centrally and peripherally and in studying the role of second messenger systems at the cellular level. Furthermore, an endogenous ligand, anandamide, for the cannabinoid receptor has been identified. Anandamide is a fatty-acid derived compound that possesses pharmacological properties similar to delta 9-THC. The production of complex behavioral events by cannabinoids is probably mediated by specific cannabinoid receptors and interactions with other neurochemical systems. Cannabis also has great therapeutic potential and has been used for centuries for medicinal purposes. However, cannabinoid-derived drugs on the market today lack specificity and produce many unpleasant side effects, thus limiting therapeutic usefulness. The advent of highly potent analogs and a specific antagonist may make possible the development of compounds that lack undesirable side effects. The advancements in the field of cannabinoid pharmacology should facilitate our understanding of the physiological role of endogenous cannabinoids.",
"corpus_id": 24516672,
"score": -1,
"title": "Cannabis: pharmacology and toxicology in animals and humans."
} |
{
"abstract": "Abstract Accurate soil-moisture monitoring is essential for water-resource management and agricultural applications, and is now widely undertaken using satellite remote sensing or terrestrial hydrological models’ products. While both methods have limitations, e.g. the limited soil depth resolution of space-borne data and data deficiencies in models, data-assimilation techniques can provide an alternative approach. Here, we use the recently developed data-driven Kalman–Takens approach to integrate satellite soil-moisture products with those of the Australian Water Resources Assessment system Landscape (AWRA-L) model. This is done to constrain the model’s soil-moisture simulations over Australia with those observed from the Advanced Microwave Scanning Radiometer-Earth Observing System and Soil-Moisture and Ocean Salinity between 2002 and 2017. The main objective is to investigate the ability of the integration framework to improve AWRA-L simulations of soil moisture. The improved estimates are then used to investigate spatiotemporal soil-moisture variations. The results show that the proposed model-satellite data integration approach improves the continental soil-moisture estimates by increasing their correlation to independent in situ measurements (∼10% relative to the non-assimilation estimates). Highlights Satellite soil-moisture measurements are used to improve model simulation. A data-driven approach based on Kalman–Takens is applied. The applied data-integration approach improves soil-moisture estimates.",
"corpus_id": 197579088,
"title": "Integrating satellite soil-moisture estimates and hydrological model products over Australia"
} | {
"abstract": "Climate change can significantly influence terrestrial water changes around the world particularly in places that have been proven to be more vulnerable such as Bangladesh. In the past few decades, climate impacts, together with those of excessive human water use have changed the country's water availability structure. In this study, we use multi-mission remotely sensed measurements along with a hydrological model to separately analyze groundwater and soil moisture variations for the period 2003-2013, and their interactions with rainfall in Bangladesh. To improve the model's estimates of water storages, terrestrial water storage (TWS) data obtained from the Gravity Recovery And Climate Experiment (GRACE) satellite mission are assimilated into the World-Wide Water Resources Assessment (W3RA) model using the ensemble-based sequential technique of the Square Root Analysis (SQRA) filter. We investigate the capability of the data assimilation approach to use a non-regional hydrological model for a regional case study. Based on these estimates, we investigate relationships between the model derived sub-surface water storage changes and remotely sensed precipitations, as well as altimetry-derived river level variations in Bangladesh by applying the empirical mode decomposition (EMD) method. A larger correlation is found between river level heights and rainfalls (78% on average) in comparison to groundwater storage variations and rainfalls (57% on average). The results indicate a significant decline in groundwater storage (∼32% reduction) for Bangladesh between 2003 and 2013, which is equivalent to an average rate of 8.73 ± 2.45mm/year.",
"corpus_id": 4477531,
"title": "A study of Bangladesh's sub-surface water storages using satellite products and data assimilation scheme."
} | {
"abstract": "Background Variation in the behavioural repertoire of animals is acquired by learning in a range of animal species. In nest-building birds, the assemblage of nest materials in an appropriate structure is often typical of a bird genus or species. Yet plasticity in the selection of nest materials may be beneficial because the nature and abundance of nest materials vary across habitats. Such plasticity can be learned, either individually or socially. In Corsican populations of blue tits Cyanistes caeruleus, females regularly add in their nests fragments of several species of aromatic plants during the whole breeding period. The selected plants represent a small fraction of the species present in the environment and have positive effects on nestlings. Methodology/Principal Findings We investigated spatiotemporal variations of this behaviour to test whether the aromatic plant species composition in nests depends on 1) plant availability in territories, 2) female experience or 3) female identity. Our results indicate that territory plays a very marginal role in the aromatic plant species composition of nests. Female experience is not related to a change in nest plant composition. Actually, this composition clearly depends on female identity, i.e. results from individual preferences which, furthermore, are repeatable both within and across years. A puzzling fact is the strong difference in plant species composition of nests across distinct study plots. Conclusions/Significance This study demonstrates that plant species composition of nests results from individual preferences that are homogeneous within study plots. We propose several hypotheses to interpret this pattern of spatial variation before discussing them in the light of preliminary results. As a conclusion, we cannot exclude the possibility of social transmission of individual preferences for aromatic plants. This is an exciting perspective for further work in birds, where nest construction behaviour has classically been considered as a stereotypic behaviour.",
"corpus_id": 3773679,
"score": -1,
"title": "Local Individual Preferences for Nest Materials in a Passerine Bird"
} |
{
"abstract": "Software quality is the degree to which a component, system or process meets specified requirements and meets customer or user needs or expectations. Software quality is best described as a combination of several factors. The aim of this paper was to investigate the measures available to determine different quality factors. The identification of factors and as well as the metrics and measures was done on the basis of the literature survey by studying and analysis various research papers. The results benefit software developers, researchers and academicians to easily identify the metrics used to measure the quality characteristics of the software. Furthermore, the work aimed at providing some suggestions, using the potential deficiencies detected as a basis.",
"corpus_id": 17554715,
"title": "A Report on the Analysis of Metrics and Measures on Software Quality Factors – A Literature Study"
} | {
"abstract": "Technical documentation is now fully taking the step from stale printed booklets (or electronic versions of these) to interactive and online versions. This provides opportunities to reconsider how we define and assess the quality of technical documentation. This paper suggests an approach based on the Goal-Question-Metric paradigm: predefined quality goals are continuously assessed and visualized by the use of metrics. To test this approach, we perform two experiments. We adopt well known software analysis techniques, e.g., clone detection and test coverage analysis, and assess the quality of two real world documentations, that of a mobile phone and of (parts of) a warship. The experiments show that quality issues can be identified and that the approach is promising.",
"corpus_id": 16163883,
"title": "A Metrics-Based Approach to Technical Documentation Quality"
} | {
"abstract": null,
"corpus_id": 17554715,
"score": -1,
"title": "A Report on the Analysis of Metrics and Measures on Software Quality Factors – A Literature Study"
} |
{
"abstract": "The ability of Google Trends data to forecast the number of new daily cases and deaths of COVID-19 is examined using a dataset of 158 countries. The analysis includes the computations of lag correlations between confirmed cases and Google data, Granger causality tests, and an out-of-sample forecasting exercise with 18 competing models with a forecast horizon of 14 days ahead. This evidence shows that Google-augmented models outperform the competing models for most of the countries. This is significant because Google data can complement epidemiological models during difficult times like the ongoing COVID-19 pandemic, when official statistics maybe not fully reliable and/or published with a delay. Moreover, real-time tracking with online-data is one of the instruments that can be used to keep the situation under control when national lockdowns are lifted and economies gradually reopen.",
"corpus_id": 221781231,
"title": "Short-term Forecasting of the COVID-19 Pandemic using Google Trends Data: Evidence from 158 Countries"
} | {
"abstract": "We propose the use of Google online search data for nowcasting and forecasting the number of food stamps recipients. We perform a large out-of-sample forecasting exercise with almost 3000 competing models with forecast horizons up to 2 years ahead, and we show that models including Google search data statistically outperform the competing models at all considered horizons. These results hold also with several robustness checks, considering alternative keywords, a falsification test, different out-of-samples, directional accuracy and forecasts at the state-level.",
"corpus_id": 804026,
"title": "Nowcasting and Forecasting the Monthly Food Stamps Data in the US Using Online Search Data"
} | {
"abstract": "It is widely agreed in empirical studies that allowing for potential structural change in economic processes is an important issue. In existing literature, tests for cointegration between time series data allow for one regime shift. This paper extends three residual-based test statistics for cointegration to the cases that take into account two possible regime shifts. The timing of each shift is unknown a priori and it is determined endogenously. The distributions of the tests are non-standard. We generate new critical values via simulation methods. The size and power properties of these test statistics are evaluated through Monte Carlo simulations, which show the tests have small size distortions and very good power properties. The test methods introduced in this paper are applied to determine whether the financial markets in the US and the UK are integrated.",
"corpus_id": 153437469,
"score": -1,
"title": "Tests for cointegration with two unknown regime shifts with an application to financial market integration"
} |
{
"abstract": "In this paper, the design, analysis, and room-temperature performance of two W-band LNA MMICs fabricated in two different technology variations are presented. The investigation demonstrates the noise improvement of the given 50-nm gate-length InGaAs mHEMT technology with reduced necessary drain currents. Therefore, a single-ended and balanced W-band LNA MMIC were designed, fabricated, and characterized. The amplifiers exhibit state-of-the-art noise temperatures with an average value for the single-ended LNA of 159 K (1.9 dB) with lowest values of 132 K (1.6 dB). Due to the technology investigation it was possible to reduce the noise temperature by about 15 K compared to the reference technology in combination with superior MMIC yield.",
"corpus_id": 198930963,
"title": "W-Band LNA MMICs Based on a Noise-Optimized 50-nm Gate-Length Metamorphic HEMT Technology"
} | {
"abstract": "Based on two low-noise amplifier (LNA) millimeter-wave integrated circuits (MMICs), this paper reports on a comparison between a 35-nm and a 50-nm gate-length metamorphic high-electron-mobility transistor technology. The LNA targets applications in an extended W-band with an operating frequency between 67–116 GHz. Both MMICs yield an |S21| of at least 20 dB for more than an octave bandwidth. The average|S21| of the 35-nm (LNA 1) and 50-nm LNA (LNA 2) is 26.2 dB and 25 dB, respectively. The measured noise figure of LNA 1 and LNA 2 achieves an excellent average value for the entire W-band (75–110 GHz) of 1.9 dB and 2.1 dB, respectively. To the best of the authors' knowledge LNA 1 is the first MMIC which yields an average noise figure of 1.9 dB over the entire W-band.",
"corpus_id": 2455156,
"title": "Comparison of a 35-nm and a 50-nm gate-length metamorphic HEMT technology for millimeter-wave low-noise amplifier MMICs"
} | {
"abstract": "Neocortical \"theta\" oscillation (5-12 Hz) has been observed in animals and human subjects but little is known about how the oscillation is organized in the cortical intrinsic networks. Here we use voltage-sensitive dye and optical imaging to study a carbachol/bicuculline induced theta ( approximately 8 Hz) oscillation in rat neocortical slices. The imaging has large signal-to-noise ratio, allowing us to map the phase distribution over the neocortical tissue during the oscillation. The oscillation was organized as spontaneous epochs and each epoch was composed of a \"first spike,\" a \"regular\" period (with relatively stable frequency and amplitude), and an \"irregular\" period (with variable frequency and amplitude) of oscillations. During each cycle of the regular oscillation, one wave of activation propagated horizontally (parallel to the cortical lamina) across the cortical section at a velocity of approximately 50 mm/s. Vertically the activity was synchronized through all cortical layers. This pattern of one propagating wave associated with one oscillation cycle was seen during all the regular cycles. The oscillation frequency varied noticeably at two neighboring horizontal locations (330 microm apart), suggesting that the oscillation is locally organized and each local oscillator is about </=300 microm wide horizontally. During irregular oscillations, the spatiotemporal patterns were complex and sometimes the vertical synchronization decomposed, suggesting a de-coupling among local oscillators. Our data suggested that neocortical theta oscillation is sustained by multiple local oscillators. The coupling regime among the oscillators may determine the spatiotemporal pattern and switching between propagating waves and irregular patterns.",
"corpus_id": 1067546,
"score": -1,
"title": "Propagating wave and irregular dynamics: spatiotemporal patterns of cholinergic theta oscillations in neocortex in vitro."
} |
{
"abstract": "Recently, in a paper published in IEEE Transactions on Intelligent Transportation Systems, the authors proposed a genetic algorithm-based solution approach to the transit coordination problem using integer-ratio headways. This note provides more accurate mathematical formulations of the cost components of the objective functions considered in the transit coordination optimization, to improve precision in evaluating the cost effectiveness of the coordinated transit timetable design. In addition, this note discusses a new cost component and some feasible, real-time, operational control strategies and tactics that can be used to achieve better transit transfer synchronization.",
"corpus_id": 19536172,
"title": "A Note on Transit Coordination Using Integer-Ratio Headways"
} | {
"abstract": "Coordination of transit routes is essential in reducing the travel time for connecting passengers, thus improving the service quality of the transit system. Perfect coordination can be achieved through the usage of common headways; however, this leads to an increase in operational costs, particularly when the variance of route headways is high. In this case, route coordination can be achieved through integer-ratio headways, where the headway of each coordinated route is an integer multiple of a base cycle. In this paper, we propose a novel genetic algorithm that creates clusters of routes whose coordination reduces the transfer time for connecting passengers. The objective is to minimize the total system cost, which includes the in-vehicle, waiting, and transfer costs for all the passengers served by the transit system and the operating cost of all transit vehicles. The experimental study conducted on one transit network from literature and a new network based on the Istanbul rail system demonstrates that this approach produces superior results compared with literature.",
"corpus_id": 1292406,
"title": "Transit Coordination Using Integer-Ratio Headways"
} | {
"abstract": "A low-temperature wafer-level hybrid bonding process using micro Cu pillar solder bumps and photopatternable dry film adhesive is developed and investigated, where the microbumps and dry film adhesive are simultaneously bonded at a low temperature of 240 °C. The proposed hybrid bonding method has been applied to an 8-in wafer-to-wafer bonding. The effects of two kinds of bonding profiles, i.e., conventional bonding profile and optimized step applying force bonding profile, on post-bonding misalignment are evaluated, and the results show that a misalignment below $5~\\mu \\text{m}$ is achieved using optimized step applying force bonding profile. In addition, by optimizing the total thickness difference between the bump part and the dry film adhesive part and by increasing the bonding force to 13 KN, a seam-free hybrid bonding interface can be achieved, which shows an average shear strength of about 21.3 MPa. Herein, the proposed method is highly cost-effective and promising for wafer-level low-temperature hybrid bonding aimed at the future 3-D integrated circuit integration.",
"corpus_id": 52965024,
"score": -1,
"title": "Optimization and Characterization of Low-Temperature Wafer-Level Hybrid Bonding Using Photopatternable Dry Film Adhesive and Symmetric Micro Cu Pillar Solder Bumps"
} |
{
"abstract": "Demand forecasts are the basis of most decisions in supply chain management. The granularity of these decisions lead to different forecast requirements. For example, inventory replenishment decisions require forecasts at the individual SKU level over lead time, whereas forecasts at higher levels, over longer horizons, are required for supply chain strategic decisions. The most accurate forecasts are not always obtained from data at the 'natural' level of aggregation. In some cases, forecast accuracy may be improved by aggregating data or forecasts at lower levels, or disaggregating data or forecasts at higher levels, or by combining forecasts at multiple levels of aggregation. Temporal and cross-sectional aggregation approaches are well established in the literature. More recently, it has been argued that these two approaches do not make the fullest use of data available at the different hierarchical levels of the supply chain. Therefore, consideration of forecasting hierarchies (over time and other dimensions), and combinations of forecasts across hierarchical levels, have been recommended. This paper provides a comprehensive review of research dealing with aggregation and hierarchical forecasting in supply chains, based on a systematic search. The review enables the identification of major research gaps and the presentation of an agenda for further research.",
"corpus_id": 244939910,
"title": "Demand forecasting in supply chains: a review of aggregation and hierarchical approaches"
} | {
"abstract": "Intermittent demand is characterized by occasional demand arrivals interspersed by time intervals during which no demand occurs. These demand patterns pose considerable difficulties in terms of forecasting and stock control due to their compound nature, which implies variability both in terms of demand arrivals and demand sizes. An intuitively appealing strategy to deal with such patterns from a forecasting and stock control perspective is to aggregate demand in lower-frequency ‘time buckets’, thereby reducing the presence of zero observations. In this paper, we investigate the impact of forecasting aggregation on the stock control performance of intermittent demand patterns. The benefit of the forecasting aggregation approach is empirically assessed by means of analysis on a large demand dataset from the Royal Air Force (UK). The results show that the aggregation forecasting approach results in higher achieved service levels as compared to the classical forecasting approach. Moreover, when the combined service-cost performance is considered, the results also show that the former approach is more efficient than the latter, especially for high target service levels.",
"corpus_id": 153544510,
"title": "Impact of temporal aggregation on stock control performance of intermittent demand estimators: Empirical analysis"
} | {
"abstract": "Applies time‐series forecasting, a traditional operations analysis methodology, to develop a forecasting procedure and ordering policy for a natural‐gas customer of Columbia Gas of Ohio, USA. Evaluates six time‐series methods and four operating policies against four commonly used measures of error and the cost consequences of error to the customer. Demonstrates that time‐series forecasting and decision theory developed by operations and applied in an actual industrial situation can become a powerful marketing technique. Provides further insights into evaluating forecasting models and ordering policies, demonstrating that introducing optimal planned bias is a robust decision‐making/forecasting approach within services. There are three parts to the study. The first is a straightforward testing of forecasting methods, using the forecasts as the natural‐gas ordering policy. Results vary depending upon how well forecasts are fitted to the data. For example, one inaccurate forecast with a poor fit incurs a pena...",
"corpus_id": 153746445,
"score": -1,
"title": "Applying Contemporary Forecasting and Computer Technology for Competitive Advantage in Service Operations"
} |
{
"abstract": "This paper presents the fabrication process of single-crystalline diamond platform used for high power RF components. We report-for the first time to the best of our knowledge-results of a Coplanar Waveguide (CPW) transmission line printed on the single-crystalline diamond substrate using the Aerosol Jet Printing technique. The transmission line is 2.44 mm long and is printed on the 3.5 mm × 3.5 mm × 0.3 mm diamond substrate utilizing a silver ink as the conducting material. The characteristic impedance of the CPW line is designed to be 50 Ω. The measured average loss per millimeter of the line is 0.28 dB/mm and 0.46 dB/mm at 20 GHz and 40 GHz, respectively. This results show the single-crystalline diamond substrate is a good candidate for the development of highly integrated RF circuits.",
"corpus_id": 22634655,
"title": "RF characterization of coplanar waveguide (CPW) transmission lines on single-crystalline diamond platform for integrated high power RF electronic systems"
} | {
"abstract": "This paper presents low-loss 3-D transmission lines and vertical interconnects fabricated by aerosol jet printing (AJP) which is an additive manufacturing technology. AJP stacks up multiple layers with minimum feature size as small as 20 $\\mu \\text{m}$ in the $xy$ -direction and 0.7 $\\mu \\text{m}$ in the z-direction. It also solves the problem of fabricating vias to realize the vertical transition by 3-D printing. The loss of the stripline is measured to be 0.53 dB/mm at 40 GHz. The vertical transition achieves a broadband bandwidth from 0.1 to 40 GHz. The results of this paper demonstrate the feasibility of utilizing 3-D printing for low-cost multilayer system-on-package RF/millimeter-wave front-ends.",
"corpus_id": 16951190,
"title": "Low-Loss 3-D Multilayer Transmission Lines and Interconnects Fabricated by Additive Manufacturing Technologies"
} | {
"abstract": "This paper examines the security vulnerabilities and threats imposed by the inherent open nature of wireless communications and to devise efficient defense mechanisms for improving the wireless network security. We first summarize the security requirements of wireless networks, including their authenticity, confidentiality, integrity and availability issues. Next, a comprehensive overview of security attacks encountered in wireless networks is presented in view of the network protocol architecture, where the potential security threats are discussed at each protocol layer. We also provide a survey of the existing security protocols and algorithms that are adopted in the existing wireless network standards, such as the Bluetooth, Wi-Fi, WiMAX, and the long-term evolution (LTE) systems. Then, we discuss the state-of-the-art in physical-layer security, which is an emerging technique of securing the open communications environment against eavesdropping attacks at the physical layer. We also introduce the family of various jamming attacks and their counter-measures, including the constant jammer, intermittent jammer, reactive jammer, adaptive jammer and intelligent jammer. Additionally, we discuss the integration of physical-layer security into existing authentication and cryptography mechanisms for further securing wireless networks. Finally, some technical challenges which remain unresolved at the time of writing are summarized and the future trends in wireless security are discussed.",
"corpus_id": 6779551,
"score": -1,
"title": "A Survey on Wireless Security: Technical Challenges, Recent Advances, and Future Trends"
} |
{
"abstract": "To promote engineering self-aware and self-adaptive software systems in a reusable manner, architectural patterns and the related methodology provide an unified solution to handle the recurring problems in the engineering process. However, in existing patterns and methods, domain knowledge and engineers’ expertise that is built over time are not explicitly linked to the self-aware processes. This link is important, as knowledge is a valuable asset for the related problems and its absence would cause unnecessary overhead, possibly misleading results, and unwise waste of the tremendous benefits that could have been brought by the domain expertise. This article highlights the importance of synergizing domain expertise and the self-awareness to enable better self-adaptation in software systems, relying on well-defined expertise representation, algorithms, and techniques. In particular, we present a holistic framework of notions, enriched patterns and methodology, dubbed DBASES, that offers a principled guideline for the engineers to perform difficulty and benefit analysis on possible synergies, in an attempt to keep “engineers-in-the-loop.” Through three tutorial case studies, we demonstrate how DBASES can be applied in different domains, within which a carefully selected set of candidates with different synergies can be used for quantitative investigation, providing more informed decisions of the design choices.",
"corpus_id": 210838951,
"title": "Synergizing Domain Expertise With Self-Awareness in Software Systems: A Patternized Architecture Guideline"
} | {
"abstract": "Autoscaling system can reconfigure cloud-based services and applications, through various configurations of cloud software and provisions of hardware resources, to adapt to the changing environment at runtime. Such a behavior offers the foundation for achieving elasticity in a modern cloud computing paradigm. Given the dynamic and uncertain nature of the shared cloud infrastructure, the cloud autoscaling system has been engineered as one of the most complex, sophisticated, and intelligent artifacts created by humans, aiming to achieve self-aware, self-adaptive, and dependable runtime scaling. Yet the existing Self-aware and Self-adaptive Cloud Autoscaling System (SSCAS) is not at a state where it can be reliably exploited in the cloud. In this article, we survey the state-of-the-art research studies on SSCAS and provide a comprehensive taxonomy for this field. We present detailed analysis of the results and provide insights on open challenges, as well as the promising directions that are worth investigated in the future work of this area of research. Our survey and taxonomy contribute to the fundamentals of engineering more intelligent autoscaling systems in the cloud.",
"corpus_id": 4640616,
"title": "A Survey and Taxonomy of Self-Aware and Self-Adaptive Cloud Autoscaling Systems"
} | {
"abstract": "With object-oriented programming languages, Object Relational Mapping (ORM) frameworks such as Hibernate have gained popularity due to their ease of use and portability to different relational database management systems. Hibernate implements the Java Persistent API, JPA, and frees a developer from authoring software to address the impedance mismatch between objects and relations. In this paper, we evaluate the performance of Hibernate by comparing it with a native JDBC implementation using a benchmark named BG. BG rates the performance of a system for processing interactive social networking actions such as view profile, extend an invitation from one member to another, and other actions. Our key findings are as follows. First, an object-oriented Hibernate implementation of each action issues more SQL queries than its JDBC counterpart. This enables the JDBC implementation to provide response times that are significantly faster. Second, one may use the Hibernate Query Language (HQL) to refine the object-oriented Hibernate implementation to provide performance that approximates the JDBC implementation.",
"corpus_id": 13868532,
"score": -1,
"title": "An Evaluation of the Hibernate Object-Relational Mapping for Processing Interactive Social Networking Actions"
} |
{
"abstract": "The movement and transport of people and goods is spatial by its very nature. Thus, geospatial fundamentals of transport systems need to be adequately considered in transport models. Until recently, this was not always the case. Instead, transport research and geography evolved widely independently in domain silos. However, driven by recent conceptual, methodological and technical developments, the need for an integrated approach is obvious. This paper attempts to outline the potential of Geographical Information Systems (GIS) for transport modeling. We identify three fields of transport modeling where the spatial perspective can significantly contribute to a more efficient modeling process and more reliable model results, namely, geospatial data, disaggregated transport models and the role of geo-visualization. For these three fields, available findings from various domains are compiled, before open aspects are formulated as research directions, with exemplary research questions. The overall aim of this paper is to strengthen the spatial perspective in transport modeling and to call for a further integration of GIS in the domain of transport modeling.",
"corpus_id": 8463810,
"title": "GIS and Transport Modeling - Strengthening the Spatial Perspective"
} | {
"abstract": "Abstract The ‘human activity approach’ to the study of travel behaviour represents a synthesis of concepts and analytic approaches partially drawn from several subdisciplines concerned with human spatial behaviour. Underlying the approach is the widely accepted view that travel demand emerges in response to individual and household requirements for activity participation. Study of the literature reveals a diverse array of research interests, equalled by the application of a broad assortment of modelling approaches and tools for analysis. The paper begins with a discussion of several conceptual issues that, if addressed, could enhance the behavioural rigour of on‐going research. The rest of the paper updates the literature with respect to state of the art and emerging approaches to activity–travel analysis and modelling. Overall, it is concluded that the advancement of new modelling concepts and approaches, in the presence of substantial methodological diversity, needs to be balanced with research into the kinds of behavioural and analytic issues raised in the paper.",
"corpus_id": 154491138,
"title": "Activity–Travel Behaviour Research: Conceptual Issues, State of the Art, and Emerging Perspectives on Behavioural Analysis and Simulation Modelling"
} | {
"abstract": "User equilibrium refers to the network-wide state where individual travelers cannot gain improvement by unilaterally changing their behaviors. The Wardropian Equilibrium has been the focus of a transportation equilibrium study. This paper modifies the dynamic traffic assignment method through utilizing the TRANSIMS system to reach the dynamic user equilibrium state in a microscopic model. The focus of research is developing three heuristics in a Routing-Microsimulation-Equilibrating order for reaching system-wide equilibrium while simultaneously minimizing the computing burden and execution. The heuristics are implemented to a TRANSIMS model to simulate a subarea of Houston, TX.",
"corpus_id": 12789408,
"score": -1,
"title": "Modeling User Equilibrium in Microscopic Transportation Simulation"
} |
{
"abstract": "Computational approaches to simultaneous interpretation are stymied by how little we know about the tactics human interpreters use. We produce a parallel corpus of translated and si-multaneously interpreted text and study differ-ences between them through a computational approach. Our analysis reveals that human interpreters regularly apply several effective tactics to reduce translation latency, including sentence segmentation and passivization. In addition to these unique, clever strategies, we show that limited human memory also causes other idiosyncratic properties of human interpretation such as generalization and omission of source content",
"corpus_id": 266057131,
"title": "Interpretese vs. Translationese: The Uniqueness of Human Strategies in Simultaneous Interpretation"
} | {
"abstract": "This paper describes the collection of an English-Japanese/Japanese-English simultaneous interpretation corpus. There are two main features of the corpus. The first is that professional simultaneous interpreters with different amounts of experience cooperated with the collection. By comparing data from simultaneous interpretation of each interpreter, it is possible to compare better interpretations to those that are not as good. The second is that for part of our corpus there are already translation data available. This makes it possible to compare translation data with simultaneous interpretation data. We recorded the interpretations of lectures and news, and created time-aligned transcriptions. A total of 387k words of transcribed data were collected. The corpus will be helpful to analyze differences in interpretations styles and to construct simultaneous interpretation systems.",
"corpus_id": 9574685,
"title": "Collection of a Simultaneous Translation Corpus for Comparative Analysis"
} | {
"abstract": "This paper presents the first continental‐scale study of the crust and upper mantle shear velocity (Vs) structure of Canada and adjacent regions using ambient noise tomography. Continuous waveform data recorded between 2003 and 2009 with 788 broadband seismograph stations in Canada and adjacent regions were used in the analysis. The higher primary frequency band of the ambient noise provides better resolution of crustal structures than previous tomographic models based on earthquake waveforms. Prominent low velocity anomalies are observed at shallow depths (<20 km) beneath the Gulf of St. Lawrence in east Canada, the sedimentary basins of west Canada, and the Cordillera. In contrast, the Canadian Shield exhibits high crustal velocities. We characterize the crust‐mantle transition in terms of not only its depth and velocity but also its sharpness, defined by its thickness and the amount of velocity increase. Considerable variations in the physical properties of the crust‐mantle transition are observed across Canada. Positive correlations between the crustal thickness, Moho velocity, and the thickness of the transition are evident throughout most of the craton except near Hudson Bay where the uppermost mantle Vs is relatively low. Prominent vertical Vs gradients are observed in the midcrust beneath the Cordillera and beneath most of the Canadian Shield. The midcrust velocity contrast beneath the Cordillera may correspond to a detachment zone associated with high temperatures immediately beneath, whereas the large midcrust velocity gradient beneath the Canadian Shield probably represents an ancient rheological boundary between the upper and lower crust.",
"corpus_id": 17034837,
"score": -1,
"title": "Ambient seismic noise tomography of Canada and adjacent regions: Part I. Crustal structures"
} |
{
"abstract": "Paraphrases are textual expressions that convey the same meaning using different surface forms. Capturing the variability of language, they play an important role in many natural language applications includ ing question answering, machine translation, and multi-document summarization. In linguistics, paraphrases are characterized by approximate conceptual equivalence. Since no automated semantic interpretation systems available today can identify conceptual equivalence, paraphrases are difficult to acquire without human effort. In this paper, we present a method for automatically acquiring paraphrases using a monolingual corpus. We learn paraphrases at both the surface and lexico-syntactic levels and build two paraphrase resources each containing about 2 million phrases. We evaluate these paraphrases extrinsically by using them to learn patterns for Information Extraction (IE). We show that the lexico-syntactic paraphrases performs better than the surface-level paraphrases for IE. We further show that the patterns learned using the lexico-syntactic paraphrases attain comparable performance to the traditional IE approach of learning patterns from domain-specific corpora.",
"corpus_id": 12510128,
"title": "Acquiring paraphrases from text corpora"
} | {
"abstract": "Paraphrases have proved to be useful in many applications, including Machine Translation, Question Answering, Summarization, and Information Retrieval. Paraphrase acquisition methods that use a single monolingual corpus often produce only syntactic paraphrases. We present a method for obtaining surface paraphrases, using a 150GB (25 billion words) monolingual corpus. Our method achieves an accuracy of around 70% on the paraphrase acquisition task. We further show that we can use these paraphrases to generate surface patterns for relation extraction. Our patterns are much more precise than those obtained by using a state of the art baseline and can extract relations with more than 80% precision for each of the test relations.",
"corpus_id": 1753223,
"title": "Large Scale Acquisition of Paraphrases for Learning Surface Patterns"
} | {
"abstract": "Measuring the semantic similarity between words is an important component in various tasks on the web such as relation extraction, community mining, document clustering, and automatic metadata extraction. Despite the usefulness of semantic similarity measures in these applications, accurately measuring semantic similarity between two words (or entities) remains a challenging task. We propose an empirical method to estimate semantic similarity using page counts and text snippets retrieved from a web search engine for two words. Specifically, we define various word co-occurrence measures using page counts and integrate those with lexical patterns extracted from text snippets. To identify the numerous semantic relations that exist between two given words, we propose a novel pattern extraction algorithm and a pattern clustering algorithm. The optimal combination of page counts-based co-occurrence measures and lexical pattern clusters is learned using support vector machines. The proposed method outperforms various baselines and previously proposed web-based semantic similarity measures on three benchmark data sets showing a high correlation with human ratings. Moreover, the proposed method significantly improves the accuracy in a community mining task.",
"corpus_id": 803218,
"score": -1,
"title": "A Web Search Engine-Based Approach to Measure Semantic Similarity between Words"
} |
{
"abstract": "Nowadays, convolutional neural networks (CNNs) are the core of many intelligent systems, including those that run on mobile and embedded devices. However, the execution of computationally demanding and memory-hungry CNNs on resource-limited mobile and embedded devices is quite challenging. One of the main problems, when running CNNs on such devices, is the limited amount of memory available. Thus, reduction of the CNN memory footprint is crucial for the CNN inference on mobile and embedded devices. The CNN memory footprint is determined by the amount of memory required to store CNN parameters (weights and biases) and intermediate data, exchanged between CNN operators. The most common approaches, utilized to reduce the CNN memory footprint, such as pruning and quantization, reduce the memory required to store the CNN parameters. However, these approaches decrease the CNN accuracy. Moreover, with the increasing depth of the state-of-the-art CNNs, the intermediate data exchanged between CNN operators takes even more space than the CNN parameters. Therefore, in this paper, we propose a novel approach, which allows to reduce the memory, required to store intermediate data, exchanged between CNN operators. Unlike pruning and quantization approaches, our proposed approach preserves the CNN accuracy and reduces the CNN memory footprint at the cost of decreasing the CNN throughput. Rus, our approach is orthogonal to the pruning and quantization approaches, and can be combined with these approaches for further CNN memory footprint reduction.",
"corpus_id": 222296676,
"title": "Buffer Sizes Reduction for Memory-efficient CNN Inference on Mobile and Embedded Devices"
} | {
"abstract": "Unified Memory is an emerging technology which is supported by CUDA 6.X. Before CUDA 6.X, the existing CUDA programming model relies on programmers to explicitly manage data between CPU and GPU and hence increases programming complexity. CUDA 6.X provides a new technology which is called as Unified Memory to provide a new programming model that defines CPU and GPU memory space as a single coherent memory (imaging as a same common address space). The system manages data access between CPU and GPU without explicit memory copy functions. This paper is to evaluate the Unified Memory technology through different applications on different GPUs to show the users how to use the Unified Memory technology of CUDA 6.X efficiently. The applications include Diffusion3D Benchmark, Parboil Benchmark Suite, and Matrix Multiplication from the CUDA SDK Samples. We changed those applications to corresponding Unified Memory versions and compare those with the original ones. We selected the NVIDIA Keller K40 and the Jetson TK1, which can represent the latest GPUs with Keller architecture and the first mobile platform of NVIDIA series with Keller GPU. This paper shows that Unified Memory versions cause 10% performance loss on average. Furthermore, we used the NVIDIA Visual Profiler to dig the reason of the performance loss by the Unified Memory technology.",
"corpus_id": 6967962,
"title": "An Evaluation of Unified Memory Technology on NVIDIA GPUs"
} | {
"abstract": "Wireless sensor networks constitute a powerful technology particularly suitable for environmental monitoring. With regard to wildfires, they enable low-cost fine-grained surveillance of hazardous locations like wildland–urban interfaces. This paper presents work developed during the last 4 years targeting a vision-enabled wireless sensor network node for the reliable, early on-site detection of forest fires. The tasks carried out ranged from devising a robust vision algorithm for smoke detection to the design and physical implementation of a power-efficient smart imager tailored to the characteristics of such an algorithm. By integrating this smart imager with a commercial wireless platform, we endowed the resulting system with vision capabilities and radio communication. Numerous tests were arranged in different natural scenarios in order to progressively tune all the parameters involved in the autonomous operation of this prototype node. The last test carried out, involving the prescribed burning of a 95 × 20-m shrub plot, confirmed the high degree of reliability of our approach in terms of both successful early detection and a very low false-alarm rate.",
"corpus_id": 41509195,
"score": -1,
"title": "Early forest fire detection by vision-enabled wireless sensor networks"
} |
{
"abstract": "In this letter, we present a detailed investigation on how dynamic thermal phenomena take place in state-of-the-art SiGe HBTs when excited by sinusoidal power dissipation. To give a better insight into the mechanisms leading to the thermal impedance (<inline-formula> <tex-math notation=\"LaTeX\">$\\text{Z}_{{\\textsf {th}}})$ </tex-math></inline-formula> decay, we introduce the concept of thermal penetration depth; then, with the help of 3-D thermal simulations, we illustrate its effect on the spatial distribution of the temperature variations within the transistor structure, according to the frequency of operation. In order to experimentally analyze the impact on a real device, dedicated HBT structures are designed; they consist of multi-finger SiGe HBTs realized in B55 technology from STMicroelectronics, for which modifications are made in the back-end-of-line (BEOL) metallization or in the transistor layout, increasing its deep trench isolation enclosed area. For these transistors, <inline-formula> <tex-math notation=\"LaTeX\">$\\text{Z}_{{\\textsf {th}}}$ </tex-math></inline-formula> measurements are carried out in the frequency range 10kHz–1GHz; the results show that the metal connections configuration in the BEOL or layout modifications can considerably impact the <inline-formula> <tex-math notation=\"LaTeX\">$\\text{Z}_{{\\textsf {th}}}$ </tex-math></inline-formula> decay at low frequencies. An identical <inline-formula> <tex-math notation=\"LaTeX\">$\\text{Z}_{{\\textsf {th}}}$ </tex-math></inline-formula> trend is instead measured above 1–2 MHz, demonstrating that at higher frequencies just the region close to the heat source is concerned by dynamic thermal phenomena.",
"corpus_id": 12286863,
"title": "Thermal Penetration Depth Analysis and Impact of the BEOL Metals on the Thermal Impedance of SiGe HBTs"
} | {
"abstract": "This paper investigates alternative topologies of silicon germanium heterojunction bipolar transistors designed and fabricated in the state-of-the-art BiCMOS process from STMicroelectronics for improved safe-operating characteristics. Electrical and thermal behaviors of various structures are analyzed and compared, along with a detailed discussion on drawbacks and advantages. The test structures under study are different in terms of emitter-finger layouts as well as the metal stacks in the back-end-of-line. It is observed that the multifinger transistor structures having nonuniform finger lengths with wider area enclosed by the deep trench and higher metallization stacks yield an improved thermal behavior. Therefore, the safe-operating area of multifinger transistors can be extended without degrading the RF performances.",
"corpus_id": 1816836,
"title": "Innovative SiGe HBT Topologies With Improved Electrothermal Behavior"
} | {
"abstract": "This paper presents a new method to characterize the dynamics of the charge trapped in the dielectric layer of contactless microelectromechanical systems. For sampled-time systems, this allows knowing the state of the net charge at each sampling time without distorting the measurement. This approach allows one to model the expected behaviour of dielectric charging as a response to a sigma–delta control of charge. The goodness of the proposed approach is obtained by matching the experimentally obtained closed loop response with the one predicted using the proposed characterization method. The characterization method also provides a criterion to avoid nonlinear effects, such as fractal-like behaviour, in charge control.",
"corpus_id": 254239847,
"score": -1,
"title": "Real-time characterization of dielectric charging in contactless capacitive MEMS"
} |
{
"abstract": "\n BackgroundQuantitative PCR (qPCR) is one of the most common and accurate methods of gene expression analysis. However, the biggest challenge for this kind of examinations is normalization of the results, which requires the application of dependable internal controls. The selection of appropriate reference genes (RGs) is one of the most crucial points in qPCR data analysis and for correct assessment of gene expression. Because of the fact that many reports indicate that the expression profiles of typically used RGs can be unstable in certain experimental conditions, species or tissues, reference genes with stable expression levels should be selected individually for each experiment. In this study, we analysed a set of ten candidate RGs for wheat seedlings under short-term drought stress. Our tests included five ‘traditional’ RGs (GAPDH, ACT, UBI, TUB, and TEF1) and five novel genes developed by the RefGenes tool from the Genevestigator database.ResultsExpression stability was assessed using five different algorithms: geNorm, NormFinder, BestKeeper, RefFinder and the delta Ct method. In the final ranking, we identified three genes: CJ705892, ACT, and UBI, as the best candidates for housekeeping genes. However, our data indicated a slight variation between the different algorithms that were used. We revealed that the novel gene CJ705892, obtained by means of in silico analysis, showed the most stable expression in the experimental tissue and condition.ConclusionsOur results support the statement, that novel genes selected for certain experimental conditions have a more stable level of expression in comparison to routinely applied RGs, like genes encoding actin, tubulin or GAPDH. Selected CJ705892 gene can be used as a housekeeping gene in the expression analysis in wheat seedlings under short-term drought. The results of our study will be useful for subsequent analyses of gene expression in wheat tissues subjected to drought.",
"corpus_id": 260437951,
"title": "Identification of stable reference gene for qPCR studies in common wheat (Triticum aestivum L.) seedlings under short-term osmotic stress"
} | {
"abstract": "BackgroundInternal control genes with highly uniform expression throughout the experimental conditions are required for accurate gene expression analysis as no universal reference genes exists. In this study, the expression stability of 24 candidate genes from Triticum aestivum cv. Cubus flag leaves grown under organic and conventional farming systems was evaluated in two locations in order to select suitable genes that can be used for normalization of real-time quantitative reverse-transcription PCR (RT-qPCR) reactions. The genes were selected among the most common used reference genes as well as genes encoding proteins involved in several metabolic pathways.FindingsIndividual genes displayed different expression rates across all samples assayed. Applying geNorm, a set of three potential reference genes were suitable for normalization of RT-qPCR reactions in winter wheat flag leaves cv. Cubus: TaFNRII (ferredoxin-NADP(H) oxidoreductase; AJ457980.1), ACT2 (actin 2; TC234027), and rrn26 (a putative homologue to RNA 26S gene; AL827977.1). In addition of these three genes that were also top-ranked by NormFinder, two extra genes: CYP18-2 (Cyclophilin A, AY456122.1) and TaWIN1 (14-3-3 like protein, AB042193) were most consistently stably expressed.Furthermore, we showed that TaFNRII, ACT2, and CYP18-2 are suitable for gene expression normalization in other two winter wheat varieties (Tommi and Centenaire) grown under three treatments (organic, conventional and no nitrogen) and a different environment than the one tested with cv. Cubus.ConclusionsThis study provides a new set of reference genes which should improve the accuracy of gene expression analyses when using wheat flag leaves as those related to the improvement of nitrogen use efficiency for cereal production.",
"corpus_id": 708710,
"title": "Reference genes for gene expression studies in wheat flag leaves grown under different farming conditions"
} | {
"abstract": "Only limited public transcriptomics resources are available for durum wheat and its responses to environmental changes. We developed a quantitative reverse transcription-PCR (qRT-PCR) platform for analysing the expression of primary C and N metabolism genes in durum wheat in leaves (125 genes) and roots (38 genes), based on available bread wheat genes and the identification of orthologs of known genes in other species. We also assessed the expression stability of seven reference genes for qRT-PCR under varying environments. We therefore present a functional qRT-PCR platform for gene expression analysis in durum wheat, and suggest using the ADP-ribosylation factor as a reference gene for qRT-PCR normalization. We investigated the effects of elevated [CO(2)] and temperature at two levels of N supply on C and N metabolism by combining gene expression analysis, using our qRT-PCR platform, with biochemical and physiological parameters in durum wheat grown in field chambers. Elevated CO(2) down-regulated the photosynthetic capacity and led to the loss of N compounds, including Rubisco; this effect was exacerbated at low N. Mechanistically, the reduction in photosynthesis and N levels could be associated with a decreased transcription of the genes involved in photosynthesis and N assimilation. High temperatures increased stomatal conductance, and thus did not inhibit photosynthesis, even though Rubisco protein and activity, soluble protein, leaf N, and gene expression for C fixation and N assimilation were down-regulated. Under a future scenario of climate change, the extent to which C fixation capacity and N assimilation are down-regulated will depend upon the N supply.",
"corpus_id": 2562685,
"score": -1,
"title": "Quantitative RT-PCR Platform to Measure Transcript Levels of C and N Metabolism-Related Genes in Durum Wheat: Transcript Profiles in Elevated [CO2] and High Temperature at Different Levels of N Supply."
} |
{
"abstract": "Surgical resection of skull base tumors in children is increasingly accomplished through an expanded endonasal approach (EEA). We aim to evaluate the potential effect of the EEA on midfacial growth as a result of iatrogenic damage to nasal growth zones.",
"corpus_id": 148569400,
"title": "The impact of expanded endonasal skull base surgery on midfacial growth in pediatric patients"
} | {
"abstract": "Several studies have investigated the effects of septoplasty on facial growth in children, with conflicting results. However, just handful of those employed objective measures or evaluated patients after facial growth completion. Objective This study assesses the effects of the Metzenbaum septoplasty, which preserves the perichondrium and growth-related areas on nasal and facial growth in children. Method We included those children submitted to surgery before the age of 14 and who had 16 years or years of follow up. Sixteen patients were selected. We evaluated the following parameters: clinical satisfaction (nasal patency and aesthetics), anthropometric measurements and cephalometry. Scientific design: cross-sectional historical cohort. Results The mean age at surgery was 13 years, children were assessed on average 4.3 years after surgery. Only one patient had anthropometric and cephalometric values below normal, but no aesthetics or patency complaints. Four other patients complained about their nasal aesthetics and three had patency complaints. Conclusion The Metzenbaum septoplasty appears to be a safe technique to correct caudal septum deviations. This technique had no significant impact on facial growth of the patients assessed.",
"corpus_id": 787278,
"title": "The impact of Metzembaum septoplasty on nasal and facial growth in children"
} | {
"abstract": "This study investigates the feasibility of performing uterine artery embolization (UAE) via transradial access (TRA). Growing evidence demonstrates significant benefits of TRA versus standard transfemoral access during percutaneous coronary intervention, now making it the preferred approach at many centers worldwide. At a single institution from March 2013 to October 2013, 29 consecutive patients were treated by transradial UAE. Technical success rate was 100%, with no immediate major or minor complications. The radial artery was patent at 1-month follow-up evaluation in all cases. These preliminary data suggest that transradial UAE is feasible and safe.",
"corpus_id": 5856318,
"score": -1,
"title": "Uterine artery embolization using a transradial approach: initial experience and technique."
} |
{
"abstract": "In this paper perfluorinated graded-index polymer optical fibers are characterized with respect to the influence of relative humidity changes on spectral transmission absorption and Rayleigh backscattering. The hygroscopic and thermal expansion coefficient of the fiber are determined to be CHE = (7.4 ± 0.1) ·10−6 %r.h.−1 and CTE = (22.7 ± 0.3) ·10−6 K−1, respectively. The influence of humidity on the Brillouin backscattering power and linewidth are presented for the first time to our knowledge. The Brillouin backscattering power at a pump wavelength of 1319 nm is affected by temperature and humidity. The Brillouin linewidth is observed to be a function of temperature but not of humidity. The strain coefficient of the BFS is determined to be CS= (−146.5 ± 0.9) MHz/% for a wavelength of 1319 nm within a strain range from 0.1% to 1.5%. The obtained results demonstrate that the humidity-induced Brillouin frequency shift is predominantly caused by the swelling of the fiber over-cladding that leads to fiber straining.",
"corpus_id": 53568035,
"title": "Investigation on the Influence of Humidity on Stimulated Brillouin Backscattering in Perfluorinated Polymer Optical Fibers"
} | {
"abstract": "We demonstrate a quasi-distributed sensor for cantilever health inspection measurements using a fiber Bragg grating (FBG) array inscribed in a polymer optical fiber. The FBGs were characterized and calibrated for axial strain, temperature, and relative humidity prior to their mounting on a carbon cantilever beam, the tail rotor of a helicopter. By using the zero-crossing demodulation algorithm, we recovered the time-dependent, wavelength response from each Bragg grating sensor and the vibration response of the beam was extracted. We used the response of the beam to study how the addition of masses at different positions on the beam influences the vibrational behavior and mimics the location of “damage” through the time-dependent results. We show that health inspection measurements are feasible with polymer-based fiber Bragg gratings, offering accurate and rapid detection of damage points on a structural beam.",
"corpus_id": 3427522,
"title": "Carbon Cantilever Beam Health Inspection Using a Polymer Fiber Bragg Grating Array"
} | {
"abstract": "This paper presents the achievements and progress made on the polymer optical fiber (POF) gratings inscription in different types of Fiber Bragg Gratings (FBGs) and long period gratings (LPGs). Since the first demonstration of POFBGs in 1999, significant progress has been made where the inscription times that were higher than 1 h have been reduced to 15 ns with the application of the krypton fluoride (KrF) pulsed laser operating at 248 nm and thermal treatments such as the pre-annealing of fibers. In addition, the application of dopants such as benzyl dimethyl ketal (BDK) has provided a significant decrease of the fiber inscription time. Furthermore, such improvements lead to the possibility of inscribing POF gratings in 850 nm and 600 nm, instead of only the 1550 nm region. The progress on the inscription of different types of polymer optical fiber Bragg gratings (POFBGs) such as chirped POFBGs and phase-shifted POFBGs are also reported in this review.",
"corpus_id": 44098961,
"score": -1,
"title": "Advances on Polymer Optical Fiber Gratings Using a KrF Pulsed Laser System Operating at 248 nm"
} |
{
"abstract": "1. Department of Experimental Physiology and Pathophysiology, Laboratory of the Centre for Preclinical Research, Medical University of Warsaw, Warsaw, Poland. 2. Department of Soft Condensed Matter, Institute of Physical Chemistry, Polish Academy of Sciences, Warsaw, Poland. 3. Mass Spectrometry Laboratory, Institute of Biochemistry and Biophysics, Polish Academy of Sciences, Warsaw, Poland. 4. Department of Renal and Body Fluid Physiology, M. Mossakowski Medical Research Centre, Polish Academy of Sciences, Warsaw, Poland. Running title: TMAO exerts beneficial effect in HF rats Funding: The study was supported by the National Science Centre, Poland grant no. 2018/31/B/NZ5/00038.",
"corpus_id": 235208983,
"title": "Full title: TMAO, a seafood-derived molecule, produces diuresis and reduces mortality in heart failure rats"
} | {
"abstract": "Trimethylamine N-oxide (TMAO) is an organic osmolyte present at high levels in elasmobranchs, in which it counteracts the deleterious effects of urea on proteins, and is also accumulated by deep-living invertebrates and teleost fishes. To test the hypothesis that TMAO may compensate for the adverse effects of elevated pressure on protein structure in deep-sea species, we studied the efficacy of TMAO in preventing denaturation and enhanced proteolysis by hydrostatic pressure. TMAO was compared to a common 'compatible' osmolyte, glycine, using muscle-type lactate dehydrogenase (A(4)-LDH) homologs from three scorpaenid teleost fish species and from a mammal, the cow. Test conditions lasted 1 h and were: (1) no addition, (2) 250 mmol l(-)(1) TMAO and (3) 250 mmol l(-)(1) glycine, in the absence and presence of trypsin. Comparisons were made at 0. 1 and 101.3 MPa for the deeper occurring Sebastolobus altivelis, 0.1, 50.7 and 101.3 MPa for the moderate-depth congener S. alascanus, 0. 1 and 25.3 MPa for shallow-living Sebastes melanops and 0.1 and 50.7 MPa for Bos taurus. Susceptibility to denaturation was determined by the residual LDH activity. For all the species and pressures tested, 250 mmol l(-)(1) TMAO reduced trypsinolysis significantly. For all except S. altivelis, which was minimally affected by 101.3 MPa pressure, TMAO stabilized the LDH homologs and reduced pressure denaturation significantly. Glycine, in contrast, showed no ability to reduce pressure denaturation alone, and little or no ability to reduce the rate of proteolysis.",
"corpus_id": 1108892,
"title": "Trimethylamine oxide stabilizes teleost and mammalian lactate dehydrogenases against inactivation by hydrostatic pressure and trypsinolysis."
} | {
"abstract": "This article uses a retrospective approach to looking at participatory visual work with girls, in relation to addressing gender violence in and around schools in sub-Saharan Africa. Drawing on a variety of work focusing on the visual, including Jo Spence's innovative work from the 1990s (‘What can a woman do with a camera?’), this article seeks to extend and elaborate the idea of feminist visual methodologies in order to uncover the critical issue of girls' safety and security. Participatory work with girls, the article argues, as part of what is referred to here as ‘girl-method’, can be an effective way to reveal the perspectives of girls. At the same time, the use of the visual (and in particular, visual artefacts such as photos, videos, drawings, and digital archiving) invites researchers and communities (including the girls themselves) to re-visit the data and in so doing to explore it further. The article concludes with a call for new and longer-term increased levels of participation when it comes to working with girls, by highlighting the use of the participatory digital archive as a feminist visual tool.",
"corpus_id": 146242853,
"score": -1,
"title": "What's Participation Got to Do with it? Visual Methodologies in ‘Girl-Method’ to Address Gender-Based Violence in the Time of AIDS"
} |
{
"abstract": "Abstract Background: Cardiac-specific troponin T (cTnT) and troponin I (cTnI) are considered diagnostically equal in patients with acute coronary syndrome (ACS). The aim of this systematic review was to compare the prevalence and prognostic strength of elevations of cTnT and cTnI in patients with other conditions than ACS. Methods: A systemic review was conducted in concordance with the PRISMA guidelines. The studies were identified by searching PubMed, EMBASE and Cochrane Central Register, from May to August 2016. Studies measuring both cTnT and cTnI in populations without ACS were eligible. Results: Twenty-nine studies were included (n = 25,859). Seventeen studies reported on prognostic information with follow-up time ranging for 30 d–5 years. Elevation above the 99th percentile (reference value for a healthy population) in non-ACS population was reported to be 0–39% for cTnI and 40–100% for cTnT. Elevation of cTnT tends to be a superior predictor for all-cause mortality and elevation of cTnI tends to be a superior predictor for cardiovascular related mortality. Discussion: In the absence of ACS, elevation of cTnT is more frequent than elevation of cTnI. Conclusion: Both cTnT and cTnI elevations have important prognostic information regarding morbidity, cardiac mortality and all-cause mortality.",
"corpus_id": 205770857,
"title": "Head-to-head comparison of cardiac troponin T and troponin I in patients without acute coronary syndrome: a systematic review"
} | {
"abstract": "BACKGROUND\nCardiac troponins I (cTnI) and T (cTnT) have received international endorsement as the standard biomarkers for detection of myocardial injury, for risk stratification in patients suspected of acute coronary syndrome, and for the diagnosis of myocardial infarction. An evidence-based clinical database is growing rapidly for high-sensitivity (hs) troponin assays. Thus, clarifications of the analytical principles for the immunoassays used in clinical practice are important.\n\n\nCONTENT\nThe purpose of this mini-review is (a) to provide a background for the biochemistry of cTnT and cTnI and (b) to address the following analytical questions for both hs cTnI and cTnT assays: (i) How does an assay become designated hs? (ii) How does one realistically define healthy (normal) reference populations for determining the 99th percentile? (iii) What is the usual biological variation of these analytes? (iv) What assay imprecision characteristics are acceptable? (v) Will standardization of cardiac troponin assays be attainable?\n\n\nSUMMARY\nThis review raises important points regarding cTnI and cTnT assays and their reference limits and specifically addresses hs assays used to measure low concentrations (nanograms per liter or picograms per milliliter). Recommendations are made to help clarify the nomenclature. The review also identifies further challenges for the evolving science of cardiac troponin measurement. It is hoped that with the introduction of these concepts, both laboratorians and clinicians can develop a more unified view of how these assays are used worldwide in clinical practice.",
"corpus_id": 333901,
"title": "Analytical characteristics of high-sensitivity cardiac troponin assays."
} | {
"abstract": "We have isolated a cDNA recombinant plasmid (pA29) identified as encoding part of the ventricular muscle myosin light chain MLC1v. This cDNA contains a 300-base pair fragment which under conditions of moderate stringency shows specific hybridization to MLC1v mRNA with no detectable cross-hybridization with the mRNAs encoding the fast skeletal muscle isoforms MLC1F and MLC3F, or the atrial muscle isoform MLC1A. Under these conditions hybridization is seen with an abundant mRNA present in slow skeletal muscle (soleus) which is indistinguishable from ventricular MLC1V mRNA on the basis of size and of thermal stability of hybrids formed with plasmid pA29. The mouse MLC1V and MLC1S proteins are found to co-migrate on two-dimensional gels. We therefore conclude that these isoforms are the same and are encoded by the same mRNA. Analysis of mouse DNA has identified a single region of the genome which hybridizes to this same fragment of pA29. This region has been isolated in a recombinant phage and has been shown to contain a single gene showing homology with MLC1V mRNA by R-loop analysis. We therefore conclude that MLC1V and MLC1S are encoded by a single gene. The pattern of segregation of a restriction fragment length polymorphism identified for this gene between Mus musculus and Mus spretus has been followed in an F1 backcross between these two mouse species. The results show the MLC1V/MLC1S gene to be closely linked to a marker at the distal end of mouse chromosome 9.",
"corpus_id": 28984147,
"score": -1,
"title": "The myosin alkali light chains of mouse ventricular and slow skeletal muscle are indistinguishable and are encoded by the same gene."
} |
{
"abstract": "Concatenative speech synthesis (CSS) provides the greatest naturalness. However, it requires a huge stored database resulting a huge footprint. Reducing the capacity of stored database while preserving the quality of CSS, or improving the quality to size ratio (QSr), is still a challenge. In this paper, we propose a method of transforming fundamental frequency (F0) contours of lexical tones, developed from TD-GMM framework that successfully applied for transforming spectral sequence in previous researches, in order to improve the QSr of CSS of tonal languages that results CSS available with limited data at offline stage, storing small online footprint, while preserving perceptual quality. The experimental results show that the proposed F0 transformation outperforms conventional and state-of-the-art F0 contour transformations for transforming lexical tones in terms of speech quality. When applying the proposed F0 contour transformation for transforming lexical tones in CSS of tonal languages, the QSr is enhanced compared with the method of simple F0 exchange while the quality of synthetic speech is preserved.",
"corpus_id": 30100166,
"title": "Transformation of F0 contours for lexical tones in concatenative speech synthesis of tonal languages"
} | {
"abstract": "In this paper, we describe a novel spectral conversion method for voice conversion (VC). A Gaussian mixture model (GMM) of the joint probability density of source and target features is employed for performing spectral conversion between speakers. The conventional method converts spectral parameters frame by frame based on the minimum mean square error. Although it is reasonably effective, the deterioration of speech quality is caused by some problems: 1) appropriate spectral movements are not always caused by the frame-based conversion process, and 2) the converted spectra are excessively smoothed by statistical modeling. In order to address those problems, we propose a conversion method based on the maximum-likelihood estimation of a spectral parameter trajectory. Not only static but also dynamic feature statistics are used for realizing the appropriate converted spectrum sequence. Moreover, the oversmoothing effect is alleviated by considering a global variance feature of the converted spectra. Experimental results indicate that the performance of VC can be dramatically improved by the proposed method in view of both speech quality and conversion accuracy for speaker individuality.",
"corpus_id": 14206014,
"title": "Voice Conversion Based on Maximum-Likelihood Estimation of Spectral Parameter Trajectory"
} | {
"abstract": "Various fields in medicine require scientific research and computer application. This results in computation time optimization becoming a task that is of increasing importance due to its highly parallel architecture. As is well-known, the graphics processing unit (GPU) is regarded as a powerful engine for application programs that demand fairly high computation capabilities. Our study is based on the deep analysis of the parallelism pertaining to the calculation of the gray level co-occurrence matrix, whereby an algorithm was introduced to optimize the method used to compute the gray-level co-occurrence matrix (GLCM) of an image. Furthermore, strategies (e.g., copying, image partitioning, and so on) were proposed to optimize the parallel algorithm. Our experiments indicate that without losing the computational accuracy, the speed-up ratio of the GLCM computation of images with different resolutions by GPU utilizing compute unified device architecture was at least 50 times faster than that of the GLCM computation by the central processing unit. This manifestation of a significantly improved performance can lead to the development of a very useful computational tool in medical computer vision.",
"corpus_id": 54203453,
"score": -1,
"title": "Computation of Gray Level Co-Occurrence Matrix Based on CUDA and Optimization for Medical Computer Vision Application"
} |
{
"abstract": "This thesis focuses on the intersection between class and ethnic identity among second-generation Ghanaians. I explore how middle-class second generation Ghanaians construct and maintain (if they do at all) their ethnic identity and the role of class in its construction. For my participants, their narratives engage with the role of education as the driver for social mobility, the issues of belonging to the host nation as a visible minority, explorations on how their ethnic identity is linked to their socio-economic identity and how they create a space in both the cultures. There is very little written about this long-settled community and indeed about middle-class identity and ethnicity in general. The study engages with the literature on diaspora, race and racism and the intersection between ethnicity and class. My research interrogates this statement and focuses on people born of Ghanaian parentage who have been raised in England. Drawing on a semi-structured thematic interview approach, I spoke to 21 participants aged 27-41. The study finds that the role of education and family is key to the development of the participants. It was clear that for my participants their class identity had little impact on their chosen ethnic identity. For the majority, as they matured, the need to engage more with their Ghanaian identity manifested. I argue that being perceived as ‘Other’, experiencing racism, prejudice and microaggressions led the majority to dis-identify with being ‘English’, but, for some, being seen as an outsider in Ghana meant they felt they did not belong there either. In response, many constructed an identity based on their understanding of a Ghanaian identity and their experiences as part of the second generation in the UK.",
"corpus_id": 204392949,
"title": "Being Black, Being British, Being Ghanaian: Second Generation Ghanaians, Class, Identity, Ethnicity and Belonging"
} | {
"abstract": "The dominant image of Africa in the colonial discourses of Western explorers has been persistently negative, stereotypical, and demeaning. The unauthenticated negative images are the largest influencer of Western perception of Africa because as long as the misrepresentation lingers on in Western media and film, the dominant white attitude toward African Americans as inferior could be rationalized and sustained. Moreover the real impact of such misrepresentations on how African Americans see and relate to their ancestral homeland remains grossly under studied. Through purposive sampling, this study interviewed 55 East St. Louis African Americans on the meaning of Africa to them. The results show that they had very positive perceptions of Africa and prided themselves as Africans despite centuries of negative Western portrayal of the continent. Their most important positive images included: Africa as the ancestral home of black people, African festivals and attire, African history and traditions, and African spirituality and belief systems. Through factor analysis, seven underlying dimensions that structure how African Americans construct their perceptions of Africa emerged – including: their Black African heritage, their fantasy of going back to Africa, their inerasable African identity, Connecting Africa to the world, African heroes, keeping the Black pride alive, and the failed stereotypic portrayal of Africa over the centuries.",
"corpus_id": 153864202,
"title": "The perception of Africa by African Americans in the predominantly black community of East St. Louis, Illinois, USA"
} | {
"abstract": "The essays in this volume describe not only the historical practices of the dominated, but also demonstrates the centrality of the subaltern perspective in understanding dominant formations and representations.",
"corpus_id": 197650930,
"score": -1,
"title": "Subaltern studies ... : writings on South Asian history and society"
} |
{
"abstract": "It is a well-established fact that playing at home field is an advantageous condition for professional sport teams. For this reason, the home field advantage in team sports is an important issue to be explored. It is also one of the different topics that physical education and sports students can use when they want to perform performance analysis on various sports branches in the future. The concept of home field advantage is determined by the ratio of the points the teams get from matches played at home field compared to the overall points obtained at the end of the season. Only when this advantage ratio is over 50% can we talk about such a benefit. The aim of this study is to show students how to calculate home field advantage in football in the physical education and sports departments in universities with Turkish Super League example. For this reason, 30 seasons and n = 18,052 matches from 1987 to 2017 were analyzed. Data were obtained from tff.org, Turkey Football Federation's official website. According to the results obtained, home-playing teams were found to score over 50% in the light of all seasons analyzed. In the study, the mean home team advantage was found to be 61.4 ± 2.95%. When the literature is examined, it presents similar results with this study. At the end of the study, it was also found that there are different variables by which both the home playing team and the visiting team are influenced when determining the home advantage.",
"corpus_id": 169187110,
"title": "Home Field Advantage Calculation for Physical Education and Sport Students."
} | {
"abstract": "The home advantage is one of the best established phenomena in sports (Courneya & Carron, 1992), and crowd noise has been suggested as one of its determinants (Nevill & Holder, 1999). However, the psychological processes that mediate crowd noise influence and its contribution to the home advantage are still unclear. We propose that crowd noise correlates with the criteria referees have to judge. As crowd noise is a valid cue, referee decisions are strongly influenced by crowd noise. Yet, when audiences are not impartial, a home advantage arises. Using soccer as an exemplar, we show the relevance of this influence in predicting outcomes of real games via a database analysis. Then we experimentally demonstrate the influence of crowd noise on referees' yellow cards decisions in soccer. Finally, we discuss why the focus on referee decisions is useful, and how more experimental research could benefit investigations of the home advantage.",
"corpus_id": 1684879,
"title": "Crowd noise as a cue in referee decisions contributes to the home advantage."
} | {
"abstract": "Three experiments examined the influence of prior judgements on direct and indirect memory tests in gymnastic judging",
"corpus_id": 197651353,
"score": -1,
"title": "Prior processing effects on gymnastic judging."
} |
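The home-advantage ratio described in the Turkish Super League entry above is simple enough to compute directly: the advantage is the share of points won at home out of all points won in a season, and only values above 50% indicate a benefit. A minimal sketch follows; the field names and per-season point totals are hypothetical, not from the paper.

```python
# Minimal sketch of the home-advantage calculation described above.
# The per-season point tallies are hypothetical placeholders.
from statistics import mean, stdev

seasons = [
    {"home_points": 620, "away_points": 380},
    {"home_points": 605, "away_points": 402},
    {"home_points": 640, "away_points": 395},
]

def home_advantage(season):
    """Share of total points won on the home field, in percent."""
    total = season["home_points"] + season["away_points"]
    return 100.0 * season["home_points"] / total

ratios = [home_advantage(s) for s in seasons]
print(f"mean home advantage: {mean(ratios):.1f} +/- {stdev(ratios):.2f} %")
# A home advantage exists only when the ratio exceeds 50%.
```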
{
"abstract": "We hypothesize that political uncertainty has an adverse effect on investments in activities related to innovation. Combining two hand-collected data sets on changes in local government officials and research and development (R&D) activity at the firm level in China, we examine how political turnover influences investments in R&D. We find that a change in local political leaders is associated with a significant decrease in R&D activity. This result is robust to various robustness tests. The decrease is larger when the new political leader is promoted from outside the city in question. Moreover, the decrease is significantly larger for privately controlled firms, firms operating in regions characterized by weak economic institutions, and firms within R&D-intensive industries. Our findings suggest that political uncertainty constitutes an important channel through which the local political process influences activities related to innovation.",
"corpus_id": 158530720,
"title": "Political Uncertainty and Innovation in China"
} | {
"abstract": "We study the effects of political participation on holdings of liquid assets in Chinese privately controlled listed firms. Previous research has shown that the risk of political extraction by politicians and bureaucrats in countries with weak institutions has an adverse effect on holdings of liquid assets. We propose that political participation by private entrepreneurs can function as a means to alleviate some of that risk. We find that political participation in China is positively related to cash holdings in regions with weaker institutions. Our results also show that investments in “hard” assets such as PPE and inventories, which are less susceptible to the grabbing hand, are higher in regions with weaker institutions, but that political participation mitigates this effect. Finally, cash holdings have an insignificantly positive effect on firm value on its own, while political participation is positively associated with firm value. The interaction between cash holdings and political participation is positively related to firm value, again suggesting that political participation facilitates the holding of liquid assets in China, which in turn results in better firm performance.",
"corpus_id": 153458394,
"title": "Escaping political extraction: Political participation, institutions, and cash holdings in China"
} | {
"abstract": "Abstract This paper assesses the progress of China's transition towards a market economy by examining the structure of ownership, productivity, and profitability, as well as the concentration of production across firms, industries, and regions. It does this by analyzing a database of firm microdata of the quarter of a million industrial companies in operation during the 1998–2003 period. Results show that the private sector now accounts for more than half of industrial output, compared with barely more than a quarter of it in 1998, and operates much more efficiently than the public sector. Higher productivity has fed through to improved profitability, motivating greater regional specialization of production. These changes are consistent with what would be expected in a market-based economy and suggest that reforms are making rapid progress.",
"corpus_id": 155054961,
"score": -1,
"title": "HAS A PRIVATE SECTOR EMERGED IN CHINA 'S INDUSTRY ? EVIDENCE FROM A QUARTER OF A MILLION CHINESE FIRMS"
} |
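The political-turnover study above relates firm-level R&D to changes in local leaders using panel data. A common way to sketch that kind of design is a two-way fixed-effects regression with a turnover dummy. The snippet below is illustrative only: the column names and data are hypothetical, and this is not the authors' exact specification.

```python
# Illustrative two-way fixed-effects regression of firm R&D on local
# leader turnover, in the spirit of the study above. Data and column
# names are hypothetical; not the authors' actual specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rd_intensity": [0.031, 0.028, 0.040, 0.035, 0.022, 0.019],
    "turnover":     [0, 1, 0, 0, 0, 1],   # 1 = city leader changed that year
    "firm":         ["A", "A", "B", "B", "C", "C"],
    "year":         [2010, 2011, 2010, 2011, 2010, 2011],
})

# Firm and year fixed effects absorb time-invariant firm traits and
# macro shocks; the turnover coefficient captures the uncertainty effect.
model = smf.ols("rd_intensity ~ turnover + C(firm) + C(year)", data=df).fit()
print(model.params["turnover"])
```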
{
"abstract": "On the basis of previously unpublished observations, we hypothesized that prolonged use of proton pump inhibitors (PPIs) causes an increase in 99mTc-sestamibi uptake in the stomach wall, manifested as curvilinear activity surrounding the photopenic fundus of the stomach cavity. We prospectively evaluated the frequency of stomach wall uptake in patients undergoing myocardial perfusion SPECT who were taking PPIs or H2 antagonists. Methods: Patients (n = 138) who were scheduled for single-day rest/stress 99mTc-sestamibi SPECT were randomly selected. Poststress SPECT was performed 30 min after treadmill exercise or 45 min after dipyridamole infusion. The rest scan was obtained 45 min after tracer injection. All patients drank 473 mL of water 5–10 min after both the rest and the stress radiotracer injections. Patients were questioned regarding their use of PPIs and H2 antagonists. The significant use of either was defined as more than 2 wk of continuous therapy before cardiac SPECT. Masked observers assessed poststress planar projection images in endless-loop cinematic format for the following 3 patterns: stomach cavity uptake, attributable to duodenogastric reflux of tracer; stomach wall uptake; and no stomach uptake. A 2-tailed χ2 test with Yates correction was used to calculate statistically significant associations among variables. Results: Only patients with observed patterns of stomach wall uptake (n = 30) and no stomach wall uptake (n = 91) were included. Patients with stomach cavity uptake (n = 17) were excluded because the assessment of the adjacent stomach wall uptake was not possible. Of the patients included (n = 121), 30 were men and 91 were women. Sixty-seven patients were older than 60 y; 26 patients were taking PPIs. Of the 95 patients not taking PPIs, 14 were taking H2 antagonists. No patients were taking both medications. Stomach wall uptake was strongly associated with prolonged use of PPIs (χ2 = 51.9, P < 0.0001). No statistically significant association was noted among age, sex, or use of H2 antagonists (P = NS). Conclusion: Prolonged PPI therapy, but not H2 antagonist therapy, contributes to a significant increase in stomach wall activity, potentially resulting in Compton scatter or ramp filter artifacts affecting the inferior wall of the left ventricle. Stomach wall activity, unlike the stomach cavity activity, cannot be prevented by the ingestion of water before imaging. Therefore, it is important to elicit a history of prolonged PPI use to better anticipate the possibility of increased stomach wall activity, which can confound the image quality and interpretation.",
"corpus_id": 20905941,
"title": "Effect of Proton Pump Inhibitors and H2 Antagonists on the Stomach Wall in 99mTc-Sestamibi Cardiac Imaging"
} | {
"abstract": "Coronary artery disease (CAD) is the main cause of death in elderly patients. Single-photon emission computed tomography (SPECT) with technetium-99m ((99m)Tc)-labeled agents is extremely useful for the diagnosis and risk stratification of CAD in the general population. However, its prognostic value for the elderly has not been established. This study examined disease outcome in 328 patients aged 74 or older, with suspected CAD who were submitted to either pharmacological (dipyridamole) or exercise stress SPECT with (99m)Tc-sestamibi, seven of whom were completely lost to follow-up. Endpoints were defined as hard (myocardial infarction or cardiac death) or total events (myocardial infarction, cardiac death or myocardial revascularization). Mean follow-up was 34+/-15 months. During this period 24 cardiac deaths, 11 myocardial infarctions and 21 cases of revascularization were observed. Perfusion defects were found in 27.1% of patients (12.8% reversible, 6.2% partially reversible and 8.1% fixed). Abnormal studies were predominant in men, patients with chest pain and those with ST-T abnormalities in the baseline electrocardiogram (ECG) or in the exercise treadmill test. An abnormal scan was significantly associated with cardiac events (P<0.0001). Multivariate analysis revealed that a abnormal scan was the most important independent predictor of hard or total cardiac events. Event rates increased according to myocardial perfusion scintigraphy (MPS): <1.0% of hard events per year in patients with normal MPS versus 14.3% per year in those with abnormal MPS. (99m)Tc-sestamibi SPECT was demonstrated to be a powerful tool for the prognostic evaluation of elderly patients with suspected CAD.",
"corpus_id": 20481,
"title": "Incremental prognostic value of myocardial perfusion 99m-technetium-sestamibi SPECT in the elderly."
} | {
"abstract": "1. A Brief Historical Perspective on Nuclear Cardiology 2. Radiation Physics and Radiation Safety 3. Imaging Instrumentation 4. Kinetics of Myocardial Perfusion Imaging Radiotracers 5. Acquisition, Processing and Quantification of Nuclear Cardiac Images 6. Image Artifacts 7. Myocardial Perfusion Single-Photon Emission Computed Tomography Attenuation Correction 8. Gated Single-Photon Emission Computed Tomography 9. Exercise Treadmill Testing and Exercise Myocardial Perfusion Imaging 10. Pharmacologic Stress Testing and Other Alternative Techniques in the Diagnosis of Coronary Artery Disease 11. Risk assessment in CAD 12. Use of Nuclear Techniques in the Assessment of Patients before and after Cardiac Revascularization Procedures 13. Diagnosis and Risk Assessment in Women 14. Evaluation of Patients with Acute Chest Pain Syndromes: Assessment with Perfusion Imaging in the Emergency Department 15. Fatty Acid and MIBG Imaging 16. Myocardial Perfusion Imaging in the Assessment of Therapeutic Interventions 17. Radionuclide Angiography 18. Positron Emission Tomography 19. Myocardial Viability/Hibernation 20. Other Heart Diseases 21. Development of Newer Radiotracers for Evaluation of Myocardial and Vascular Disorders 22. Nuclear Cardiology Compared to Other Imaging Methods 23. Cost-Effectiveness Analysis in Nuclear Cardiology 24. Interpreting and Reporting Nuclear Studies 25. ACC/AHA/ASNC Guidelines and Position Papers 26. Practical Aspects of Running a Nuclear Cardiology Laboratory",
"corpus_id": 70682693,
"score": -1,
"title": "Nuclear Cardiac Imaging: Principles and Applications"
} |
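The sestamibi study above tests the association between prolonged PPI use and stomach-wall uptake with a two-tailed chi-square test with Yates correction. A minimal sketch with SciPy follows; the 2x2 cell counts are hypothetical (the abstract reports only the margins: 26 PPI users among 121 included patients, 30 with wall uptake), so the printed statistic is not expected to reproduce the reported chi-square of 51.9.

```python
# Yates-corrected chi-square test of association between prolonged PPI
# use and stomach-wall uptake, as in the study above. The 2x2 cell
# counts are hypothetical but consistent with the reported margins.
from scipy.stats import chi2_contingency

#                wall uptake   no wall uptake
table = [[22,  4],   # prolonged PPI use (26 patients)
         [ 8, 87]]   # no prolonged PPI use (95 patients)

chi2, p, dof, expected = chi2_contingency(table, correction=True)  # Yates
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")
```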
{
"abstract": "This thesis explores people's perceptions of building and furnishing materials in domestic interiors in relation to human health. Although recently there has been an increase in discussion of the adverse impacts building and furnishing materials have on human health, it is also noted that change in removing 'risk' materials from the market is not happening fast enough. Rather than focusing on professional views or the regulative changes that have effected some improvements, this thesis focuses on popular views, as these are currently an under-researched but significant factor for change. Popular perception of the healthiness of materials directly relates to everyday choices which might influence indoor air quality in people's homes. Hence understanding these perceptions is an important element in improving this situation. The primary question of this thesis is how informed, or knowledgeable, the general population is about risks to human health associated with building and furnishing materials, and secondarily, whether any predictors of people's views can be observed. Because of the limited availability of similar studies this thesis is exploratory. It consists of two main studies: - The core survey of 247 participants from three countries (61 NZ general, 65 NZ architects, 60 US, and 61 UK) explores what people think about the healthiness of common materials and evaluates this data for any demographic or psychological predictors of knowledge; and - The follow-up trial evaluates the effectiveness of an educational intervention and provides more detailed mixed-method data on the views of 12 participants. The studies use quantitative approaches that are commonly used in psychological research. The thesis shows that there are significant limitations in the existing knowledge of risks associated with building and furnishing materials especially amongst the general population, which poorly differentiates between the health impact of similar looking materials such as vinyl and linoleum, and particleboard and MDF with and without formaldehyde. This leads to the conclusion there is need for improvement in the general level of knowledge about the healthiness of materials. In terms of predictors, gender is found to be the strongest predictor of recognition of risks with women tending to rate materials more accurately in terms of their risk to health, and males rating all materials higher. Similarly, women demonstrated greater change in their ratings and actions following the educational intervention. Experience with asthma and allergies was also a predictor of more accurate rating of materials but this trend was milder. When the five personality traits were evaluated, openness mildly but consistently correlated with more accurate health ratings of materials, while agreeableness correlated with tendency to give high ratings regardless of how healthy materials were. No clear patterns were found for extraversion, emotional stability and conscientiousness. No clear pattern for the environmental concerns was found in the core study, although these seemed to be predictors after the educational intervention. These findings show that exploring people's views about architecture using psychological instruments has produced useful results. This thesis observed a number of possible predictors of people's architectural views and choices, suggesting a possible new research field to confirm these.",
"corpus_id": 106426721,
"title": "Building Materials and Health: A study of perceptions of the healthiness of building and furnishing materials in homes"
} | {
"abstract": "The emission of di-2-ethylhexyl phthalate (DEHP) from vinyl flooring (VF) was measured in specially designed stainless steel chambers. In duplicate chamber studies, the gas-phase concentration in the chamber increased slowly and reached a steady state level of 0.8-0.9 μg/m(3) after about 20 days. By increasing the area of vinyl flooring and decreasing that of the stainless steel surface within the chamber, the time to reach steady state was significantly reduced, compared to a previous study (1 month versus 5 months). The adsorption isotherm of DEHP on the stainless steel chamber surfaces was explicitly measured using solvent extraction and thermal desorption. The strong partitioning of DEHP onto the stainless steel surface was found to follow a simple linear relationship. Thermal desorption resulted in higher recovery than solvent extraction. Investigation of sorption kinetics showed that it takes several weeks for the sorption of DEHP onto the stainless steel surface to reach equilibrium. The content of DEHP in VF was measured at about 15% (w/w) using pressurized liquid extraction. The independently measured or calculated parameters were used to validate an SVOC emission model, with excellent agreement between model prediction and the observed gas-phase DEHP chamber concentrations.",
"corpus_id": 2886980,
"title": "Measuring and predicting the emission rate of phthalate plasticizer from vinyl flooring in a specially-designed chamber."
} | {
"abstract": "Semivolatile organic compounds (SVOCs) are present in many indoor materials. SVOC emissions can be characterized with a critical parameter, y0 , the gas-phase SVOC concentration in equilibrium with the source material. To reduce the required time and improve the accuracy of existing methods for measuring y0 , we developed a new method which uses solid-phase microextraction (SPME) to measure the concentration of an SVOC emitted by source material placed in a sealed chamber. Taking one typical indoor SVOC, di-(2-ethylhexyl) phthalate (DEHP), as the example, the experimental time was shortened from several days (even several months) to about 1 day, with relative errors of less than 5%. The measured y0 values agree well with the results obtained by independent methods. The saturated gas-phase concentration (ysat ) of DEHP was also measured. Based on the Clausius-Clapeyron equation, a correlation that reveals the effects of temperature, the mass fraction of DEHP in the source material, and ysat on y0 was established. The proposed method together with the correlation should be useful in estimating and controlling human exposure to indoor DEHP. The applicability of the present approach for other SVOCs and other SVOC source materials requires further study.",
"corpus_id": 1627116,
"score": -1,
"title": "A SPME‐based method for rapidly and accurately measuring the characteristic parameter for DEHP emitted from PVC floorings"
} |
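Both DEHP entries above describe chamber gas-phase concentrations that rise slowly toward a steady-state plateau (0.8-0.9 ug/m^3 after about 20 days in the first study). As a rough illustration of that behavior, the sketch below uses a generic first-order approach-to-steady-state curve. The time constant and plateau are assumed placeholders; this is not the validated SVOC emission model from the papers.

```python
# Generic first-order approach-to-steady-state curve, shown only as a
# first approximation for chamber gas-phase SVOC concentrations like
# the DEHP measurements above. tau and C_ss are assumed placeholders.
import math

C_ss = 0.85   # steady-state gas-phase concentration, ug/m^3
tau = 5.0     # assumed time constant, days

def concentration(t_days):
    """C(t) = C_ss * (1 - exp(-t/tau)) for a first-order process."""
    return C_ss * (1.0 - math.exp(-t_days / tau))

for t in (1, 5, 10, 20):
    print(f"day {t:2d}: {concentration(t):.2f} ug/m^3")
# After ~4 time constants (~20 days here) the chamber is within ~2% of
# C_ss, broadly consistent with the reported time to reach steady state.
```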
{
"abstract": "Blood samples were obtained from fetuses and premature babies (n=51) (15‐34 weeks gestation) to determine at what stage the fetal immune system was able to produce a positive proliferative response to common allergens. Peripheral blood mononuclear cells (PB MC) were stimulated with the mitogen, phytohaemagglutinin (PHA), and the allergens, house dust mite, cat fur. birch tree pollen, β‐lactoglobulin, ovalbumin and bee venom (mellitin). Results were expressed as ratios of stimulated to unstimulated 3H thymidine incorporation, and as percent positive responders. There was an increase in proliferation ratio which correlated with increasing gestational age for PHA (p < 0.0001), cat fur (p=0.042), birch pollen (p=0.022) and β‐lactoglobulin (p=0, 006). The point in gestation when cells from some individuals began responding to the allergens with a ratio of 2. 0 was at approximately 22 weeks. PBMC proliferative response ratios were higher from samples from babies > 22 weeks gestation compared to < 22 weeks for the mitogen and all allergens, except mellitin. There was also a greater proportion of positive responders from samples > 22 weeks compared to < 22 weeks for the mitogen and all allergens, except mellitin. Maternal exposure to birch pollen, which has a discrete season, was assessed to determine whether exposure had occurred at 22 weeks gestation or beyond. Results showed a higher prolifera tive response in infant cells stimulated with birch pollen (p=0.005) and higher proportion of positive responders (p=0.01) in the group of babies whose mothers had been exposed to hirch pollen beyond 22 weeks, compared to those whose mothers had not been so exposed. These results suggest that in utero fetal exposure to an allergen from around 22 weeks gestation may result in primary sensitisation to that allergen, leading to positive proliferative responses, at birth.",
"corpus_id": 39762706,
"title": "Fetal peripheral blood mononuclear cell proliferative responses to mitogenic and allergenic stimuli during gestation"
} | {
"abstract": "Proliferative responses of cord blood lymphocytes (CBLs) to food antigens and cord blood IgE concentrations were measured in 37 full term newborn infants for the prediction of allergic disorders. In these 37 infants who were followed up for two years, allergic history of the family was found in four (sensitivity 57.1%) and cord blood IgE concentrations were greater than 0.5 IU/ml in three (sensitivity 42.9%) of seven infants who developed allergic disorders. When CBLs were stimulated twice by ovalbumin or bovine serum albumin, the value of the stimulation index in proliferative responses of CBLs to ovalbumin or bovine serum albumin was greater than 1.5 in six (sensitivity 85.7%) of seven infants who developed allergic disorders. The specificity of the responses of CBLs in the prediction of the development of allergic disorders was 93.3%. The proliferative responses of CBLs to food antigens were useful in the prediction of not only development of allergic disorders but also offending allergens. These observations provide further evidence that sensitisation is occurring in utero. This would appear to be increasingly important in the genesis of early atopic problems. As our follow up is only two years, in utero sensitisation is a prediction for the early development of atopic disease but only longer follow up will show whether this holds good for allergic disorders at any age.",
"corpus_id": 2326039,
"title": "Cord blood lymphocyte responses to food antigens for the prediction of allergic disorders."
} | {
"abstract": "Platonin is one of the photosensitive dyes of trithiazole pentamethine cyanine. It is used as an effective medicine for rheumatoid arthritis. In our study, platonin suppressed the immunoglobulin (Ig) production of human peripheral blood lymphocytes (PBL) stimulated by Staphylococcus aureus Cowan I (SAC) or pokeweed mitogen (PWM). It also suppressed the PWM induced Ig production of B cells when T cells and/or B cells were pretreated with platonin, respectively. The percentage of CD8+ T cells was increased by platonin. Our results suggest that platonin suppresses Ig production through suppressing B cells and enhancing CD8+ (suppressor/cytotoxic) T cells.",
"corpus_id": 11770056,
"score": -1,
"title": "B cell suppressing and CD8+ T cell enhancing effects of photosensitive dye platonin in humans."
} |
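Both proliferation studies above score responses with a stimulation index: the ratio of 3H-thymidine incorporation (counts per minute) in stimulated versus unstimulated cultures, with positivity thresholds of 2.0 (fetal PBMC study) and 1.5 (cord blood study). A minimal sketch; the CPM values are hypothetical.

```python
# Stimulation index (SI) as used in the two studies above: the ratio of
# 3H-thymidine incorporation (counts per minute) in antigen-stimulated
# vs. unstimulated cultures. The CPM values below are hypothetical.

def stimulation_index(stimulated_cpm, unstimulated_cpm):
    return stimulated_cpm / unstimulated_cpm

si = stimulation_index(stimulated_cpm=5400, unstimulated_cpm=2400)

# The fetal PBMC study scores a response as positive at SI >= 2.0;
# the cord-blood study uses SI > 1.5.
print(f"SI = {si:.2f}, positive (SI >= 2.0): {si >= 2.0}")
```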
{
"abstract": "ABSTRACT Objectives: Thalassaemia is a potentially lethal inherited anaemia, caused by reduced or absent synthesis of globin chains. Measurement of the minor adult haemoglobin Hb A2, combining α- with δ-globin, is critical for the routine diagnosis of carrier status for α- or β-thalassaemia. Here, we aim to characterize a novel δ-globin variant, Hb A2 Episkopi, in a single family of mixed Lebanese and Cypriot ancestry with mild hypochromic anaemia and otherwise normal globin genotype, which also presents with a coincidental 0.78-Mb sequence duplication on chromosome 1 (1q44) and developmental abnormalities. Methods: Analyses included comprehensive haematological analyses, cation-exchange high-performance liquid chromatography (CE-HPLC), cellulose acetate electrophoresis (CAE), Sanger sequencing and structure-based stability predictions for Hb A2 Episkopi. Results: The GCT > GTT missense mutation, underlying Hb A2 Episkopi, HBD:c.428C > T, introduces a cd142 codon change in the mature protein, resulting in reduced normal Hb A2 amounts and a novel, less abundant Hb A2 variant (HGVS: HBD:p.A143V), detectable as a delayed peak by CE-HPLC. The latter was in line with structure-based stability predictions, which indicated that the substitution of a marginal, non-helical and non-interface residue, five amino acids from the δ-globin chain carboxy-terminus, was moderately destabilizing. Discussion: Detection of the new variant depends on the diagnostic set-up and had failed by CAE and on an independent CE-HPLC system, which, in unfavourable circumstances, may lead to misdiagnoses of β-thalassaemia as α-thalassaemia. Given the mixed background of the affected family, the ethnic origin of the mutation is unclear, and this study thus suggests awareness for possible detection of Hb A2 Episkopi in both the Cypriot and the Lebanese populations.",
"corpus_id": 10886822,
"title": "Hb A2 Episkopi – a novel δ-globin chain variant [HBD:c.428C>T] in a family of mixed Cypriot–Lebanese descent"
} | {
"abstract": "Background The present study is designed to evaluate the reliability and cost effectiveness of cellulose acetate Hb electrophoresis and high performance liquid chromatography (HPLC) in the determination of HbA2 levels. Methods The test population comprised 160 individuals divided into four groups: normal individuals, β-thalassemia trait (BTT) patients, iron deficiency anemia (IDA) patients, and co-morbid patients (BTT with IDA). HbA2 levels determined using cellulose acetate Hb electrophoresis and HPLC were compared. Results HbA2 levels were found to be diagnostic for classical BTT using either method. In co-morbid cases, both techniques failed to diagnose all cases of BTT. The sensitivity, specificity, and Youden's index for detection of the co-morbid condition was 69% and 66% for HPLC and cellulose acetate Hb electrophoresis, respectively. Conclusion This study revealed that semi-automated cellulose acetate Hb electrophoresis is more suitable for use in β-thalassemia prevention programs in low-income countries like Pakistan. This technique is easily available, simple and cost effective.",
"corpus_id": 1790015,
"title": "Comparative analysis of cellulose acetate hemoglobin electrophoresis and high performance liquid chromatography for quantitative determination of hemoglobin A2"
} | {
"abstract": "Background Dose continues to be an area of concern in preclinical imaging studies, especially for those imaging disease progression in longitudinal studies. To our knowledge, this work is the first to characterize and assess dose from the Inveon CT imaging platform using nanoDot dosimeters. This work is also the first to characterize a new low-dose configuration available for this platform. Methodology/Principal Findings nanoDot dosimeters from Landauer, Inc. were surgically implanted into 15 wild type mice. Two nanoDots were placed in each animal: one just under the skin behind the spine and the other located centrally within the abdomen. A manufacturer-recommended CT protocol was created with 1 projection per degree of rotation acquired over 360 degrees. For best comparison of the low dose and standard configurations, noise characteristics of the reconstructed images were used to match the acquisition protocol parameters. Results for all dose measurements showed the average dose delivered to the abdomen to be 13.8 cGy±0.74 and 0.97 cGy±0.05 for standard and low dose configurations respectively. Skin measurements of dose averaged 15.99 cGy±0.72 and 1.18 cGy±0.06. For both groups, the standard deviation to mean was less than 5.6%. The maximum dose received for the abdomen was 15.12 cGy and 0.97 cGy while the maximum dose for the skin was 17.3 cGy and 1.32 cGy. Control dosimeters were used for each group to validate that no unwanted additional radiation was present to bias the results. Conclusions/Significance This study shows that the Inveon CT platform is suitable for imaging mice both for single and longitudinal studies. Use of low-dose detector hardware results in significant reductions in dose to subjects with a >12x (90%) reduction in delivered dose. Installation of this hardware on another in vivo microCT platform resulted in dose reductions of over 9x (89%).",
"corpus_id": 9829103,
"score": -1,
"title": "Characterization of X-ray Dose in Murine Animals Using microCT, a New Low-Dose Detector and nanoDot Dosimeters"
} |
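The electrophoresis-versus-HPLC comparison above summarizes diagnostic performance with Youden's index, which is simply J = sensitivity + specificity - 1. A minimal sketch follows; because the abstract reports the co-morbid figures ambiguously (69% and 66% for the two methods), the specificity value fed in below is an assumed placeholder, not a number from the paper.

```python
# Youden's index J = sensitivity + specificity - 1, the summary
# statistic reported in the comparison above. The specificity used
# here is an assumed placeholder for illustration only.

def youden(sensitivity, specificity):
    return sensitivity + specificity - 1.0

# e.g. co-morbid detection sensitivities of 0.69 (HPLC) and 0.66 (CAE)
# with an assumed common specificity of 0.97:
print(f"J(HPLC) = {youden(0.69, 0.97):.2f}")
print(f"J(CAE)  = {youden(0.66, 0.97):.2f}")
```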
{
"abstract": "Subterranean cave ecosystems are characterized by perpetual darkness, almost constant ambient temperature, limited source of food supply and relatively high humidity. The occurrence of circadian rhythms in organisms living in such an ecosystem always attracts chronobiologists to understand the phenomena of time-measuring mechanisms. Few attempts have been made to correlate such rhythmic patterns of the organism with the putative periodicities in weak zeitgebers. In the present study, the effects of periodic feeding schedules on the characteristics of circadian rhythm in the vertical swimming activity of the cave loach Nemacheilus evezardi were examined. Results reveal that periodic feeding at 18:00 has the ability to synchronize the vertical swimming activity rhythm. It seems that periodic restricted feeding could act as a powerful zeitgeber of circadian rhythms in subterranean organisms.",
"corpus_id": 84220425,
"title": "Timed feeding synchronizes circadian rhythm in vertical swimming activity in cave loach, Nemacheilus evezardi"
} | {
"abstract": "Locomotor activity rhythm in the hypogean population of Nemacheilus evezardi was recorded first under light-to-dark (LD) 12 : 12 h cycle and then DD. The results were compared with that of its epigean counterpart held under comparable regimes. In LD 12 : 12, while hypogean loach exhibited a distinct bimodality in its locomotor activity rhythm, it was altogether absent in the case of epigean population. In hypogean loach, dark-to-light transition peak in LD was observed to free-run under DD. The same was not discernible in case of epigean loach. The locomotor activity rhythm in epigean fish was noticed to free-run in DD either from the dawn peak or dusk peak in LD. It is hypothesized that the hypogean fish still possesses a functional oscillator underlying its overt circadian rhythm in locomotor activity. The ecophysiological significance of these findings is yet to be fully understood.",
"corpus_id": 1183431,
"title": "Temporal Organization in Locomotor Activity of the Hypogean Loach, Nemacheilus Evezardi, and its Epigean Ancestor"
} | {
"abstract": "Introduction. Inflammation in dental pulp cells (DPCs) initiated by Lipopolysaccharide (LPS) results in dental pulp necrosis. So far, whether there is a common system regulating inflammation response and tissue regeneration remains unknown. miR-146a is closely related to inflammation. Basic fibroblast growth factor (bFGF) is an important regulator for differentiation. Methods. To explore the effect of miR-146a/bFGF on inflammation and tissue regeneration, polyethylene glycol-polyethyleneimine (PEG-PEI) was synthesized, and physical characteristics were analyzed by dynamic light scattering and gel retardation analysis. Cell absorption, transfection efficiency, and cytotoxicity were assessed. Alginate gel was combined with miR-146a/PEG-PEI nanoparticles and bFGF. Drug release ratio was measured by ultraviolet spectrophotography. Proliferation and odontogenic differentiation of DPCs with 1 μg/mL LPS treatment were determined. Results. PEG-PEI prepared at N/P 2 showed complete gel retardation and smallest particle size and zeta potential. Transfection efficiency of PEG-PEI was higher than lipo2000. Cell viability decreased as N/P ratio increased. Drug release rate amounted to 70% at the first 12 h and then maintained slow release afterwards. Proliferation and differentiation decreased in DPCs with LPS treatment, whereas they increased in miR-146a/bFGF gel group. Conclusions. PEG-PEI is a promising vector for gene therapy. miR-146a and bFGF play critical roles in inflammation response and tissue regeneration of DPCs.",
"corpus_id": 16366732,
"score": -1,
"title": "Effect of miR-146a/bFGF/PEG-PEI Nanoparticles on Inflammation Response and Tissue Regeneration of Human Dental Pulp Cells"
} |
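Both loach studies above rest on detecting a circadian (roughly 24 h) period in an activity record. The abstracts do not describe the analysis pipeline, so the sketch below uses a standard Lomb-Scargle periodogram on synthetic activity data purely as an illustration of how such a period can be recovered.

```python
# Detecting a circadian period in an activity record, as one might for
# the swimming/locomotor rhythms above. Synthetic data; a standard
# Lomb-Scargle periodogram is used here purely as an illustration.
import numpy as np
from scipy.signal import lombscargle

t = np.arange(0, 10 * 24, 0.5)                     # 10 days, half-hour bins (h)
activity = 1.0 + 0.6 * np.sin(2 * np.pi * t / 24)  # underlying 24 h rhythm
activity += 0.3 * np.random.default_rng(0).standard_normal(t.size)

periods = np.linspace(18, 30, 400)                 # candidate periods, h
freqs = 2 * np.pi / periods                        # angular frequencies
power = lombscargle(t, activity - activity.mean(), freqs)

print(f"best period: {periods[np.argmax(power)]:.1f} h")
```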
{
"abstract": "One of the touchstone principles in Australia's regulation of the use of animals for scientific and educational purposes is reduction, refinement and replacement (3Rs). However, the use of animals for scientific and educational purposes is increasing in Australia, raising concerns about the effectiveness of the current regulatory framework in achieving the objectives of the 3Rs. This article critically evaluates the current regulatory framework in Australia. Several strengths are identified. However, 4 recommendations to improve the regulatory environment are proposed to bring Australia in line with international best practice. Specifically, Australian regulation governing the use of animals for scientific or educational purposes could be improved through greater transparency, higher standards of competency, the development of a central regulatory authority, and greater incentives to encourage research and development into nonanimal alternatives.",
"corpus_id": 44397890,
"title": "Australian Regulation of Animal Use in Science and Education: A Critical Appraisal"
} | {
"abstract": "Animal ethics committees have been set up in many countries as a way to scrutinize animal experimentation and to assure the public that if animals are used in research then it is for a worthwhile cause and suffering is kept to a minimum. The ideals of Refinement, Reduction and Replacement are commonly upheld. However, while refinement and reduction receive much attention in animal ethics committees, the replacement of animals is much more difficult to incorporate into the committees’ deliberations. At least in Australia there are certain structural reasons for this but it is likely that most of the reasons why replacement is left out apply to other countries as well.",
"corpus_id": 153200669,
"title": "Why animal ethics committees don't work"
} | {
"abstract": "We agree with Bara and Joffe that there is a need for improvement of reporting quality in animal research (AR) [1]. Presenting details on both methods used and potential cofounders during AR is not only important to reproduce results, and to ensure animal welfare (AW) and public support, it is a duty when animals are compromised, stressed or sacrificed to understand diseases or to identify treatment targets. \n \nThe article addresses important issues, unfortunately without a solution and ignoring personal responsibility. As mentioned, the word count within manuscripts is strictly limited. In Germany and most European countries, the approval procedure for AR includes applications (>6,000 words) covering all analysed readouts: anaesthesia, pain control, euthanasia methods, termination criteria, statistical planning, funding, discussion of reduction, refinement and replacement, and a systematic review. Nevertheless, how should these long method descriptions be included in manuscripts of 3,500 words? \n \nReally what are required are special conferences and articles focusing on methods in AR, a uniform summarising data sheet as supplementary material, and the presentation of the registration number given by the AW committee including a recheck by the committees to ensure that AW was considered. Finally, there is a need for commitment among scientists to standardise experiments to allow collaborative exchange of data, body fluids and tissues to privilege synergetic benefits; to improve the informative value of an approach by stratification of animals [2]; and to also present negative results to avoid double testing. These changes will lead to increased quality in reporting, realisation of reduction, refinement and replacement, and public perception.",
"corpus_id": 17428483,
"score": -1,
"title": "Criticizing reporting standards fails to improve quality in animal research"
} |
{
"abstract": "Passive millimeter-wave images (PMMW) often suffer from issues such as low resolution, noise, and blurring. In this paper, we proposed a blind image deconvolution method for the passive millimeter-wave images. The purpose of the proposed method is to simultaneously solve the point spread function (PSF) and restoration image. In this method, the data fidelity item is constructed based on Gaussian noise assuming, and the regularization item is constructed as the hyper-Laplace function ‖x‖0.6, which is fitted according to the high-resolution PMMW images. Moreover, a data-selected matrix is proposed to select the regions that are helpful for estimating the accurate PSF. The proposed method has been applied to simulated and real PMMW image experiments. Comparative results demonstrate that the proposed method significantly outperforms the state-of-the-art deconvolution methods on both qualitative and quantitative assessments.",
"corpus_id": 12958160,
"title": "Robust blind deconvolution for PMMW images with sparsity presentation"
} | {
"abstract": "Estimation techniques in computer vision applications must estimate accurate model parameters despite small-scale noise in the data, occasional large-scale measurement errors (outliers), and measurements from multiple populations in the same data set. Increasingly, robust estimation techniques, some borrowed from the statistics literature and others described in the computer vision literature, have been used in solving these parameter estimation problems. Ideally, these techniques should effectively ignore the outliers and measurements from other populations, treating them as outliers, when estimating the parameters of a single population. Two frequently used techniques are least-median of squares (LMS) [P. J. Rousseeuw, {J. Amer. Statist. Assoc., 79 (1984), pp. 871--880] and M-estimators [Robust Statistics: The Approach Based on Influence Functions, F. R. Hampel et al., John Wiley, 1986; Robust Statistics, P. J. Huber, John Wiley, 1981]. LMS handles large fractions of outliers, up to the theoretical limit of 50% for estimators invariant to affine changes to the data, but has low statistical efficiency. M-estimators have higher statistical efficiency but tolerate much lower percentages of outliers unless properly initialized. ::: While robust estimators have been used in a variety of computer vision applications, three are considered here. In analysis of range images---images containing depth or X, Y, Z measurements at each pixel instead of intensity measurements---robust estimators have been used successfully to estimate surface model parameters in small image regions. In stereo and motion analysis, they have been used to estimate parameters of what is called the ''fundamental matrix,'' which characterizes the relative imaging geometry of two cameras imaging the same scene. Recently, robust estimators have been applied to estimating a quadratic image-to-image transformation model necessary to create a composite, ''mosaic image'' from a series of images of the human retina. In each case, a straightforward application of standard robust estimators is insufficient, and carefully developed extensions are used to solve the problem.",
"corpus_id": 151561,
"title": "Robust Parameter Estimation in Computer Vision"
} | {
"abstract": "Data-efficient reinforcement learning (RL) in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. We consider a particularly important instance of this challenge, the pixels-to-torques problem, where an RL agent learns a closed-loop control policy (\"torques\") from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model for learning a low-dimensional feature embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning is crucial for long-term predictions, which lie at the core of the adaptive nonlinear model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art RL methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces, is lightweight and an important step toward fully autonomous end-to-end learning from pixels to torques.",
"corpus_id": 12148476,
"score": -1,
"title": "Data-Efficient Learning of Feedback Policies from Image Pixels using Deep Dynamical Models"
} |
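The PMMW entry above combines a least-squares data-fidelity term (from the Gaussian noise assumption) with a hyper-Laplacian regularizer ‖x‖^0.6. The sketch below evaluates that kind of objective on a toy 1-D signal with a gradient-domain prior; it is a minimal illustration only, omitting the paper's actual solver, PSF estimation, and data-selected matrix.

```python
# Sketch of a deconvolution objective of the kind described above: a
# least-squares data-fidelity term plus a hyper-Laplacian regularizer
# sum(|Dx|^0.6) on finite differences. Toy 1-D version; the paper's
# solver, PSF support, and data-selected matrix are omitted.
import numpy as np

def objective(x, k, y, lam=0.01, p=0.6):
    """0.5 * ||k * x - y||^2 + lam * sum(|Dx|^p), D = finite difference."""
    residual = np.convolve(x, k, mode="same") - y
    grad_x = np.diff(x)
    return 0.5 * np.sum(residual**2) + lam * np.sum(np.abs(grad_x) ** p)

rng = np.random.default_rng(0)
x_true = np.repeat([0.0, 1.0, 0.2], 30)   # piecewise-constant signal
k_true = np.array([0.25, 0.5, 0.25])      # blur kernel (PSF)
y = np.convolve(x_true, k_true, mode="same") + 0.01 * rng.standard_normal(x_true.size)

print(f"cost at ground truth:    {objective(x_true, k_true, y):.3f}")
print(f"cost for a blurry guess: {objective(y, k_true, y):.3f}")
```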
{
"abstract": "Despite active research and significant progress in the last 30 years, eye detection and tracking remains challenging due to the individuality of eyes, occlusion, variability in scale, location, and light conditions. Data on eye location and details of eye movements have numerous applications and are essential in face detection, biometric identification, and particular human-computer interaction tasks. This paper reviews current progress and state of the art in video-based eye detection and tracking in order to identify promising techniques as well as issues to be further addressed. We present a detailed review of recent eye models and techniques for eye detection and tracking. We also survey methods for gaze estimation and compare them based on their geometric properties and reported accuracies. This review shows that, despite their apparent simplicity, the development of a general eye detection technique involves addressing many challenges, requires further theoretical developments, and is consequently of interest to many other domains problems in computer vision and beyond.",
"corpus_id": 16489508,
"title": "In the Eye of the Beholder: A Survey of Models for Eyes and Gaze"
} | {
"abstract": "This paper describes a real-time prototype computer vision system for monitoring driver vigilance. The main components of the system consists of a remotely located video CCD camera, a specially designed hardware system for real-time image acquisition and for controlling the illuminator and the alarm system, and various computer vision algorithms for simultaneously, real-time and non-intrusively monitoring various visual bio-behaviors that typically characterize a driver's level of vigilance. The visual behaviors include eyelid movement, face orientation, and gaze movement (pupil movement). The system was tested in a simulating environment with subjects of different ethnic backgrounds, different genders, ages, with/without glasses, and under different illumination conditions, and it was found very robust, reliable and accurate.",
"corpus_id": 10529675,
"title": "Real-Time Eye, Gaze, and Face Pose Tracking for Monitoring Driver Vigilance"
} | {
"abstract": "Rear camera or other wide-angle camera mounted on vehicle has serious perspective effect which makes driver unable to feel distance correctly and is not helpful for succeeding image analysis. To remove perspective effect, it's necessary to change diagonal view to bird's-eye view. In this paper, a software-hardware cooperative bird's-eye view system is proposed, which can estimates the transformation matrix and changes the view automatically in real-time. The matrix estimation module is implemented by software, because it only needs to calculate once. The transformation part is implemented by hardware, because it must be done repetitively and it requires high computation power. To optimize hardware performance and save cost, three optimization approaches like one-step transformation, memory pre-estimation and look-up table (LUT) have been employed.",
"corpus_id": 6856799,
"score": -1,
"title": "A software-hardware cooperative implementation of bird's-eye view system for camera-on-vehicle"
} |
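The core of a bird's-eye view system like the one in the last entry above is a perspective (homography) transform: estimate the matrix once from ground-plane correspondences, then warp every frame with it. A minimal OpenCV sketch follows; the four point correspondences and image sizes are hypothetical calibration values, not from the paper.

```python
# Core of a bird's-eye view conversion like the system described above:
# estimate a perspective (homography) matrix once from four ground-plane
# correspondences, then warp each frame with it. The pixel coordinates
# below are hypothetical calibration points.
import cv2
import numpy as np

# Four corners of a ground rectangle as seen in the camera image ...
src = np.float32([[230, 460], [410, 460], [620, 630], [20, 630]])
# ... and where they should land in the top-down (bird's-eye) view.
dst = np.float32([[100, 0], [300, 0], [300, 400], [100, 400]])

M = cv2.getPerspectiveTransform(src, dst)            # computed once (software)

frame = np.zeros((640, 640, 3), dtype=np.uint8)      # stand-in camera frame
birdseye = cv2.warpPerspective(frame, M, (400, 400)) # per-frame (hardware)
print(M.round(3))
```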
{
"abstract": "Energy consumption of digital circuits has become a primary constraint in electronic design. The increasing popularity of the portable devices like smart phone, ipad, tablet and notebook has created an overwhelming demand for extended battery life of these devices. Numerous methods for energy reduction in CMOS circuits have been proposed in the literature. Power reduction techniques at various levels of abstraction are used in modern digital designs. Most popular techniques used include power gating, clock gating, multiple-supply voltages, multiple threshold devices. In this work we propose a technique to use dual supply voltages in digital designs in order to get a reduction in energy consumption. Three new algorithms are proposed for finding and assigning low voltage in dual voltage designs. Given a circuit and a supply voltage, the first algorithm finds a suitable value for a lower supply voltage and the other two algorithms assign that lower voltage to individual gates. A linear time algorithm described in the literature is used for computing slacks for all gates in a circuit for a given supply voltage. The slack of a gate is the difference between the critical path delay and the delay of the longest path though that gate. A positive slack for a gate implies that the timing constraints are met, thus making a negative slack undesirable. An optimal lower supply voltage that maximizes the dynamic power savings is first found. For the computed gate slacks and the lower supply voltage, the gates in the circuit are divided into three groups. No gate in the first group can be assigned the lower supply without violating the positive slack condition. All gates in the second group can be simultaneously set to lower supply voltage while maintaining positive slack for all gates. The gates in the third group are assigned low voltage in small subgroups satisfying the condition that the sum of the delay increases due to voltage lowering for all gates in the subgroup is less",
"corpus_id": 16466212,
"title": "Polynomial Time Algorithms"
} | {
"abstract": "With scaling of Vt sub-threshold leakage power is increasing and expected to become significant part of total power consumption In present work three new configurations of level shifters for low power application in 0.35{\\mu}m technology have been presented. The proposed circuits utilize the merits of stacking technique with smaller leakage current and reduction in leakage power. Conventional level shifter has been improved by addition of three NMOS transistors, which shows total power consumption of 402.2264pW as compared to 0.49833nW with existing circuit. Single supply level shifter has been modified with addition of two NMOS transistors that gives total power consumption of 108.641pW as compared to 31.06nW. Another circuit, contention mitigated level shifter (CMLS) with three additional transistors shows total power consumption of 396.75pW as compared to 0.4937354nW. Three proposed circuit's shows better performance in terms of power consumption with a little conciliation in delay. Output level of 3.3V has been obtained with input pulse of 1.6V for all proposed circuits.",
"corpus_id": 16675518,
"title": "Level Shifter Design for Low Power Applications"
} | {
"abstract": "This work studies the problem of CMOS operational amplifiers (op-amps) optimization. A front Pareto based-MOGA (Multi-Objective Genetic Algorithm) methodology is proposed to optimize the operational amplifier. The proposed approach is used to find the optimal dimensional transistor parameters in order to obtain operational amplifier performances for analog and mixed CMOS-based circuit applications. To evaluate the proposed approach, an example in both time and frequency domains for a two-stage CMOS Operational Transconductance Amplifier (OTA) is presented in 0.18µm process. The simulation results confirm the efficiency of MOGA in determining the device sizes in an analog circuit.",
"corpus_id": 6671846,
"score": -1,
"title": "Multi-Objective Genetic Algorithm optimization of CMOS operational amplifiers"
} |
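The dual-voltage entry above defines the slack of a gate as the critical path delay minus the delay of the longest path through that gate. In a circuit DAG this can be computed in linear time with two topological sweeps: one for the longest path ending at each gate and one for the longest path starting from it. A minimal sketch on a tiny hypothetical netlist:

```python
# Gate-slack computation as defined in the dual-voltage work above:
# slack(g) = critical path delay - delay of the longest path through g.
# Two sweeps over the circuit DAG in topological order give linear time.
# The netlist and gate delays are hypothetical.
from collections import defaultdict

delay = {"g1": 2, "g2": 3, "g3": 1, "g4": 2}
edges = [("g1", "g2"), ("g1", "g3"), ("g2", "g4"), ("g3", "g4")]
order = ["g1", "g2", "g3", "g4"]               # a topological order

succ, pred = defaultdict(list), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)

arrive, depart = {}, {}
for g in order:                                # longest path ending at g
    arrive[g] = delay[g] + max((arrive[p] for p in pred[g]), default=0)
for g in reversed(order):                      # longest path starting at g
    depart[g] = delay[g] + max((depart[s] for s in succ[g]), default=0)

critical = max(arrive.values())                # critical path delay
for g in order:
    through = arrive[g] + depart[g] - delay[g] # longest path through g
    print(g, "slack =", critical - through)    # negative slack = violation
```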
{
"abstract": "The research field of glucose biosensing has shown remarkable growth and development since the first reported enzyme electrode in 1962. Extensive research on various immobilization methods and the improvement of electron transfer efficiency between the enzyme and the electrode have led to the development of various sensing platforms that have been constantly evolving with the invention of advanced nanostructures and their nano-composites. Examples of such nanomaterials or composites include gold nanoparticles, carbon nanotubes, carbon/graphene quantum dots and chitosan hydrogel composites, all of which have been exploited due to their contributions as components of a biosensor either for improving the immobilization process or for their electrocatalytic activity towards glucose. This review aims to summarize the evolution of the biosensing aspect of these glucose sensors in terms of the various generations and recent trends based on the use of applied nanostructures for glucose detection in the presence and absence of the enzyme. We describe the history of these biosensors based on commercialized systems, improvements in the understanding of the surface science for enhanced electron transfer, the various sensing platforms developed in the presence of the nanomaterials and their performances.",
"corpus_id": 226040353,
"title": "A Critical Review of Electrochemical Glucose Sensing: Evolution of Biosensor Platforms Based on Advanced Nanosystems"
} | {
"abstract": "There is a major challenge to attach nanostructures on to the electrode surface while retaining their engineered morphology, high surface area, physiochemical features for promising sensing applications. In this study, we have grown vertically-aligned ZnO nanorods (NRs) on fluorine doped tin oxide (FTO) electrodes and decorated with CuO to achieve high-performance non-enzymatic glucose sensor. This unique CuO-ZnO NRs hybrid provides large surface area and an easy substrate penetrable structure facilitating enhanced electrochemical features towards glucose oxidation. As a result, fabricated electrodes exhibit high sensitivity (2961.7 μA mM−1 cm−2), linear range up to 8.45 mM, low limit of detection (0.40 μM), and short response time (<2 s), along with excellent reproducibility, repeatability, stability, selectivity, and applicability for glucose detection in human serum samples. Circumventing, the outstanding performance originating from CuO modified ZnO NRs acts as an efficient electrocatalyst for glucose detection and as well, provides new prospects to biomolecules detecting device fabrication.",
"corpus_id": 3323256,
"title": "Highly Efficient Non-Enzymatic Glucose Sensor Based on CuO Modified Vertically-Grown ZnO Nanorods on Electrode"
} | {
"abstract": "Direct alcohol fuel cells (DAFCs) are attracting increasing interest as power sources for portable applications due to some unquestionable advantages over analogous devices fed with hydrogen.1 Alcohols, such as methanol, ethanol, ethylene glycol, and glycerol, exhibit high volumetric energy density, and their storage and transport are much easier as compared to hydrogen. On the other hand, the oxidation kinetics of any alcohol are much slower and still H2-fueled polymer electrolyte fuel cells (PEMFCs) exhibit superior electrical performance as compared to DAFCs with comparable electroactive surface areas.2,3 Increasing research efforts are therefore being carried out to design and develop more efficient anode electrocatalysts for DAFCs.",
"corpus_id": 7597187,
"score": -1,
"title": "Palladium-based electrocatalysts for alcohol oxidation in half cells and in direct alcohol fuel cells."
} |
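The figures of merit quoted for the CuO-ZnO sensor above follow standard electroanalytical conventions: sensitivity is the calibration slope normalized by electrode area, and the detection limit is commonly taken as 3 times the blank noise divided by the slope. The sketch below illustrates both; the slope, area, and noise values are hypothetical placeholders, and the 3-sigma convention is an assumption since the abstract does not state which was used.

```python
# Figures of merit of the kind quoted for the glucose sensor above.
# Sensitivity = calibration slope / electrode area; the detection limit
# uses the common 3*sigma/slope convention (an assumption here). The
# numbers are hypothetical placeholders, not from the paper.

slope_uA_per_mM = 0.209      # calibration slope, uA/mM
area_cm2 = 7.06e-5           # electrode area, cm^2
blank_sd_uA = 2.8e-5         # std. dev. of blank current, uA

sensitivity = slope_uA_per_mM / area_cm2      # uA mM^-1 cm^-2
lod_mM = 3 * blank_sd_uA / slope_uA_per_mM    # mM

print(f"sensitivity = {sensitivity:.1f} uA mM^-1 cm^-2")
print(f"LOD = {lod_mM * 1000:.2f} uM")
```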
{
"abstract": "Except where reference is made to the work of others, the work described in this dissertation is my own or was done in collaboration with my advisory committee. This dissertation does not include proprietary or classified information. Permission is granted to Auburn University to make copies of this dissertation at its discretion, upon the request of individuals or institutions and at their expense. The author reserves all publication rights. Auburn University and was actively involved in the development of an interactive database and matrix population modeling/simulation software package \" AvesModeler \". He obtained his Master of Science degree in Fall 2004. He joined the doctoral program at Auburn Uni-the area of verification of DDR memory protocols. In the summer of 2008, he interned with NXP Semiconductors, The Netherlands, and worked on static timing analysis and silicon timing data. Due to increasing design complexities of digital circuits in recent years, a growing problem in Very Large Scale Integrated (VLSI) digital circuit testing is the exponential rise in the test generation complexity and an increasing need for high quality test vectors. For Built-In Self-Test (BIST) of digital circuit, the in-built pattern generator shows increased area overhead, as larger number and more specific patterns need to be generated. In this thesis we address these issues of digital circuit testing. We propose a novel test generation algorithm for sequential circuits using spectral methods. We generate test vectors for faults defined at Register-Transfer Level (RTL) and analyze them for spectral properties. New test vectors are generated using these properties to detect all faults of the circuit. Our proposed algorithm shows equal or improved test coverage and reduced test generation time as compared to a commercial sequential test generation tool, FlexTest, for various benchmark circuits. For an experimental processor PARWAN, FlexTest achieved a test coverage of 93.40% requiring 1403 test vectors in 26430 CPU seconds. The proposed spectral method achieved a coverage of 98.23% requiring 2327 vectors in 2442 CPU seconds. We also propose a Design-For-Testability (DFT) method at RTL which enables improved test coverage and reduced test generation time. v We define N-model tests that target faults belonging to N specified fault models of choice. We propose a method for minimizing these tests using Integer Linear Programming (ILP) without reducing the individual fault model coverage. Stuck-at, transition, and pseudo stuck-at IDDQ faults are used as illustrations. The proposed method shows a noticeable reduction in test …",
"corpus_id": 113013479,
"title": "Spectral Methods for Testing of Digital Circuits"
} | {
"abstract": "We discuss the compaction of independent test sequences for sequential circuits. Our first contribution is the formulation of this problem as an integer program, which we then solve through a well-known method employing linear programming relaxation and randomized rounding. The key contribution of this approach is that it yields the first polynomial time approximation algorithm for this problem. More specifically, it provides a provably good approximation guarantee while running in time polynomial with respect to the number of vectors in the original test sequences and the number of faults. Another virtue of our approach is that it provides a lower bound for the compacted set of test sequences and, therefore, a quality measure for the test compaction algorithm. Experimental results on benchmark circuits demonstrate that the proposed solution efficiently identifies nearly optimal sets of compacted test sequences.",
"corpus_id": 624995,
"title": "Independent test sequence compaction through integer programming"
} | {
"abstract": "There is overwhelming evidence suggesting that the real users of IR systems often prefer using extremely short queries (one or two individual words) but they try out several queries if needed. Such behavior is fundamentally different from the process modeled in the traditional test collection-based IR evaluation based on using more verbose queries and only one query per topic. In the present paper, we propose an extension to the test collection-based evaluation. We will utilize sequences of short queries based on empirically grounded but idealized session strategies. We employ TREC data and have test persons to suggest search words, while simulating sessions based on the idealized strategies for repeatability and control. The experimental results show that, surprisingly, web-like very short queries (including one-word query sequences) typically lead to good enough results even in a TREC type test collection. This finding motivates the observed real user behavior: as few very simple attempts normally lead to good enough results, there is no need to pay more effort. We conclude by discussing the consequences of our finding for IR evaluation.",
"corpus_id": 4295768,
"score": -1,
"title": "Test Collection-Based IR Evaluation Needs Extension toward Sessions - A Case of Extremely Short Queries"
} |
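Both circuit-testing abstracts in the row above cast test-set reduction as an integer program: the query dissertation minimizes N-model tests with ILP subject to preserving per-model fault coverage, and the positive paper compacts test sequences through LP relaxation and randomized rounding. Below is a minimal sketch of that covering-ILP view, assuming the PuLP solver; the fault/vector data and all names are hypothetical placeholders, not taken from either paper.

```python
# Hedged sketch: test-set minimization as a covering ILP (PuLP assumed).
# covers[f] lists the test vectors that detect fault f -- placeholder data.
import pulp

covers = {
    "stuck_at_1": [0, 2],
    "transition_3": [1, 2],
    "iddq_7": [0, 3],
}
n_vectors = 4

prob = pulp.LpProblem("test_compaction", pulp.LpMinimize)
x = pulp.LpVariable.dicts("keep", range(n_vectors), cat="Binary")

# Objective: keep as few vectors as possible.
prob += pulp.lpSum(x[i] for i in range(n_vectors))

# Every fault stays detected, so no fault model loses coverage.
for fault, vecs in covers.items():
    prob += pulp.lpSum(x[i] for i in vecs) >= 1, f"cover_{fault}"

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("vectors kept:", [i for i in range(n_vectors) if x[i].value() > 0.5])
```

Relaxing `cat="Binary"` to `cat="Continuous"` yields the LP lower bound that the compaction paper uses as a quality measure before randomized rounding.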
{
"abstract": "Background Neuronal dysfunction plays an important role in the high prevalence of HIV-associated neurocognitive disorders (HAND) in people with HIV (PWH). Transcranial direct current stimulation (tDCS)—with its capability to improve neuronal function—may have the potential to serve as an alternative therapeutic approach for HAND. Brain imaging and neurobehavioral studies provide converging evidence that injury to the anterior cingulate cortex (ACC) is highly prevalent and contributes to HAND in PWH, suggesting that ACC may serve as a potential neuromodulation target for HAND. Here we conducted a randomized, double-blind, placebo-controlled, partial crossover pilot study to test the safety, tolerability, and potential efficacy of anodal tDCS over cingulate cortex in adults with HIV, with a focus on the dorsal ACC (dACC). Methods Eleven PWH (47–69 years old, 2 females, 100% African Americans, disease duration 16–36 years) participated in the study, which had two phases, Phase 1 and Phase 2. During Phase 1, participants were randomized to receive ten sessions of sham (n = 4) or cingulate tDCS (n = 7) over the course of 2–3 weeks. Treatment assignments were unknown to the participants and the technicians. Neuropsychology and MRI data were collected from four additional study visits to assess treatment effects, including one baseline visit (BL, prior to treatment) and three follow-up visits (FU1, FU2, and FU3, approximately 1 week, 3 weeks, and 3 months after treatment, respectively). Treatment assignment was unblinded after FU3. Participants in the sham group repeated the study with open-label cingulate tDCS during Phase 2. Statistical analysis was limited to data from Phase 1. Results Compared to sham tDCS, cingulate tDCS led to a decrease in Perseverative Errors in Wisconsin Card Sorting Test (WCST), but not Non-Perseverative Errors, as well as a decrease in the ratio score of Trail Making Test—Part B (TMT-B) to TMT—Part A (TMT-A). Seed-to-voxel analysis with resting state functional MRI data revealed an increase in functional connectivity between the bilateral dACC and a cluster in the right dorsal striatum after cingulate tDCS. There were no differences in self-reported discomfort ratings between sham and cingulate tDCS. Conclusions Cingulate tDCS is safe and well-tolerated in PWH, and may have the potential to improve cognitive performance and brain function. A future study with a larger sample is warranted.",
"corpus_id": 249360330,
"title": "Cingulate transcranial direct current stimulation in adults with HIV"
} | {
"abstract": "BACKGROUND AND PURPOSE: Validated neuroimaging markers of HIV-associated neurocognitive disorder in patients on antiretroviral therapy are urgently needed for clinical trials. The purpose of this study was to explore the relationship between cognitive impairment and brain metabolism in older subjects with HIV infection. It was hypothesized that MR spectroscopy measurements related to neuronal health and function (particularly N-acetylaspartate and glutamate) would be lower in HIV-positive subjects with worse cognitive performance. MATERIALS AND METHODS: Forty-five HIV-positive patients (mean age, 58.9 ± 5.3 years; 33 men) underwent detailed neuropsychological testing and brain MR spectroscopy at 7T. Twenty-four subjects were classified as having asymptomatic cognitive impairment, and 21 were classified as having symptomatic cognitive impairment. Single-voxel proton MR spectra were acquired from 5 brain regions and quantified using LCModel software. Brain metabolites and neuropsychological test results were compared using nonparametric statistics and Pearson correlation coefficients. RESULTS: Differences in brain metabolites were found between symptomatic and asymptomatic subjects, with the main findings being lower measures of N-acetylaspartate in the frontal white matter, posterior cingulate cortex, and precuneus. In the precuneus, glutamate was also lower in the symptomatic group. In the frontal white matter, precuneus, and posterior cingulate cortex, NAA and glutamate measurements showed significant positive correlation with better performance on neuropsychological tests. CONCLUSIONS: Compared with asymptomatic subjects, symptomatic HIV-positive subjects had lower levels of NAA and glutamate, most notably in the frontal white matter, which also correlated with performance on neuropsychological tests. High-field MR spectroscopy offers insight into the pathophysiology associated with cognitive impairment in HIV and may be useful as a quantitative outcome measure in future treatment trials.",
"corpus_id": 3463257,
"title": "7T Brain MRS in HIV Infection: Correlation with Cognitive Impairment and Performance on Neuropsychological Tests"
} | {
"abstract": "The accuracy of endometrial aspiration smears obtained with the Isaacs cell sampler in the diagnosis of malignant mixed mesodermal tumors (MMMT) was compared to the results obtained with routine cervical and vaginal smears in five cases of MMMT found in a series of 220 endometrial aspirations. Cervical and vaginal smears previously taken on these patients were positive for adenocarcinoma or MMMT in two cases and suspicious for adenocarcinoma in the remaining three cases. Endometrial aspirates were positive for MMMT in three cases and positive for adenocarcinoma or MMMT in two cases. The endometrial aspiration smears contained a variety of cells: malignant glandular, squamous, spindly stromal, undifferentiated, osteoid and tumor giant cells; chondrocytes and free psammoma bodies were also observed. These cases indicated that endometrial aspiration can accurately detect the heterologous cellular elements found in MMMT and is an effective technique in its diagnosis.",
"corpus_id": 28791127,
"score": -1,
"title": "Cytodiagnosis of endometrial malignant mixed mesodermal tumor."
} |
{
"abstract": "Heterothermic insects like honeybees, foraging in a variable environment, face the challenge of keeping their body temperature high to enable immediate flight and to promote fast exploitation of resources. Because of their small size they have to cope with an enormous heat loss and, therefore, high costs of thermoregulation. This calls for energetic optimisation which may be achieved by different strategies. An ‘economizing’ strategy would be to reduce energetic investment whenever possible, for example by using external heat from the sun for thermoregulation. An ‘investment-guided’ strategy, by contrast, would be to invest additional heat production or external heat gain to optimize physiological parameters like body temperature which promise increased energetic returns. Here we show how honeybees balance these strategies in response to changes of their local microclimate. In a novel approach of simultaneous measurement of respiration and body temperature foragers displayed a flexible strategy of thermoregulatory and energetic management. While foraging in shade on an artificial flower they did not save energy with increasing ambient temperature as expected but acted according to an ‘investment-guided’ strategy, keeping the energy turnover at a high level (∼56–69 mW). This increased thorax temperature and speeded up foraging as ambient temperature increased. Solar heat was invested to increase thorax temperature at low ambient temperature (‘investment-guided’ strategy) but to save energy at high temperature (‘economizing’ strategy), leading to energy savings per stay of ∼18–76% in sunshine. This flexible economic strategy minimized costs of foraging, and optimized energetic efficiency in response to broad variation of environmental conditions.",
"corpus_id": 15062448,
"title": "Energetic Optimisation of Foraging Honeybees: Flexible Change of Strategies in Response to Environmental Challenges"
} | {
"abstract": "Graphical abstract . Highlights ► We demonstrate the benefits of a combined use of infrared thermography with respiratory measurements in insect ecophysiological research. ► Infrared thermography enables repeated investigation of behaviour and thermoregulation without behavioural impairment. ► Comparison with respirometry brings new insights into the mechanisms of energetic optimisation of bee and wasp foraging. ► Combination of methods improves interpretation of respiratory traces in determinations of insect critical thermal limits.",
"corpus_id": 326880,
"title": "Assessing honeybee and wasp thermoregulation and energetics—New insights by combination of flow-through respirometry with infrared thermography"
} | {
"abstract": "Flow-through respirometry is a powerful, accurate methodology for metabolic measurement that is applicable to organisms spanning a body mass range of many orders of magnitude. Concentrating on flow-through respirometry that utilizes a chamber to contain the experimental animals, we describe the most common flow measurement and control methodologies (push, pull and stop-flow) and their associated advantages and disadvantages. Objective methods for calculating air flow rates through the chamber, based on the body mass and taxon of the experimental organism, are presented. Techniques for removing the effect of water vapor dilution, including the direct measurement of water vapor pressure and mathematical compensation for its presence, are described and evaluated, as are issues surrounding the analysis of one or both of the respiratory gases (oxygen and carbon dioxide), and issues related to the mathematical correction of wash-out phenomena (response correction). Two important biomedical applications of flow-through respirometry (metabolic phenotyping and room calorimetry) are discussed in detail, and we conclude with a list of suggestions aimed primarily at investigators starting out in applying flow-through respirometry.",
"corpus_id": 28822133,
"score": -1,
"title": "Flow-through respirometry applied to chamber systems: pros and cons, hints and tips."
} |
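The honeybee study above reports energy turnover in milliwatts derived from flow-through respirometry, the methodology reviewed in the last abstract of this row. Here is a hedged sketch of one common conversion from respirometry readings to metabolic power, assuming dry, CO2-scrubbed air, the textbook relation VO2 = FR·(FiO2 − FeO2)/(1 − FiO2), and an oxyjoule equivalent of roughly 20.1 J per mL O2; these equations and constants are standard assumptions, not values taken from the papers.

```python
# Hedged sketch: flow-through respirometry readings -> metabolic power.
# Equation and oxyjoule constant are common textbook choices (assumed).

def vo2_ml_per_min(flow_ml_min: float, fi_o2: float, fe_o2: float) -> float:
    """O2 consumption for dry, CO2-scrubbed air in a pull-mode system,
    using VO2 = FR * (FiO2 - FeO2) / (1 - FiO2)."""
    return flow_ml_min * (fi_o2 - fe_o2) / (1.0 - fi_o2)

def metabolic_power_mw(vo2: float, joules_per_ml_o2: float = 20.1) -> float:
    """Convert VO2 (mL/min) to power (mW), assuming ~20.1 J per mL O2."""
    return vo2 * joules_per_ml_o2 / 60.0 * 1000.0  # J/min -> J/s -> mW

# Made-up readings: 200 mL/min flow, 20.95% O2 in, 20.87% O2 out.
vo2 = vo2_ml_per_min(200.0, 0.2095, 0.2087)
print(f"VO2 = {vo2:.3f} mL/min -> {metabolic_power_mw(vo2):.1f} mW")
```

With these invented readings the sketch lands near the ~56-69 mW turnover range the honeybee abstract reports; that is only a plausibility check, not a reproduction of their data.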
{
"abstract": "Kelly et al. (2007) studied sensorimotor alignment effects in the learning environment and novel environment. It found that sensorimotor alignment effects disappeared in the novel environment. But Xiao and Liu (2014) found that sensorimotor alignment effects always appeared in the novel environment except when participants faced the opposite direction to the learning direction. These two studies’ results were both interpreted by the dual system spatial memory theories, which made a hypothesis that sensorimotor and memory alignment effects need different representations. The reason might be that the different promoting extend of memory and body movement to the spatial updating procedure. The promotion effects of memory to spatial updating were efficient both in online and offline representations. Therefore, it is possible to make a comparative study on the two promotion effects of memory and body movement to the spatial updating. The paradigm used in Kelly et al. (2007) was applied in the present study. After remembering a body-centered spatial layout, participants were asked to finish spatial judgments in imagined perspectives (for example, “imagine that you faced A, point to B.”). The imagined perspectives were memory-aligned (the imagined perspective was aligned with learning perspective), sensorimotor-aligned (the imagined perspective was aligned with the current body direction) and misaligned (the imagined perspective was neither aligned with learning perspective nor aligned with the current body direction. And it was defined as the opposite direction of sensorimotor- aligned perspective while the learned perspective was the axis of symmetry) perspectives. The promotion of memory to spatial updating was defined as the subtraction of misaligned and memory-aligned performances. The promotion of body movement to spatial updating was defined as the subtraction of misaligned and sensorimotor-aligned performances. to the left or right before they performed spatial judgments from a perspective aligned with the learning direction (memory aligned), aligned with the direction they face (sensorimotor aligned), and the novel direction misaligned with the two directions mentioned above (misaligned). In each imagined perspective, participants pointed to all the 8 objects of the layout (e.g. “Imagine that you are facing the ball, please point to the candle”). Each participant performed 48 trials (8 target objects × 3 imagined perspectives × 2 blocks). Participants in experiment 2 finished the same spatial judgment task in the novel environment. After learning the spatial layout, the participants of Experiment 2a were disoriented before standing at the testing position in the novel environment, facing 180 degrees opposite to the learning direction. And the participants of Experiment 2b walked straight forward to the testing position in the novel environment remaining in their orientation. The participants of Experiment 2c turned to face the direction opposite to learning perspective after walking straightforward to the novel environment. The dependent measures were the latency and the absolute angular error of the pointing response. In Experiment 1, the pointing latency and absolute pointing error were subjected to mixed-model analyses of variance (ANOVAs), with imagined heading (memory aligned, sensorimotor aligned, or misaligned) as the within-subject variable. 
Participants pointed more accurately and faster from the memory aligned perspective than from the misaligned perspective (a memory alignment effect), and faster from the sensorimotor aligned perspective than from the misaligned perspective (a sensorimotor alignment effect). The same effects appeared in Experiment 2a, 2b, but not 2c. The Pearson correlations between the promotion of memory to spatial updating and promotion of body movement to spatial updating were significantly high in all of the conditions. And these two effects were significantly different only in Experiment 2. In conclusion, results in the present study indicate that the environment dependent effect of body movement exists. The promotion effect of body movement is equally effective in the learning environment but significantly worse in the novel environment than the promotion effect of memory to spatial updating.",
"corpus_id": 152039362,
"title": "Environment dependent effect of body movement promoting spatial updating"
} | {
"abstract": "The non-visual updating of body-centred spatial relationships was investigated in an experiment in which blindfolded patients had to point to previously seen targets after a body rotation in the absence of vision. Patients with lesions to the right dorsal (RD) area were impaired at updating their positions relative to non-RD patients and normal subjects: they tended to underestimate systematically the angle through which they had turned. The results are interpreted in terms of impoverished locomotor input and/or systematically biased processing or locomotor proprioception in the RD patients, which prevented accurate tracking of changes in egocentric spatial relationships.",
"corpus_id": 354535,
"title": "The automatic updating of egocentric spatial relationships and its impairment due to right posterior cortical lesions"
} | {
"abstract": "AbstractThe tree species Alnus acuminata and Morella pubescens, native to South America, are candidates for soil quality improvement and afforestation of degraded areas and may serve as nurse trees for later inter-planting of other trees, including native crop trees. Both species not only form symbioses with arbuscular mycorrhizal fungi (AMF) and ectomycorrhizal fungi (EMF), but also with N2-fixing actinobacteria. Because tree seedlings inoculated with appropriate mycorrhizal fungi in the nursery resist transplanting stress better than non-mycorrhizal seedlings, we evaluated for A. acuminata and M. pubescens the potential of inoculation with mycorrhizal fungi for obtaining robust tree seedlings. For the first time, a laboratory-produced mixed AMF inoculum was tested in comparison with native soil from stands of both tree species, which contains AMF and EMF. Seedlings of both tree species reacted positively to both types of inocula and showed an increase in height, root collar diameter and above- and belowground biomass production, although mycorrhizal root colonization was rather low in M. pubescens. After 6 months, biomass was significantly higher for all mycorrhizal treatments when compared to control treatments, whereas aboveground biomass was approximately doubled for most treatments. To test whether mycorrhiza formation positively influences plant performance under reduced water supply the experiment was conducted under two irrigation regimes. There was no strong response to different levels of watering. Overall, application of native soil inoculum improved growth most. It contained sufficient AMF propagules but potentially also other soil microorganisms that synergistically enhance plant growth performance. However, the AMF inoculum pot-produced under controlled conditions was an efficient alternative for better management of A. acuminata and M. pubescens in the nursery, which in the future may be combined with defined EMF and Frankia inocula for improved management practices.\n",
"corpus_id": 14353360,
"score": -1,
"title": "Cultured arbuscular mycorrhizal fungi and native soil inocula improve seedling development of two pioneer trees in the Andean region"
} |
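The spatial-updating study at the start of this row defines its two "promotion" scores as latency subtractions (misaligned minus memory-aligned, and misaligned minus sensorimotor-aligned) and then correlates them across participants. The following toy sketch shows that computation on invented numbers, with a shared per-participant term so the two scores correlate, as the abstract reports.

```python
# Toy sketch of the two "promotion" scores and their Pearson correlation.
# All numbers are invented placeholders, not the study's data.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n = 24                                     # hypothetical participants
ability = rng.normal(400, 80, n)           # shared updating ability (ms)
promotion_memory = ability + rng.normal(0, 60, n)     # misaligned - memory-aligned
promotion_body = ability - 50 + rng.normal(0, 60, n)  # misaligned - sensorimotor-aligned

r, p = pearsonr(promotion_memory, promotion_body)
print(f"Pearson r = {r:.2f} (p = {p:.4f})")
print(f"mean body-vs-memory gap = {np.mean(promotion_memory - promotion_body):.1f} ms")
```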
{
"abstract": "Machine learning is the powerful tool of the artificial intelligence which is popularly implemented in several applications. Spectrum sensing is the important function of a cognitive radio which detects the available channels. Although, energy detection is the simplest technique, its detection performance suffers from several communication environment factors. Since each classifiers of the machine learning has its own advantages/disadvantages and suits for a specific data characteristic. In this paper, we evaluate the performance of energy detection implemented with the machine learning methods including logistic regression, k-nearest neighbor (KNN) and neural network. Then, the simulated performance is compared to the traditional energy detection (CFAR). The simulation results show that the data characteristic of the detected energy under different distances is not linear. Therefore, it is difficult to determine the existence of PU by using linear machine learning model such as logistic regression. On the other hand, KNN presents lower performance than CFAR because the training data is not appropriate. Moreover, KNN suffers from long sensing time. The neural network presents the highest performance since it suits for non-linear distribution of data and it does not consider the pre-determined decision threshold. Moreover, it consumes the shortest sensing time.",
"corpus_id": 235475774,
"title": "A Performance Comparison Of Spectrum Sensing Exploiting Machine Learning Algorithms"
} | {
"abstract": "In cognitive radio networks, the secondary users can use the frequency bands when the primary users are not present. Hence secondary users need to constantly sense the presence of the primary users. When the primary users are detected, the secondary users have to vacate that channel. This makes the probability of detection important to the primary users as it indicates their protection level from secondary users. When the secondary users detect the presence of a primary user which is in fact not there, it is referred to as false alarm. The probability of false alarm is important to the secondary users as it determines their usage of an unoccupied channel. Depending on whose interest is of priority, either a targeted probability of detection or false alarm shall be set. After setting one of the probabilities, the other can be optimized through cooperative sensing. In this paper, we show that cooperating all secondary users in the network does not necessary achieve the optimum performance, but instead, it is achieved by cooperating a certain number of users with the highest primary user's signal to noise ratio. Computer simulations have shown that the Pd can increase from 92.03% to 99.88% and Pf can decrease from 6.02% to 0.06% in a network with 200 users.",
"corpus_id": 10544313,
"title": "Optimization for Cooperative Sensing in Cognitive Radio Networks"
} | {
"abstract": "We consider a two-stage sensing scheme for cognitive radios where coarse sensing based on energy detection is performed in the first stage and, if required, fine sensing based on cyclostationary detection in the second stage. We design the detection threshold parameters in the two sensing stages so as to maximize the probability of detection, given constraints on the probability of false alarm. We compare this scheme with ones where only energy detection or cyclostationary detection is performed. The performance comparison is made based on the probability of detection, probability of false alarm and mean detection time.",
"corpus_id": 6663411,
"score": -1,
"title": "Two-stage spectrum sensing for cognitive radios"
} |
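The spectrum-sensing row compares a fixed-threshold energy detector with learned detectors. Below is a toy simulation sketch: the exact chi-square CFAR threshold versus a small scikit-learn neural network trained on the same scalar energy feature; the sample counts, the -5 dB SNR, and the Gaussian primary-user signal model are illustrative assumptions, not the papers' settings.

```python
# Toy simulation: exact chi-square CFAR threshold for an energy detector
# versus a small learned detector on the same scalar energy feature.
import numpy as np
from scipy.stats import chi2
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N, sigma2, pfa = 64, 1.0, 0.05           # samples/slot, noise power, target Pfa

def slot_energy(signal_power, trials):
    """Energy of N samples of noise (+ optional Gaussian PU signal)."""
    x = rng.normal(0.0, np.sqrt(sigma2 + signal_power), (trials, N))
    return (x ** 2).sum(axis=1)

h0 = slot_energy(0.0, 2000)              # primary user absent
h1 = slot_energy(10 ** (-5 / 10), 2000)  # PU present at -5 dB SNR

# CFAR: under H0, T / sigma2 is chi-square with N degrees of freedom.
thr = sigma2 * chi2.isf(pfa, df=N)
print(f"CFAR  Pd={np.mean(h1 > thr):.3f}  Pfa={np.mean(h0 > thr):.3f}")

# Learned detector trained on labeled energies (toy in-sample comparison).
X = np.concatenate([h0, h1]).reshape(-1, 1)
y = np.concatenate([np.zeros(2000), np.ones(2000)])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)
print(f"NN    accuracy={clf.score(X, y):.3f}")
```

Because the decision statistic here is a single energy value, even a linear learned rule can do well on this toy; the query paper's point about non-linear data arises once energies from different distances and SNRs are mixed.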
{
"abstract": "Executive functions (EFs) are essential for daily living activities but decline with age. Convenient assessment and timely intervention have particular significance for older adults. However, the traditional laboratory tasks of EFs are typically monotonous and inconvenient. The current study aimed to develop an interesting and convenient supplementary tool to assess EFs for older adults. According to the theory of EFs, we developed a serious game, FISHERMAN, to assess EFs. The game includes three subgames, Cautious Fisherman, Agile Fisherman, and Wise Fisherman, targeting core components of inhibition, shifting, and working memory, respectively. The current study aims to verify the reliability and validity of the game. One hundred and eight healthy older adults participated in this study and were tested through the FISHERMAN game and a battery of cognitive tests. The results show that the FISHERMAN game has high internal consistency reliability and good construct validity as well as criterion-related validity, suggesting that the game design is valid and can be used in EFs assessment for older adults. Future studies are warranted to establish the norm of the FISHERMAN game in older adults and investigate whether the FISHERMAN game can be generalized to other populations.",
"corpus_id": 250091742,
"title": "FISHERMAN: A Serious Game for Executive Function Assessment of Older Adults"
} | {
"abstract": "Background and Aims: Adaptive behavior depends on the ability to voluntarily suppress context-inappropriate behaviors, a process referred to as response inhibition. Stop Signal tests (SSTs) are the most frequently studied paradigm used to assess response inhibition. Previous studies of SSTs have indicated that inhibitory control behavior can be explained using a common model in which GO and STOP processes are initiated independent from one and another, and the process that is completed first determines whether the behavior is elicited (GO process) or terminated (STOP process). Consistent with this model, studies have indicated that individuals strategically delay their behaviors during SSTs in order to increase their stopping abilities. Despite being controlled by distinct neural systems, prior studies have largely documented similar inhibitory control performance across eye and hand movements. Though, no existing studies have compared the extent to which individuals strategically delay behavior across different effectors is not yet clear. Here, we compared the extent to which inhibitory control processes and the cognitive strategies that support them during oculomotor and manual motor behaviors. Methods: We examined 29 healthy individuals who performed parallel oculomotor and manual motor SSTs. Participants also completed a separate block of GO trials administered prior to the Stop Signal tests to assess baseline reaction times for each effector and reaction time increases during interleaved GO trials of the SST. Results: Our results showed that stopping errors increased for both effectors as the interval between GO and STOP cues was increased (i.e., stop signal delay), but performance deteriorated more rapidly for eye compared to hand movements with increases in stop signal delay. During GO trials, participants delayed the initiation of their responses for each effector, and greater slowing of reaction times on GO trials was associated with increased accuracy on STOP trials for both effectors. However, participants delayed their eye movements to a lesser degree than their hand movements, and strategic reaction time slowing was a stronger determinant of stopping accuracy for hand compared to eye movements. Overall, stopping accuracies for eye and hand movements were only modestly correlated, and the time it took individuals to cancel a response was not related for eye and hand movements. Discussion and Conclusion: Our findings that GO and STOP processes are independent and that individuals strategically delay their behavioral responses to increase stopping accuracy regardless of effector indicate that inhibitory control of oculomotor and manual motor behaviors both follow common guiding principles. Yet, our findings document that eye movements are more difficult to inhibit than hand movements, and the timing, magnitude, and impact of cognitive control strategies used to support voluntary response inhibition are less robust for eye compared to hand movements. This suggests that inhibitory control systems also show unique characteristics that are behavior-dependent. This conclusion is consistent with neurophysiological evidence showing important differences in the architecture and functional properties of the neural systems involved in inhibitory control of eye and hand movements. It also suggests that characterizing inhibitory control processes in health and disease requires effector-specific analysis.",
"corpus_id": 21260,
"title": "Inhibitory Control Processes and the Strategies That Support Them during Hand and Eye Movements"
} | {
"abstract": "BACKGROUND\nInhibitory control deficits are common in autism spectrum disorder (ASD) and associated with more severe repetitive behaviors. Inhibitory control deficits may reflect slower execution of stopping processes, or a reduced ability to delay the onset of behavioral responses in contexts of uncertainty. Previous studies have documented relatively spared stopping processes in ASD, but whether inhibitory control deficits in ASD reflect failures to delay response onset has not been systematically assessed. Further, while improvements in stopping abilities and response slowing are seen through adolescence/early adulthood in health, their development in ASD is less clear.\n\n\nMETHODS\nA stop-signal test (SST) was administered to 121 individuals with ASD and 76 age and IQ-matched healthy controls (ages 5-28). This test included 'GO trials' in which participants pressed a button when a peripheral target appeared and interleaved 'STOP trials' in which they were cued to inhibit button-presses when a stop-signal appeared at variable times following the GO cue. STOP trial accuracy, RT of the stopping process (SSRT), and reaction time (RT) slowing during GO trials were examined.\n\n\nRESULTS\nRelative to controls, individuals with ASD had reduced accuracy on STOP trials. SSRTs were similar across control and ASD participants, but RT slowing was reduced in patients compared to controls. Age-related increases in stopping ability and RT slowing were attenuated in ASD. Reduced stopping accuracy and RT slowing were associated with more severe repetitive behaviors in ASD.\n\n\nDISCUSSION\nOur findings show that inhibitory control deficits in ASD involve failures to strategically delay behavioral response onset. These results suggest that reduced preparatory behavioral control may underpin inhibitory control deficits as well as repetitive behaviors in ASD. Typical age-related improvements in inhibitory control during late childhood/early adolescence are reduced in ASD, highlighting an important developmental window during which treatments may mitigate cognitive alterations contributing to repetitive behaviors.",
"corpus_id": 5003664,
"score": -1,
"title": "Cognitive mechanisms of inhibitory control deficits in autism spectrum disorder"
} |
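Both stop-signal abstracts in this row rest on the independent-race model, under which the stop-signal reaction time (SSRT) is commonly estimated with the "integration" method: the go-RT quantile at n = P(respond | stop signal), minus the mean stop-signal delay (SSD). Here is a hedged sketch of that standard estimator on invented data; neither paper specifies this exact implementation.

```python
# Hedged sketch of the standard "integration" SSRT estimator under the
# independent-race model. All data below are invented placeholders.
import numpy as np

def ssrt_integration(go_rts, stop_trial_responded, ssds):
    """SSRT = nth go-RT quantile minus mean SSD, with n = P(respond | stop)."""
    p_respond = np.mean(stop_trial_responded)
    go_sorted = np.sort(go_rts)
    idx = min(int(p_respond * len(go_sorted)), len(go_sorted) - 1)
    return go_sorted[idx] - np.mean(ssds)

rng = np.random.default_rng(1)
go_rts = rng.normal(500, 80, 200)                  # go RTs in ms
ssds = rng.choice([150, 200, 250, 300], size=80)   # staircased SSDs
stop_trial_responded = rng.random(80) < 0.5        # ~50% stop failures
print(f"SSRT ~= {ssrt_integration(go_rts, stop_trial_responded, ssds):.0f} ms")
```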
{
"abstract": "Cinnamaldehyde and cinnamaldehyde‐derived compounds are candidates for the development of anticancer drugs that have received extensive research attention. In this review, we summarize recent findings detailing the positive and negative aspects of cinnamaldehyde and its derivatives as potential anticancer drug candidates. Furthermore, we describe the in vivo pharmacokinetics and metabolism of cinnamaldehydes. The oxidative and antioxidative properties of cinnamaldehydes, which contribute to their potential in chemotherapy, have also been discussed. Moreover, the mechanism(s) by which cinnamaldehydes induce apoptosis in cancer cells have been explored. In addition, evidence of the regulatory effects of cinnamaldehydes on cancer cell invasion and metastasis has been described. Finally, the application of cinnamaldehydes in treating various types of cancer, including breast, prostate, and colon cancers, has been discussed in detail. The effects of cinnamaldehydes on leukemia, hepatocellular carcinoma, and oral cancer have been summarized briefly. Copyright © 2016 John Wiley & Sons, Ltd.",
"corpus_id": 25542338,
"title": "Cinnamaldehydes in Cancer Chemotherapy"
} | {
"abstract": "Colorectal cancer (CRC) is a major cause of tumor-related morbidity and mortality worldwide. Recent research suggests that pharmacological intervention using dietary factors that activate the redox sensitive Nrf2/Keap1-ARE signaling pathway may represent a promising strategy for chemoprevention of human cancer including CRC. In our search for dietary Nrf2 activators with potential chemopreventive activity targeting CRC, we have focused our studies on trans-cinnamic aldehyde (cinnamaldeyde, CA), the key flavor compound in cinnamon essential oil. Here we demonstrate that CA and an ethanolic extract (CE) prepared from Cinnamomum cassia bark, standardized for CA content by GC-MS analysis, display equipotent activity as inducers of Nrf2 transcriptional activity. In human colon cancer cells (HCT116, HT29) and non-immortalized primary fetal colon cells (FHC), CA- and CE-treatment upregulated cellular protein levels of Nrf2 and established Nrf2 targets involved in the antioxidant response including heme oxygenase 1 (HO-1) and γ-glutamyl-cysteine synthetase (γ-GCS, catalytic subunit). CA- and CE-pretreatment strongly upregulated cellular glutathione levels and protected HCT116 cells against hydrogen peroxide-induced genotoxicity and arsenic-induced oxidative insult. Taken together our data demonstrate that the cinnamon-derived food factor CA is a potent activator of the Nrf2-orchestrated antioxidant response in cultured human epithelial colon cells. CA may therefore represent an underappreciated chemopreventive dietary factor targeting colorectal carcinogenesis.",
"corpus_id": 1307211,
"title": "The Cinnamon-Derived Dietary Factor Cinnamic Aldehyde Activates the Nrf2-Dependent Antioxidant Response in Human Epithelial Colon Cells"
} | {
"abstract": "Recent studies indicate that natural isothiocyanates, such as sulforaphane (SFN) and phenethyl isothiocyanate (PEITC) possess strong antitumor activities in vitro and in vivo. The nuclear factor kappa B (NF-kappaB) is believed to play an important role in cancer chemoprevention due to its involvement in tumor cell growth, proliferation, angiogenesis, invasion, apoptosis, and survival. In this study, we investigated the effects and the molecular mechanisms of SFN and PEITC on NF-kappaB transcriptional activation and NF-kappaB-regulated gene expression in human prostate cancer PC-3 C4 cells. Treatment with SFN (20 and 30 microM) and PEITC (5 and 7.5 microM) significantly inhibited NF-kappaB transcriptional activity, nuclear transloction of p65, and gene expression of NF-kappaB-regulated VEGF, cylcin D1, and Bcl-X(L) in PC-3 C4 cells. To further elucidate the mechanism, we utilized the dominant-negative mutant of inhibitor of NF-kappaB alpha (IkappaBalpha) (SR-IkappaBalpha). Analogous to treatments with SFN and PEITC, SR-IkappaBalpha also strongly inhibited NF-kappaB transcriptional activity as well as VEGF, cylcin D1, and Bcl-X(L) expression. Furthermore, SFN and PEITC also inhibited the basal and UVC-induced phosphorylation of IkappaBalpha and blocked UVC-induced IkappaBalpha degradation in PC-3 C4 cells. In examining the upstream signaling, we found that the dominant-negative mutant of IKKbeta (dnIKKbeta) possessed inhibitory effects similar to SFN and PEITC on NF-kappaB, VEGF, cylcin D1, Bcl-X(L) as well as IkappaBalpha phosphorylation. In addition, treatment with SFN and PEITC potently inhibited phosphorylation of both IKKbeta and IKKalpha and significantly inhibited the in vitro phosphorylation of IkappaBalpha mediated by IKKbeta. Taken together, these results suggest that the inhibition of SFN and PEITC on NF-kappaB transcriptional activation as well as NF-kappaB-regulated VEGF, cyclin D1, and Bcl-X(L) gene expression is mainly mediated through the inhibition of IKK phosphorylation, particularly IKKbeta, and the inhibition of IkappaBalpha phosphorylation and degradation, as well as the decrease of nuclear translocation of p65 in PC-3 cells.",
"corpus_id": 29295708,
"score": -1,
"title": "Suppression of NF-kappaB and NF-kappaB-regulated gene expression by sulforaphane and PEITC through IkappaBalpha, IKK pathway in human prostate cancer PC-3 cells."
} |
{
"abstract": "Individuals recovering from substance use often seek social support (emotional and informational) on online recovery forums, where they can both write and comment on posts, expressing their struggles and successes. A common challenge in these forums is that certain posts (some of which may be support seeking) receive no comments. In this work, we use data from two Reddit substance recovery forums: /r/Leaves and /r/OpiatesRecovery, to determine the relationship between the social supports expressed in the titles of posts and the number of comments they receive. We show that the types of social support expressed in post titles that elicit comments vary from one substance use recovery forum to the other.",
"corpus_id": 226283955,
"title": "Does Social Support (Expressed in Post Titles) Elicit Comments in Online Substance Use Recovery Forums?"
} | {
"abstract": "Prescription drug abuse is a pressing public health issue, and people who misuse prescription drugs are turning to online forums for help. Are such forums effective? We analyze the process of opioid withdrawal, recovery and relapse on Forum77, MedHelp.org's online health forum for substance abuse recovery. Applying Prochashka's Transtheoretical Model for behavior change, we develop a taxonomy describing phases of addiction expressed by Forum77 members. We examine activity and linguistic features across the phases USING, WITHDRAWING and RECOVERING. We train statistical classifiers to identify addiction phase, relapse and whether a user was RECOVERING at the time of her last post. Applying our classifiers to 2,848 users, we find that while almost 50% relapse, the prognosis for ending in RECOVERING is favorable. Supplementing our results with users' own accounts of their experiences, we discuss Forum77's efficacy and shortcomings, and implications for future technologies.",
"corpus_id": 920933,
"title": "Forum77: An Analysis of an Online Health Forum Dedicated to Addiction Recovery"
} | {
"abstract": "Psychological distress in the form of depression, anxiety and other mental health challenges among college students is a growing health concern. Dearth of accurate, continuous, and multi-campus data on mental well-being presents significant challenges to intervention and mitigation efforts in college campuses. We examine the potential of social media as a new \"barometer\" for quantifying the mental well-being of college populations. Utilizing student-contributed data in Reddit communities of over 100 universities, we first build and evaluate a transfer learning based classification approach that can detect mental health expressions with 97% accuracy. Thereafter, we propose a robust campus-specific Mental Well-being Index: MWI. We find that MWI is able to reveal meaningful temporal patterns of mental well-being in campuses, and to assess how their expressions relate to university attributes like size, academic prestige, and student demographics. We discuss the implications of our work for improving counselor efforts, and in the design of tools that can enable better assessment of the mental health climate of college campuses.",
"corpus_id": 1690595,
"score": -1,
"title": "A Social Media Based Index of Mental Well-Being in College Campuses"
} |
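The query study in this row relates the social-support types expressed in post titles to whether posts receive comments. The following is an illustrative sketch, not the authors' pipeline: a TF-IDF plus logistic-regression classifier that could label titles as expressing emotional or informational support; the titles and labels are invented placeholders.

```python
# Illustrative sketch (not the authors' pipeline): labeling post titles for
# the type of social support they express. Titles/labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

titles = [
    "Day 30 and I feel hopeless, could use some encouragement",
    "What worked for you during week one withdrawals?",
    "Relapsed last night, feeling ashamed",
    "Any tips for sleep during acute withdrawal?",
]
labels = ["emotional", "informational", "emotional", "informational"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(titles, labels)
print(clf.predict(["How do I handle cravings at work?"]))
```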
{
"abstract": "Managing Employee Motivation Through the Process of Government Furloughs by Kim Charisc Hill MBA, Touro University International, 2007 BS, Touro University International, 2004 AA, City Colleges of Chicago, 1999 Dissertation Submitted in Partial Fulfilment of the Requirements for the Degree of Doctor of Philosophy Management",
"corpus_id": 148748775,
"title": "Managing Employee Motivation Through the Process of Government Furloughs"
} | {
"abstract": "Previous research has shown that both past unemployment and anticipated future unemployment have a detrimental impact on employees' attitudes and behaviours, which may affect organisational performance. Surprisingly, however, very little is known about the relative impact of past unemployment compared with current job insecurity. Although it is possible that both effects operate simultaneously, this paper – focused on employees' job satisfaction and utilising a set of cross-sectional data derived from the European Social Survey 2006–2007 – reports on a strongly pronounced insecurity effect: anticipated unemployment substantially reduces employees' job satisfaction. Interestingly, inclusion of the perceived risk of future unemployment as a separate predictor variable in ordered probit regressions relegates the experience of past unemployment to a statistically insignificant coefficient and thus weakens the ‘scarring’ hypothesis. These results hold true even when several socio-demographic characteristics and proxies for individual personality traits are controlled. Implications for organisations and human resource practitioners and scope for future research endeavours conclude the analysis of the paper.",
"corpus_id": 154037584,
"title": "Scarred from the past or afraid of the future? Unemployment and job satisfaction across European labour markets"
} | {
"abstract": "We consider the link between poverty and subjective well-being, and focus in particular on the role of time. We use panel data on 49,000 individuals living in Germany from 1992 to 2012 to uncover three empirical relationships. First, life satisfaction falls with both the incidence and intensity of contemporaneous poverty. Second, poverty scars: those who have been poor in the past report lower life satisfaction today, even when out of poverty. Last, the order of poverty spells matters: for a given number of years in poverty, satisfaction is lower when the years are linked together. As such, poverty persistence reduces well-being. These effects differ by population subgroups.",
"corpus_id": 13936895,
"score": -1,
"title": "Soeppapers on Multidisciplinary Panel Data Research Poverty Profi Les and Well-being: Panel Evidence from Germany Soeppapers on Multidisciplinary Panel Data Research at Diw Berlin Poverty Profiles and Well-being: Panel Evidence from Germany*"
} |
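The positive paper in this row estimates ordered probit regressions of job satisfaction on perceived unemployment risk and past unemployment. Below is a hedged sketch of such a model with statsmodels' OrderedModel on synthetic data; variable names, effect sizes, and cut points are placeholders, not the paper's estimates.

```python
# Hedged sketch: ordered probit for a Likert-type satisfaction outcome.
# Synthetic data only; statsmodels' OrderedModel API is assumed available.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 500
X = pd.DataFrame({
    "perceived_job_insecurity": rng.normal(size=n),
    "past_unemployment": rng.integers(0, 2, size=n).astype(float),
})
# Latent satisfaction: insecurity lowers it, echoing the paper's finding.
latent = -0.8 * X["perceived_job_insecurity"] - 0.1 * X["past_unemployment"]
latent = latent + rng.normal(size=n)

levels = np.digitize(latent, bins=[-1.0, 0.0, 1.0])    # codes 0..3
y = pd.Series(pd.Categorical.from_codes(
    levels, categories=["low", "mid", "high", "very_high"], ordered=True))

res = OrderedModel(y, X, distr="probit").fit(method="bfgs", disp=False)
print(res.params)
```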
{
"abstract": "The CIRSY system (or Chick Instance Recognition System) is am image processing system developed as part of this research to detect images of chicks in highly-populated images that uses the leading algorithm in instance segmentation tasks, called the Mask R-CNN. It extends on the Faster R-CNN framework used in object detection tasks, and this extension adds a branch to predict the mask of an object along with the bounding box prediction. Mask R-CNN has proven to be effective in instance segmentation and object detection tasks after outperforming all existing models on evaluation of the Microsoft Common Objects in Context (MS COCO) dataset (He, Gkioxari, Dollár, & Girshick, 2017). However, this research explores to what extent the Mask R-CNN framework can perform in instance level recognition of small objects in poorly lit images. By leveraging on the benefits of transfer learning in training deep neural networks, this research further explores if fine tuning the Mask R-CNN algorithm can significantly improve the models performance after it has been trained after applying the weights from the implementation of the model trained on the MS COCO dataset. The CIRSY system was trained on various synthetic datasets with varying degrees of transformation and noise applied. These datasets were built from a collection of CCTV footage of chicks in a poultry farm. The experiments conducted showed that although there were slight improvements in the model performance, these improvements were not statistically significant.",
"corpus_id": 214384670,
"title": "Image Instance Segmentation: Using the CIRSY System to Identify Small Objects in Low Resolution Images"
} | {
"abstract": null,
"corpus_id": 11164512,
"title": "IMAGE SEGMENTATION BY USING THRESHOLDING TECHNIQUES FOR MEDICAL IMAGES"
} | {
"abstract": "Grape constitutes one of the most widely grown fruit crop in the India. Manual observation of experts is used in practice for detection of leaf diseases, which takes more time for further control action. Without accurate disease diagnosis, proper control actions cannot be taken at appropriate time. This is where modern agriculture technique is required to detect and prevent the leaf from different diseases. This paper aims to introduce a new approach for detection of grape leaf diseases using image processing, which will minimize the loss and increase its profit due to automation. In this system, classification is done using Support Vector Machine (SVM) and Artificial Neural Network (ANN) classifies separately. A new classifier is proposed using fusion classification technique which ensembles classifiers from SVM and ANN to regenerate base classifier for grape leaf disease detection. Based on detection of disease the proper mixture of fungicides will be provided to the grape farmers.",
"corpus_id": 33333756,
"score": -1,
"title": "Fusion classification technique used to detect downy and Powdery Mildew grape leaf diseases"
} |
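The CIRSY thesis in this row fine-tunes a COCO-pretrained Mask R-CNN through transfer learning. Here is a sketch of the widely used torchvision recipe (torchvision >= 0.13 API assumed) for swapping the box and mask heads to a new class count; the single "chick" class and the 256 hidden channels are illustrative choices, not the thesis's actual configuration.

```python
# Sketch of the common torchvision recipe for fine-tuning a COCO-pretrained
# Mask R-CNN on a new task. Settings are illustrative assumptions.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + one "chick" class (hypothetical)

model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Swap the box classification head for the new number of classes.
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Swap the mask prediction head likewise.
in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)

# In training mode the model takes a list of image tensors plus target dicts
# with "boxes", "labels", and "masks", and returns a dict of losses.
```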
{
"abstract": "In this thesis, we aim to use the spectral graph theory to develop a framework to solve the problems of computer vision. The graph spectral methods are concerned with using the eigenvalues and eigenvectors of the adjacency matrix or closely related Laplacian matrix. In this thesis we develop four methods using spectral techniques: (1) We use a Hermitian property matrix for point pattern matching problem; (2) We use coefficients of symmetric polynomials to cluster similar human poses using the skeletal representation acquired from Microsoft Kinect; (3) We use coefficients of the elementary symmetric polynomials to make the direction of the eigenvectors of the proximity matrices consistent with each other for the problem of correspondence matching; (4) We use commute time embedding to construct a 3D shape descriptor for the purpose of 3D shape classification. \n \nIn Chapter 3 we address the problem of correspondence matching. We extend the Laplacian matrix to the complex domain by constructing a Hermitian property matrix. We construct a Hermitian property matrix from the spatial locations of the 2D feature points extracted from a pair of images and the angular information associated with these feature points. We construct the Hermitian property matrix in a way that reflects the Laplacian matrix. The complex eigenvectors of the Hermitian matrix is then used to find the correspondences between pairs of points across two images. We embed the complex eigenvectors of the Hermitian property matrix in the iterative alignment EM algorithm developed by Carcassoni and Hancock to make it robust to rotation, noise and point-position jitter. Experimental results on both synthetic and real world data have been presented. \n \nChapter 4 develops a clustering method using four different type of feature vectors constructed from the complex coefficients of the elementary symmetric polynomials. These polynomials are computed from the eigenvalues and the complex eigenvectors of a Hermitian property matrix. The feature vectors are embedded into a pattern-space using Principal Component Analysis (PCA) and Multidimensional Scaling (MDS) to cluster similar human poses acquired using the Microsoft Kinect device for Xbox 360. The Hermitian property matrix is constructed from the length of the limbs and the angles subtended by each pair of limbs using the three-dimensional skeletal data produced by the Kinect device. The given skeleton is converted to its equivalent line graph to compute the angles between pairs of limbs. The joints locations are used to compute the limb lengths. \n \nIn Chapter 5, we describe a method to correct the sign of eigenvectors of the proximity matrix for the problem of correspondence matching. The signs of the eigenvectors of a proximity matrix are not unique and play an important role in computing the correspondences between a set of feature points. We use the coefficients of the elementary symmetric polynomials to make the direction of the eigenvectors of the two proximity matrices consistent with each other. \n \nChapter 6 describes a 3D shape descriptor that is robust to changes in pose and topology. The descriptor is based on the D2 shape descriptor developed by Osada et al, which is essentially the frequency distribution of the Euclidian distance between randomly selected points on the surface of the 3D shape. We use the commute-time distance instead of using the Euclidian distance between randomly selected points. 
A new and completely unsupervised mesh segmentation algorithm is proposed, which is based on the commute time embedding of the mesh and k-means clustering using the embedded mesh vertices.",
"corpus_id": 9906826,
"title": "Spectral representation for matching and recognition"
} | {
"abstract": "This paper describes an efficient algorithm for inexact graph matching. The method is purely structural, that is, it uses only the edge or connectivity structure of the graph and does not draw on node or edge attributes. We make two contributions: 1) commencing from a probability distribution for matching errors, we show how the problem of graph matching can be posed as maximum-likelihood estimation using the apparatus of the EM algorithm; and 2) we cast the recovery of correspondence matches between the graph nodes in a matrix framework. This allows one to efficiently recover correspondence matches using the singular value decomposition. We experiment with the method on both real-world and synthetic data. Here, we demonstrate that the method offers comparable performance to more computationally demanding methods.",
"corpus_id": 903453,
"title": "Structural Graph Matching Using the EM Algorithm and Singular Value Decomposition"
} | {
"abstract": "The perceptual recognition of objects is conceptualized to be a process in which the image of the input is segmented at regions of deep concavity into an arrangement of simple geometric components, such as blocks, cylinders, wedges, and cones. The fundamental assumption of the proposed theory, recognition-by-components (RBC), is that a modest set of generalized-cone components, called geons (N £ 36), can be derived from contrasts of five readily detectable properties of edges in a two-dimensiona l image: curvature, collinearity, symmetry, parallelism, and cotermination. The detection of these properties is generally invariant over viewing position an$ image quality and consequently allows robust object perception when the image is projected from a novel viewpoint or is degraded. RBC thus provides a principled account of the heretofore undecided relation between the classic principles of perceptual organization and pattern recognition: The constraints toward regularization (Pragnanz) characterize not the complete object but the object's components. Representational power derives from an allowance of free combinations of the geons. A Principle of Componential Recovery can account for the major phenomena of object recognition: If an arrangement of two or three geons can be recovered from the input, objects can be quickly recognized even when they are occluded, novel, rotated in depth, or extensively degraded. The results from experiments on the perception of briefly presented pictures by human observers provide empirical support for the theory. Any single object can project an infinity of image configurations to the retina. The orientation of the object to the viewer can vary continuously, each giving rise to a different two-dimensional projection. The object can be occluded by other objects or texture fields, as when viewed behind foliage. The object need not be presented as a full-colored textured image but instead can be a simplified line drawing. Moreover, the object can even be missing some of its parts or be a novel exemplar of its particular category. But it is only with rare exceptions that an image fails to be rapidly and readily classified, either as an instance of a familiar object category or as an instance that cannot be so classified (itself a form of classification).",
"corpus_id": 8054340,
"score": -1,
"title": "Recognition-by-components: a theory of human image understanding."
} |
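Chapter 6 of the thesis in this row builds both its 3D shape descriptor and the unsupervised mesh segmentation on commute-time embedding. Below is a minimal numpy sketch using the standard spectral identity that the commute time between two nodes equals the squared Euclidean distance between their embedded coordinates sqrt(vol/lambda_k)*u_k(i); the tiny adjacency matrix is a placeholder for a mesh graph.

```python
# Minimal sketch of commute-time embedding from the graph Laplacian.
# The toy adjacency matrix is a placeholder for a mesh graph.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))
L = D - A                              # combinatorial Laplacian
vol = A.sum()                          # graph volume (sum of degrees)

evals, evecs = np.linalg.eigh(L)       # ascending eigenvalues
lam, U = evals[1:], evecs[:, 1:]       # drop the zero (constant) mode

# Embedding in which squared Euclidean distance equals commute time:
# coords[i, k] = sqrt(vol / lambda_k) * u_k(i)
coords = U * np.sqrt(vol / lam)

i, j = 0, 3
commute_time = np.sum((coords[i] - coords[j]) ** 2)
print(f"commute time between nodes {i} and {j}: {commute_time:.3f}")

# k-means over the rows of `coords` would give the unsupervised mesh
# segmentation the thesis describes.
```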
{
"abstract": "............................................................................................... i Acknowledgement ............................................................................. ii Table of",
"corpus_id": 60474557,
"title": "A psychophysical investigation of human visual perceptual memory : a study of the retention of colour, spatial frequency and motion visual information by human visual short term memory mechanisms"
} | {
"abstract": "Abstract : Cells were recorded with tungsten electrodes in the dorsal lateral geniculate body of the cat. Receptive fields of these units were mapped out, in the light-adapted state, with small sports of light. In their general arrangement geniculate receptive fields resembled those of retinal ganglion cells, having an excitatory ('on') centre and inhibitory ('off') preriphery, or reverse. The two portions of a receptive field were mutually antagonistic; the decrease in centre responses cauded by inclusion of peripheral portions of receptive fields was termed peripheral suppression. Cells recorded in layers A and B of the lateral geniculate body were driven from the contralateral eye; cells in layer A1 from the ipsilateral eye. In penetrations normal to the layers receptive fields of cells in a single layer were close together or superimposed, and from one layer to the next occupied exactly homologous positions in the two retinas. Binocular interaction was not observed in any of the cells studied. All three layers of the lateral geniculate contained both 'on'-centre and 'off'-centre units. Cells in layers A and A1 were similar both in their firing patterns and in average receptive field size. Cells in layer B were more sluggish in their responses to light stimuli, and tended to have larger receptive field centres. Cells with receptive fields within or near the area centralis tended to have smaller field centres and stronger suppression by the receptive field periphery than cells with their fields situated in more peripheral regions of the retina.",
"corpus_id": 3063181,
"title": "Integrative action in the cat's lateral geniculate body"
} | {
"abstract": "501 It has been widely accepted that spiking neurons code information in their firing rates1. A related assumption is that each neuron serves as an independent channel transmitting information from one stage to another. In principle, however, much more information can be coded in the activity of a neuronal ensemble if temporal correlations between neurons are also used to code additional information not in the isolated spike trains2. It remains an important question whether the nervous system uses such distributed codes, and if so, how such ‘multiplexed’ information is encoded and decoded. Simultaneous recordings from multiple neurons have shown robust interneuronal correlations within many areas of the brain2–20. The functional significance of these temporal correlations, however, remains speculative. The occurrence of correlations depends on features of the sensory inputs15–17 or the behavioral states of the animal18,19. Similarly, different modes of firing of single neurons can be induced by modulatory inputs21, and it has been suggested that these modes can have distinct roles in information transmission22. Such studies may lend insights into the potential roles of temporal correlations in the coding and processing of information. Neurons in the cat lateral geniculate nucleus (LGN) with overlapping receptive fields show precisely correlated firing8. These correlations are faster and often stronger than those previously described in retina, LGN or visual cortex3–16: many spikes from pairs of geniculate cells occur nearly simultaneously, with a precision on the order of one millisecond. The existence and strength of the correlation depend on the degree of overlap between the two receptive fields. When the two receptive fields are well overlapped and similar (both X or both Y; both ‘on’ or both ‘off ’), they are almost always highly correlated; such pairs typically fire at least 20% of their spikes synchronously. When the cells are of different types (one X, the other Y) or when their centers are only partially overlapped, tightly correlated firing is found among about 10% of pairs and the correlations are much weaker (~2%). As the LGN is the main source of afferent input to the primary visual cortex, these precise temporal correlations could form an important feature of the visual signals received by the cortex. Here we have used a reverse reconstruction technique23 and information-theoretic analysis24 to investigate the role of such correlated spiking in visual coding within the LGN. We found that much more information could be extracted from a pair of neurons if the synchronous spikes between them are considered separately. The percentage of increase in information is approximately proportional to the degree of correlation. We therefore conclude that these precise temporal correlations could be used as additional information channels from the LGN to the visual cortex.",
"corpus_id": 246018455,
"score": -1,
"title": "Art-Reid (E)"
} |
{
"abstract": "The NBOMe compounds are a novel series of hallucinogenic drugs that are potent agonists of the 5-HT2A receptor, have a short history of human consumption and are available to buy online, in most countries. In this study, we sought to investigate the patterns of use, characteristics of users and self-reported effects. A cross-sectional anonymous online survey exploring the patterns of drug use was conducted in 2012 (n = 22,289), including questions about the use of 25B-NBOMe, 25C-NBOMe, and 25I-NBOMe and comparison drugs. We found that 2.6% of respondents (n = 582) reported having ever tried one of the three NBOMe drugs and that at 2.0%, 25I-NBOMe was the most popular (n = 442). Almost all (93.5%) respondents whose last new drug tried was a NBOMe drug, tried it in 2012, and 81.2% of this group administered the drug orally or sublingually/buccally. Subjective effects were similar to comparison serotonergic hallucinogens, though higher ‘negative effects while high’ and greater ‘value for money’ were reported. The most common (41.7%) drug source was via a website. The NBOMe drugs have emerged recently, are frequently bought using the internet and have similar effects to other hallucinogenic drugs; however, they may pose larger risks, due to the limited knowledge about them, their relatively low price and availability via the internet.",
"corpus_id": 35219099,
"title": "The NBOMe hallucinogenic drug series: Patterns of use, characteristics of users and self-reported effects in a large international sample"
} | {
"abstract": "This publication reports analytical properties of a new hallucinogenic substance identified in blotter papers seized from the drug market, namely 25C-NBOMe [2-(4-chloro-2,5-dimethoxyphenyl)-N-(2-methoxybenzyl)ethanamine]. The identification was based on results of comprehensive study including several analytical methods, i.e., GC-EI-MS (without derivatization and after derivatization with TFAA), LC-ESI-QTOF-MS, FTIR and NMR. The GC-MS spectrum of 25C-NBOMe was similar to those obtained for other representatives of the 25-NBOMe series, with dominant ions observed at m/z=150, 121 and 91. Fragment ions analogic to those in 2C-C (4-chloro-2,5-dimethoxy-β-phenylethanamine) were also observed, but their intensities were low. Derivatization allowed the determination of molecular mass of the investigated substance. The exact molecular mass and chemical formula were confirmed by LC-QTOF-MS experiments and fragmentation pattern under electrospray ionization was determined. The MS/MS experiments confirmed that the investigated substance was N-(2-methoxy)benzyl derivative of 2C-C. The substance was also characterized by FTIR spectroscopy to corroborate its identity. Final elucidation of the structure was performed by NMR spectroscopy.",
"corpus_id": 6609163,
"title": "25C-NBOMe--new potent hallucinogenic substance identified on the drug market."
} | {
"abstract": "Abstract Multiple linear regression analysis has been used to identify the most important properties relevant to psychotomimetic activity displayed by 37 phenylalkylamines. Using the minimal topologic differences (MTD) parameter, lipophilicity (log P, calculated by using π Hansch substituent terms), average electrostatic field (AEF) and electronic descriptors, lowest unoccupied molecular orbital energies (ELUMO) and net atomic charges (obtained from AM1 calculations), good correlations with biological activity were obtained (R2 = 0.79 − 0.92). Cross-validation procedure was applied indicating a good predictability of the proposed models (R2cv = 0.67 − 0.81).",
"corpus_id": 95667687,
"score": -1,
"title": "QSAR study with steric (MTD), electronic and hydrophobicity parameters on psychotomimetic phenylalkylamines"
} |
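The QSAR entry above regresses psychotomimetic activity on steric, electronic and hydrophobicity descriptors and reports both conventional R2 and a cross-validated R2. A minimal scikit-learn sketch of that workflow follows; the 3-column descriptor matrix and responses are made up, standing in for the real MTD/log P/ELUMO values.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Hypothetical descriptor table: rows = compounds, columns = descriptors.
rng = np.random.default_rng(1)
X = rng.normal(size=(37, 3))                      # 37 compounds, 3 descriptors
y = X @ np.array([0.8, -0.5, 0.3]) + rng.normal(scale=0.3, size=37)

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                            # conventional R^2

# Leave-one-out cross-validation -> predictive R^2 (often reported as Q^2).
y_cv = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R^2 = {r2:.2f}")
print(f"Q^2 = {q2:.2f}  (cross-validated)")
```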
{
"abstract": "This paper aims to systematically review the major findings from meta-analyses comparing different treatment options for hepatocellular carcinoma (HCC). A total of 153 relevant papers were searched via the PubMed, EMBASE, and Cochrane library databases. They were classified according to the mainstay treatment modalities (i.e., liver transplantation, surgical resection, radiofrequency ablation, transarterial embolization or chemoembolization, sorafenib, and others). The primary outcome data, such as overall survival, diseases-free survival or recurrence-free survival, progression-free survival, and safety, were summarized. The recommendations and uncertainties regarding the treatment of HCC were also proposed.",
"corpus_id": 15887445,
"title": "Management of hepatocellular carcinoma: an overview of major findings from meta-analyses"
} | {
"abstract": "The efficacy of adjuvant interferon treatment for the management of patients with viral hepatitis‐related hepatocellular carcinoma (HCC) following curative treatment is controversial. We have conducted a systematic review with meta‐analysis to assess the effects of adjuvant interferon therapy on survival outcomes. Randomized and nonrandomized studies (NRSs) comparing adjuvant interferon treatment with the standard of care for viral hepatitis‐related HCC after curative treatment were included. CENTRAL, Medline, EMBASE and the Science Citation Index were searched with complementary manual searches. The primary outcomes were recurrence‐free survival (RFS) and overall survival (OS). Nine randomized trials and 13 NRSs were included in the meta‐analysis. These nine randomized trials included 942 participants, of whom, 490 were randomized to the adjuvant interferon treatment group and 452 to the control group. The results of meta‐analysis showed unexplained heterogeneity for both RFS and OS. The 13 NRSs included 2214 participants, of whom, 493 were assigned to the adjuvant interferon treatment group and 1721 to the control group. The results of meta‐analysis showed that, compared with controls, adjuvant interferon treatment significantly improved the RFS [hazard ratio (HR) 0.66, 95% confidence interval (CI) 0.52–0.84, I2 = 29%] and OS (HR 0.43, 95% CI 0.34–0.56, I2 = 0%) of patients with hepatitis C virus‐related HCC following curative treatment. There was little evidence for beneficial effects on patients with hepatitis B virus‐related HCC. Future research should be aimed at clarifying whether the effects of adjuvant interferon therapy are more prominent in hepatitis C patients with sustained virological responses.",
"corpus_id": 486332,
"title": "A systematic review and meta‐analysis of adjuvant interferon therapy after curative treatment for patients with viral hepatitis‐related hepatocellular carcinoma"
} | {
"abstract": "The recurrence rate of hepatocellular carcinoma (HCC) after potentially curative hepatic resection (HR) is very high. Many clinical trials have explored the efficacy of several treatment modalities to prevent recurrence, including adjuvant and chemopreventive therapy, but they have often reported contradictory findings. As a result, most liver guidelines and liver seminars do not unequivocally endorse adjuvant or chemopreventive therapy for HCC patients after potentially curative HR. To examine the available evidence on this question, we comprehensively searched PubMed for controlled studies that included a supportive care or placebo control arm, and we used the GRADE system to classify and assess the results.",
"corpus_id": 3007314,
"score": -1,
"title": "Adjuvant and chemopreventive therapies for resectable hepatocellular carcinoma: a literature review"
} |
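Both review entries above pool study-level hazard ratios and report heterogeneity as I2. A compact sketch of fixed-effect inverse-variance pooling on the log-HR scale is shown below; the per-study HRs and confidence intervals are invented, not taken from the meta-analysis.

```python
import numpy as np

# Illustrative per-study hazard ratios and 95% CIs (not the papers' data).
hr = np.array([0.55, 0.70, 0.80, 0.60])
lo = np.array([0.35, 0.48, 0.55, 0.38])
hi = np.array([0.86, 1.02, 1.16, 0.95])

# Work on the log scale; SE recovered from the CI width (1.96 SE each side).
log_hr = np.log(hr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)
w = 1 / se**2                                   # inverse-variance weights

pooled = np.sum(w * log_hr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))

# Heterogeneity: Cochran's Q and I^2.
q = np.sum(w * (log_hr - pooled) ** 2)
df = len(hr) - 1
i2 = max(0.0, (q - df) / q) * 100

print(f"pooled HR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.2f} to "
      f"{np.exp(pooled + 1.96 * pooled_se):.2f}), I^2 = {i2:.0f}%")
```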
{
"abstract": "This paper presents a systematic examination of tour guide management in Hainan, China. A series of issues and problems with the implemented management mechanisms are identified. It reveals how inappropriate management measures contribute to deterioration of the guiding service and unhealthy operation of the guiding business. It argues that an enhanced tour guide management system could raise the professionalism of tour guides and also restrain improper business practices, thereby enhancing the quality of services provided by tour guides.",
"corpus_id": 153919274,
"title": "Tour Guide Management in Hainan, China: Problems, Implications and Solutions"
} | {
"abstract": "Abstract Tour guides are one of the key front-line players in the tourism industry. Through their knowledge and interpretation of a destination's attractions and culture, and their communication and service skills, they have the ability to transform the tourists’ visit from a tour into an experience. The role and duties may not be that glamorous as the profession, in many countries, lacks a well-defined career path and their incomes are reliant on a variety of income sources. Service professionalism has become an important issue as destinations compete for tourists in a very competitive environment, especially in Asia as it reels from the effects of the 1997 Asian financial crisis. This study examines the nature of tour guiding in Hong Kong, assessing the existing level of professional service standards, and identifying issues and challenges facing the profession in the 21st century. Tour guiding issues were identified through an extensive series of in-depth and focus group interviews. Based on the findings, a set of recommendations was formulated. A key recommendation includes the establishment of a monitoring system to ensure high standards of service performance by the tour guides. It is recognised that the experiences faced by the Hong Kong tour guides are unlikely to be unique and there may be some issues and problems raised that are common to the guiding profession in most other countries. However, very few studies about the professional status and issues faced by the tour guiding profession have been reported in the English-based literature and this study would represent one of the first attempts to do so. In sharing the Hong Kong experience, there will be some lessons to be learnt for those in other countries, especially as the profession continues its efforts to improve the status and service professionalism of tour guiding throughout the world.",
"corpus_id": 153642697,
"title": "Case study on tour guiding : professionalism, issues and problems"
} | {
"abstract": "Abstract Emotional intelligence (EI) is being recognized as a correlate of success in various domains of personal and professional life. The aim of this study is to generate and evaluate a shortened Chinese version of the Emotional Skills Assessment Process-Condensed Version (ESAP-CV) instrument for tour guides. Two stages with a total sample of 660 tour guides were conducted. The first sample (N = 260) was to develop the brief version through various deletion criteria, and the second sample (N = 400) was to examine factor structure, reliability, and validity of the short form through confirmatory factor analysis (CFA). The results indicate that the reliable and valid 35-item version (ESAP-CV-35), reduced from 104 items, captures the multidimensional nature of EI in six subscales. It offers tourism researchers a promising tool for conducting further EI-related research in a timely, effective and easily-administered manner.",
"corpus_id": 3022739,
"score": -1,
"title": "A short-form measure for assessment of emotional intelligence for tour guides: Development and evaluation"
} |
{
"abstract": "Introduction A poor oral hygiene is associated with dental caries, gingivitis, periodontal diseases, bad breath, respiratory and cardiovascular diseases, and chronic kidney diseases. Moreover, a poor oral health has psychosocial impacts that diminish a quality of life and restrict activities in school, at work, and home. African regions carry a major burden of oral health problems. However, very few studies highlighted about oral hygiene practices and there is also paucity of information in Ethiopia. This study was, therefore, designed to identify an oral hygiene practice on patients/clients visiting dental clinics in Hawassa City, Southern Ethiopia. Objective To assess oral hygiene practices and associated factors among patients/clients visiting private dental clinics, Hawassa City, Southern Ethiopia. Methods Institution-based cross-sectional study was employed among patients/clients attending private clinics in Hawassa City from January 27 to February 8, 2018. Systematic random sampling technique was used to select 403 study participants. Data were entered into EpiData 3.1, cleaned, and analyzed by SPSS 20. A multivariable logistic regression analysis was performed to assess the association between independent and outcome variables. Crude and adjusted OR with 95% confidence level was estimated, and variables having P value ≤0.05 in multivariable analysis were considered as significant. Results 393 study participants participated making a response rate of 97.52%. A median age of respondents was 27 ± 10.9. About 153 (39.9%) of the study participants had poor oral hygienic practice. Male (AOR: 1.63, 95% CI: (1.053, 2.523)), rural residence (AOR: 3.79, 95% CI: (1.724, 8.317)), and poor knowledge about oral hygiene (AOR: 2.38, 95% CI: (1.402, 4.024)) were independently associated to poor oral hygienic practice. Conclusion More than one-third of the study participants had poor oral hygienic practice. Providing health information regarding oral hygiene for the patients/clients in the facilities with a special focus from rural areas is recommended.",
"corpus_id": 233171351,
"title": "Oral Hygiene Practices and Associated Factors among Patients Visiting Private Dental Clinics at Hawassa City, Southern Ethiopia, 2018"
} | {
"abstract": "Background: Periodontal diseases, dental caries, malocclusion, and oral cancer are the most prevalent dental diseases affecting people in the Indian community. Objective: The study was conducted to assess the awareness and practices on oral hygiene and its association with the sociodemographic factors among patients attending the general Outpatient Department (OPD). Materials and Methods: A cross-sectional study was conducted among 224 patients attending the general OPD of the SSKM Hospital, Kolkata, India, from 1 April to 30 April, 2013. The study tool was a pre-designed and pre-tested semi-structured schedule. Results: About 69.20% of the participants used a toothbrush with toothpaste as a method of cleaning their teeth; 35.71% brushed twice in a day; 33.03% brushed both in the morning and at bedtime; and 8.93% used mouthwash. About 40.62% visited the dentist during the last six months; among them 61.18% attended because of pain. Almost three-fourth of the participants knew that tooth decay and bad breath were the effects of not cleaning the teeth. It was known to 71.42, 63.39, 70.53, and 73.21% of the respondents, respectively, that excess sweet, cold drink, alcohol, and smoking/pan chewing were bad for dental health. Television was the source of knowledge to 57.14% of the participants and 35.71% acquired their knowledge from a dentist. Females, literates, urban residents, users of mouthwash, and regular visitors to the dentist had good oral hygiene practices. Conclusion: Oral health awareness and practices among the study population are poor and need to improve.",
"corpus_id": 577141,
"title": "Awareness and Practices of Oral Hygiene and its Relation to Sociodemographic Factors among Patients attending the General Outpatient Department in a Tertiary Care Hospital of Kolkata, India"
} | {
"abstract": "Fibrillation of articular surface and depletion of proteoglycans are the structural changes related to early osteoarthrosis. These changes make cartilage softer and prone to further degeneration. The aim of the present study was to combine mechanical and acoustic measurements towards quantitative arthroscopic evaluation of cartilage quality. The performance of the novel ultrasound indentation instrument was tested with elastomers and bovine articular cartilage in vitro. The instrument was capable of measuring elastomer thickness (r = 1.000, p < 0.01, n = 8) and dynamic modulus (r = 0.994, p < 0.01, n = 13) reliably. Osteochondral plugs were tested before and after enzymatic degradation of cartilage proteoglycans by trypsin or chondroitinase ABC, and of cartilage collagens by collagenase. Trypsin and collagenase induced a mean decrease of -31.2 +/- 12.3% (+/- SD, p < 0.05) and -22.9 +/- 20.8% (p = 0.08) in dynamic modulus, respectively. Rate of cartilage deformation, i.e. creep rate, increased by +117.8 +/- 71.4% (p < 0.05) and +24.7 +/- 35.1% (p = 0.17) in trypsin and chondroitinase ABC treatments, respectively. Collagenase induced a greater decrease in the ultrasound reflection from the cartilage surface (-54.2 +/- 29.6%, p < 0.05) than trypsin (-17.1 +/- 13.5%, p = 0.08). In conclusion, combined quantitation of tissue modulus, viscoelasticity and ultrasound reflection from the cartilage surface provides a sensitive method to distinguish between normal and degenerated cartilage, and even to discern proteoglycan loss and collagen degradation from each other.",
"corpus_id": 26063785,
"score": -1,
"title": "Novel mechano-acoustic technique and instrument for diagnosis of cartilage degeneration."
} |
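The two survey studies in this row report adjusted odds ratios from multivariable logistic regression. The small statsmodels sketch below shows, on synthetic data, how AORs and their 95% CIs come from exponentiated model coefficients; the predictor names and effect sizes are invented, not estimates from either study.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the survey data.
rng = np.random.default_rng(2)
n = 393
male = rng.integers(0, 2, n)
rural = rng.integers(0, 2, n)
poor_knowledge = rng.integers(0, 2, n)

# Simulate the binary outcome from an assumed logistic model.
logit = -1.0 + 0.5 * male + 1.3 * rural + 0.9 * poor_knowledge
poor_practice = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(np.column_stack([male, rural, poor_knowledge]))
fit = sm.Logit(poor_practice.astype(int), X).fit(disp=0)

# Adjusted odds ratios with 95% CIs = exponentiated coefficients/CI bounds.
aor = np.exp(fit.params)
ci = np.exp(fit.conf_int())
names = ["const", "male", "rural", "poor knowledge"]
for name, a, (l, u) in zip(names, aor, ci):
    print(f"{name:>15}: AOR {a:.2f} (95% CI {l:.2f} to {u:.2f})")
```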
{
"abstract": "To solve the problem of generating segmentations of meaningful parts from scanned models with freeform surfaces, we explore a compact shape prior-based segmentation approach in this paper. Our approach is inspired by an observation that a variety of natural objects consist of meaningful components in the form of compact shape and these components with compact shape are usually separated with each other by salient features. The segmentation for multiregions is performed in two phases in our framework. First, the segmentation is taken in low-level with the help of discrete Morse complex enhanced by anisotropic filtering. Second, we extract components with compact shape by using agglomerative clustering to optimize the normalized cut metric, in which the affinities of boundary compatibility, 2D shape compactness and 3D shape compactness are incorporated. The practical functionality of our approach is proved by applying it to the application of customized dental treatment. Note to Practitioners-The research work presented in this paper is to support the procedure of customized design and manufacturing. As a very important preprocessing step for the industrial design of many applications, the 3D shape of real objects must be scanned and reconstructed in computer systems. To assign semantic information to the reconstructed mesh surface, the surface are segmented into meaningful components which, however, is not a well-defined problem. There is no general segmentation approach that has good performance for scanned models with freeform surfaces. According to the observation that models in many industrial applications (e.g., customized dental treatment) have meaningful components in the form of compact shape (e.g., teeth) separating from other regions (e.g., gum), a segmentation method is developed in this paper by using the compact shape prior. The techniques developed here can speedup the design and manufacturing of devices for customized dental treatment (e.g., orthodontic braces).",
"corpus_id": 7590460,
"title": "Multiregion Segmentation Based on Compact Shape Prior"
} | {
"abstract": "We introduce a novel solid modeling framework taking advantage of the architecture of parallel computing on modern graphics hardware. Solid models in this framework are represented by an extension of the ray representation - Layered Depth-Normal Images (LDNI), which inherits the good properties of Boolean simplicity, localization and domain decoupling. The defect of ray representation in computational intensity has been overcome by the newly developed parallel algorithms running on the graphics hardware equipped with Graphics Processing Unit (GPU). The LDNI for a solid model whose boundary is represented by a closed polygonal mesh can be generated efficiently with the help of hardware accelerated sampling. The parallel algorithm for computing Boolean operations on two LDNI solids runs well on modern graphics hardware. A parallel algorithm is also introduced in this paper to convert LDNI solids to sharp-feature preserved polygonal mesh surfaces, which can be used in downstream applications (e.g., finite element analysis). Different from those GPU-based techniques for rendering CSG-tree of solid models Hable and Rossignac (2007, 2005) [1,2], we compute and store the shape of objects in solid modeling completely on graphics hardware. This greatly eliminates the communication bottleneck between the graphics memory and the main memory.",
"corpus_id": 11310722,
"title": "Author's Personal Copy Computer-aided Design Solid Modeling of Polyhedral Objects by Layered Depth-normal Images on the Gpu"
} | {
"abstract": "Correctly rendering non-refractive transparent surfaces with core OpenGL functionality [9] has the vexing requirements of depth-sorted traversal and nonintersecting polygons. This is frustrating for most application developers using OpenGL because the natural order of scene traversal (usually one object at a time) rarely satisfies these requirements. Objects can be complex, with their own transformation hierarchies. Even more troublesome, with advanced graphics hardware, the vertices and fragments of objects may be altered by user-defined per-vertex or per-fragment operations within the GPU. When these features are employed, it becomes intractable to guarantee that fragments will arrive in sorted order for each pixel. The technique presented here solves the problem of order dependence by using a technique we call depth peeling. Depth peeling is a fragment-level depth sorting technique described by Mammen using Virtual Pixel Maps [7] and by Diefenbach using a dual depth buffer [3]. Though no dual depth buffer hardware fitting Diefenbach’s description exists, Bastos observed that shadow mapping hardware in conjunction with alpha test can be used to achieve the same effect [2]. Using this variation of depth peeling, each unique depth in the scene is extracted into layers, and the layers are composited in depth-sorted order to produce the correctly blended final image. The peeling of a layer requires a single order-independent pass over the scene. Figure 1 contrasts correct and incorrect rendering of transparent surfaces. (a) (b)",
"corpus_id": 5813703,
"score": -1,
"title": "Interactive Order-Independent Transparency"
} |
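The segmentation paper in this row extracts compact components by agglomerative clustering over a region adjacency graph, merging regions according to affinities such as boundary compatibility and 2D/3D shape compactness. A greatly simplified sketch of that merge loop follows, with a stand-in affinity based only on shared-boundary length; a full implementation would also recompute affinities after each merge and include the compactness terms.

```python
import heapq

def merge_regions(areas, boundary, n_target):
    """Greedy merging: areas = {region: area};
    boundary = {(a, b): shared boundary length}."""
    parent = {r: r for r in areas}

    def find(r):                        # union-find with path halving
        while parent[r] != r:
            parent[r] = parent[parent[r]]
            r = parent[r]
        return r

    # Max-heap on a crude affinity: shared boundary normalized by total area.
    heap = [(-length / (areas[a] + areas[b]), a, b)
            for (a, b), length in boundary.items()]
    heapq.heapify(heap)

    n = len(areas)
    while heap and n > n_target:
        _, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                    # stale entry: already merged
        parent[rb] = ra                 # merge b's region into a's
        areas[ra] += areas[rb]
        n -= 1
    return {r: find(r) for r in areas}

labels = merge_regions({0: 4.0, 1: 1.0, 2: 5.0},
                       {(0, 1): 2.0, (1, 2): 0.1}, n_target=2)
print(labels)   # region 1 merges into region 0 (strong shared boundary)
```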
{
"abstract": "In multi-agent-based simulation (MABS) the behavior of individual actors is modelled in large detail. The analysis and validation of such models is rated as difficult in the literature and requires support by innovative methods, techniques, and tools. Problems include the complexity of the models, the amount and often qualitative representation of the simulation results, and the typical dichotomy between microscopic modeling and macroscopic observation perspectives. In recent years, the application of data mining techniques has been increasingly propagated in this context. Data mining might, to some degree, bear the potential to integrate aspects of automated, formal validation on the one hand and explorative, qualitative analysis on the other hand. A promising approach is found in the field of process mining. Due to its rooting in business process analysis, process mining shares several process- and organization-oriented analysis perspectives and use cases with agent-based modeling. On the basis of detailed literature research and practical experiences from case studies, this thesis proposes a conceptual framework for the systematic application of process mining to the analysis and validation of MABS. As a foundation, agent-oriented analysis perspectives and simulation-specific use cases are identified and embellished with methods, techniques, and further results from the literature. Additionally, a partial formalization of the identified analysis perspectives is sketched by utilizing the concept of process dimensions by Rembert and Ellis as well as the MAS architecture Mulan by Rolke. With a view to future tool support the use cases are broadly related to concepts of scientific workflow and data flow modeling. Furthermore, simulation-specific requirements and limitations for the application of process mining techniques are identified as guidelines. Beyond the conceptual work, process mining is practically applied in two case studies related to different modeling and simulation approaches. The first case study integrates process mining into the model-driven approach of Petri net-based agent-oriented software engineering (PAOSE). On the one hand, process mining techniques are practically applied to the analysis of agent interactions. On the other hand, more general implications of combining process mining with reference net-based agent modeling are sketched. The second case study starts from a more code-centric MABS for the quantitative analysis of different logistic strategies for city courier services. In this context, the practical utility and applicability of different process mining techniques within a large simulation study is evaluated. Focus is put on exploratory validation and the reconstruction of modularized agent behavior. \nIn der agentenbasierten Simulation wird das Verhalten individueller Akteure detailliert im Modell abgebildet. Die Analyse und Validierung dieser Modelle gilt in der Literatur als schwierig und bedarf der Unterstutzung durch innovative Methoden, Techniken und Werkzeuge. Probleme liegen in der Komplexitat der Modelle, im Umfang und der oft qualitativen Darstellungsform der Ergebnisse sowie in der typischen Dichotomie zwischen mikroskopischer Modellierungs- und makroskopischer Beobachtungssicht begrundet. In den letzten Jahren wurde in diesem Zusammenhang zunehmend der Einsatz von Techniken aus dem Data Mining propagiert. 
Diese bergen in gewisser Weise das Potenzial, Aspekte der automatisierten, formalen Validierung mit denen der explorativen, qualitativen Analyse zu vereinen. Einen vielversprechenden Ansatz bietet das sogenannte Process Mining, welches aufgrund seiner Nahe zur Geschaftsprozessmodellierung mit der agentenbasierten Modellierung vergleichbare prozess- und organisationsorientierte Modellsichten (Perspektiven) und Anwendungsfalle aufweist. Ziel der vorliegenden Arbeit ist es, auf Basis umfangreicher Literaturrecherche und in Fallstudien gesammelter Erfahrungen ein konzeptionelles Rahmenwerk fur den systematischen Einsatz von Process Mining zur Analyse und Validierung agentenbasierter Simulationsmodelle vorzuschlagen. Als Grundlage werden agentenspezifische Analyseperspektiven und simulationsspezifische Anwendungsfalle identifiziert und durch Methoden, Techniken und weitere Ergebnisse aus der Literatur ausgestaltet. Daruber hinaus wird ansatzweise eine Teilformalisierung der Analyseperspektiven unter Verwendung des Prozessdimensionen-Konzepts nach Rembert und Ellis sowie der auf Referenznetzen basierenden Architektur Mulan nach Rolke angestrebt. Die Anwendungsfalle werden mit Blick auf eine mogliche Werkzeugunterstutzung mit Konzepten der wissenschaftlichen Workflow- und Datenflussmodellierung in Beziehung gesetzt und durch die Identifikation simulationsspezifischer Anwendungsrichtlinien fur das Process Mining erganzt. Neben der konzeptionellen Arbeit wird der Einsatz von Process Mining praktisch in unterschiedlichen Modellierungs- und Simulationsansatzen erprobt. Die erste Fallstudie integriert Process Mining konzeptionell und technisch in den modellgetriebenen Ansatz der Petrinetzbasierten agentenorientierten Softwareentwicklung (PAOSE). Dabei wird einerseits der praktische Einsatz von Process Mining-Techniken zur Interaktionsanalyse von Agenten beschrieben. Andererseits zeigt die Studie generelle Implikationen der Kombination von Process Mining und Referenznetz-basierter Agentenmodellierung auf. Ausgangspunkt der zweiten Fallstudie ist eine eher Code-zentrierte agentenbasierte Simulation zur quantitativen Analyse verschiedener Logistikstrategien fur Stadtkurierdienste. Im Rahmen dieser Fallstudie werden Process Mining-Techniken im Hinblick auf Anwendbarkeit und Nutzen fur eine grosen Simulationsstudie untersucht. Dabei steht die explorative Validierung und die Rekonstruktion modularisierten Agentenverhaltens im Vordergrund.",
"corpus_id": 30872603,
"title": "Process-Oriented Analysis and Validation of Multi-Agent-Based Simulations: Concepts and Case Studies"
} | {
"abstract": "Providers of Web based services are interested in monitoring the usage of their services in combination with those of other providers. The identification of services frequently accessed together may be valuable as a basis for strategic collaboration among their owners. We propose data mining to discover services of different providers which could complement one another, based on their usage. In particular we model the activities of a user as a sequence of service invocations recorded in a log, on which pattern discovery techniques can be applied. However we claim that conventional sequence mining is not adequate for this type of application. This is because, conventional mining concentrates on frequent (or infrequent) patterns of access, while we also require a notion of the consistency of these access patterns as a basis for collaboration. We present a model for constructing patterns that depict consistently used sequences of activities. This model is general enough to be applied to any system of autonomous entities, where relationships between entities are dynamic. For testing our model, we have analyzed the behavior of users in a news group, in order to determine consistent patterns in the way users respond to questions posed to the group.",
"corpus_id": 1382796,
"title": "Modeling interactions based on consistent patterns"
} | {
"abstract": "Information systems’ (IS) design concerns modeling systems that are dynamic in nature. A dynamic system essentially has two dimensions of concern – static structure and dynamic behavior. The existence of dynamics – or interactions among parts of the system distinguish a dynamic system from a heap or collection of parts. Specification and management of the static aspects of an information system like the data and metadata have been fairly well addressed by existing paradigms. However, an understanding of the dynamic nature of information systems is still low. Currently most paradigms model behavioral properties above an existing structural model, resulting in what may be called “entity centric” modeling. Such a kind of modeling would neglect properties that can be attributed to behavioral processes themselves, and relationships that might exist among such processes. This thesis argues that the dynamics of an information system are best managed by explicitly characterizing an “interaction space” of the information system. An interaction space is defined as an abstract domain that represents the set of all dynamics of the information system. This is contrasted with an “entity space” that represents elements of the static structure of the information system. Recent results on the nature of interactive behavior and of open systems indicate that interaction spaces are characteristically different from the hierarchical nature of algorithmic problem solving. Interaction spaces consist of multiple interactive processes which affect the behavior of one another. Paradigms for the characterization of these spaces are hence explored as part of the thesis.",
"corpus_id": 17092858,
"score": -1,
"title": "The notion of the interaction space of an information system"
} |
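The thesis in this row applies process mining to agent behavior logs. The simplest building block of such analyses is a directly-follows graph mined from event traces; a self-contained sketch is given below, with invented traces standing in for real agent interaction logs.

```python
from collections import Counter, defaultdict

# Per-agent event traces (invented for illustration).
traces = [
    ["request", "negotiate", "accept", "deliver"],
    ["request", "negotiate", "reject"],
    ["request", "negotiate", "accept", "deliver"],
]

# Count which activity directly follows which across all traces.
dfg = Counter()
for trace in traces:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

# Per-activity successor frequencies: a first step toward a process model.
succ = defaultdict(dict)
for (a, b), count in dfg.items():
    succ[a][b] = count

for a, bs in succ.items():
    print(a, "->", bs)
```

Dedicated tooling (the thesis works with process mining frameworks rather than hand-rolled counts) discovers richer models, but the directly-follows relation above is the common starting point.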
{
"abstract": "©2019 Türkiye Spor Hekimleri Derneği. Tüm hakları saklıdır. ABSTRACT Objectives: Balance is one of the most important parameters in athletic performance. In this study, it has been targeted to investigate the effects of pes planus deformity on balance performance of the athletes. Materials and Methods: This study included 36 athletes with a mean age of 17,08±2,79 years, height of 166,97 ± 11,84 cm, body weight of 62,38 ± 18,29 kg, sport age of 6 (4-8) years, and with 3rd degree bilateral pes planus deformity. A total of 36 athletes with a mean age of 17,63±3,03 years, height of 165,97±17,19 cm, body weight of 59,88±12,31 kg, sports age of 6 (4-7) year and with no foot deformity were included as a control group. The presence of pes planus was evaluated according to the Feiss line. Stability and balance measurements performed with HUBER 360 electronic device. The data obtained were compared by using independent samples ttest. Results: It was determined that the dominant side oscillation lengths of the study group were significantly higher than the control group (p <0.05). There was no significant difference between the dominant and non-dominant side oscillation length and oscillation area of the athletes in both groups (p> 0.05). Conclusion: Balance on one foot of the athletes with bilateral pes planus are adversely affected on the dominant side. For this reason, pes planus deformity should be taken into consideration in the selection of athletes and sports rehabilitation processes, where balance performance is especially important on the dominant leg.",
"corpus_id": 242942717,
"title": "Does Pes Planus Influence Balance Performance in Athletes?"
} | {
"abstract": "Differences in arch height may have a certain impact on lower extremity muscle strength and physical performance. However, there is little evidence from investigation of the possible correlation of arch height with ankle muscle strength and physical performance measures. Sixty-seven participants took part in this study. Arch height index (AHI) was assessed and categorized using a 3-dimension foot scanner. Ankle muscle strength was measured employing a dynamometer. Physical performance measures including agility, force and proprioception were randomly tested. Compared to the medium AHI, the high AHI had lower plantarflexion and inversion peak torque. The high AHI also had lower peak torque per body weight value for plantarflexion and inversion at 120°/s (P = 0.026 and 0.006, respectively), and dorsiflexion at 30°/s (P = 0.042). No significant ankle muscle strength difference was observed between the low and medium AHI. Additionally, AHI was negatively correlated with eversion and inversion peak torque at 120°/s, and negatively associated with plantarflexion, eversion and inversion peak torque per body weight at both 30°/s and 120°/s (r ranged from -0.26 to -0.36, P values < 0.050). However, no significant relationship was found between arch height and physical performance measures. The results showed that high arches had lower ankle muscle strength while low arches exhibited greater ankle muscle strength. Arch height was negatively associated with ankle muscle strength but not related to physical performance. We suggest that the lower arch with greater ankle muscle strength may be an adaptation to weight support and shock absorption.",
"corpus_id": 2908164,
"title": "Association of arch height with ankle muscle strength and physical performance in adult men"
} | {
"abstract": "Pes planus is a common foot and ankle physiologic deformity. The normal medial longitudinal arch is depressed or flattened due to a lack of strength in associated muscles, ligaments, and tendons. This study aimed to investigate how isokinetic hip muscular strength affected normal medial longitudinal arch feet and pea planus. Forty adult subjects participated in this study: 20 with pea planus and 20 with normal medial longitudinal arched feet. Both groups were similar in age (p=.074), weight (p=.324), height (p=.211), and BMI (p=.541). The navicular drop test determined the differences in navicular height. An isokinetic dynamometer was used to determine hip muscular strength (peak torque and total work) during hip flexion, extension, abduction, and adduction at speeds of 90°/s and 180°/s. A Kruskal-Wallis test was computed to determine the comparison between the normal medial longitudinal arch and pea planus. Subjects with normal medial longitudinal arch had more muscle strength than pes planus. Hip muscle strength did not show any significant difference between both groups. The abductor and adductor group muscles' total work were higher in subjects with pes planus. This study showed that normal medial longitudinal arched foot subjects have higher muscle strength than pes planus. However, the hip abductors were significantly lower in pes planus after measuring the total work, suggesting that individuals with pes planus are easily fatigued, possibly due to the overuse of the muscles that compensate for any changes in lower limb alignment.",
"corpus_id": 253754623,
"score": -1,
"title": "The effect of isokinetic hip muscle strength on normal medial longitudinal arch feet and pes planus"
} |
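The balance study above compares groups with an independent samples t-test, and the hip-strength study uses a nonparametric Kruskal-Wallis test. Both comparisons are one-liners with SciPy; the samples below are simulated stand-ins, not the studies' measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Illustrative oscillation-length samples for two groups (values invented).
pes_planus = rng.normal(loc=240.0, scale=30.0, size=36)
controls = rng.normal(loc=215.0, scale=30.0, size=36)

# Independent samples t-test, as in the pes planus balance study.
t, p = stats.ttest_ind(pes_planus, controls)
print(f"t = {t:.2f}, p = {p:.4f}")

# Kruskal-Wallis test, as in the isokinetic hip-strength study.
h, p_kw = stats.kruskal(pes_planus, controls)
print(f"H = {h:.2f}, p = {p_kw:.4f}")
```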
{
"abstract": "Thymosin α1 (Tα1) is an immunostimulatory peptide that is commonly used as an immune enhancer in viral infectious diseases such as hepatitis B, hepatitis C, and acquired immune deficiency syndrome (AIDS). Tα1 can influence the functions of immune cells, such as T cells, B cells, macrophages, and natural killer cells, by interacting with various Toll-like receptors (TLRs). Generally, Tα1 can bind to TLR3/4/9 and activate downstream IRF3 and NF-κB signal pathways, thus promoting the proliferation and activation of target immune cells. Moreover, TLR2 and TLR7 are also associated with Tα1. TLR2/NF-κB, TLR2/p38MAPK, or TLR7/MyD88 signaling pathways are activated by Tα1 to promote the production of various cytokines, thereby enhancing the innate and adaptive immune responses. At present, there are many reports on the clinical application and pharmacological research of Tα1, but there is no systematic review to analyze its exact clinical efficacy in these viral infectious diseases via its modulation of immune function. This review offers an overview and discussion of the characteristics of Tα1, its immunomodulatory properties, the molecular mechanisms underlying its therapeutic effects, and its clinical applications in antiviral therapy.",
"corpus_id": 258224699,
"title": "Thymosin α1 and Its Role in Viral Infectious Diseases: The Mechanism and Clinical Application"
} | {
"abstract": "Objective: Thymosin α1 (Tα1) is a peptide hormone whose therapeutic application has been approved in several diseases, but the description of a precise receptor for its therapeutic action still remains elusive and some knowledge of the mechanism of interaction with the cell membrane still needs to be clarified. This work is aimed at studying the folding and interaction of Tα1, which is completely unstructured in water solution, with model membranes. Methods: The folding and interaction of Tα1 with sodium dodecyl sulfate micelles was monitored by NMR and CD spectroscopy techniques. Results: Tα1 assumes a helical conformation in the presence of sodium dodecyl sulfate micelles, showing a helical fold with a structural break around residues 9 and 14. These results were confirmed by circular dichroism and NMR spectroscopy. Moreover, by paramagnetic NMR relaxation it was found that Tα1 is inserted in the hydrophobic region of the micelles by the residues 1 – 5 of the N-terminal end. This result clarifies the modality of insertion that was not obtained in previous NMR studies in trifluoroethanol. Conclusions: These findings suggest that Tα1 folds on the membrane and, when inserted, may be able to interact with nearby proteins and/or receptors acting as an effector and causing a biological signaling cascade.",
"corpus_id": 1228028,
"title": "Thymosin α1 inserts N terminus into model membranes assuming a helical conformation"
} | {
"abstract": "Thymosin alpha 1 (Tα1) is a powerful modulator of immunity and inflammation. Despite years of studies, there are a few reports evaluating serum Tα1 in health and disease. We studied a cohort of healthy individuals in comparison with patients affected by chronic inflammatory autoimmune diseases. Sera from 120 blood donors (healthy controls, HC), 120 patients with psoriatic arthritis (PsA), 40 with rheumatoid arthritis (RA) and 40 with systemic lupus erythematosus (SLE), attending the Transfusion Medicine or the Rheumatology Clinic at the Policlinico Tor Vergata, Rome, Italy, were tested for Tα1 content by means of a commercial enzyme‐linked immunosorbent assay (ELISA) kit. Data were analysed in relation to demographic and clinical characteristics of patients and controls. A gender difference was found in the HC group, where females had lower serum Tα1 levels than males (P < 0·0001). Patients had lower serum Tα1 levels than HC (P < 0·0001), the lowest were observed in PsA group (P < 0·0001 versus all the other groups). Among all patients, those who at the time of blood collection were taking disease‐modifying anti‐rheumatic drugs (DMARD) plus steroids had significantly higher Tα1 levels than those taking DMARD alone (P = 0·044) or no treatment (P < 0·0001), but not of those taking steroids alone (P = 0·280). However, whichever type of treatment was taken by the patients, serum Tα1 was still significantly lower than in HC and there was no treatment‐related difference in PsA group. Further prospective studies are necessary to confirm and deepen these observations. They might improve our understanding on the regulatory role of Tα1 in health and disease and increase our knowledge of the pathogenesis of chronic inflammatory autoimmune diseases.",
"corpus_id": 25834460,
"score": -1,
"title": "Serum thymosin α 1 levels in patients with chronic inflammatory autoimmune diseases"
} |
{
"abstract": "The aim of the paper is to use Rough Set approach to induce decision rules on LCA use in selected business models of SMEs. For that purpose the results of “Sustainable production patterns” PARP survey are used together with defined business model types. 1000 SMEs are classified to the six business models groups but only four of them, namely: Traditionalist, Contractor, Specialist and Distributor include LCA users and is further analyzed. Classification is followed by defining condition attributes and decision attribute sets and induction of decision rules for different business model types with Rough Set approach. Decision rules are induced for LCA users class and LCA non-users class. Analysis shows that business model types differ significantly concerning the decision rules leading to LCA use. More similarities are observed for LCA non-users class.",
"corpus_id": 114025254,
"title": "Reguły decyzyjne warunkujące wykorzystanie ekologicznej oceny cyklu życia w wybranych modelach biznesowych MŚP"
} | {
"abstract": "The data that characterize an environmental system are a fundamental part of an environmental decision-support system. However, obtaining complete and consistent data sets for regional studies can be difficult. Data sets are often available only for small study areas within the region, whereas the data themselves contain uncertainty because of system complexity, differences in methodology, or data collection errors. This paper presents rough-set rule induction as one way to deal with data uncertainty while creating predictive if–then rules that generalize data values to the entire region. The approach is illustrated by determining the crop suitability of 14 crops for the agricultural soils of the Willamette River Basin, Oregon, USA. To implement this method, environmental and crop yield data were spatially related to individual soil units, forming the examples needed for the rule induction process. Next, four learning algorithms were defined by using different subsets of environmental attributes. ROSETTA, a software system for rough set analysis, was then used to generate rules using each algorithm. Cross-validation analysis showed that all crops had at least one algorithm with an accuracy rate greater than 68%. After selecting a preferred algorithm, the induced classifier was used to predict the crop suitability of each crop for the unclassified soils. The results suggest that rough set rule induction is a useful method for data generalization and suitability analysis.",
"corpus_id": 438985,
"title": "\nRough Set Rule Induction for Suitability Assessment"
} | {
"abstract": "Field and growth room experiments at Mandan, N. D., evaluated the effects of seed depth and soil temperature on corn (L.) germination and emergence. In the growth room, from 4 to 24 days were required to achieve 80% emergence, depending upon soil temperature and seed depth. Increasing soil temperature from 13.3 to 26.7 C reduced the time for 80% emergence. Temperature had a much greater effect than seed depth on emergence. A highly significant linear relationship existed between percent emergence and cumulative degree-days above 10 C.In field treatments with adequate soil water for germination, 8 to 13 days were required for near 80% germination. About one additional day was required for each 2.5-cm increase in depth of planting. For corn placed at 7.6 cm, about 10 days and 68 cumulative degree-days were required for emergence. Similar degree-day requirements occurred for the 12.7-cm depth. Degree-day requirements for field experiments agreed very well with those obtained in the growth room.",
"corpus_id": 85250996,
"score": -1,
"title": "Corn Emergence in Relation to Soil Temperature and Seeding Depth1"
} |
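Both rough-set entries in this row induce decision rules from a decision table. The core machinery is indiscernibility classes over the condition attributes and the lower/upper approximations of a decision class; certain rules come from the lower approximation, possible rules from the upper. A tiny self-contained sketch follows (the four-row table is invented; the studies above used ROSETTA on real survey and soil data).

```python
from collections import defaultdict

# Invented decision table: (business_model, size) -> uses_lca.
rows = [
    ("Specialist", "small", "yes"),
    ("Specialist", "small", "no"),   # indiscernible from row 0, other label
    ("Contractor", "small", "no"),
    ("Specialist", "large", "yes"),
]

# Indiscernibility classes over the condition attributes.
classes = defaultdict(list)
for i, (model, size, _) in enumerate(rows):
    classes[(model, size)].append(i)

target = {i for i, r in enumerate(rows) if r[2] == "yes"}    # LCA users
lower = {i for grp in classes.values() if set(grp) <= target for i in grp}
upper = {i for grp in classes.values() if set(grp) & target for i in grp}

print("lower approximation:", sorted(lower))   # -> [3]  (certain rules)
print("upper approximation:", sorted(upper))   # -> [0, 1, 3]  (possible rules)
```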
{
"abstract": "Biometric technology is increasingly popular and has many practical applications in our lives, such as fingerprint recognition, face recognition, iris recognition, etc. In biometrics based recognition technologies, finger knuckle print recognition (FKP) has received a lot of research attention recently. This method has many advantages compared to the others such as fingerprint recognition and iris recognition. Motivated by the advantages of FKR and advances of deep learning, this paper proposes a finger knuckle print recognition model, namely KPmixer. In particular, we modify the Convmixer model using variable-size kernels to reduce the number of parameters of the model and help the model mix spatial information at various distances. At the same time, we recommend the SE+ module to increase the accuracy of FKP recognition. Moreover, we propose to use a set of effective data augmentation methods for FKP recognition. The performance of the proposed model is compared with modern CNN models such as Convmixer, Resnet18, MobileNet, and DenseNet, showing an outstanding result in terms of accuracy.",
"corpus_id": 255267650,
"title": "KPmixer-a ConvMixer-based Network for Finger Knuckle Print Recognition"
} | {
"abstract": "This paper investigates a new approach for personal authentication using fingerback surface imaging. The texture pattern produced by the finger knuckle bending is highly unique and makes the surface a distinctive biometric identifier. The finger geometry features can be simultaneously acquired from the same image at the same time and integrated to further improve the user-identification accuracy of such a system. The fingerback surface images from each user are normalized to minimize the scale, translation, and rotational variations in the knuckle images. This paper details the development of such an approach using peg-free imaging. The experimental results from the proposed approach are promising and confirm the usefulness of such an approach for personal authentication.",
"corpus_id": 5993739,
"title": "Personal Authentication Using Finger Knuckle Surface"
} | {
"abstract": "This article contains information about the ability to authenticate a user on the movements of the smartphone in his hand during user interaction with the screen. Currently, user authentication based on their individual characteristics is very popular. The article considers one of the methods of authentication based on behavioral biometrics. Using the proposed model of the application The possibility of recognizing the user on the movement of the hand with the smartphone was evaluated. An application model for continuous user authentication is proposed, which uses hand-waving as an authentication tool.",
"corpus_id": 10107740,
"score": -1,
"title": "Mobile authentication over hand-waving"
} |
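The KPmixer paper above modifies ConvMixer with variable-size depthwise kernels and an "SE+" squeeze-and-excitation variant. The abstract does not specify the exact modules, so the PyTorch sketch below only approximates the idea: a ConvMixer-style block whose depthwise kernel size varies per stage, gated by a standard SE block; the kernel schedule, reduction ratio, and dimensions are all illustrative.

```python
import torch
import torch.nn as nn

class SEGate(nn.Module):
    """Standard squeeze-and-excitation gate (stand-in for the paper's SE+)."""
    def __init__(self, dim, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),              # squeeze: global context
            nn.Conv2d(dim, dim // reduction, 1),
            nn.GELU(),
            nn.Conv2d(dim // reduction, dim, 1),
            nn.Sigmoid(),                         # excitation: channel weights
        )

    def forward(self, x):
        return x * self.fc(x)

class MixerBlock(nn.Module):
    """ConvMixer-style block with a per-stage depthwise kernel size."""
    def __init__(self, dim, kernel_size):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding="same"),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(dim, dim, 1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )
        self.se = SEGate(dim)

    def forward(self, x):
        x = x + self.depthwise(x)                 # spatial mixing (residual)
        return self.se(self.pointwise(x))         # channel mixing + SE gate

# Illustrative "variable-size" schedule: kernels shrink with depth.
blocks = nn.Sequential(*[MixerBlock(64, k) for k in (9, 7, 5, 3)])
out = blocks(torch.randn(1, 64, 32, 32))
print(out.shape)    # torch.Size([1, 64, 32, 32])
```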
{
"abstract": "Objectives: Chronic intestinal pseudo-obstructive (CIPO) conditions are considered the most severe disorders of gut motility. They continue to present significant challenges in clinical care despite considerable recent progress in our understanding of pathophysiology, resulting in unacceptable levels of morbidity and mortality. Major contributors to the disappointing lack of progress in paediatric CIPO include a dearth of clarity and uniformity across all aspects of clinical care from definition and diagnosis to management. In order to assist medical care providers in identifying, evaluating, and managing children with CIPO, experts in this condition within the European Society for Pediatric Gastroenterology, Hepatology, and Nutrition as well as selected external experts, were charged with the task of developing a uniform document of evidence- and consensus-based recommendations. Methods: Ten clinically relevant questions addressing terminology, diagnostic, therapeutic, and prognostic topics were formulated. A systematic literature search was performed from inception to June 2017 using a number of established electronic databases as well as repositories. The approach of the Grading of Recommendations Assessment, Development and Evaluation (GRADE) was applied to evaluate outcome measures for the research questions. Levels of evidence and quality of evidence were assessed using the classification system of the Oxford Centre for Evidence-Based Medicine (diagnosis) and the GRADE system (treatment). Each of the recommendations were discussed, finalized, and voted upon using the nominal voting technique to obtain consensus. Results: This evidence- and consensus-based position paper provides recommendations specifically for chronic intestinal pseudo-obstruction in infants and children. It proposes these be termed paediatric intestinal pseudo-obstructive (PIPO) disorders to distinguish them from adult onset CIPO. The manuscript provides guidance on the diagnosis, evaluation, and treatment of children with PIPO in an effort to standardise the quality of clinical care and improve short- and long-term outcomes. Key recommendations include the development of specific diagnostic criteria for PIPO, red flags to alert clinicians to the diagnosis and guidance on the use of available investigative modalities. The group advocates early collaboration with expert centres where structured diagnosis and management is guided by a multi-disciplinary team, and include targeted nutritional, medical, and surgical interventions as well as transition to adult services. Conclusions: This document is intended to be used in daily practice from the time of first presentation and definitive diagnosis PIPO through to the complex management and treatment interventions such as intestinal transplantation. Significant challenges remain to be addressed through collaborative clinical and research interactions.",
"corpus_id": 4375212,
"title": "Paediatric Intestinal Pseudo-obstruction: Evidence and Consensus-based Recommendations From an ESPGHAN-Led Expert Group"
} | {
"abstract": "OBJECTIVE\nTo compare scintigraphic gastric emptying and antroduodenal manometry (ADM) studies with the wireless motility capsule test in symptomatic pediatric patients.\n\n\nSTUDY DESIGN\nPatients aged 8-17 years with severe upper gastrointestinal symptoms (ie, nausea, vomiting, retching, abdominal pain) referred for ADM were recruited. A standardized protocol for ADM was used. On a different day, participants were given a standardized meal and then swallowed the wireless motility capsule. A wireless receiver unit worn during the study recorded transmitted data. If not performed previously, a 2-hour scintigraphic gastric emptying study was completed at the time of ADM testing.\n\n\nRESULTS\nA total of 22 patients were recruited, of whom 21 had complete scintigraphic gastric emptying study data and 20 had complete ADM data. The wireless motility capsule test had 100% sensitivity and 50% specificity in detecting gastroparesis compared with the 2-hour scintigraphic gastric emptying study. The wireless motility capsule test detected motor abnormalities in 17 patients, compared with 10 detected by ADM. Dichotomous comparison yielded a diagnostic difference between ADM and the wireless motility capsule test (P<.01). Migrating motor complexes were recognized in all patients by both ADM and the wireless motility capsule test. The wireless motility capsule test was well tolerated in all patients, and there were no side effects.\n\n\nCONCLUSION\nIn symptomatic pediatric patients, the wireless motility capsule test is highly sensitive compared with scintigraphic gastric emptying studies in detecting gastroparesis, and seems to be more sensitive than ADM in detecting motor abnormalities.",
"corpus_id": 1933140,
"title": "Wireless motility capsule test in children with upper gastrointestinal symptoms."
} | {
"abstract": "Gastrointestinal symptoms are common in the general population and may originate from disturbances in gut motility. However, fundamental mechanistic understanding of motility remains inadequate, especially of the less accessible regions of the small bowel and colon. Hence, refinement and validation of objective methods to evaluate motility of the whole gut is important. Such techniques may be applied in clinical settings as diagnostic tools, in research to elucidate underlying mechanisms of diseases, and to evaluate how the gut responds to various drugs. A wide array of such methods exists; however, a limited number are used universally due to drawbacks like radiation exposure, lack of standardization, and difficulties interpreting data. In recent years, several new methods such as the 3D‐Transit system and magnetic resonance imaging assessments on small bowel and colonic motility have emerged, with the advantages that they are less invasive, use no radiation, and provide much more detailed information.",
"corpus_id": 4582565,
"score": -1,
"title": "Established and emerging methods for assessment of small and large intestinal motility"
} |
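The capsule study above reports 100% sensitivity and 50% specificity against scintigraphic gastric emptying as the reference. The arithmetic behind two such numbers is shown below on an invented 2x2 table chosen only to reproduce those percentages, not the study's actual counts.

```python
# 2x2 counts against the reference test (values invented for illustration).
tp, fn = 6, 0     # reference-positive patients: capsule positive / negative
tn, fp = 7, 7     # reference-negative patients: capsule negative / positive

sensitivity = tp / (tp + fn)      # 6/6  = 1.00 -> "100% sensitivity"
specificity = tn / (tn + fp)      # 7/14 = 0.50 -> "50% specificity"
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```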
{
"abstract": "Neuronal nitric oxide synthase (nNOS) catalyzes single-electron reduction of quinones (Q), nitroaromatic compounds (ArNO2) and aromatic N-oxides (ArN → O), and is partly responsible for their oxidative stress-type cytotoxicity. In order to expand a limited knowledge on the enzymatic mechanisms of these processes, we aimed to disclose the specific features of nNOS in the reduction of such xenobiotics. In the absence or presence of calmodulin (CAM), the reactivity of Q and ArN → O increases with their single-electron reduction midpoint potential (E17). ArNO2 form a series with lower reactivity. The calculations according to an “outer-sphere” electron transfer model show that the binding of CAM decreases the electron transfer distance from FMNH2 to quinone by 1–2 Å. The effects of ionic strength point to the interaction of oxidants with a negatively charged protein domain close to FMN, and to an increase in accessibility of the active center induced by high ionic strength. The multiple turnover experiments of nNOS show that, in parallel with reduced FAD-FMN, duroquinone reoxidizes the reduced heme, in particular its Fe2+-NO form. This finding may help to design the heme-targeted bioreductively activated agents and contribute to the understanding of the role of P-450-type heme proteins in the bioreduction of quinones and other prooxidant xenobiotics.",
"corpus_id": 246028267,
"title": "Reactions of Recombinant Neuronal Nitric Oxide Synthase with Redox Cycling Xenobiotics: A Mechanistic Study"
} | {
"abstract": "During catalysis, the heme in nitric oxide synthase (NOS) binds NO before releasing it to the environment. Oxidation of the NOS ferrous heme–NO complex by O2 is key for catalytic cycling, but the mechanism is unclear. We utilized stopped‐flow methods to study the reaction of O2 with ferrous heme–NO complexes of inducible and neuronal NOS enzymes. We found that the reaction does not involve heme–NO dissociation, but instead proceeds by a rapid direct reaction of O2 with the ferrous heme–NO complex. This behavior is novel and may distinguish heme–thiolate enzymes, such as NOS, from related heme proteins.",
"corpus_id": 3196246,
"title": "Fast ferrous heme–NO oxidation in nitric oxide synthases"
} | {
"abstract": "Nitric oxide synthases (NOSs) are haem-thiolate enzymes that catalyse the conversion of L-arginine (L-Arg) into NO and citrulline. Inducible NOS (iNOS) is responsible for delivery of NO in response to stressors during inflammation. The catalytic performance of iNOS is proposed to rely mainly on the haem midpoint potential and the ability of the substrate L-Arg to provide a hydrogen bond for oxygen activation (O-O scission). We present a study of native iNOS compared with iNOS-mesohaem, and investigate the formation of a low-spin ferric haem-aquo or -hydroxo species (P) in iNOS mutant W188H substituted with mesohaem. iNOS-mesohaem and W188H-mesohaem were stable and dimeric, and presented substrate-binding affinities comparable to those of their native counterparts. Single turnover reactions catalysed by iNOSoxy with L-Arg (first reaction step) or N-hydroxy-L-arginine (second reaction step) showed that mesohaem substitution triggered higher rates of Fe(II)O₂ conversion and altered other key kinetic parameters. We elucidated the first crystal structure of a NOS substituted with mesohaem and found essentially identical features compared with the structure of iNOS carrying native haem. This facilitated the dissection of structural and electronic effects. Mesohaem substitution substantially reduced the build-up of species P in W188H iNOS during catalysis, thus increasing its proficiency towards NO synthesis. The marked structural similarities of iNOSoxy containing native haem or mesohaem indicate that the kinetic behaviour observed in mesohaem-substituted iNOS is most heavily influenced by electronic effects rather than structural alterations.",
"corpus_id": 840714,
"score": -1,
"title": "Dissecting structural and electronic effects in inducible nitric oxide synthase."
} |
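The nNOS study above interprets its rate data with an "outer-sphere" electron transfer model in which reactivity grows with the oxidant's midpoint potential E17 and falls off with electron transfer distance. A generic Marcus-type form of such a model is sketched below; this is the standard textbook expression, not necessarily the exact parametrization the authors fit.

```latex
% Outer-sphere (Marcus-type) electron transfer rate: exponential distance
% decay times an activation term set by driving force and reorganization
% energy lambda; the driving force follows from the two midpoint potentials.
\[
  k_{\mathrm{et}} \;\propto\; \exp\!\bigl[-\beta\,(r - r_0)\bigr]\,
  \exp\!\left[-\frac{\bigl(\Delta G^{\circ} + \lambda\bigr)^{2}}
                    {4\lambda k_{\mathrm{B}}T}\right],
  \qquad
  \Delta G^{\circ} \;=\; -F\,\bigl(E^{1}_{7,\mathrm{oxidant}}
                                 - E^{1}_{7,\mathrm{donor}}\bigr).
\]
```

With protein distance-decay factors β commonly quoted on the order of 1 Å⁻¹, shortening the donor-acceptor distance r by 1–2 Å, as the abstract attributes to calmodulin binding, would plausibly raise the rate severalfold.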
{
"abstract": "Let G be a compact Lie group. Using suitable normalization conventions, we show that the evaluation of G ×G-symmetric spin networks is non-negative whenever the edges are labeled by representations of the form V ⊗ Vwhere V is a representation of G, and the intertwiners are generalizations of the Barrett-Crane intertwiner. This includes in particular the relativistic spin networks with symmetry group Spin(4) or SO(4). We also present a counterexample, using the finite group S3, to the stronger conjecture that all spin network evaluations are non-negative as long as they can be written using only group integrations and index contractions. This counterexample applies in particular to the product of five 6j-symbols which appears in the spin foam model of the S3-symmetric BF- theory on the two-complex dual to a triangulation of the sphere S 3 using five tetrahedra. We show that this product is negative real for a particular assignment of representations to the edges.",
"corpus_id": 16908866,
"title": "Positivity of relativistic spin network evaluations"
} | {
"abstract": "The amplitude for a spin foam in the Barrett–Crane model of Riemannian quantum gravity is given as a product over its vertices, edges and faces, with one factor of the Riemannian 10j symbols appearing for each vertex, and simpler factors for the edges and faces. We prove that these amplitudes are always nonnegative for closed spin foams. As a corollary, all open spin foams going between a fixed pair of spin networks have real amplitudes of the same sign. This means one can use the Metropolis algorithm to compute expectation values of observables in the Riemannian Barrett–Crane model, as in statistical mechanics, even though this theory is based on a real-time (eiS) rather than imaginary-time e−S path integral. Our proof uses the fact that when the Riemannian 10j symbols are nonzero, their sign is positive or negative depending on whether the sum of the ten spins is an integer or half-integer. For the product of 10j symbols appearing in the amplitude for a closed spin foam, these signs cancel. We conclude with some numerical evidence suggesting that the Lorentzian 10j symbols are always nonnegative, which would imply similar results for the Lorentzian Barrett–Crane model.",
"corpus_id": 307697,
"title": "Positivity of spin foam amplitudes"
} | {
"abstract": "Abstract This paper proposes a novel nonseparable lifting scheme for wavelet frames with high vanishing moments. A specific nonseparable framelet lifting transform (NFLT), combined with a modified covariance intersection (CI) algorithm, has been applied to pansharpening of multispectral images. Experiments are carried out on the multispectral and panchromatic images acquired by the SPOT, QuickBird and Landsat spaceborne sensors. Benefiting from the high order of vanishing moments, the proposed NFLT can distinguish the low- and high-frequency efficiently and can compact most of the energy into the low-pass subband. Thus the spectral distortion can be minimized. Experimental results show that the NFLT-CI method reduces the spectral distortion while improves the spatial resolution simultaneously, and outperforms the other state-of-the-art methods derived from various transforms and injection models.",
"corpus_id": 26164467,
"score": -1,
"title": "Pansharpening of multispectral images using the nonseparable framelet lifting transform with high vanishing moments"
} |
{
"abstract": "In this study, polypyrrole-based activated carbon was prepared by the carbonization of polypyrrole at 650 °C for 2 h in the presence of four-times the mass of KOH as a chemical activator. The structural and morphological properties of the product (polypyrrole-based activated carbon (PPyAC4)), analyzed by scanning electron microscopy, transmission electron microscopy, X-ray diffraction, and thermogravimetric analysis, support its applicability as an adsorbent. The adsorption characteristics of PPyAC4 were examined through the adsorption of lead ions from aqueous solutions. The influence of various factors, including initial ion concentration, pH, contact time, and adsorbent dose, on the adsorption of Pb2+ was investigated to identify the optimum adsorption conditions. The experimental data fit well to the pseudo-second-order kinetic model (R2 = 0.9997) and the Freundlich isotherm equation (R2 = 0.9950), suggesting a chemisorption pathway. The adsorption capacity was found to increase with increases in time and initial concentration, while it decreased with an increase in adsorbent dose. Additionally, the highest adsorption was attained at pH 5.5. The calculated maximum capacity, qm, determined from the Langmuir model was 50 mg/g.",
"corpus_id": 195660156,
"title": "Efficient Adsorption of Lead (II) from Aqueous Phase Solutions Using Polypyrrole-Based Activated Carbon"
} | {
"abstract": "Nitrogen-doped graphene oxide sheets (N-GOs) are prepared by employing N-containing polymers such as polypyrrole, polyaniline, and copolymer (polypyrrole-polyaniline) doped with acids such as HCl, H2SO4, and C6H5-SO3-K, which are activated using different concentrations of KOH and carbonized at 650 °C; characterized using SEM, TEM, BET, TGA-DSC, XRD, and XPS; and employed for the removal of environmental pollutant CO2. The porosity of the N-GOs obtained were found to be in the range 1–3.5 nm when the KOH employed was in the ratio of 1:4, and the XRD confirmed the formation of the layered like structure. However, when the KOH employed was in the ratio of 1:2, the pore diameter was found to be in the range of 50–200 nm. The SEM and TEM analysis reveal the porosity and sheet-like structure of the products obtained. The nitrogen-doped graphene oxide sheets (N-GOs) prepared by employing polypyrrole doped with C6H5-SO3-K were found to possess a high surface area of 2870 m2/g. The N-GOs displayed excellent CO2 capture property with the N-GOs; PPy/Ar-1 displayed ~1.36 mmol/g. The precursor employed, the dopant used, and the activation process were found to affect the adsorption property of the N-GOs obtained. The preparation procedure is simple and favourable for the synthesis of N-GOs for their application as adsorbents in greenhouse gas removal and capture.",
"corpus_id": 4797009,
"title": "Enhanced CO2 Adsorption by Nitrogen-Doped Graphene Oxide Sheets (N-GOs) Prepared by Employing Polymeric Precursors"
} | {
"abstract": "In order to obtain the adsorption mechanism and failure characteristics of CO2 adsorption by potassium-based adsorbents with different supports, five types of supports (circulating fluidized bed boiler fly ash, pulverized coal boiler fly ash, activated carbon, molecular sieve, and alumina) and three kinds of adsorbents under the modified conditions of K2CO3 theoretical loading (10%, 30%, and 50%) were studied. The effect of the reaction temperature (50 °C, 60 °C, 70 °C, 80 °C, and 90 °C) and CO2 concentration (5%, 7.5%, 10%, 12.5%, and 15%) on the adsorption of CO2 by the adsorbent after loading and the effect of flue gas composition on the failure characteristics of adsorbents were obtained. At the same time, the microscopic characteristics of the adsorbents before and after loading and the reaction were studied by using a specific surface area and porosity analyzer as well as a scanning electron microscope and X-ray diffractometer. Combining its reaction and adsorption kinetics process, the mechanism of influence was explored. The results show that the optimal theoretical loading of the five adsorbents is 30% and the reaction temperature of 70 °C and the concentration of 12.5% CO2 are the best reaction conditions. The actual loading and CO2 adsorption performance of the K2CO3/AC adsorbent are the best while the K2CO3/Al2O3 adsorbent is the worst. During the carbonation reaction of the adsorbent, the cumulative pore volume plays a more important role in the adsorption process than the specific surface area. As the reaction temperature increases, the internal diffusion resistance increases remarkably. K2CO3/AC has the lowest activation energy and the carbonation reaction is the easiest to carry out. SO2 and HCl react with K2CO3 to produce new substances, which leads to the gradual failure of the adsorbents and K2CO3/AC has the best cycle failure performance.",
"corpus_id": 54467444,
"score": -1,
"title": "Study on Adsorption Mechanism and Failure Characteristics of CO2 Adsorption by Potassium-Based Adsorbents with Different Supports"
} |
{
"abstract": "We evaluate the ability of a typical cloud parameterization from a global model (CCM3 from NCAR) to simulate the Arctic cloudiness and longwave radiative fluxes during wintertime. Simulations are conducted with a Single-Column Model (SCM) forced with observations and reanalysis data from the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment. Typically, the SCM overestimates the Arctic cloud fraction and the downwelling longwave flux. Moreover, the SCM does not capture accurately the temperature and moisture profiles, and the surface flux fields. Relaxing temperature and moisture profiles to observed values dramatically improves the simulations. This suggests that the cloud parameterization of CCM3 is suitable for Arctic clouds, as long as the temperature and moisture fields are captured correctly. Sensitivities studies show that the cloud fraction is not very sensitive to cloud type, ice effective radius, ice liquid ratio amount and uncertainty of the advective forcing.",
"corpus_id": 34528660,
"title": "Single-column model simulations of arctic cloudiness and surface radiative fluxes during the Surface Heat Budget of the Arctic (SHEBA) experiment"
} | {
"abstract": "A central objective of the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment was to provide a comprehensive observational test for single-column models of the atmosphere-sea ice-ocean system over the Arctic Ocean. For single-column modeling, one must specify the time-varying tendencies due to horizontal and vertical advection of air through the column. Due to the difficulty of directly measuring these tendencies, it was decided for SHEBA to obtain them from short-range forecasts of the European Centre for Medium-Range Weather Forecasts (ECM-WF) global forecast model, into which SHEBA rawinsonde and surface synoptic observations were routinely assimilated. The quality of these forecasts directly affects the reliability of the derived advective tendencies. In addition, the ECMWF-forecast thermodynamic and cloud fields, and radiative and turbulent fluxes present an illuminating comparison of the SHEBA observations with a state-of-the-art global numerical model. The authors compare SHEBA soundings, cloud and boundary layer observations with the EC-MWF model output throughout the SHEBA year. They find that above the boundary layer, the model was faithful to the SHEBA rawinsonde observations and maintained a proper long-term balance between advective and nonadvective tendencies of heat and moisture. This lends credence to use of the ECMWF-predicted advective tendencies for single-column modeling studies. The model-derived cloud properties and precipitation (which were not assimilated from observations) are compared with cloud radar, lidar, microwave radiometer, surface turbulent and radia-tive measurements, and basic surface meteorology. The model's slab sea-ice model led to large surface temperature errors and insufficient synoptic variability of temperature. The overall height distribution of cloud was fairly well simulated (though somewhat overestimated) in all seasons, as was precipitation. However, the model clouds typically had a much higher ratio of cloud ice to cloud water than suggested by lidar depolarization measurements, and a smaller optical depth, leading to monthly biases of up to 50 W m-2 in the monthly surface downwelling longwave and shortwave radiation. Further biases in net radiation were due to the inaccurate model assumption of constant surface albedo.-1-Observed turbulent sensible and latent heat fluxes tended to be small throughout SHEBA. During high-wind periods during the winter, the ECMWF model predicted sustained downward heat fluxes of up to 60 W m-2 , much higher than observed. A detailed comparison suggests that this error was due to both inadequate resolution of the 31-level model and a deficient parameterization of sea-ice thermodynamics.-2",
"corpus_id": 1701191,
"title": "--1-A comparison of the ECMWF forecast model with observations over the annual cycle at"
} | {
"abstract": "Backscattering properties of dry snowflakes at different microwave frequencies are examined. It is shown that the Rayleigh approximation does not often provide the necessary accuracy for snowflake reflectivity calculations for radar wavelengths used in meteorology; however, another simple approximation, the Rayleigh-Gans approximation, can be safely used for such calculations. Reflectivity-snowfall rate relationships are derived for different snow densities and different radar frequencies. It is shown that dual-wavelength radar measurements can be used for estimating the effective sizes of snowflakes. Experimental data obtained during radar snowfall measurements in the WISP project of 1991 with the NOAA X- and Ka-band radars are found to be consistent with the described theoretical results. >",
"corpus_id": 26946916,
"score": -1,
"title": "Radar reflectivity in snowfall"
} |
{
"abstract": "This work presents the application of the technique named signal analysis based on chaos using density of maxima to analyze brushless direct current motors. It uses a correlation coefficient estimated from the density of maxima of the current signal. This study demonstrates in experiments the speed estimation of a brushless motor on a testbench and failure detection in a small flying drone. The experimental results demonstrate that it is possible to estimate the speed in 97.8% of the cases and to detect failure in 82.75% of the analyzed cases.",
"corpus_id": 236941275,
"title": "Motor speed estimation and failure detection of a small UAV using density of maxima"
} | {
"abstract": "Adopting a system-on-chip approach, this paper presents an integrated FPGA-based Electronic Speed Control (ESC) for driving brushless DC electric motors. This allows for sensing, computation, and higher control bandwidth than traditional off-the-shelf ESCs (typically 50Hz). In addition to a more compact and flexible package, this provides greater system awareness and facilitates a faster control loop rate, which are useful for agile robotic systems, such as UAVs. This design has been tested in conjunction with a custom, compact quadrotor system. It provides reduced payload and increased robustness compared to traditional controllers.",
"corpus_id": 6280674,
"title": "Design of an Integrated Electronic Speed Controller for Compact Robotic Vehicles"
} | {
"abstract": "Bioflocculant-producing bacteria were isolated from activated sludge of a wastewater treatment plant located in Durban, South Africa, and identified using standard biochemical tests as well as the analysis of their 16S rRNA gene sequences. The bioflocculants produced by these organisms were ethanol precipitated, purified using 2% (w/v) cetylpyridinium chloride solution and evaluated for removal of wastewater dyes under different pH, temperature and nutritional conditions. Bioflocculants from these indigenous bacteria were very effective for decolourizing the different dyes tested in this study, with a removal rate of up to 97.04%. The decolourization efficiency was largely influenced by the type of dye, pH, temperature, and flocculant concentration. A pH of 7 was found to be optimum for the removal of both whale and mediblue dyes, while the optimum pH for fawn and mixed dye removal was found to be between 9 and 10. Optimum temperature for whale and mediblue dye removal was 35 °C, and that for fawn and mixed dye varied between 40–45 °C and 35–40 °C, respectively. These bacterial bioflocculants may provide an economical and cleaner alternative to replace or supplement present treatment processes for the removal of dyes from wastewater effluents, since they are biodegradable and easily sustainable.",
"corpus_id": 5179786,
"score": -1,
"title": "Textile Dye Removal from Wastewater Effluents Using Bioflocculants Produced by Indigenous Bacterial Isolates"
} |
{
"abstract": "Background: Cisplatin (CIS) is an effective antineoplastic drug that is used to treat various types of cancers. However, it causes side effects on the male reproductive system. The present study aimed to investigate the possible protective effects of Aloe vera (AL) gel (known as an antioxidant plant) on CIS-induced changes in rat sperm parameters, testicular structure, and oxidative stress markers. Materials and Methods: In this experimental study, forty-eight adult male rats were divided into 6 groups including: control, CIS, AL, metformin (MET), CIS+AL, and CIS+MET. CIS was used intraperitoneally at a dose of 5 mg/kg on days 7, 14, 21, and 28 of the experiment. AL gel (400 mg/kg per day) and MET (200 mg/kg per day) were administered orally for 35 days (started one week before the beginning of the experiment). Testes weight and dimensions, and morphometrical and histological alterations, activities of antioxidant enzymes including superoxide dismutase (SOD) and glutathione peroxidase (GPx), serum testosterone concentration, lipid peroxidation level, and sperm parameters were examined. Results: CIS caused a significant decrease (P<0.05) in relative weight and dimension of the testis, germinal epithelium thickness and diameter of seminiferous tubules, the numbers of testicular cells, and spermatogenesis indexes. The malondialdehyde (MDA) levels increased and antioxidant enzymes activities decreased in the CIS group compared to the control group (P<0.05). Additionally, sperm parameters (concentration, viability, motility, and normal morphology), and testosterone levels reduced significantly in CIS-treated rats (P<0.05). Also, CIS induced histopathological damages including disorganization, desquamation, atrophy, and vacuolation in the testis. However, administration of AL gel to CIS-treated rats attenuated the CIS-induced alterations, mitigated testicular oxidative stress and increased testosterone concentration. Conclusion: The results suggest that AL as a potential antioxidant plant and due to free radicals scavenging activities, has a protective effect against CIS-induced testicular alterations.",
"corpus_id": 235594979,
"title": "210Protective Effect of Aloe vera Gel against Cisplatin-Induced Testicular Damage, Sperm Alteration and Oxidative Stress in Rats"
} | {
"abstract": "Cisplatin (CP) treatment causes damage in the male reproductive system. Rutin (RUT) is a naturally occurring flavonoid glycoside that has antioxidant and anti‐inflammatory properties. This study aimed to investigate effects of RUT against cisplatin‐induced reproductive toxicity in male rats. Twenty‐one adult male Sprague Dawley rats were used. The control group received physiological saline with oral gavage during 14 days, and physiological saline was injected intraperitoneally (IP) in 10th days of study. CP Group received physiological saline during 14 days, and 10 mg kg−1 CP was injected IP in 10th day. RUT + CP group received RUT (150 mg kg−1) during 14 days, and 10 mg kg−1 CP was injected IP in 10th day. Spermatological parameters (including motility, cauda epididymal sperm density, dead sperm percentage and morphological sperm abnormalities), biochemical (MDA, GSH, GSH‐px, SOD and CAT), histological (H&E dye) and immunochemistry evaluations of testicles were evaluated. CP treatment caused damage on some spermatological parameters, increased the oxidative stress and induced testicular degeneration and apoptosis when compared to the control group. However, RUT treatment mitigates these side effects when compared to the CP alone group. IT is concluded that RUT treatment may reduce CP‐induced reproductive toxicity as a potential antioxidant compound.",
"corpus_id": 8895,
"title": "Rutin ameliorates cisplatin‐induced reproductive damage via suppression of oxidative stress and apoptosis in adult male rats"
} | {
"abstract": "Testicular oxidative stress, endocrine disruption and abnormal spermatogenesis in rats exposed to high doses ofphosphodiesterase-5 inhibitors (PDE5i) and opioids, with poor reversibility following withdrawal of treatment had beenreported. In this study, we examined the histopathological effects of high doses of sildenafil, tadalafil, tramadol andsildenafil+tramadol on the testes and epididymis of rats. Seventy male rats (180 - 200 g b.w) were assigned to one of fivegroups (n = 14), namely; A: control (0.2 mL normal saline), B: sildenafil (1 mg/100g b.w), C: tadalafil (1 mg/100g b.w), D:tramadol (2 mg/100g b.w) and E: sildenafil+tramadol group (dose as in groups B and D). The drugs were administered orallyfor 8 weeks. Seven rats were sacrificed per group while the remaining 7/group continued for 8 weeks without treatment.Histopathological examination was carried out at the end of both phases. After 8 weeks of treatment, mean Johnsen'stesticular biopsy score (MJTBS) and Leydig cell count decreased significantly (p=0.001) in all treated groups compared withthe control. The MJTBS and Leydig cell count decreased significantly in tramadol (p = 0.05) and sildenafil+tramadol (p<0.01)groups compared with tadalafil group. After recovery, MJTBS and Leydig cell count were significantly (p<0.05) lower in all the groups compared with the control. Histology of the testes of rats in groups B - E showed reduced germ cell andspermatozoa population in the seminiferous tubules after 8 weeks treatment. Additionally, their epididymis showed decreasedspermatozoa density. There was no complete reversibility of histopathological alterations following withdrawal of treatment.High doses of sildenafil, tadalafil, tramadol or sildenafil+tramadol impact negatively on testicular histology with poorreversal following withdrawal of treatment.",
"corpus_id": 33982584,
"score": -1,
"title": "Testicular and Epididymal Histology of Rats Chronically Administered High Doses of Phosphodiesterase-5 Inhibitors and Tramadol."
} |
{
"abstract": "Purpose – The purpose of this paper is to apply the aspects of decision theory (DT) to performance measurement and management (PMM), thereby enabling the theoretical elaboration of volatility, uncertainty, complexity and ambiguity in the business environment, which are identified as barriers to effective PMM. Design/methodology/approach – A review of decision theory and PMM literature establishes the Cynefin framework as the basis for extending the performance alignment matrix. Case research with seven companies explores the relationship between two concepts under-examined in the performance alignment matrix – internal dominant logic (DL) as the attribute of organisational culture affecting decision making, and the external environment – in line with the concept of alignment or fit in PMM. A focus area is PMM related to sustainable operations and sustainable supply chain management. Findings – Alignment between DL, external environment and PMM is found, as are instances of misalignment. The Cynefin framework offers a deeper theoretical explanation about the nature of this alignment. Other findings consider the nature of organisational ownership on DL. Research limitations/implications – The cases are exploratory not exhaustive, and limited in number. Organisations showing contested logic were excluded. Practical implications – Some organisations have cultures of predictability and control; others have cultures that recognise their external environment as fundamentally unpredictable, and hence there is a need for responsive, decentralised PMM. Some have sought to change their culture and PMM. Being attentive to how cultural logic affects decision making can help reduce the misalignment in PMM. Originality/value – A novel contribution is made by applying decision theory to PMM, extending the theoretical depth of the subject",
"corpus_id": 59429350,
"title": "A decision theory perspective on complexity in performance measurement and management"
} | {
"abstract": "Abstract Many firms are undertaking a strategic shift from cost leadership (through process management) to differentiation based on radical product innovation. Success in such transitions has been mixed, as have findings on the role of performance measurement and management in the process. This study explores the challenges of managing this transition, with specific focus on the role of performance metrics. Conventional wisdom indicates that top management can use metrics – measures, standards and rewards – to communicate new directions and priorities. Based on findings reported in this paper, this approach is found to be potentially fatally flawed when applied to a situation where both the corporate goals and the means of achieving these goals have changed. Using detailed data drawn from a multi-level analysis of a major international corporation undertaking such a strategic shift, this study explores the process by which metrics are formed and deployed, and the impact of this process on the ability of the firm to successfully achieve the change. Using measures such as the percentage of sales from new products, top management in the case study had the impression that the strategy was being successfully carried out by the various operating divisions. However, radical innovation (the desired result) had been replaced by incremental innovation. This study identifies the reasons for this situation. A major finding is that the performance measurement and management system can both allow and conceal this failure. Firms trying to significantly change their strategic directions must change their selection of performance metrics to focus less on the intended outcomes and more on the means by which these outcomes are to be achieved.",
"corpus_id": 154044052,
"title": "Hitting the Target…but Missing the Point: Resolving the Paradox of Strategic Transition"
} | {
"abstract": "People are often able to act eeciently in places like grocery stores, libraries, and other man-made domains even if they haven't been to those particular places before: They are exercising useful knowledge about how these environments are organized in order to facilitate their tasks. In this paper we show that everyday environments exhibit useful regularities an autonomous agent can use in order to accomplish tasks eeciently. In particular, we identify useful regularities of grocery stores, and show how they're used in the design of an agent. We discuss how our planning system, Shopper , uses these regularities to nd items in Grocery-World, a simulated grocery store. Suppose I stop to buy Kellogg's Raisin Bran at an unfamiliar grocery store on the way home. How should my task proceed? A slow, but sure way is to systematically walk through the store slowly moving down aisles while looking at anything, careful not to miss the Raisin Bran. However, I can use information of how grocery stores are organized to speed up my search, for example I know: Most stores provide signs outlining an aisle's contents. Signs are placed at the ends of aisles. Cereals are clustered together. Raisin Bran is a cereal. In coming up with a method for nding Raisin Bran, we can use this knowledge as a basis for a strategy: Move across aisles and stop at a \\cereals\" sign. Enter the aisle under the sign. Find any kind of cereal. Look for Raisin Bran among the cereals. This strategy is eeective for many items in a grocery store. It works because it relies on speciic features of the environment. These features are ensured to exist so as to make shopping easier for people. Thus this strategy should be easily extensible to all medium-sized grocery stores (in the United States). Shopper For a robot operating in an existing man-made domain , knowledge of organization and strategies can prove useful for accomplishing tasks like this. With the Shopper project, we are examining the types of functional knowledge needed for an agent to work in a man-made domain as well as the sensing and control mechanisms needed to use this knowledge. In this paper , we describe the Shopper system: an integrated system incorporating planning and vision techniques for the task of grocery store shopping. Grocery store shopping is a common task everyone does at least occasionally. Since everybody is able to …",
"corpus_id": 15116687,
"score": -1,
"title": "Vision and navigation in man-made environments: looking for syrup in all the right places"
} |
{
"abstract": "Abstract In this paper, we review the history of the concept of neuroplasticity as it relates to the understanding of neuropsychiatric disorders, using schizophrenia as a case in point. We briefly review the myriad meanings of the term neuroplasticity, and its neuroscientific basis. We then review the evidence for aberrant neuroplasticity and metaplasticity associated with schizophrenia as well as the risk for developing this illness, and discuss the implications of such understanding for prevention and therapeutic interventions. We argue that the failure and/or altered timing of plasticity of critical brain circuits might underlie cognitive and deficit symptoms, and may also lead to aberrant plastic reorganization in other circuits, leading to affective dysregulation and eventually psychosis. This “dysplastic” model of schizophrenia can suggest testable etiology and treatment-relevant questions for the future.",
"corpus_id": 1601176,
"title": "Dysplasticity, metaplasticity, and schizophrenia: Implications for risk, illness, and novel interventions"
} | {
"abstract": "The clinical symptoms and cognitive and functional deficits of schizophrenia typically begin to gradually emerge during late adolescence and early adulthood. Recent findings suggest that disturbances of a specific subset of inhibitory neurons that contain the calcium-binding protein parvalbumin (PV), which may regulate the course of postnatal developmental experience-dependent synaptic plasticity in the cerebral cortex, including the prefrontal cortex (PFC), may be involved in the pathogenesis of the onset of this illness. Specifically, converging lines of evidence suggest that oxidative stress, extracellular matrix (ECM) deficit and impaired glutamatergic innervation may contribute to the functional impairment of PV neurons, which may then lead to aberrant developmental synaptic pruning of pyramidal cell circuits during adolescence in the PFC. In addition to promoting the functional integrity of PV neurons, maturation of ECM may also play an instrumental role in the termination of developmental PFC synaptic pruning; thus, ECM deficit can directly lead to excessive loss of synapses by prolonging the course of pruning. Together, these mechanisms may contribute to the onset of schizophrenia by compromising the integrity, stability, and fidelity of PFC connectional architecture that is necessary for reliable and predictable information processing. As such, further characterization of these mechanisms will have implications for the conceptualization of rational strategies for the diagnosis, early intervention, and prevention of this debilitating disorder.",
"corpus_id": 1308986,
"title": "Neurobiology of schizophrenia onset."
} | {
"abstract": "The Adverse Outcome Pathway (AOP) concept has recently been proposed to support a paradigm shift in regulatory toxicology testing and risk assessment. This concept is similar to the Mode of Action (MOA), in that it describes a sequence of measurable key events triggered by a molecular initiating event in which a stressor interacts with a biological target. The resulting cascade of key events includes molecular, cellular, structural and functional changes in biological systems, resulting in a measurable adverse outcome. Thereby, an AOP ideally provides information relevant to chemical structure-activity relationships as a basis for predicting effects of structurally similar compounds. AOPs could potentially also form the basis for qualitative and quantitative predictive modeling of the human adverse outcome resulting from molecular initiating or other key events for which higher-throughput testing methods are available or can be developed. A variety of cellular and molecular processes are known to be critical for normal function of the central (CNS) and peripheral nervous systems (PNS). Because of the biological and functional complexity of the CNS and PNS, it has been challenging to establish causative links and quantitative relationships between key events that comprise the pathways leading from chemical exposure to an adverse outcome in the nervous system. Following introduction of the principles of MOA and AOPs, examples of potential or putative adverse outcome pathways specific for developmental or adult neurotoxicity are summarized and aspects of their assessment considered. Their possible application in developing mechanistically informed Integrated Approaches to Testing and Assessment (IATA) is also discussed.",
"corpus_id": 3629573,
"score": -1,
"title": "Developing and applying the adverse outcome pathway concept for understanding and predicting neurotoxicity."
} |
{
"abstract": "Histopathology refers to the examination by a pathologist of biopsy samples. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource with which to define biological compositions or analyze cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques which have been applied in histological image analyses is presented. This review summarizes many current datasets and highlights important challenges and constraints with recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it is still a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.",
"corpus_id": 226299719,
"title": "Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends"
} | {
"abstract": "Current analysis of tumor proliferation, the most salient breast cancer prognostic biomarker, is limited to subjective mitosis counting by pathologists in localized regions of tissue images. This study presents the first data-driven integrative approach to characterize the severity of tumor growth and spread on a categorical and molecular level, utilizing multiple biologically salient deep learning classifiers to develop a comprehensive prognostic model. Our approach achieves pathologist-level performance on three-class categorical tumor severity prediction. It additionally pioneers prediction of molecular expression data from a tissue image, obtaining a Spearman's rank correlation coefficient of 0.60 with ex vivo mean calculated RNA expression. Furthermore, our framework is applied to identify over two hundred unprecedented biomarkers critical to the accurate assessment of tumor proliferation, validating our proposed integrative pipeline as the first to holistically and objectively analyze histopathological images.",
"corpus_id": 762484,
"title": "Deep learning assessment of tumor proliferation in breast cancer histological images"
} | {
"abstract": "Background Tumor proliferation speed, most commonly assessed by counting of mitotic figures in histological slide preparations, is an important biomarker for breast cancer. Although mitosis counting is routinely performed by pathologists, it is a tedious and subjective task with poor reproducibility, particularly among non-experts. Inter- and intraobserver reproducibility of mitosis counting can be improved when a strict protocol is defined and followed. Previous studies have examined only the agreement in terms of the mitotic count or the mitotic activity score. Studies of the observer agreement at the level of individual objects, which can provide more insight into the procedure, have not been performed thus far. Methods The development of automatic mitosis detection methods has received large interest in recent years. Automatic image analysis is viewed as a solution for the problem of subjectivity of mitosis counting by pathologists. In this paper we describe the results from an interobserver agreement study between three human observers and an automatic method, and make two unique contributions. For the first time, we present an analysis of the object-level interobserver agreement on mitosis counting. Furthermore, we train an automatic mitosis detection method that is robust with respect to staining appearance variability and compare it with the performance of expert observers on an “external” dataset, i.e. on histopathology images that originate from pathology labs other than the pathology lab that provided the training data for the automatic method. Results The object-level interobserver study revealed that pathologists often do not agree on individual objects, even if this is not reflected in the mitotic count. The disagreement is larger for objects from smaller size, which suggests that adding a size constraint in the mitosis counting protocol can improve reproducibility. The automatic mitosis detection method can perform mitosis counting in an unbiased way, with substantial agreement with human experts.",
"corpus_id": 848230,
"score": -1,
"title": "Mitosis Counting in Breast Cancer: Object-Level Interobserver Agreement and Comparison to an Automatic Method"
} |
{
"abstract": "Objective: To investigate the impact of a Croton tiglium extract on cellular proliferation and apoptosis in a non-small cell lung cancer cell line (A549) in vitro. Methods: A Croton tiglium seed methanol extract was prepare and assessed for effects on A549 cells regarding cellular proliferation, apoptotic rates, and expression of apoptosis related genes and proteins using real-time PCR and immunofluorescence. Results: The tested Croton tiglium extract inhibited A549 cell proliferation in a dose- and time-dependent manner, with significant elevation of apoptotic indexes at various concentrations after 24 h. In addition, rates in both early and late stages were higher in treated than untreated groups, the 100 μg/ml dose causing the highest levels of apoptosis. RT-PCR showed that A549 cells treated with 100 μg/ml Croton tiglium extract for 24 h has markedly higher Bax mRNA expression levels and obviously lower Bcl-2 expression levels than controls, equivalent results being observed for proteins by immunofluorescence. However, the mRNA expression levels of Fas and caspase-8 were not significantly altered. Conclusion: A Croton tiglium extract can inhibit proliferation of A549 cells and promote apoptosis though Bax/Bcl-2 pathways.",
"corpus_id": 33557495,
"title": "Croton Tiglium Extract Induces Apoptosis via Bax/Bcl-2 Pathways in Human Lung Cancer A549 Cells"
} | {
"abstract": "BACKGROUND\nApoptotic genes regulate apoptosis by the action of their pro- and antiapoptotic products. Among the most important proteins are p53 and Bcl-x family proteins.\n\n\nPATIENTS AND METHODS\nThe differential expression of these apoptotic genes were analyzed in relation to clinicopathological criteria in women with endometrial carcinoma. Thirty-three fresh tissues and 191 paraffin-embedded tissues were analyzed by real-time PCR for bcl-2/bax ratio and immunohistochemistry for p53, bcl-2 and bax proteins.\n\n\nRESULTS\nBcl-2/bax ratio tended to increase in grade 3 samples compared to grade 1 tumors. Mutated p53 was frequently observed in serous-papillary endometrial carcinomas (p=0.018). Low (<10%) and moderate (10-50%) expression of mutated p53 was observed in tumors with high expression of bax protein (>0.7).\n\n\nCONCLUSION\nThe Bcl-2/bax ratio is increased in grade 3 tumors. Bax protein shows a strong tendency for expression in the third group of clinical staging (stage IIb, III and IV). Poorly differentiated tumors highly expressed mutated p53.",
"corpus_id": 1899723,
"title": "BCL-2, BAX and P53 expression profiles in endometrial carcinoma as studied by real-time PCR and immunohistochemistry."
} | {
"abstract": "Due to the hardness of oxo ions (O2-) in coordination chemistry, coin-metal (Cu, Ag, Au) clusters supported by rich oxo ions (O2-) are extremely rare. Here, a novel μ4-oxo supported all-alkynyl-protected silver(I)-copper(I) nanocluster [Ag74-xCuxO12(PhC≡C)50] (NC-1, avg. x = 37.9) is presented with total structure characterization. NC-1 is the highest nuclearity silver-copper heterometallic cluster and contains unprecedented twelve interstitial μ4-oxo ions. The oxo ions are originated from the reduction of nitrate ions by NaBH4. The rich oxo ions induced the aggregation of Cu(I) and Ag(I) ions hierarchically in the cluster, forming the unique regioselective distribution of two different metal ions. The anisotropic ligand coverage on the surface is also observed and responsible for the puzzle-like cluster packing style incorporating rare intermolecular C-H···metal agostic interactions and solvent molecules. This work not only reveals a new category of high-nuclearity coin-metal clusters but also exemplifies the special clustering effect of oxo ions in assembly of coin-metal clusters.",
"corpus_id": 195771360,
"score": -1,
"title": "An Unprecedented All-Alkynyl Protected 74-Nuclei Silver(I)-Copper(I)-Oxo Nanocluster: Oxo-Induced Hierarchical Bimetal Aggregation and Anisotropic Surface Ligand Orientation."
} |
{
"abstract": "Masters of Science in Crop Protection Department Of Plant Science And Crop Protection Faculty Of Agriculture University Of Nairobi, 2014",
"corpus_id": 106844467,
"title": "Occurrence of Aflatoxin Contamination on Maize in the Lower Eastern Kenya and Evaluation of Superabsorbent Polymers in its Management"
} | {
"abstract": "Preand postharvest contamination of aflatoxin in maize is a major health deterrent for people in Africa where maize production has increased dramatically. This chapter highlights management options for preand postharvest toxin contamination in maize. Sound crop management practices are an effective way of avoiding, or at least diminishing, infection by Aspergillus jlavus and subsequent aflatoxin production. Preand postharvest practices that reduced aflatoxin contamination include: the use of resistant cultivars, harvesting at maturity, rapid drying on platforms to avoid contact with soil, appropriate shelling methods to reduce grain damage, sorting, use of clean and aerated storage structures, controlling insect damage, and avoiding long storage periods. These contamination reducing management practices are being tested in collaboration with farmers. Work continues on food basket surveys, the bio-ecology of aflatoxin production, developing biological control through a competitive exclusion strategy, reducing the impact of postharvest management practices on human blood toxin levels, and breeding to reduce the impact of mycotoxins on trade.",
"corpus_id": 1507033,
"title": "Pre- and postharvest management of aflatoxin in maize: an African perspective."
} | {
"abstract": "The ability of peasant farmers in the third world to monitor environmental occurrences around them has often been ignored. This study looks at Nigerian farmers’ perception of pests and pesticides and determines the relevance of such knowledge as an input to efforts to devise effective integrated pest management strategies.Farmers in Kabba area of Kwara State, Nigeria were extensively interviewed and the following findings were highlighted: they had a deep knowledge of all insect, animal and fungi pests; could identify each pest, know their breeding cycles and their general behaviour characteristics; were able to make a relatively accurate assessment of damage caused by pests; and developed an indigenous integrated pest management strategy.Due to massive pest damage in the last few years, and strenuous advertisement by the Ministry of Agriculture, many farmers are now turning to chemical pesticides for solution to the pest problem. Prognosis of future trends in pesticide usage among farmers reveal the likely danger of farmers becoming pesticide-dependent with the consequent possibilities of human poisoning and eventually aggravating the pest problem.",
"corpus_id": 88017098,
"score": -1,
"title": "Nigerian Farmers’ Perception of Pests and Pesticides"
} |
{
"abstract": "Strategies for Transitioning Workforces From Baby-Boomer to Millennial Majorities by Kimberly G. Riley MBA, Morehead State University, 2002 BA, Ohio University, 1995 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University January 2016 Abstract The transition of organizations’ workforces from a baby-boomer to a millennial majority in the 21st century has created work-engagement strategy challenges for management. The purpose of this study was to explore the engagement strategies that business managers design and implement that effectively address the generational differences within the workforce. The case study design was appropriate for addressing this study’s purpose of exploring the successful experiences of approximately 125 healthcare business managers within a business organization in Huntington, West Virginia. Transformational leadership theory constituted the conceptual framework for this study. Methodological triangulation was used to identify key themes from the participants’ interviews, employee training manuals, and job descriptions of the healthcare organization. The key themes that emerged were reverse mentorship, employee work–life balance, and employee feedback expectations. Social change could result from implementing the recommendations of this study to enhance employees’ individual qualities such as worth, dignity, and a strong work ethic, thereby catalyzing employees’ support of their local communities.The transition of organizations’ workforces from a baby-boomer to a millennial majority in the 21st century has created work-engagement strategy challenges for management. The purpose of this study was to explore the engagement strategies that business managers design and implement that effectively address the generational differences within the workforce. The case study design was appropriate for addressing this study’s purpose of exploring the successful experiences of approximately 125 healthcare business managers within a business organization in Huntington, West Virginia. Transformational leadership theory constituted the conceptual framework for this study. Methodological triangulation was used to identify key themes from the participants’ interviews, employee training manuals, and job descriptions of the healthcare organization. The key themes that emerged were reverse mentorship, employee work–life balance, and employee feedback expectations. Social change could result from implementing the recommendations of this study to enhance employees’ individual qualities such as worth, dignity, and a strong work ethic, thereby catalyzing employees’ support of their local communities. Strategies for Transitioning Workforces From Baby-Boomer to Millennial Majorities by Kimberly G. Riley MBA, Morehead State University, 2002 BA, Ohio University, 1995 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Business Administration Walden University January 2016 Dedication I dedicate this study to my daughters, Madison and Meredith. You both inspire me every day to be the best that I can be in this world. My prayer is that you each lead a life with God in the Center of your day. “In all thy ways acknowledge Him, and He shall direct thy paths” (Proverbs 3:6). I love you both! – Mom Acknowledgments I want to thank my family for the amazing support I have received throughout this long process. 
I would not be where I am today without the family God has placed in my life. Thank you to my husband, Eric, for helping me every day and spending countless hours reassuring me that I would complete this study. Thank you to my parents, Ron and Betty—without your prayers and continual support, this study would have not been possible. Last but not least, my brother, Jason when I wanted to say no more, you would not hear of it. Thank you, Dr. Steven Munkeby and Dr. Peter Anthony, for having faith in me as a student. You stretched me beyond my limits. Thank you! Thankful to my Lord and Savior, who gave me the knowledge and wisdom to complete this doctoral study. I can do all things through Christ, which strengtheneth me (Phil 4:13).",
"corpus_id": 146348765,
"title": "Strategies for Transitioning Workforces From Baby-Boomer to Millennial Majorities"
} | {
"abstract": "Purpose – The purpose of this paper is to explore the issues around a multiple generational workforce and more specifically, the challenges and benefits for education providers and employers.Design/methodology/approach – Reviewing research papers, analysing academic texts, interrogating market intelligence and contextualising case studies, the paper examines the “experience” or “qualifications” debate alongside the similarities, differences and overlaps of the cross‐generational workforce, with a view to offering education/training solutions.Findings – Demographic forecasts suggest that the UK workplace will imminently be dominated by older, experienced employees. As the composition of the workplace shifts, examining the inter‐relationship between groups of workers of different ages/profiles who have different skills, attitudes, expectations and learning styles is vital. The synergy caused by this inter‐mingling cannot help but impact on employers, sectors and higher education institutions.Research limita...",
"corpus_id": 152905378,
"title": "The twenty-first century multiple generation workforce: overlaps and differences but also challenges and benefits"
} | {
"abstract": "The aim of this paper is to provide a comparative analysis of higher education and the graduate labour markets in selected European countries (France, Germany, Spain and United Kingdom) in the context of the expectations of graduates and prospective employers, and respective recruitment and selection practices. Expectations of graduating students from a number of European collaborating universities are sought and analysed in order to find out about a match between the knowledge and skills of graduates and the needs of European employers. The study examines the process of graduate recruitment, employee and employer expectations, and the role of higher education institutions in meeting such expectations. Primary data was gathered from 252 employers and 485 final year (graduating) students through the use of questionnaires. The analysis of the data collected has revealed different approaches to but similar methods of graduate recruitment between the four countries. Despite the current differences in higher education systems and labour market trends, the expectations of employers and graduating students are more similar than different. It is concluded that EU graduates will have good employment prospects in an integrated labour market.",
"corpus_id": 55844122,
"score": -1,
"title": "A Comparative Analysis of Graduate Employment Prospects in European Labour Markets: A Study of Graduate Recruitment in Four Countries"
} |
{
"abstract": "The intricate connectivity patterns of neural circuits support a wide repertoire of communication processes and functional interactions. Here we systematically investigate how neural signaling is constrained by anatomical connectivity in the mesoscale Drosophila (fruit fly) brain network. We use a spreading model that describes how local perturbations, such as external stimuli, trigger global signaling cascades that spread through the network. Through a series of simple biological scenarios we demonstrate that anatomical embedding potentiates sensory-motor integration. We find that signal spreading is faster from nodes associated with sensory transduction (sensors) to nodes associated with motor output (effectors). Signal propagation was accelerated if sensor nodes were activated simultaneously, suggesting a topologically mediated synergy among sensors. In addition, the organization of the network increases the likelihood of convergence of multiple cascades towards effector nodes, thereby facilitating integration prior to motor output. Moreover, effector nodes tend to coactivate more frequently than other pairs of nodes, suggesting an anatomically enhanced coordination of motor output. Altogether, our results show that the organization of the mesoscale Drosophila connectome imparts privileged, behaviorally relevant communication patterns among sensors and effectors, shaping their capacity to collectively integrate information.Author SummaryThe complex network spanned by neurons and their axonal projections promotes a diverse set of functions. In the present report, we study how the topological organization of the fruit fly brain supports sensory-motor integration. Using a simple communication model, we demonstrate that the topology of this network allows efficient coordination among sensory and motor neurons. Our results suggest that brain network organization may profoundly shape the functional repertoire of this simple organism.",
"corpus_id": 49745068,
"title": "Optimized connectome architecture for sensory-motor integration"
} | {
"abstract": "We exploit flow propagation on the directed neuronal network of the nematode C. elegans to reveal dynamically relevant features of its connectome. We find flow-based groupings of neurons at different levels of granularity, which we relate to functional and anatomical constituents of its nervous system. A systematic in silico evaluation of the full set of single and double neuron ablations is used to identify deletions that induce the most severe disruptions of the multi-resolution flow structure. Such ablations are linked to functionally relevant neurons, and suggest potential candidates for further in vivo investigation. In addition, we use the directional patterns of incoming and outgoing network flows at all scales to identify flow profiles for the neurons in the connectome, without pre-imposing a priori categories. The four flow roles identified are linked to signal propagation motivated by biological input-response scenarios.",
"corpus_id": 1747410,
"title": "Flow-Based Network Analysis of the Caenorhabditis elegans Connectome"
} | {
"abstract": "4 Figure 9.1: This graph shows the social relations between the members of a karate club, studied by anthropologist Wayne Zachary in the 1970s. Two people (nodes) stand out, the instructor and the administrator of the club, both happen to have many friends among club members. At some point, a dispute caused the club to split into two. Can you predict how the club partitioned? (If not, just search the Internet for Zachary and Karate.)",
"corpus_id": 2494417,
"score": -1,
"title": "Social Networks"
} |
{
"abstract": "To improve the efficiency and accuracy of fault diagnostics of planetary gearboxes, an intelligent diagnosis approach is proposed based on deep convolutional neural networks (CNNs) and vibration bispectrum (BSP). Rather than using raw vibration signals, BSP is appreciated as the input for the CNN models (denoted as BSP-CNN) because the BSP allows nonlinear feature enhancement and noise reduction. In addition, transfer learning (TL) is accompanied to address the challenges of CNN difficulties. The proposed BSP-CNN is verified firstly to diagnose a number of common faults including gear states: normal, tooth wear, tooth root crack, tooth breakage and missing tooth, achieving an accuracy of 97.36% in identifying different faults. Then, its TL capability is evaluated based on the sun gear faults datasets. The classification accuracy of the planet gear faults is over 95.1%. After the transfer learning, the classification accuracy of the sun gear fault is still higher than 97.9%, and the computational time consumed by proposed method is also less compared to other diagnosis methods. This article has twofold contributions: first, the development of a BSP-based CNN model for fault diagnosis; andsecond, the extensive evaluation of CNN-TL methods for monitoring and diagnosing planetary gearboxes.",
"corpus_id": 226715933,
"title": "An Investigation Into Fault Diagnosis of Planetary Gearboxes Using A Bispectrum Convolutional Neural Network"
} | {
"abstract": "Early diagnosis of gear transmission has been a significant challenge, because gear faults occur primarily at microstructure or even material level but their effects can only be observed indirectly at a system level. The performance of a gear fault diagnosis system depends significantly on the features extracted and the classifier subsequently applied. Traditionally, fault-related features are extracted and identified based on domain expertise through data preprocessing which are system-specific and may not be easily generalized. On the other hand, although recently the deep neural networks based approaches featuring adaptive feature extractions and inherent classifications have attracted attention, they usually require a substantial set of training data. Aiming at tackling these issues, this paper presents a deep convolutional neural network-based transfer learning approach. The proposed transfer learning architecture consists of two parts; the first part is constructed with a pre-trained deep neural network that serves to extract the features automatically from the input, and the second part is a fully connected stage to classify the features that needs to be trained using gear fault experimental data. Case analyses using experimental data from a benchmark gear system indicate that the proposed approach not only entertains preprocessing free adaptive feature extractions, but also requires only a small set of training data.",
"corpus_id": 1916221,
"title": "Preprocessing-Free Gear Fault Diagnosis Using Small Datasets With Deep Convolutional Neural Network-Based Transfer Learning"
} | {
"abstract": "Abstract This study aims to propose and to validate a research model on project sustainability management. Moreover, it investigates the relation between project sustainability management and project success. The methodological approach is a survey-based research, using structural equation modelling to validate the research model. The hypotheses were tested based on a field study involving 222 projects distributed among eight industries and two countries. The results show a low degree of commitment to social and environment aspects of the surveyed projects. The structural model proposed shows a significant and positive relation between project sustainability management and project success and in reducing the social and environmental negative impact.",
"corpus_id": 113872269,
"score": -1,
"title": "Can project sustainability management impact project success? An empirical study applying a contingent approach"
} |
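A hedged PyTorch sketch of the transfer-learning pattern both gearbox records rely on: reuse a convolutional feature extractor trained on a source domain, freeze its weights, and train only a small classifier head on scarce target-domain data. The tiny network, the five fault classes, and the random tensors are placeholders for illustration, not the papers' architectures.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a feature extractor that would already be
# trained on source-domain bispectrum/vibration images.
features = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
)
classifier = nn.Linear(8 * 4 * 4, 5)   # 5 hypothetical fault classes

for p in features.parameters():        # freeze the transferred layers
    p.requires_grad = False

opt = torch.optim.Adam(classifier.parameters(), lr=1e-3)
x = torch.randn(16, 1, 32, 32)         # stand-in bispectrum images
y = torch.randint(0, 5, (16,))         # stand-in fault labels

loss = nn.functional.cross_entropy(classifier(features(x)), y)
loss.backward()
opt.step()                             # only the head is updated
```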
{
"abstract": "Record linkage, often called entity resolution or de-duplication, refers to identifying the same entities across one or more databases. As the amount of data that is generated grows at an exponential rate, it becomes increasingly important to be able to integrate data from several sources to perform richer analysis. In this paper, we present an open source comprehensive end to end hybrid record linkage framework that combines the automatic and manual review process. Using this framework, we train several models based on different machine learning algorithms such as random forests, linear SVM, Radial SVM, and Dense Neural Networks and compare the effectiveness and efficiency of these models for record linkage in different settings. We evaluate model performance based on Recall, F1-score (quality of linkages) and number of uncertain pairs which is the number of pairs that need manual review. We also test our trained models in a new dataset to test how different trained models transfer to a new setting. The RF, linear SVM and radial SVM models transfer much better compared to the DNN. Finally, we study the effect of name2vec (n2v) feature, a letter embedding in names, on model performance. Using n2v results in smaller manual review set with slightly less F1-score. Overall the SVM models performed best in all experiments.",
"corpus_id": 233354934,
"title": "Evaluation of Machine Learning Algorithms in a Human-Computer Hybrid Record Linkage System"
} | {
"abstract": "Introduction Clinical databases require accurate entity resolution (ER). One approach is to use algorithms that assign questionable cases to manual review. Few studies have compared the performance of common algorithms for such a task. Furthermore, previous work has been limited by a lack of objective methods for setting algorithm parameters. We compared the performance of common ER algorithms: using algorithmic optimization, rather than manual parameter tuning, and on two-threshold classification (match/manual review/non-match) as well as single-threshold (match/non-match). ::: ::: Methods We manually reviewed 20 000 randomly selected, potential duplicate record-pairs to identify matches (10 000 training set, 10 000 test set). We evaluated the probabilistic expectation maximization, simple deterministic and fuzzy inference engine (FIE) algorithms. We used particle swarm to optimize algorithm parameters for a single and for two thresholds. We ran 10 iterations of optimization using the training set and report averaged performance against the test set. ::: ::: Results The overall estimated duplicate rate was 6%. FIE and simple deterministic algorithms allowed a lower manual review set compared to the probabilistic method (FIE 1.9%, simple deterministic 2.5%, probabilistic 3.6%; p<0.001). For a single threshold, the simple deterministic algorithm performed better than the probabilistic method (positive predictive value 0.956 vs 0.887, sensitivity 0.985 vs 0.887, p<0.001). ER with FIE classifies 98.1% of record-pairs correctly (1/10 000 error rate), assigning the remainder to manual review. ::: ::: Conclusions Optimized deterministic algorithms outperform the probabilistic method. There is a strong case for considering optimized deterministic methods for ER.",
"corpus_id": 6308067,
"title": "A benchmark comparison of deterministic and probabilistic methods for defining manual review datasets in duplicate records reconciliation"
} | {
"abstract": "We review data mining and related computer science techniques that have been studied in the area of drug safety to identify signals of adverse drug reactions from different data sources, such as spontaneous reporting databases, electronic health records, and medical literature. Development of such techniques has become more crucial for public heath, especially with the growth of data repositories that include either reports of adverse drug reactions, which require fast processing for discovering signals of adverse reactions, or data sources that may contain such signals but require data or text mining techniques to discover them. In order to highlight the importance of contributions made by computer scientists in this area so far, we categorize and review the existing approaches, and most importantly, we identify areas where more research should be undertaken.",
"corpus_id": 15115864,
"score": -1,
"title": "Text and Data Mining Techniques in Adverse Drug Reaction Detection"
} |
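A sketch, with made-up pair features and thresholds, of the two-threshold (match / manual review / non-match) decision rule the record-linkage abstracts above evaluate: a classifier scores candidate record pairs, and only the uncertain middle band is routed to human reviewers.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                  # stand-in pair-comparison features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in match labels

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
p_match = clf.predict_proba(X)[:, 1]

LOW, HIGH = 0.3, 0.7                           # hypothetical thresholds
decision = np.select(
    [p_match >= HIGH, p_match <= LOW],
    ["match", "non-match"],
    default="manual review",
)
print((decision == "manual review").mean())    # share sent to human review
```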
{
"abstract": "Background and Aims: Patients’ nutritional intake is a crucial issue in modern hospitals, where the high prevalence of disease-related malnutrition may worsen clinical outcomes. On the other hand, food waste raises concerns in terms of sustainability and environmental burden. We conducted a systematic review to ascertain which hospital services could overcome both issues. Methods: A systematic literature search following PRISMA guidelines was conducted across MEDLINE, Web of Science, and Scopus for randomised controlled trials (RCTs) and observational studies comparing the effect of hospital strategies on energy intake, protein intake, and plate/food waste. The quality of included studies was assessed using the Newcastle-Ottawa Scale for cohort studies and the Cochrane Risk of Bias tool from the Cochrane Handbook for Systematic Reviews of Interventions for RCTs. Results: Nineteen studies were included, assessing as many hospital strategies such as food service systems—including catering and room service—(n = 9), protected mealtimes and volunteer feeding assistance (n = 4), food presentation strategies (n = 3), nutritional counseling and education (n = 2), plant-based proteins meal (n = 1). Given the heterogeneity of the included studies, the results were narratively analysed. Conclusions: Although the results should be confirmed by prospective and large sample-size studies, the personalisation of the meal and efficient room service may improve nutritional intake while decreasing food waste. Clinical nutritionist staff—especially dietitians—may increase food intake reducing food waste through active monitoring of the patients’ nutritional needs.",
"corpus_id": 255621924,
"title": "Hospital Services to Improve Nutritional Intake and Reduce Food Waste: A Systematic Review"
} | {
"abstract": "Background\neffective strategies are required to support the nutritional status of patients.\n\n\nObjectives\nto evaluate a foodservice nutrition intervention on a range of participant outcomes and estimate its cost.\n\n\nDesign\nparallel controlled pilot study.\n\n\nSetting\nsubacute hospital ward.\n\n\nSubjects\nall consecutively admitted adult patients were eligible for recruitment under waiver of consent.\n\n\nMethods\nthe intervention was a modified hospital menu developed by substituting standard items with higher energy options. The control was the standard menu. All participants received usual multidisciplinary care. Outcomes were change in weight and hand grip strength (HGS) between admission and day 14 and; energy and protein intake and patient satisfaction with the foodservice at day 14. The additional cost of the intervention was also estimated.\n\n\nResults\nthe median (interquartile range) age of participants (n = 122) was 83 (75-87) years and length of stay was 19 (11-32) days. One-third (38.5%) were malnourished at admission. There was no difference in mean (SD) HGS change (1.7 (5.1) versus 1.4 (5.8) kg, P = 0.798) or weight change (-0.55 (3.43) versus 0.26 (3.33) %, P = 0.338) between the intervention and control groups, respectively. The intervention group had significantly higher mean (SD) intake of energy (132 (38) versus 105 (34) kJ/kg/day, P = 0.003) and protein (1.4 (0.6) versus 1.1 (0.4) g protein/kg/day, P = 0.035). Both groups were satisfied with the foodservice. The additional cost was £4.15/participant/day.\n\n\nConclusions\nin this pilot, the intervention improved intake and may be a useful strategy to address malnutrition. Further consideration of clinical and cost implications is required in a fully powered study.",
"corpus_id": 3740638,
"title": "A foodservice approach to enhance energy intake of elderly subacute patients: a pilot study to assess impact on patient outcomes and cost"
} | {
"abstract": "The association of plasma interleukin-6 (IL-6) levels, muscle strength and functional capacity was investigated in a cross-sectional study of community-dwelling elderly women from Belo Horizonte, Brazil. Elderly people who present controlled chronic diseases with no negative impact on physical, psychosocial and mental functionality are considered to be community-dwelling. Psychological and social stress due to unsuccessfully aging can represent a risk for immune system disfunctions. IL-6 levels, isokinetic muscle strength of knee flexion/extension, and functional tests to determine time required to rise from a chair and gait velocity were measured in 57 participants (71.21 +/- 7.38 years). Serum levels of IL-6 were measured in duplicate and were performed within one single assay (mouse monoclonal antibody against IL-6; High-Sensitivity, Quantikine, R & D Systems, USA; intra-assay coefficient of variance = 6.9-7.4%; interassay coefficient of variance = 9.6-6.5%; sensitivity = 0.016-0.110 pg/mL; mean = 0.039 pg/mL). Muscle strength was assessed with the isokinetic dynamometer Biodex System 3 Pro. After the Shapiro-Wilk normality test was applied, correlations were investigated using Spearman and Kruskal-Wallis tests. Post hoc analysis was performed using the Dunn test. A significant negative correlation was observed between plasma IL-6 levels (1.95 +/- 1.77 pg/mL) and muscle strength for knee flexion (70.70 +/- 21.14%; r = -0.265; P = 0.047) and extension (271.84 +/- 67.85%; r = -0.315; P = 0.017). No significant correlation was observed between IL-6 levels and the functional tests (time to rise from a chair = 14.65 +/- 2.82 s and gait velocity = 0.95 +/- 0.14 m/s). These results suggest that IL-6 is associated with reduced muscle strength.",
"corpus_id": 3241309,
"score": -1,
"title": "Muscle strength but not functional capacity is associated with plasma interleukin-6 levels of community-dwelling elderly women."
} |
{
"abstract": "Summary We have recently demonstrated that protein S impairs the intrinsic tenase complex, independent of activated protein C, in competitive interactions between the A2 and A3 domains of factor VIIIa and factor IXa. In the present study, we have identified a protein S-interactive site in the A2 domain of factor VIIIa. Anti-A2 monoclonal antibody recognising a factor IXa-functional region (residues 484–509) on A2, and synthetic peptide inhibited the A2 binding to protein S by ∼60% and ∼70%, respectively, in solid-phase binding assays. The 484–509 peptide directly bound to protein S dose-dependently. Covalent cross-linking was observed between the 484–509 peptide and protein S following reaction with EDC (1-ethyl-3-(3-dimethylaminopropyl)-carbodiimide). The cross-linked adduct was consistent with 1:1 stoichiometry of reactants. Cross-linking formation was blocked by addition of the 484–497 peptide, but not by the 498–509 peptide. Furthermore, N-terminal sequence analysis of the 484–509 peptide-protein S adduct showed that three sequential residues (S488, R489, and R490) in A2 were not identified, suggesting that these residues participate in cross-link formation. Mutant A2 molecules where these residues were converted to alanine were evaluated for the binding of protein S. The S488A, R489A, and R490A mutants demonstrated ∼four-fold lower affinity than wild-type A2.These results indicate that the 484–509 region in the A2 domain of factor VIIIa, in particular sequential residues at positions 488–490, contributes to a unique protein S-interactive site.",
"corpus_id": 19071435,
"title": "Identification of a protein S-interactive site within the A2 domain of the factor VIII heavy chain"
} | {
"abstract": "Protein S functions as an activated protein C (APC)‐independent anticoagulant in the inhibition of intrinsic factor X activation, although the precise mechanisms remain to be fully investigated. In the present study, protein S diminished factor VIIIa/factor IXa‐dependent factor X activation, independent of APC, in a functional Xa generation assay. The presence of protein S resulted in an c. 17‐fold increase in Km for factor IXa with factor VIIIa in the factor Xase complex, but an c. twofold decrease in Km for factor X. Surface plasmon resonance‐based assays showed that factor VIII, particularly the A2 and A3 domains, bound to immobilized protein S (Kd; c. 10 nmol/l). Competition binding assays using Glu‐Gly‐Arg‐active‐site modified factor IXa showed that factor IXa inhibited the reaction between protein S and both the A2 and A3 domains. Furthermore, Sodium dodecyl sulphate polyacrylamide gel electrophoresis revealed that the cleavage rate of factor VIIIa at Arg336 by factor IXa was c. 1·8‐fold lower in the presence of protein S than in its absence. These data indicate that protein S not only down‐regulates factor VIIIa activity as a cofactor of APC, but also directly impairs the assembly of the factor Xase complex, independent of APC, in a competitive interaction between factor IXa and factor VIIIa.",
"corpus_id": 1041468,
"title": "Protein S down‐regulates factor Xase activity independent of activated protein C: specific binding of factor VIII(a) to protein S inhibits interactions with factor IXa"
} | {
"abstract": "Hyperfibrinolysis has been observed in patients heavily transfused with solvent/detergent‐treated pooled plasma (S/D plasma). We compared coagulation and fibrinolytic variables in blood containing S/D plasma with blood containing fresh‐frozen plasma (FFP), with and without α2‐antiplasmin or tranexamic acid (TXA) supplementation.",
"corpus_id": 3700307,
"score": -1,
"title": "Effect of solvent/detergent‐treated pooled plasma on fibrinolysis in reconstituted whole blood"
} |
{
"abstract": "The technique of “renormalization” for geometric estimation attracted much attention when it appeared in early 1990s for having higher accuracy than any other then known methods. The key fact is that it directly specifies equations to solve, rather than minimizing some cost function. This paper expounds this “non-minimization approach” in detail and exploits this principle to modify renormalization so that it outperforms the standard reprojection error minimization. Doing a precise error analysis in the most general situation, we derive a formula that maximizes the accuracy of the solution; we call it hyper-renormalization. Applying it to ellipse fitting, fundamental matrix computation, and homography computation, we confirm its accuracy and efficiency for sufficiently small noise. Our emphasis is on the general principle, rather than on individual methods for particular problems.",
"corpus_id": 279602,
"title": "Hyper-renormalization: Non-minimization Approach for Geometric Estimation"
} | {
"abstract": "The best known method for optimally computing parameters from noisy data based on geometric con- straints is maximum likelihood (ML). This paper reinvestigates \"hyperaccurate correction\" for further improving the accuracy of ML. In the past, only the case of a single scalar constraint was studied. In this paper, we extend it to multiple constraints given in the form of vector equations. By detailed error analysis, we illuminate the existence of a term that has been ignored in the past. Doing simulation experiments of ellipse fitting, fundamental matrix, and homography computation, we show that the new term does not effectively affect the final solution. However, we show that our hyperaccurate correction is even superior to hyper-renormalization, the latest method regarded as the best fitting method, but that the iterations of ML computation do not necessarily converge in the presence of large noise.",
"corpus_id": 1263743,
"title": "Hyperaccurate Correction of Maximum Likelihood for Geometric Estimation"
} | {
"abstract": "Unprotected cryptographic hardware is vulnerable to a side-channel attack known as differential power analysis (DPA). This attack exploits data-dependent power consumption of a computation to determine the secret key. Dual-rail asynchronous circuits have been regarded as a potential countermeasure to this attack. In this paper, we evaluate the security of asynchronous dual-rail circuits against DPA. Our results show that, unless special precautions are taken, asynchronous circuits are not inherently more DPA resistant than their synchronous dual-rail counterparts. We show that the use of null-spaced or return-to-zero (RTZ) protocols, used to provide delay-insensitive encoding for asynchronous circuits, can make a DPA attack easier. We present an overview of balancing dynamic implementations of dual-rail fine-grained asynchronous gates that offer a solution for the DPA weakness. We demonstrate the use of asynchronous balanced cells that use RTZ which are not only secure against DPA but also deliver high performance with low design effort through automated pipelining.",
"corpus_id": 8217393,
"score": -1,
"title": "Delay insensitive encoding and power analysis: a balancing act [cryptographic hardware protection]"
} |
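For concreteness, a sketch of the baseline algebraic ellipse fit that renormalization-type methods refine: stack the conic constraint for each point into a design matrix and take the right singular vector of the smallest singular value. This is not hyper-renormalization itself, just the naive least-squares estimator it improves on (which is known to be biased under noise).

```python
import numpy as np

def fit_conic_algebraic(x, y):
    """Plain algebraic fit: minimise ||Z theta|| subject to ||theta|| = 1,
    where each row of Z encodes A x^2 + B xy + C y^2 + D x + E y + F = 0."""
    Z = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(Z)
    return vt[-1]                     # conic coefficients (A..F)

t = np.linspace(0, 2*np.pi, 50)
x = 3*np.cos(t) + 0.01*np.random.randn(50)   # noisy points on an ellipse
y = 2*np.sin(t) + 0.01*np.random.randn(50)
print(fit_conic_algebraic(x, y).round(4))
```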
{
"abstract": "This paper concerns Remaining Useful Life (RUL) estimation of discrete event systems. For that purpose, physics-based models with partially observed stochastic Petri nets are used to represent the system and its sensors. The advantage of the proposed modelling approach is to provide a realistic representation of the system, including the interaction between the normal behaviours and the failure processes. From the proposed modelling and collected measurements, timed trajectories, which are consistent with the observations, are obtained. Based on the event dates, our approach consists in evaluating the probabilities of the consistent behaviours using probabilistic models. State estimation is obtained as a consequence. The most probable future degradations, from the current state, are then considered and a method for fault prognosis is presented. Finally, the prognosis result is used to estimate the RUL as a time interval. A case study is proposed to show the applicability of the proposed method.",
"corpus_id": 125544620,
"title": "State estimation of discrete event systems for RUL prediction issue"
} | {
"abstract": "We study the problem of decentralized fault prognosis of partially-observed discrete event systems. In order to capture the prognostic performance issue in the prognosis problem, we propose two new criteria: (1) all faults can be predicted K steps ahead; and (2) a fault will occur for sure within M steps once a fault alarm is issued; and we refer to ( M , K ) as the performance bound of the prognostic system. A necessary and sufficient condition for the existence of a decentralized supervisor satisfying these two criteria is provided, which is termed as ( M , K ) -coprognosability. A polynomial-time algorithm for the verification of ( M , K ) -coprognosability is also proposed. Finally, we show that the proposed approach is applicable to both disjunctive and conjunctive architectures. Our results generalize previous work on decentralized fault prognosis.",
"corpus_id": 1464178,
"title": "Decentralized fault prognosis of discrete event systems with guaranteed performance bound"
} | {
"abstract": "Abstract This article deals with the problem of fault prognosis in timed stochastic discrete event systems. For that purpose, partially observed stochastic Petri nets are considered to model the system with its sensors. The model represents both healthy and faulty behaviors of the system. Using a timed measurement sequence issued from the sensors, an approach denoted ( ρ , δ ) -prognosis is proposed to estimate the probability of a future fault occurrence. The method is based on two input parameters: the error bound ρ and the prognosis horizon δ . The main contribution is to bound the estimation error by ρ when the prognosis horizon does not exceed δ . An example is presented to illustrate the results.",
"corpus_id": 4813934,
"score": -1,
"title": "Fault prognosis of timed stochastic discrete event systems with bounded estimation error"
} |
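A toy illustration of the probabilistic core shared by these prognosis records: given a stochastic degradation model (here a three-state Markov chain standing in for the richer Petri-net models), the probability of reaching the fault state within a horizon of delta steps follows from powers of the transition matrix. All transition probabilities are hypothetical.

```python
import numpy as np

# States: 0 = OK, 1 = Degraded, 2 = Fault (absorbing). Invented numbers.
P = np.array([[0.90, 0.09, 0.01],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])

def p_fault_within(P, start, horizon):
    """Probability of being in the absorbing fault state after
    `horizon` steps, starting from state index `start`."""
    dist = np.zeros(len(P))
    dist[start] = 1.0
    return (dist @ np.linalg.matrix_power(P, horizon))[-1]

for h in (1, 5, 20):
    print(h, round(p_fault_within(P, start=0, horizon=h), 4))
```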
{
"abstract": "Latar Belakang : Sistem TB-03 elektronik merupakan sistem pencatatan dan pelaporan yang merekap data penderita TB. Namun, data yang tersedia pada sistem sering kurang dimanfaatkan, atau tidak digunakan sama sekali. Oleh karena itu dibutuhkannya analisis proses pencarian informasi yang dapat digunakan untuk menggali potensi-potensi informasi yang ada dari penyimpanan data terutama untuk menemukan suatu pola hubungan antar data sehingga dapat diketahui pola hubungan pada hasil pengobatan TB.Tujuan : Penelitian ini bertujuan untuk mengidentifikasi mencari aturan asosiasi hasil pengobatan di Provinsi Sulawesi Selatan dengan menggunakan teknik association rule.Metode Penelitian: Metode yang digunakan pada penelitian ini adalah observasional deskriptif. Penelitian ini menggunakan data register TB-03 pasien TB di Provinsi Sulawesi Selatan tahun 2011-2013. Aturan asosiasi pada penderita TB dapat memberikan pengetahuan mengenai karakteristik pada setiap hasil pengobatan yang dimiliki.Hasil : Sebagian besar hasil pengobatan yang diperoleh penderita TB adalah sembuh. Aturan asosiasi menunjukkan bahwa tipe pasien merupakan pasien baru, klasifikasi TB adalah TB Paru, hasil pemeriksaan dahak awal positif, dan waktu konversi pada 2 bulan masa pengobatan berhubungan dengan hasil pengobatan sembuh. Tipe pasien merupakan pasien baru dan klasifikasi TB adalah TB Paru maka hasil pengobatan lengkap. Tipe pasien adalah pasien baru, klasifikasi TB adalah TB Paru, hasil pemeriksaan dahak awal adalah positif, tidak mengalami konversi pada masa pengobatan, dan bertempat tinggal di perdesaan berhubungan dengan hasil pengobatan meninggal. Tipe pasien adalah pasien baru, klasifikasi TB adalah TB Paru, hasil pemeriksaan dahak awal adalah positif, dan bertempat tinggal di perkotaan berhubungan dengan hasil pengobatan default. Klasifikasi TB adalah TB Paru, hasil pemeriksaan dahak awal adalah positif, dan bertempat tinggal di perkotaan berhubungan dengan hasil pengobatan pindah. Pola pengelompokan (clustering) data register TB menunjukkan bahwa hasil pengobatan pada penderita TB cenderung sembuh dan lengkap.Kesimpulan : Aturan asosiasi dan pengelompokan pada penderita dapat memberikan pengetahuan mengenai karakteristik pada setiap hasil pengobatan yang dimiliki. Dengan memanfaatkan karakteristik tersebut, dapat dijadikan sebagai acuan untuk menetapkan prioritas-prioritas dalam manajemen pengobatan TB selanjutnya sehingga hasil pengobatan dapat maksimal. Selain itu, teknik data mining dapat digunakan sebagai teknik analisis data register TB.",
"corpus_id": 225370370,
"title": "Analisis aturan asosiasi hasil pengobatan tuberkulosis di provinsi sulawesi selatan"
} | {
"abstract": "Objectives. The aim of this study was to assess treatment outcome and associated risk factors among TB patients registered for anti-TB treatment at Enfraz health center, northwest Ethiopia. Methods. A five-year retrospective data (2007–2011) of tuberculosis patients (n = 417) registered for anti-TB treatment at Enfraz health center, northwest Ethiopia, were reviewed. Tuberculosis outcomes were following the WHO guidelines. Data were entered and analyzed using SPSS version 20. Results. Among 417 study participants, 95 (22.8%), 141 (33.8%), and 181 (43.4%) were smear-positive, smear-negative, and extrapulmonary tuberculosis patients, respectively. Of the 417 study participants, 206 (49.4%) were tested for HIV. The TB-HIV coinfection was 24/206 (11.7%). Seventeen study participants (4.2%) were transferred to other health facilities. Among the 400 study participants, 379 (94.8%) had successful treatment outcome (302 treatment completed and 77 cured). The overall death, default, and failure rates were 3.4%, 0.5%, and 1.2%, respectively. There was no significant association between sex, age, residence, type of TB, HIV status, and successful TB treatment outcome. Conclusion. Treatment outcome of patients who attended their anti-TB treatment at Enfraz health center was successful. Therefore, this treatment success rate should be maintained and strengthened to achieve the millennium development goal.",
"corpus_id": 2246632,
"title": "Treatment Outcome of Tuberculosis Patients at Enfraz Health Center, Northwest Ethiopia: A Five-Year Retrospective Study"
} | {
"abstract": "With the predictable integration of implants, the emphasis is shifted towards precise prosthesis. Reproducing the \nintraoral relationship of implants through impression procedures is the first step in achieving an accurate, passively \nfitting prosthesis. The critical aspect is to record the three dimensional orientation of the implant as it is present \nintraorally, other than reproducing fine surface detail for successful implant prosthodontic treatment. The development \nof impression techniques to accurately record implant position has become more complicated and challenging. \nDuring the prosthetic phase of implant therapy there are numerous options available to the implantologist in \nrelation to different impression techniques and materials available for impression making. It is critical to ensure that \nimplant – prosthesis interface have passive fit and original position of the implant maintained in the master cast. \nThere is no evidence supporting that one impression technique or material is better than the other. In the present \narticle the various parameters affecting the accuracy of implant impression along with impression material and \ntechnique pertaining to different clinical situations is reviewed.",
"corpus_id": 30071010,
"score": -1,
"title": "Accuracy of the implant impression obtained from different impression materials and techniques: review"
} |
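A minimal sketch of the support/confidence arithmetic behind the association-rule analysis above, over hypothetical TB-register-style transactions (attribute values invented for illustration only).

```python
# Toy transactions: each record is the set of attribute values it holds.
records = [
    {"new_patient", "pulmonary", "smear_pos", "cured"},
    {"new_patient", "pulmonary", "smear_pos", "cured"},
    {"new_patient", "pulmonary", "completed"},
    {"relapse", "pulmonary", "smear_pos", "died"},
]

def support(itemset):
    """Fraction of records containing every item in `itemset`."""
    return sum(itemset <= r for r in records) / len(records)

# Rule {new_patient, smear_pos} -> {cured}
lhs, rhs = {"new_patient", "smear_pos"}, {"cured"}
confidence = support(lhs | rhs) / support(lhs)
print(f"support={support(lhs | rhs):.2f} confidence={confidence:.2f}")
```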
{
"abstract": "Purpose To evaluate the additive effect of triamcinolone to bevacizumab in comparison to standard macular laser photocoagulation versus bevacizumab in the management of diabetic macular edema (DME). Methods In a prospective, randomized clinical trial, 130 eyes of 110 patients with type 2 diabetes with DME were included. Eligible eyes were randomly assigned to 1.25 mg intravitreal bevacizumab (42 eyes) (IVB group) or combination of 1.25 mg bevacizumab and 2 mg triamcinolone acetonide (41 eyes) (IVB+IVT group) or macular laser photocoagulation (47 eyes) (MPC). Central macular thickness (CMT) and visual acuity changes at week 6 and 16 were assessed. Results The mean age of the patients was 57 ±7 years. Patients were followed 16 weeks. At week 6, all the three groups showed significant reduction in CMT but the reductions for IVB and IVB+IVT were significantly more than MPC (p<0.001). At week 16, the response was not stable for IVB (p<0.001), but IVB+IVT maintained its superior status to MPC (p<0.001). At week 16, visual acuities were essentially unchanged for the two groups of MPC and IVB and improvement for IVB+IVT was marginal and at most was 0.1 log MAR. No patient developed uveitis, endophthalmitis, or thromboembolic event. Conclusions Single intravitreal bevacizumab or triamcinolone plus bevacizumab injection brought about significantly greater macular thickness reduction in diabetic patients in comparison to standard laser treatment. However, the response for bevacizumab alone was short-lived. Reduction in macular thickness was only marginally associated with visual acuity improvement in the triamcinolone plus bevacizumab injection group.",
"corpus_id": 11519230,
"title": "Intravitreal Bevacizumab versus Combined Bevacizumab-Triamcinolone versus Macular Laser Photocoagulation in Diabetic Macular Edema"
} | {
"abstract": "In spite of all the scientific advances in medicine and in our knowledge of the pathophysiology and treatment of diabetes and diabetic retinopathy over the past 25 years, diabetic retinopathy remains the leading cause of blindness in the United States among 20to 64-year-old individuals. Diabetic retinopathy currently affects about half of the 16 million Americans with diabetes. In addition, each year approximately 25,000 new cases of diabetes-related blindness occur in the United States. On the other hand, the advances gained in the past 25 years have helped us to more effectively manage diabetes and its complications.",
"corpus_id": 13864,
"title": "DIABETIC RETINOPATHY: THE LATEST IN CURRENT MANAGEMENT"
} | {
"abstract": "This study evaluates the relationship between hemoglobin levels and diabetic retinopathy. Hemoglobin values measured in 1991 and 1992 were collected from 1691 subjects attending a diabetic clinic in Oulu, Finland, and the mean values for the two years were used in the analyses. A classification of retinopathy, based on non-mydriatic photographs taken in 1991 and 1992, was used as the outcome variable. Multiple logistic regression analyses, controlled for serum creatinine levels, proteinuria, and other prognostic factors associated with diabetes, showed that the odds ratio of having any retinopathy was 2.0 (95% confidence interval 1.2-3.3) among subjects with a hemoglobin level of less than 12 g/dl, as compared with those having a hemoglobin level > or = 12 g/dl. Among the retinopathic subjects with low hemoglobin levels, the relative odds of having a severe retinopathy rather than a mild one was 5.3 (2.3-12.6). We conclude that subjects with normocytic anemia tended to have an increased risk of retinopathy, especially of the severe form.",
"corpus_id": 29860161,
"score": -1,
"title": "The relationship between hemoglobin levels and diabetic retinopathy."
} |
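A sketch of the kind of three-arm comparison the trial record above reports, using scipy's one-way ANOVA on synthetic central-macular-thickness reductions; the means, spreads, and group sizes here are placeholders, not the trial's data, and a real analysis would add post-hoc pairwise tests.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical CMT reductions (micrometres) per treatment arm.
ivb     = rng.normal(90, 30, 42)    # bevacizumab alone
ivb_ivt = rng.normal(110, 30, 41)   # bevacizumab + triamcinolone
laser   = rng.normal(40, 30, 47)    # macular laser photocoagulation

f_stat, p_val = stats.f_oneway(ivb, ivb_ivt, laser)
print(f"F={f_stat:.1f}, p={p_val:.2g}")
```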
{
"abstract": "An overview on the development of QSPR/QSAR equations using various descriptor mining techniques and multilinear regression analysis in the framework of program CODESSA (Comprehensive Descriptors for Structural and Statistical Analysis) is given. The description of the methodologies applied in CODESSA is followed by the presentation of the QSAR and QSPR models derived for eighteen molecular activities and properties. The properties cover single molecular species, interactions between different molecular species, properties of surfactants, complex properties and properties of polymers.",
"corpus_id": 16634739,
"title": "QSPR and QSAR Models Derived with CODESSA Multipurpose Statistical Analysis Software"
} | {
"abstract": "A three-parameter QSPR equation with R2 = 0.936 was developed for the unified nonspecific solvent polarity scale (S‘) on the basis of theoretical molecular descriptors. It correlates S‘ for 25 structurally diverse solvents within a 5% average absolute error. The correlation equation includes the following three orthogonal theoretical molecular descriptors: (i) the average structural information content (order 0); (ii) the weighted partial negative surface area; and (iii) the hydrogen-bonding acceptor surface area. These descriptors provide insight into nonspecific solvation at the molecular level. Predictions using this three-parameter model are used to extend available S‘ values to a total of 67 solvents.",
"corpus_id": 287670,
"title": "QSPR Treatment of the Unified Nonspecific Solvent Polarity Scale"
} | {
"abstract": "The organic colorant curcumin [177-bis(4-hydroxy-3- \nmethoxyphenyl)-l,6-heptadiene-3,bdionew]a s exposed to \nozone in purified air in the dark, and the exposed samples \nwere analyzed by mass spectrometry. The major reaction \nproducts included vanillin (4-hydroxy-3-methoxybenzaldehyde) \nand vanillic acid (4-hydroxy-3-methoxybenzoic \nacid). These products and the corresponding loss of \nchromophore (i.e., fading of curcumin) are consistent with \na reaction mechanism involving electrophilic addition of \nozone onto the olefinic bonds of curcumin. Vanillin and \nvanillic acid did not react with ozone under the conditions \nof this study.",
"corpus_id": 97299089,
"score": -1,
"title": "Ozone Fading of Organic Colorants: Products and Mechanism of the Reaction of Ozone with Curcumin"
} |
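A compact sketch of the multilinear-regression step at the heart of the QSPR records above: regress a property on a small descriptor matrix and report R². Descriptors, coefficients, and noise level are synthetic stand-ins, not CODESSA output.

```python
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(25, 3))                   # 25 molecules x 3 descriptors
y = D @ np.array([1.5, -0.8, 0.3]) + 0.1 * rng.normal(size=25)

X = np.column_stack([np.ones(len(D)), D])      # add intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
print(beta.round(3), f"R^2={r2:.3f}")
```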
{
"abstract": "Abstract: The red heartwood of beech is responsible for decreasing the market value of the most important deciduous tree species of central Europe. The aims of this study were: (i) to verify the hypothesis that stand age affects the occurrence and metamorphosis of red heartwood in beech; and (ii) to quantify the economic loss due the sale price reduction of timber affected by red heartwood. Seven even-aged beech stands of different age (87, 100, 105, 110, 115, 132, and 145 years) were selected in Slovakia, and 213 trees were cut into 961 pieces of assortments which were evaluated for the presence, form and extension of red heartwood. The economic loss caused by red heartwood was determined as the difference in price between the actual and the potential quality grades of assortments. The results confirmed that stand age significantly influence the occurrence, development, and metamorphosis of red heartwood. The average loss in timber sale price caused by red heartwood varied between 0.76 and 28.04 € m-3, depending on age and form of red heartwood, with more severe losses in stands older than 110 years. To reduce the incidence of beech red heartwood in Central Europe, a reduction of the rotation period should be considered, as well as the adoption of suitable silvicultural practices in aged beech stands.",
"corpus_id": 54490858,
"title": "Analysis and evaluation of the impact of stand age on the occurrence and metamorphosis of red heartwood"
} | {
"abstract": "Abstract A mature, average stand of European beech was generated based on characteristic data of trial plots. Some 27 different strategies of target diameter harvest, were simulated for up to 80 years with the help of a distance-dependent single-tree growth simulator. The treatments were differing in the size of the target diameter, the beginning and the end of the harvest. Based on a statistical model, the probability of the occurrence of more than 30% of red heartwood at the front-side diameter was calculated for three sections of each log. Using the predicted probability, the decrease of timber quality due to red heartwood for different treatment strategies was assessed. The harvested volume and the predicted timber quality for different harvesting strategies were used to calculate the net revenue achieved in each simulation period with the help of a calculation program. The net present value for variable interest rates of the different harvesting strategies was calculated, assuming free land rent. Using a linear programming approach, optimal areas for different treatment strategies of a modelled forest of 100 ha were calculated under 4 different scenarios. The results of the optimisation showed how the increasing interest rates replaced higher target diameters out of the optimal solution. In contrast to that the treatments with higher target diameter became more important with increasing restrictions concerning budget or ecological constraints.",
"corpus_id": 154468889,
"title": "Financial optimisation of target diameter harvest of European beech (Fagus sylvatica) considering the risk of decrease of timber quality due to red heartwood"
} | {
"abstract": "Lepg, J. and Kindlmann, P., 1987. Models of the development of spatial pattern of an even-aged plant population over time. Ecol. Modelling, 39: 45-57. The development of spatial patterns of a single even-aged population in a homogeneous area was studied by means of simulation and analytical models. The simulation model was designed to reflect ecological reality as much as possible, simultaneously keeping a reasonable level of simplicity. Results of simulations were supported by analysis of a simplified and more mathematically tractable model. It was shown that the main factor causing the decrease of aggregation intensity or tendency to regularity in the course of population development is the competition among neighbouring individuals. Random patterns may be a result of changes of initial aggregated pattern caused by competition among neighbours. Hence, an observed random pattern is not evidence for the independence of individuals.",
"corpus_id": 49350464,
"score": -1,
"title": "Models of the development of spatial pattern of an even-aged plant population over time"
} |
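A hedged sketch of the linear-programming step the second beech record mentions: allocate a fixed forest area across harvesting strategies to maximise net present value under a resource-style constraint. All coefficients are invented for illustration; scipy.optimize.linprog solves the LP.

```python
import numpy as np
from scipy.optimize import linprog

npv  = np.array([1200.0, 1500.0, 1350.0])  # hypothetical NPV (EUR/ha) per strategy
A_ub = [[1.0, 2.0, 1.5]]                   # stand-in resource use per ha
b_ub = [180.0]                             # resource budget
A_eq = [[1.0, 1.0, 1.0]]
b_eq = [100.0]                             # total area to assign (ha)

# linprog minimises, so negate NPV to maximise it.
res = linprog(-npv, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 3, method="highs")
print(res.x)                               # hectares per strategy
```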
{
"abstract": "Antifreeze proteins (AFPs) are a structurally diverse group of proteins that have the ability to modify ice crystal structure and inhibit recrystallization of ice. AFPs are well characterized in fish and insects, but very few bacterial species have been shown to have AFP activity to date. Thirty eight freshwater to hypersaline lakes in the Vestfold Hills and Larsemann Hills of Eastern Antarctica were sampled for AFPs during 2000. Eight hundred and sixty six bacterial isolates were cultivated. A novel AFP assay, designed for high-throughput analysis in Antarctica, demonstrated putative activity in 187 of the cultures. Subsequent analysis of the putative positive isolates showed 19 isolates with significant recrystallization inhibition (RI) activity. The 19 RI active isolates were characterized using ARDRA (amplified rDNA restriction analysis) and 16S rDNA sequencing. They belong to genera from the alpha- and gamma-Proteobacteria, with genera from the gamma-subdivision being predominant. The 19 AFP-active isolates were isolated from four physico-chemically diverse lakes. Ace Lake and Oval Lake were both meromictic with correspondingly characteristic chemically stratified water columns. Pendant Lake was a saline holomictic lake with different chemical properties to the two meromictic lakes. Triple Lake was a hypersaline lake rich in dissolved organic carbon and inorganic nutrients. The environments from which the AFP-active isolates were isolated are remarkably diverse. It will be of interest, therefore, to elucidate the evolutionary forces that have led to the acquisition of functional AFP activity in microbes of the Vestfold Hills lakes and to discover the role the antifreezes play in these organisms.",
"corpus_id": 31069523,
"title": "Demonstration of antifreeze protein activity in Antarctic lake bacteria."
} | {
"abstract": "The plant growth promoting rhizobacterium Pseudomonas putida GR12-2 was originally isolated from the rhizosphere of plants growing in the Canadian High Arctic. Here we report that this bacterium was able to grow and promote root elongation of both spring and winter canola at 5 degrees C, a temperature at which only a relatively small number of bacteria are able to proliferate and function. In addition, the bacterium survived exposure to freezing temperatures, i.e., -20 and -50 degrees C. In an effort to determine the mechanistic basis for this behaviour, it was discovered that following growth at 5 degrees C, P. putida GR12-2 synthesized and secreted to the growth medium a protein with antifreeze activity. Analysis of the spent growth medium, following concentration by ultrafiltration, by SDS-polyacrylamide gel electrophoresis revealed the presence of one major protein with a molecular mass of approximately 32-34 kDa and a number of minor proteins. However, at this point it is not known which of these proteins contains the antifreeze activity.",
"corpus_id": 1610198,
"title": "Low temperature growth, freezing survival, and production of antifreeze protein by the plant growth promoting rhizobacterium Pseudomonas putida GR12-2."
} | {
"abstract": "Bacillus pumilis F3-4 utilized feather as a sole source of carbon, nitrogen and sulfur. Supplementation of the feather medium with glucose or MgSO4 · 7H2O increased keratinolytic protease production (14.6–16.7 U/mg). The synthesis of keratinolytic protease was repressed by an exogenous nitrogen source. Keratinolytic protease was produced in the absence of feather (9.4 U/mg). Feather degradation resulted in sulfhydryl group formation (0.8–2.6 μM). B. pumilis F3-4 effectively degraded chicken feather (75%), duck feather (81%) and feather meal (97%), whereas human nails, human hair and sheep wool under went less degradation (9–15%).",
"corpus_id": 36466688,
"score": -1,
"title": "Nutritional regulation of keratinolytic activity in Bacillus pumilis"
} |
{
"abstract": "Adenylate kinase (ADK) catalyzes the reversible Mg2+‐dependent phosphoryl transfer reaction Mg2++2ADP ↔Mg2++ATP + AMP in essential cellular systems. This reaction is a major player in cellular energy homeostasis and the isoform network of ADK plays an important role in AMP metabolic signaling circuits. ADK has 3 domains, the LID, NMP, and CORE domains, that undergo large conformational rearrangements during ADK's catalytic cycle. In spite of extensive experimental and computational studies, details of the conformational pathway from open to closed forms remain uncertain. In this paper we explore this pathway using coarse‐grained molecular dynamics (MD) trajectories of ADK calculated by GROMACS using a SMOG model and classify the conformations within the resultant trajectories by K‐means clustering. ADK conformations segregate naturally into open; intermediate; and closed forms with long‐term residence in the intermediate state. Structural clustering divides the intermediate conformation into 3 sub‐states that are distinguished from one another on the basis of differences in both structure and dynamics. These distinctions are defined on the basis of a number of different metrics including radius of gyration, dihedral angle fluctuation, and fluctuations of interatomic pair distances. Furthermore, differences in the sub‐states appear to correspond to the distinct ways each sub‐state contributes to the molecular mechanism of catalysis: One sub‐state acts as a gate‐way to the open conformation; one sub‐state a gate‐way to the closed conformation. A third intermediate sub‐state appears to represent a metastable off‐pathway structure that is nevertheless frequently visited during the passage from open to closed state.",
"corpus_id": 46771857,
"title": "Fine structure of conformational ensembles in adenylate kinase"
} | {
"abstract": "While coarse-grained (CG) simulations provide an efficient approach to identify small- and large-scale motions important to protein conformational transitions, coupling with appropriate experimental validation is essential. Here, by comparing small-angle X-ray scattering (SAXS) predictions from CG simulation ensembles of adenylate kinase (AK) with a range of energetic parameters, we demonstrate that AK is flexible in solution in the absence of ligand and that a small population of the closed form exists without ligand. In addition, by analyzing variation of scattering patterns within CG simulation ensembles, we reveal that rigid-body motion of the LID domain corresponds to a dominant scattering feature. Thus, we have developed a novel approach for three-dimensional structural interpretation of SAXS data. Finally, we demonstrate that the agreement between predicted and experimental SAXS can be improved by increasing the simulation temperature or by computationally mutating selected residues to glycine, both of which perturb LID rigid-body flexibility.",
"corpus_id": 2239533,
"title": "Large-scale motions in the adenylate kinase solution ensemble: coarse-grained simulations and comparison with solution X-ray scattering."
} | {
"abstract": "Theoretical models predict that macromolecular crowding can increase protein folding stability, but depending on details of the models (e.g., how the denatured state is represented), the level of stabilization predicted can be very different. In this study, we represented the native and denatured states atomistically, with conformations sampled from explicit-solvent molecular dynamics simulations at room temperature and high temperature, respectively. We then designed an efficient algorithm to calculate the allowed fraction, f, when the protein molecule is placed inside a box of crowders. That a fraction of placements of the protein molecule is disallowed because of volume exclusion by the crowders leads to an increase in chemical potential, given by Deltamu = -k(B)T lnf. The difference in Deltamu between the native and denatured states predicts the effect of crowding on the folding free energy. Even when the crowders occupied 35% of the solution volume, the stabilization reached only 1.5 kcal/mol for cytochrome b562. The modest stabilization predicted is consistent with experimental studies. Interestingly, a mixture of different sized crowders was found to exert a greater effect than the sum of the individual species of crowders. The stabilization of crowding on the binding stability of barnase and barstar, based on atomistic modeling of the proteins, was similarly modest. These findings have profound implications for macromolecular crowding inside cells.",
"corpus_id": 4999839,
"score": -1,
"title": "Atomistic modeling of macromolecular crowding predicts modest increases in protein folding and binding stability."
} |
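A sketch of the K-means classification step the adenylate-kinase record describes, applied to synthetic per-frame features (for example radius of gyration and inter-domain distances) standing in for quantities one would actually compute from the coarse-grained trajectories.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Stand-in feature triples (Rg, LID-CORE dist, NMP-CORE dist) per frame.
open_frames   = rng.normal([19.0, 30.0, 22.0], 0.4, size=(300, 3))
inter_frames  = rng.normal([18.0, 26.0, 20.0], 0.4, size=(300, 3))
closed_frames = rng.normal([17.0, 22.0, 18.0], 0.4, size=(300, 3))
X = np.vstack([open_frames, inter_frames, closed_frames])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(km.labels_))           # frames per conformational cluster
print(km.cluster_centers_.round(1))      # cluster centroids in feature space
```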
{
"abstract": "OBJECTIVE\nThe aim of this study was to determine the imaging findings and the prevalence of active hemorrhage on contrast-enhanced multidetector CT in patients with blunt abdominal trauma.\n\n\nMATERIALS AND METHODS\nContrast-enhanced multidetector CT images of 165 patients with blunt abdominal trauma were reviewed for the presence of extravasated contrast agent, a finding that represents active hemorrhage. The site and appearance of the hemorrhage were noted on multidetector CT images. These findings were compared with surgical and angiographic results or with clinical follow-up.\n\n\nRESULTS\nOn multidetector CT images, active hemorrhage was detected in 22 (13%) of 165 patients with a total of 24 bleeding sites (14 intraperitoneal sites and 10 extraperitoneal sites). Active hemorrhage was visible most frequently as a jet of extravasated contrast agent (10/24 bleeding sites [42%]). Diffuse or focal extravasation was less frequently seen (nine [37%] and five [21%] bleeding sites, respectively). CT attenuation values measured in the aorta (mean, 199 H) were significantly higher than those measured in extravasated contrast material (mean, 155 H) (p < 0.001). Sixteen (73%) of 22 patients with active bleeding on multidetector CT images underwent immediate surgical or angiographic intervention. One patient received angiographic therapy 10 hr after undergoing multidetector CT, and five patients died between 1 and 3 hr after multidetector CT examination.\n\n\nCONCLUSION\nActive hemorrhage in patients after blunt abdominal trauma is most frequently visible as a jet of extravasated contrast agent on multidetector CT. When extravasation is detected, immediate surgical or angiographic therapy is required.",
"corpus_id": 19458492,
"title": "Multidetector CT: detection of active hemorrhage in patients with blunt abdominal trauma."
} | {
"abstract": "OBJECTIVE\nUsing CT to grade blunt splenic injuries frequently does not predict clinical outcome. This retrospective, blinded study evaluated whether revealing a traumatic pseudoaneurysm or frank hemorrhage on an initial CT examination can be used to predict the successful clinical outcome of patients managed without surgery.\n\n\nMATERIALS AND METHODS\nThe medical and CT records of all patients with blunt splenic injury during a 5-year period were independently reviewed for vascular abnormalities. Also, the grade of injury was reconfirmed. Hemodynamically stable patients with injuries of grades 1-3 were managed without surgery. Clinical failure occurred if a patient required splenectomy or splenorrhaphy after any attempt of nonsurgical management.\n\n\nRESULTS\nTwo hundred sixty-three patients were treated for blunt splenic injuries. Eighty-two of these patients underwent emergent surgery on the basis of clinical and peritoneal lavage findings without CT examination. The remaining 181 (69%) patients were initially evaluated with emergent abdominal CT. Of these 181 patients, 72 (40% of those undergoing CT) were treated nonsurgically. Nonsurgical therapy failed in 11 (15%) of these 72 patients. Of these 11 patients, nine (82%) had a defined vascular abnormality of the spleen. Only eight (13%) of the remaining 61 patients who underwent CT and successful nonsurgical management had a vascular abnormality of the spleen.\n\n\nCONCLUSION\nThe failure rate in patients with nonsurgically managed blunt splenic injuries may be markedly reduced if patients with traumatic pseudoaneurysm or active hemorrhage revealed on emergent CT are treated with early surgical or endovascular repair.",
"corpus_id": 2220861,
"title": "Predicting clinical outcome of nonsurgical management of blunt splenic injury: using CT to reveal abnormalities of splenic vasculature."
} | {
"abstract": "OBJECTIVE\nIran has made remarkable progress in reducing child mortality over the past few decades. However, this promising profile is mainly average driven, and inequalities are not counted in judgments about the progress. In the present study, we used an achievement index approach to combine average and inequalities to provide a better picture of Iran's achievement in under-five mortality over the last two decades.\n\n\nSTUDY DESIGN\nThe study had a cross-sectional design.\n\n\nMETHODS\nData gathered in the two recent national demographic health surveys (DHSs) in 2000 and 2010 were used to conduct the analyses. Accordingly, 45,646 live births covered by DHS 2000 and 10,604 live births covered by DHS 2010 were investigated. An achievement index was constructed by incorporating some extensions to the concentration index, namely by incorporation of the average into the index.\n\n\nRESULTS\nThe standard concentration index showed that under-five mortality was unequally distributed, hurting the poor, across all provinces and Iran overall in 2000 (concentration index = -0.1311 [standard error {SE} = 0.0139]) and 2010 (-0.1367 [SE = 0.0381]). The achievement index revealed that Iran has had achievements in under-five mortality (relative change in the mean has decreased from 29.5% to 25.8%), but the achievement was mostly due to reductions in the average mortality and not in its unequal distribution. The same result applied to a considerable number of provinces, and only a few have made achievements in both inequality and average.\n\n\nCONCLUSIONS\nConsidering the lack of progress in the reduction of inequalities in under-five mortality over the past decades, equity-oriented policies should be of prime importance for Iran's healthcare system.",
"corpus_id": 53789432,
"score": -1,
"title": "What has Iran achieved in under-five mortality in terms of equity and efficiency in the past decades?"
} |
{
"abstract": "We present a novel method for recovering the whole 3D structu e of a nonuniform refractive space. The refractive space may consist of a single nonuniform refract ive medium such as heated air or multiple refractive media with uniform or nonuniform refractive indices. Unlik e most existing methods for recovering transparent objects, our method does not have a limitation on the number o f light refractions. Furthermore, our method can recover both gradual and abrupt changes in the refractiv e index in the space. For recovering the whole 3D structure of a nonuniform refractive space, we combine th e ray equation in geometric optics with a sparse estimation of the 3D distribution. Testing showed that the p roposed method can efficiently estimate the time varying 3D distribution of the refractive index of heated ai r.",
"corpus_id": 215791337,
"title": "Recovering 3D Structure of Nonuniform Refractive Space"
} | {
"abstract": "Estimating the shape of transparent and refractive objects is one of the few open problems in 3D reconstruction. Under the assumption that the rays refract only twice when traveling through the object, we present the first approach to simultaneously reconstructing the 3D positions and normals of the object's surface at both refraction locations. Our acquisition setup requires only two cameras and one monitor, which serves as the light source. After acquiring the ray-ray correspondences between each camera and the monitor, we solve an optimization function which enforces a new position-normal consistency constraint. That is, the 3D positions of surface points shall agree with the normals required to refract the rays under Snell's law. Experimental results using both synthetic and real data demonstrate the robustness and accuracy of the proposed approach.",
"corpus_id": 8038381,
"title": "3D Reconstruction of Transparent Objects with Position-Normal Consistency"
} | {
"abstract": "3D measurement of target objects characterized by specular reflection or subsurface scatterings cannot be measured by traditional 3D measurement methods because these targets have multiple light paths that make it difficult to determine the unique surface. We define these objects as complex light reflection objects. In this case, 3D measurement methods based on Light Transport (LT) Matrix estimation may be a solution to measure these complex light reflection objects, because LT Matrix captures every light path, and we can identify all 3D points on the target shape by using LT Matrix. However, these methods either provide low resolution results, or they are too slow for use in robot vision in practice. In this paper, we suppress the computational cost of LT Matrix estimation by dividing LT Matrix estimation into multi-scale. The proposed method reduces the number of candidate combinations between camera pixels and projector pixels greatly by using the information given by low resolution observations. The proposed algorithm allows high resolution measurement of the LT Matrix very efficiently. Furthermore, careful implementation of our method by using a sparse matrix representation achieves memory efficiency. We evaluated our method by measuring 3D points for a 256 × 256 resolution projector and camera system, which is an LT matrix 4096 times larger than that developed in our previous study [1] and 100 times faster than our naive implementation of [2].",
"corpus_id": 52290206,
"score": -1,
"title": "Ultra-Fast Multi-Scale Shape Estimation of Light Transport Matrix for Complex Light Reflection Objects"
} |
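Both reconstruction papers above hinge on Snell's law: a recovered surface position and normal are only consistent if they bend the observed rays correctly. As a minimal illustration of that constraint (not either paper's pipeline), the vector form of Snell's law refracts a ray given a surface normal and the ratio of refractive indices; the ray, normal, and indices below are arbitrary examples.

```python
import numpy as np

def refract(d, n, eta):
    """Refract unit ray direction d at a surface with unit normal n.

    eta = n1 / n2 is the ratio of refractive indices on the incident
    and transmitted sides; n points toward the incident side. Returns
    the refracted unit direction, or None on total internal
    reflection. This is the standard vector form of Snell's law that
    position-normal consistency checks rely on.
    """
    cos_i = -np.dot(n, d)                 # cosine of the incidence angle
    sin2_t = eta**2 * (1.0 - cos_i**2)    # squared sine of the transmitted angle
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Example: a ray entering glass (n2 = 1.5) from air at 45 degrees.
d = np.array([np.sin(np.pi / 4), -np.cos(np.pi / 4), 0.0])
n = np.array([0.0, 1.0, 0.0])
print(refract(d, n, 1.0 / 1.5))   # bends toward the normal, ~28.1 degrees
```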
{
"abstract": "Obtaining structural information for lipids such as phosphatidylcholines, in particular the location of double bonds in their fatty acid constituents, is an ongoing challenge for mass spectrometry (MS) analysis. Here, we present a novel method utilizing the doping of liquid matrix-assisted laser desorption/ionization (MALDI) samples with divalent metal chloride salts, producing ions with the formula [L+M] 2+ (L = lipid, M = divalent metal cation). Multiply charged lipid ions were not detected with the investigated trivalent metal cations. Collision-induced dissociation (CID) product ions from doubly charged metal-cationized lipids include the singly charged intact fatty acids [ sn x+M – H] + , where ‘ x ’ represents the position of the fatty acid on the glycerol backbone. The preference of the divalent metal cation to locate on the sn 2 fatty acid during CID was found, enabling stereo-chemical assignment. Pseudo-MS 3 experiments such as in-source decay (ISD)-CID and ion mobility-enabled time-aligned parallel (TAP) MS of [ sn x+M – H] + provided diagnostic product ion spectra for determining the location of double bonds on the acyl chain and were applied to identify and characterize lipids extracted from soya milk. This novel method is applicable to lipid profiling in the positive ion mode, where structural information of lipids is often difficult to obtain.",
"corpus_id": 251283466,
"title": "Collisioninduced dissociation of doubly charged bariumcationized lipids generated from liquid samples by atmospheric pressure matrixassisted laser desorption/ionization provides structurally diagnostic product ions Collisioninduced dissociation of doublycharged bariumcationized lipids generat"
} | {
"abstract": "Phospholipid cations formed by electrospray ionization were subjected to excitation and fragmentation by a beam of 6 keV helium cations in a process termed charge transfer dissociation (CTD). The resulting fragmentation pattern in CTD is different from that of conventional collision-induced dissociation, but analogous to that of metastable atom-activated dissociation and electron-induced dissociation. Like collision-induced dissociation, CTD yields product ions indicative of acyl chain lengths and degrees of unsaturation in the fatty acyl moieties but also provides additional structural diagnostic information, such as double bond position. Although CTD has not been tested on a larger lipid sample pool, the extent of structural information obtained demonstrates that CTD is a useful tool for lipid structure characterization, and a potentially useful tool in future lipidomics workflows. CTD is relatively unique in that it can produce a relatively strong series of 2+ product ions with enhanced abundance at the double bond position. The generally low signal-to-noise ratios and spectral complexity of CTD make it less appealing than OzID or other radical-induced methods for the lipids studies here, but improvements in CTD efficiency could make CTD more appealing in the future. Copyright © 2017 John Wiley & Sons, Ltd.",
"corpus_id": 3634296,
"title": "Charge transfer dissociation of phosphocholines: gas-phase ion/ion reactions between helium cations and phospholipid cations."
} | {
"abstract": "MP3 level calculations using pseudo-potentials for the halogens and semidiffuse functions for the heavy atoms indicate that in the series CH 3 X (X=F, Cl, Br, I), the reaction CH 3 X+e - →CH 3 . +X - is a concerted electron transfer-bond breaking process in accord with previous experimental findings (gas phase, solid matrixes, electrochemistry in polar solvents)",
"corpus_id": 94067599,
"score": -1,
"title": "Dissociative electron transfer. Ab initio study of the carbon-halogen bond reductive cleavage in methyl and perfluoromethyl halides. Role of the solvent"
} |
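The [L+M]2+ ions in the query abstract appear at an m/z given by simple adduct arithmetic: the lipid and metal masses summed, minus the mass of the removed electrons, divided by the charge. A minimal sketch, assuming an illustrative lipid (POPC) and 138Ba; the masses are standard monoisotopic values, not figures from the paper.

```python
# m/z arithmetic for a divalent-metal lipid adduct [L+M]2+.
M_POPC = 759.5778   # monoisotopic mass of POPC (C42H82NO8P), Da
M_BA138 = 137.9052  # monoisotopic mass of 138Ba, Da
M_E = 0.000549      # electron mass, Da

def adduct_mz(m_lipid, m_metal, charge):
    """m/z of [L+M]z+ where the metal cation supplies all z charges."""
    return (m_lipid + m_metal - charge * M_E) / charge

print(f"[POPC+Ba]2+ expected near m/z {adduct_mz(M_POPC, M_BA138, 2):.3f}")
# -> roughly m/z 448.74
```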
{
"abstract": "The elbow joint is a complex articulation composed of the humeroulnar and humeroradial joints (for flexion-extension movement) and the proximal radioulnar articulation (for pronation-supination movement). During the flexion-extension movement of the elbow joint, the rotation center changes and this articulation cannot be truly represented as a simple hinge joint. The main goal of this project is to design and assemble a medical rehabilitation exoskeleton for the elbow with one degree of freedom for flexion-extension, using the rotation center for proper patient elbow joint articulation. Compared with the current solutions, which align the exoskeleton axis with the elbow axis, this offers an ergonomic physical human-robot interface with a comfortable interaction. The exoskeleton is actuated with shape memory alloy wire-based actuators having minimum rigid parts, for guiding the actuators. Thanks to this unusual actuation system, the proposed exoskeleton is lightweight and has low noise in operation with a simple design 3D-printed structure. Using this exoskeleton, these advantages will improve the medical rehabilitation process of patients that suffered stroke and will influence how their lifestyle will change to recover from these diseases and improve their ability with activities of daily living, thanks to brain plasticity. The exoskeleton can also be used to evaluate the real status of a patient, with stroke and even spinal cord injury, thanks to an elbow movement analysis.",
"corpus_id": 32856479,
"title": "New Design of a Soft Robotics Wearable Elbow Exoskeleton Based on Shape Memory Alloy Wire Actuators"
} | {
"abstract": "This paper describes a flexible Shape Memory Alloy (SMA) actuator designed to increase the limited displacement that these alloys can induce. The SMA actuator has been designed so that it can be bent up to about 180?, providing more freedom of movements and a better integration in wearable robots, specially in soft wearable robots, than standard rigid solutions. Although the actuator length is relatively short, this original design allows a great linear displacement, because it can have one or more loops of the same SMA wire inside the actuator. This implies that the length of the SMA wire is at least two times greater than the length of the actuator. The adopted strategy for both position and speed control that overcomes the hysteresis and prevents overheating the actuator is also described. The control algorithm has been implemented in a rapid control prototyping (RCP) system based on a low cost hardware platform. Finally, the application of this novel actuator in a wrist exoskeleton prototype is shown to demonstrate the feasibility of using the flexible SMA actuator in a real soft wearable robot. We design a novel high-strain flexible SMA actuator.We implement a control algorithm for the SMA actuator.We develop a Hammerstein-Wiener model of the SMA actuator.We use a RCP system to develop the control algorithm.We test the designed actuator in a real wearable device.",
"corpus_id": 1981949,
"title": "High-displacement flexible Shape Memory Alloy actuator for soft wearable robots"
} | {
"abstract": "A two-degrees-of-freedom actively positioned consequent-pole bearingless motor with a wide magnetic gap of 8 mm and a gap factor (gap/rotor radius) of 0.2 is proposed. Experimental results of the magnetic suspension tests at a rotational speed up to 6000 r/min have been demonstrated. The axial and passive stiffnesses are sufficient for stable magnetic suspension while rotation. To improve the suspension performance, the notch filter as a function of a rotational speed is applied into the control system to remove the undesirable periodic sensor noise, which is caused by interference between a sensor detection coil and the leakage flux from the consequent-pole rotor. The proposed variable notch filter has effects of the increased maximum speed and the decreased power consumption for magnetic suspension.",
"corpus_id": 7567468,
"score": -1,
"title": "Suspension performance of a two-DOF actively positioned consequent-pole bearingless motor with a wide magnetic gap"
} |
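Both actuator papers above describe position control of SMA wires that must cope with hysteresis and must not overheat the wire. A minimal sketch of such a loop follows, with purely hypothetical gains and temperature limit; this is neither paper's actual controller (the second paper builds on a Hammerstein-Wiener model).

```python
class SMAPositionController:
    """Toy PI position controller for an SMA wire driven by PWM.

    SMA wires contract when heated, so the controller can only add
    heat (duty cycle in [0, 1]); cooling is passive. A hard clamp
    backs off when the estimated wire temperature nears a limit,
    standing in for the overheating protection both papers mention.
    All numbers are illustrative placeholders.
    """

    def __init__(self, kp=8.0, ki=2.0, dt=0.01, temp_limit=80.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.temp_limit = temp_limit   # degrees C, hypothetical
        self.integral = 0.0

    def update(self, target_pos, measured_pos, est_temp):
        error = target_pos - measured_pos
        self.integral += error * self.dt
        duty = self.kp * error + self.ki * self.integral
        duty = min(max(duty, 0.0), 1.0)    # heating only, bounded PWM
        if est_temp >= self.temp_limit:    # overheat guard
            duty = 0.0
        return duty
```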
{
"abstract": "Produced by Aberdeen HTA Group Authors Graham Scotland, research fellow, Health Economics Research Unit University of Aberdeen Pamela Royle, research fellow, Department of Public Health, University of Aberdeen Rob Henderson, consultant in public health medicine, Highland Health Board Rosemary Hollick, specialist registrar in rheumatology, Aberdeen Royal Infirmary Paul McNamee, senior research fellow, Health Economics Research Unit University of Aberdeen Norman Waugh, professor of public health, Department of Public Health, University of Aberdeen",
"corpus_id": 142090874,
"title": "Evidence review : denosumab for the prevention of osteoporotic fractures in post-menopausal women"
} | {
"abstract": "Oral bisphosphonates are of proven efficacy in preventing fractures in postmenopausal osteoporosis. However, poor adherence limits their real-world efficacy and clinical utility. Zoledronic acid (ZOL) is a potent bisphosphonate administered by annual intravenous infusion, effectively ensuring adherence to therapy over the following year. According to available data, 66% to 79% of patients have expressed a preference for ZOL over oral bisphosphonates. This is likely to lead to enhanced clinical outcomes, although long-term (repeat annual) adherence is currently unknown. ZOL is of proven efficacy, with hip fracture reduction of 41% and morphometric vertebral fracture reduction of 70% over 3 years in the HORIZON PFT trial. It has demonstrated a good side-effect profile with postinfusion flu-like symptoms being the most common. Additionally, it has been associated with decreased mortality in patients following surgery for hip fracture. There is no clear association between exposure and the rate of serious or nonserious atrial fibrillation. We review adherence to oral bisphosphonates, and the pharmacokinetics, efficacy, safety, and patient preference for ZOL.",
"corpus_id": 3525238,
"title": "Treatment of postmenopausal osteoporosis, patient perspectives – focus on once yearly zoledronic acid"
} | {
"abstract": "Intravenous (IV) administration of bisphosphonates has been considered an absolute contraindication for placement of dental implants, because of the increased risk of bisphosphonate-related osteonecrosis of the jaw (BRONJ). However, the evidence regarding this association originates from patients being treated for various forms of metastatic cancer. In the case reported here, a patient received a dental implant while undergoing IV treatment with zoledronic acid for osteoporosis. The authors discuss the current evidence regarding the risks of dental procedures in patients receiving IV bisphosphonates for this indication. They also evaluate important risk factors and the decision-making pathway in such cases. On the basis of existing evidence, receipt of a single IV infusion of zoledronic acid for the treatment of osteoporosis does not appear to be an absolute contraindication to implant placement.",
"corpus_id": 4989479,
"score": -1,
"title": "Dental implant placement with bone augmentation in a patient who received intravenous bisphosphonate treatment for osteoporosis."
} |
{
"abstract": "To achieve satisfying user experiences of diverse applications, quality of service (QoS) guaranteed mechanisms such as per-flow queuing are required in routers. However, deployment of per-flow queuing in high-speed routers is considered as a great challenge since its industrial brute-force implementation is not scalable with the increase of the number of flows. In this study, the authors propose a dynamic queue sharing (DQS) mechanism to enable scalable per-flow queuing. DQS keeps isolation of each concurrent active flow by sharing a small number of queues instead of maintaining a dedicated queue for each in-progress flow, which is novel compared to the existing methods. According to DQS, a physical queue is created and assigned to an active flow upon the arrival of its first packet, and is destroyed upon the departure of the last packet in the queue. The authors combine hash method with binary sorting tree to construct and manage the dynamic mapping between active flows and physical queues, which significantly reduces the number of required physical queues from millions to hundreds and makes per-flow queuing feasible for high-performance routers.",
"corpus_id": 7453319,
"title": "Dynamic queuing sharing mechanism for per-flow quality of service control"
} | {
"abstract": "Per-flow queuing is believed to be able to guarantee advanced Quality of Service (QoS) for each flow. With the dramatic increase of link speed and number of traffic flows, per-flow queuing faces a great challenge since millions of queues need to be maintained for implementation in a traditional sense. In this paper, by setting only a small number of physical queues, we propose a Dynamic Queue Sharing (DQS) mechanism to achieve an equal performance to the pure per-flow queuing with a lower cost. The proposed mechanism is based on an interesting fact that the number of simultaneous active flows in the router buffer is far less than that of in-progress flows. In DQS, a physical queue is dynamically created on-demand when a new flow comes and then dynamically released when the flow temporarily pauses. Hashing and binary sorting tree (or linked list) are combined to manage the mapping between flows and queues, so as to isolate flows in different queues. Theoretical analysis and traces experiments are conducted to evaluate DQS. The results demonstrate that when the parameters are well set, the operation delay is less than two time cycles in average with an extra memory of 16k bits.",
"corpus_id": 863906,
"title": "Per-Flow Queueing by Dynamic Queue Sharing"
} | {
"abstract": "A sequence of codes with particular symmetries and with large rates compared to their minimal distances is constructed over the field GF (2^{3}) . In the sequence there is, for instance, a code of length 21 and dimension 10 with minimal distance 9 , and a code of length 21 and dimension 16 with minimal distance 3 . The codes are constructed from algebraic geometry using the dictionary between coding theory and algebraic curves over finite fields established by Goppa. The curve used in the present work is the Klein quartic. This curve has the maximal number of rational points over GF (2^{3}) allowed by Serre's improvement of the Hasse-Weil bound, which, together with the low genus, accounts for the good code parameters. The Klein quartic has the Frobenius group G of order 21 acting as a group of automorphisms which accounts for the particular symmetries of the codes. In fact, the codes are given alternative descriptions as left ideals in the group-algebra GF (2^{3})[G] . This description allows for easy decoding. For instance, in the case of the single error correcting code of length 21 and dimension 16 with minimal distance 3 . decoding is obtained by multiplication with an idempotent in the group algebra.",
"corpus_id": 42978773,
"score": -1,
"title": "Codes on the Klein quartic, ideals, and decoding"
} |
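The queue-sharing idea in this row is straightforward to make concrete: a physical queue is bound to a flow on its first packet and returned to a free pool when it drains. A toy sketch follows, with a plain dict standing in for the hash-plus-binary-sorting-tree structure the papers actually use to manage the mapping.

```python
from collections import deque

class DynamicQueueSharing:
    """Toy model of DQS: a small pool of physical queues shared by
    the currently active flows, created on demand and released when
    drained. Real DQS manages the flow-to-queue mapping with hashing
    plus a binary sorting tree; a dict stands in for that here."""

    def __init__(self, num_queues=256):
        self.free = deque(range(num_queues))   # free physical queue ids
        self.flow_to_q = {}                    # active flow id -> queue id
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, flow_id, packet):
        q = self.flow_to_q.get(flow_id)
        if q is None:                          # first packet of the flow:
            if not self.free:
                raise RuntimeError("no free physical queue")
            q = self.free.popleft()            # create the queue on demand
            self.flow_to_q[flow_id] = q
        self.queues[q].append(packet)

    def dequeue(self, flow_id):
        q = self.flow_to_q[flow_id]
        packet = self.queues[q].popleft()
        if not self.queues[q]:                 # last packet departed:
            del self.flow_to_q[flow_id]        # release the queue
            self.free.append(q)
        return packet
```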
{
"abstract": "The problem of maximizing quality of service (QoS) of real-time systems subject to both schedulability and energy constraints is addressed. A discrete system consisting of tasks with multiple operating modes, and which can be executed by the processor at different frequency/voltage levels, is considered. Although the system reconfiguration scheme assumes the earliest deadline first (EDF) policy and soft real-time tasks, it can be extended to other scheduling policies and handle hard real-time tasks. The described solution is suitable for adaptive real-time embedded systems which require energy savings associated to QoS. Despite being NP-Hard, the reconfiguration problem can be solved with a mixed-integer linear-programing solver sufficiently fast.",
"corpus_id": 1959571,
"title": "A Model for Reconfiguration of Multi-Modal Real-Time Systems under Energy Constraints"
} | {
"abstract": "The slack time in real-time systems can be used by recovery schemes to increase system reliability as well as by frequency and voltage scaling techniques to save energy. Moreover, the rate of transient faults (i.e., soft errors caused, for example, by cosmic ray radiations) also depends on system operating frequency and supply voltage. Thus, there is an interesting trade-off between system reliability and energy consumption. This work first investigates the effects of frequency and voltage scaling on the fault rate and proposes two fault rate models based on previously published data. Then, the effects of energy management on reliability are studied. Our analysis results show that, energy management through frequency and voltage scaling could dramatically reduce system reliability, and ignoring the effects of energy management on the fault rate is too optimistic and may lead to unsatisfied system reliability.",
"corpus_id": 5099115,
"title": "The effects of energy management on reliability in real-time embedded systems"
} | {
"abstract": "Since the introduction of fibre-reinforced polymer composites there has been a surge in the use of adhesives for joints and repairs, and polymer resins as the matrix material for fibre-reinforced composites. The failure mechanisms of these materials have been studied by many researchers; however, there is little accurate experimental data under tension-tension loading published. This is due to the lack of a standard specimen design to perform these tests. The authors propose a flat plate specimen design that has been shown to overcome some of the difficulties presented in literature. A series of tests with this specimen configuration were conducted under varying biaxial loading conditions. The results are plotted in the tension-tension quadrant of the materials failure envelope. For the two polymer materials tested; a linear truncation within the first quadrant of the failure envelope was found.",
"corpus_id": 139871492,
"score": -1,
"title": "Design of a Flat Plate Specimen Suitable for Biaxial Tensile Tests on Polymer Materials"
} |
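The reconfiguration model in the query abstract assumes EDF scheduling, so the schedulability side of the problem reduces to the classical EDF utilization bound, with execution times stretched by the chosen frequency. A minimal sketch of that constraint with a hypothetical task set (the MILP formulation itself is not reproduced here):

```python
def edf_schedulable(tasks, freq_ratio):
    """Classical EDF utilization test under frequency scaling.

    tasks is a list of (C, T) pairs: worst-case execution time C at
    maximum frequency and period T. Running at freq_ratio * f_max
    stretches each C by 1/freq_ratio. The task set is schedulable
    under EDF iff total utilization does not exceed 1.
    """
    utilization = sum(c / freq_ratio / t for c, t in tasks)
    return utilization <= 1.0

tasks = [(1.0, 5.0), (2.0, 10.0), (3.0, 20.0)]  # hypothetical (C, T) pairs
print(edf_schedulable(tasks, 1.0))   # True:  U = 0.55
print(edf_schedulable(tasks, 0.5))   # False: U = 1.10
```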
{
"abstract": "Video anomaly detection (VAD) mainly refers to identifying anomalous events that have not occurred in the training set where only normal samples are available. Existing works usually formulate VAD as a reconstruction or prediction problem. However, the adaptability and scalability of these methods are limited. In this paper, we propose a novel distance-based VAD method to take advantage of all the available normal data efficiently and flexibly. In our method, the smaller the distance between a testing sample and normal samples, the higher the probability that the testing sample is normal. Specifically, we propose to use locality-sensitive hashing (LSH) to map the samples whose similarity exceeds a certain threshold into the same bucket in advance. To utilize multiple hashes and further alleviate the computation and memory usage, we propose to use the hash codes rather than the features as the representations of the samples. In this manner, the complexity of near neighbor search is cut down significantly. To make the samples that are semantically similar get closer and those not similar get further apart, we propose a novel learnable version of LSH that embeds LSH into a neural network and optimizes the hash functions with contrastive learning strategy. The proposed method is robust to data imbalance and can handle the large intra-class variations in normal data flexibly. Besides, it has a good ability of scalability. Extensive experiments demonstrate the superiority of our method, which achieves new state-of-the-art results on VAD benchmarks.",
"corpus_id": 244116838,
"title": "Learnable Locality-Sensitive Hashing for Video Anomaly Detection"
} | {
"abstract": "Locality sensitive hashing (LSH) is a computationally efficient alternative to the distance based anomaly detection. The main advantages of LSH lie in constant detection time, low memory requirement, and simple implementation. However, since the metric of distance in LSHs does not consider the property of normal training data, a naive use of existing LSHs would not perform well. In this paper, we propose a new hashing scheme so that hash functions are selected dependently on the properties of the normal training data for reliable anomaly detection. The distance metric of the proposed method, called NSH (Normality Sensitive Hashing) is theoretically interpreted in terms of the region of normal training data and its effectiveness is demonstrated through experiments on real-world data. Our results are favorably comparable to state-of-the arts with the low-level features.",
"corpus_id": 14127832,
"title": "NSH: Normality Sensitive Hashing for Anomaly Detection"
} | {
"abstract": "Detecting outliers in a large set of data objects is a major data mining task aiming at finding different mechanisms responsible for different groups of objects in a data set. All existing approaches, however, are based on an assessment of distances (sometimes indirectly by assuming certain distributions) in the full-dimensional Euclidean data space. In high-dimensional data, these approaches are bound to deteriorate due to the notorious \"curse of dimensionality\". In this paper, we propose a novel approach named ABOD (Angle-Based Outlier Detection) and some variants assessing the variance in the angles between the difference vectors of a point to the other points. This way, the effects of the \"curse of dimensionality\" are alleviated compared to purely distance-based approaches. A main advantage of our new approach is that our method does not rely on any parameter selection influencing the quality of the achieved ranking. In a thorough experimental evaluation, we compare ABOD to the well-established distance-based method LOF for various artificial and a real world data set and show ABOD to perform especially well on high-dimensional data.",
"corpus_id": 3072058,
"score": -1,
"title": "Angle-based outlier detection in high-dimensional data"
} |
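The shared idea behind the two hashing papers in this row is that buckets populated by normal training data turn anomaly scoring into a cheap lookup: a test sample landing in a sparse bucket has few normal neighbors and is scored as more anomalous. The sketch below uses plain random-hyperplane LSH to show the mechanism; it is neither the learnable LSH nor NSH, both of which adapt the hash functions to the normal data.

```python
import numpy as np
from collections import Counter

class RandomHyperplaneLSH:
    """Bucket-occupancy anomaly scoring with random-hyperplane LSH."""

    def __init__(self, dim, n_bits=16, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.normal(size=(n_bits, dim))  # random hyperplanes
        self.buckets = Counter()                      # bucket -> #normal samples

    def _code(self, x):
        # One bit per hyperplane: which side of it x falls on.
        return tuple((self.planes @ x > 0).astype(int))

    def fit(self, normal_features):
        for x in normal_features:
            self.buckets[self._code(x)] += 1

    def anomaly_score(self, x):
        # Fewer normal samples in the bucket -> higher anomaly score.
        return 1.0 / (1.0 + self.buckets[self._code(x)])

rng = np.random.default_rng(1)
lsh = RandomHyperplaneLSH(dim=128)
lsh.fit(rng.normal(size=(1000, 128)))      # "normal" training features
print(lsh.anomaly_score(rng.normal(size=128)))
```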
{
"abstract": "Abstract Background We describe five new species in the genus Vibrissina Rondani from Area de Conservación Guanacaste (ACG). All species were reared from wild-caught sawfly larvae (Hymenoptera: Symphyta: Argidae and Tenthredinidae). We provide a morphological description of each species together with information on life history, molecular data, and photographic documentation. New information Five new species of Vibrissina Rondani: Vibrissina randycurtisi sp. n., V. randyjonesi sp. n., V. robertwellsi sp. n., V. danmartini sp. n., V. hallwachsorum sp. n.",
"corpus_id": 19214955,
"title": "Five new species of Vibrissina Rondani (Diptera: Tachinidae) from Area de Conservación Guanacaste in Northwestern Costa Rica"
} | {
"abstract": "Abstract Nine new species of Itaplectops Townsend (Diptera: Tachinidae) are described from Area de Conservación Guanacaste (ACG), northwestern Costa Rica. All specimens have been reared from various species of ACG caterpillars in the families Limacodidae and Dalceridae. By combining morphological, photographic, and genetic barcode data we provide clear yet concise descriptions. The following nine new species are described in the genus Itaplectops: Itaplectops akselpalolai, Itaplectops anikenpalolae, Itaplectops argentifrons, Itaplectops aurifrons, Itaplectops ericpalolai, Itaplectops griseobasis, Itaplectops omissus, Itaplectops shellymcsweeneyae, Itaplectops tristanpalolai. We move Itaplectops to the tribe Uramyini from its original placement within the Blondeliini, and we discuss its systematic placement. We also provide a key differentiating the, genera of the tribe Uramyini as well as the known species of Itaplectops.",
"corpus_id": 268853,
"title": "Nine new species of Itaplectops (Diptera: Tachinidae) reared from caterpillars in Area de Conservación Guanacaste, northwestern Costa Rica, with a key to Itaplectops species"
} | {
"abstract": "The Saccharomyces cerevisiae genome contains 16 genes encoding full-size ABC transporters. Each comprises two nucleotide binding folds (NBF) alternating with transmembrane domains (TM). We have studied in detail three plasma membrane multidrug exporters: Pdr5p (TC3.A.1.205.1) and Snq2p (TC3.A.1.205.2) which share NBF-TM-NBF-TM topology as well as Yor1p (TC3.A.1.208.3) which exhibits the reciprocal TM-NBF-TM-NBF topology. The substrate specificity of Pdr5p, Snq2p and Yor1p are largely, but not totally, overlapping as shown by screening the growth inhibition by 349 toxic compounds of combinatorial deletants of these three ABC genes. Multiple deletion of 7 ABC genes (YOR1, SNQ2, PDR5, YCF1, PDR10, PDR11 and PDR15) and of two transcription activation factors (PDR1 and PDR3) renders the cell from 2 to 200 times more sensitive to numerous toxic coumpounds including antifungals used in agriculture or medicine. The use of the pdr1-3 activating mutation and when necessary of the PDR5 promoter in appropriate multideleted hosts allow high levels of expression of Pdr5p, Snq2p or Yor1 p. These overexpressed proteins exhibit ATPase activity in vitro and confer considerable multiple drug resistance in vivo. The latter property can be used for screening specific inhibitors of fungal and other ABC transporters.",
"corpus_id": 32670757,
"score": -1,
"title": "The pleitropic drug ABC transporters from Saccharomyces cerevisiae."
} |
{
"abstract": "Complex disordered matter is of central importance to a wide range of disciplines, from bacterial colonies and embryonic tissues in biology to foams and granular media in materials science to stellar configurations in astrophysics. Because of the vast differences in composition and scale, comparing structural features across such disparate systems remains challenging. Here, by using the statistical properties of Delaunay tessellations, we introduce a mathematical framework for measuring topological distances between general three-dimensional point clouds. The resulting system-agnostic metric reveals subtle structural differences between bacterial biofilms as well as between zebrafish brain regions, and it recovers temporal ordering of embryonic development. We apply the metric to construct a universal topological atlas encompassing bacterial biofilms, snowflake yeast, plant shoots, zebrafish brain matter, organoids, and embryonic tissues as well as foams, colloidal packings, glassy materials, and stellar configurations. Living systems localize within a bounded island-like region of the atlas, reflecting that biological growth mechanisms result in characteristic topological properties.",
"corpus_id": 252070444,
"title": "Topological packing statistics of living and nonliving matter"
} | {
"abstract": "Optimal transportation distances are valuable for comparing and analyzing probability distributions, but larger-scale computational techniques for the theoretically favorable quadratic case are limited to smooth domains or regularized approximations. Motivated by fluid flow-based transportation on $\\mathbb{R}^n$, however, this paper introduces an alternative definition of optimal transportation between distributions over graph vertices. This new distance still satisfies the triangle inequality but has better scaling and a connection to continuous theories of transportation. It is constructed by adapting a Riemannian structure over probability distributions to the graph case, providing transportation distances as shortest-paths in probability space. After defining and analyzing theoretical properties of our new distance, we provide a time discretization as well as experiments verifying its effectiveness.",
"corpus_id": 15654568,
"title": "Continuous-Flow Graph Transportation Distances"
} | {
"abstract": "Optimal Transport has recently gained interest in machine learning for applications ranging from domain adaptation, sentence similarities to deep learning. Yet, its ability to capture frequently occurring structure beyond the \"ground metric\" is limited. In this work, we develop a nonlinear generalization of (discrete) optimal transport that is able to reflect much additional structure. We demonstrate how to leverage the geometry of this new model for fast algorithms, and explore connections and properties. Illustrative experiments highlight the benefit of the induced structured couplings for tasks in domain adaptation and natural language processing.",
"corpus_id": 4866863,
"score": -1,
"title": "Structured Optimal Transport"
} |
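The topological atlas in the query abstract is built from statistics of Delaunay tessellations of 3D point clouds. A minimal sketch of one such local statistic, the per-point Delaunay neighbor count, using scipy; the paper's actual metric aggregates richer tessellation information than this.

```python
import numpy as np
from scipy.spatial import Delaunay

def neighbor_counts(points):
    """Number of Delaunay neighbors of each point in a 3D cloud."""
    tri = Delaunay(points)
    neighbors = [set() for _ in range(len(points))]
    for simplex in tri.simplices:     # each simplex is a tetrahedron
        for i in simplex:
            neighbors[i].update(simplex)
    return np.array([len(s) - 1 for s in neighbors])  # exclude the point itself

rng = np.random.default_rng(0)
counts = neighbor_counts(rng.random((500, 3)))
print(counts.mean())   # ~15 in the bulk for Poisson clouds; lower near boundaries
```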