query: dict
pos: dict
neg: dict
{ "abstract": "Background: Approximately 75% of breast cancer (BC) is associated with luminal differentiation expressing endocrine receptors (ER). For ER+ HER2− tumors, adjuvant endocrine therapy (ET) is the cornerstone treatment. Although relapse events steadily continue, the ET benefits translate to dramatically lengthen life expectancy with bearable side-effects. This review of ER+ HER2− female BC outlines suitable adjuvant treatment strategies to help guide clinical decision making around appropriate therapy. Methods: A literature search was conducted in Embase, Medline, and the Cochrane Libraries, using ER+ HER−, ET BC keywords. Results: In low-risk patients: five years of ET is the standard option. While Tamoxifen remains the preferred selection for premenopausal women, AI is the choice for postmenopausal patients. In the high-risk category: ET plus/minus OFS with two years of Abemaciclib is recommended. Although extended ET for a total of ten years is an alternative, the optimal AI duration is undetermined; nevertheless an additional two to three years beyond the initial five years may be sufficient. In this postmenopausal group, bisphosphonate is endorsed. Conclusions: Classifying the risk category assists in deciding the treatment route and its optimal duration. Tailoring the breadth of ET hinges on a wide array of factors to be appraised for each individualized case, including weighing its benefits and harms.", "corpus_id": 250566468, "title": "Appraising Adjuvant Endocrine Therapy in Hormone Receptor Positive HER2-Negative Breast Cancer—A Literature Review" }
{ "abstract": "Background\nA number of randomized controlled trials (RCTs) have reported improvement in breast cancer outcomes from extending treatment with aromatase inhibitors (AIs) beyond the initial five years after diagnosis. However, the toxicity profile of extended AIs is uncertain.\n\n\nMethods\nWe identified RCTs that compared extended AIs to placebo or no treatment using MEDLINE and a review of abstracts from key conferences between 2013 and 2016. Odds ratios (ORs), 95% confidence intervals (CIs), absolute risks, and the number needed to harm (NNH) were computed for prespecified safety and tolerability outcomes including cardiovascular events, bone fractures, second cancers (excluding new breast cancer), treatment discontinuation for adverse events, and death without recurrence. All statistical tests were two-sided.\n\n\nResults\nSeven trials comprising 16 349 patients met the inclusion criteria. Longer treatment with AIs was associated with increased odds of cardiovascular events (OR = 1.18, 95% CI = 1.00 to 1.40, P = .05, NNH = 122), bone fractures (OR = 1.34, 95% CI = 1.16 to 1.55, P < .001, NNH = 72), and treatment discontinuation for adverse events (OR = 1.45, 95% CI = 1.25 to 1.68, P < .001, NNH = 21). Longer treatment with AIs did not influence the odds of either second malignancy (OR = 0.93, 95% CI = 0.73 to 1.18, P = .56) or deaths without breast cancer recurrence (OR = 1.11, 95% CI = 0.90 to 1.36, P = .34).\n\n\nConclusions\nExtended treatment with AIs is associated with an increased risk of cardiovascular events and bone fractures. There is no statistically significant increase in deaths without breast cancer recurrence among patients receiving longer treatment with AIs. These data should be taken into account when considering extended adjuvant AIs.", "corpus_id": 3797922, "title": "Toxicity of Extended Adjuvant Therapy With Aromatase Inhibitors in Early Breast Cancer: A Systematic Review and Meta-analysis" }
{ "abstract": "PURPOSE\nThe efficacy and tolerability of anastrozole (Arimidex; AstraZeneca, Wilmington, DE, and Macclesfield, United Kingdom) and tamoxifen were compared as first-line therapy for advanced breast cancer in 353 postmenopausal women.\n\n\nPATIENTS AND METHODS\nThe randomized, double-blind, multicenter study was designed to evaluate anastrozole 1 mg once daily relative to tamoxifen 20 mg once daily in patients with hormone receptor-positive tumors or tumors of unknown receptor status who were eligible for endocrine therapy. Primary end points were objective response (OR), defined as complete (CR) or partial (PR) response, time to progression (TTP), and tolerability.\n\n\nRESULTS\nAnastrozole was as effective as tamoxifen in terms of OR (21% v 17% of patients, respectively), with clinical benefit (CR + PR + stabilization > or = 24 weeks) observed in 59% of patients on anastrozole and 46% on tamoxifen (two-sided P =.0098, retrospective analysis). Anastrozole had a significant advantage over tamoxifen in terms of TTP (median TTP of 11.1 and 5.6 months for anastrozole and tamoxifen, respectively; two-sided P =.005). The tamoxifen:anastrozole hazards ratio was 1.44 (lower one-sided 95% confidence limit, 1.16). Both treatments were well tolerated. However, thromboembolic events and vaginal bleeding were reported in fewer patients who received anastrozole compared with those who received tamoxifen (4.1% v 8.2% [thromboembolic events] and 1.2% v 3.8% [vaginal bleeding], respectively).\n\n\nCONCLUSION\nAnastrozole satisfied the predefined criteria for equivalence to tamoxifen. Furthermore, we observed both a significant increase in TTP and a lower incidence of thromboembolic events and vaginal bleeding with anastrozole. 
These findings indicate that anastrozole should be considered as first-line therapy for postmenopausal women with advanced breast cancer.", "corpus_id": 30262183, "score": -1, "title": "Anastrozole is superior to tamoxifen as first-line therapy for advanced breast cancer in postmenopausal women: results of a North American multicenter randomized trial. Arimidex Study Group." }
{ "abstract": "Information on the effect of chemotherapy in a group of patients with poor prognosis, poor performance status small cell lung carcinoma (SCLC) is scarce. A randomized study comparing single‐agent carboplatin with combination chemotherapy in this largely unreported population of SCLC patients was undertaken.", "corpus_id": 41368774, "title": "Randomized phase II study of cyclophosphamide, doxorubicin, and vincristine compared with single‐agent carboplatin in patients with poor prognosis small cell lung carcinoma" }
{ "abstract": "We report the results of a randomised trial in extensive small-cell lung cancer (SCLC) of a novel approach to palliative chemotherapy. A widely used 3 weekly regimen was compared with the same drugs given at half the dose but twice the frequency with the same intended overall dose intensity (DI). A total of 167 patients defined as having extensive SCLC with adverse prognostic features were randomised to receive either a 3 weekly regimen of cisplatin 60 mg m-2 i.v. on day 1 and etoposide 120 mg m-2 i.v. on day 1 and 100 mg b.d. orally on days 2 and 3 alternating with cyclophosphamide 600 mg m-2 i.v., doxorubicin 50 mg m-2 i.v. and vincristine 2 mg i.v. all on day 1 for a maximum of six courses (3 weekly); or treatment with the same drugs but with each course consisting of half the 3 weekly dose given every 10 or 11 days for a maximum of 12 courses. In the 10/11 day regimen overall response rate was 58.9% (95% CI, 47.9-69.2%) with 12.8% complete responses (CR). For the 3 weekly treatment the overall response rate was 44.9% (95% CI, 35.0-55.5%) with 10.1% CR. Median survival was similar in the two arms at 6.4 months (95% CI, 4.9-7.3 months) and 5.8 months (95% CI, 4.0-6.6 months) respectively. Survival at 1 year was 9.9% (95% CI, 5.0-18.5%) and 8.9% (95% CI, 4.6-16.6%). The 95% CI for the difference in survival at 1 year is -7.09% to +9.09%. Haematological toxicity and treatment delays owing to infection were more frequent with the 10/11 day regimen but other toxicities were equal in both arms. Other aspects of quality of life were measured in a small representative cohort of patients using a daily diary card (DDC). There was a trend of improved quality of life on the 10/11 day arm, but there was little difference between the two treatments. The trial shows that a low-dose/high-frequency regimen with the same DI as conventionally scheduled chemotherapy gives similar response rates and survival. 
This and other modifications of the schedule may offer new approaches to palliative treatment of advanced cancer. However, in this trial there was no significant benefit in toxicity or other aspects of quality of life.", "corpus_id": 62016, "title": "A randomised trial of low-dose/high-frequency chemotherapy as palliative treatment of poor-prognosis small-cell lung cancer: a Cancer research Campaign trial." }
{ "abstract": "The pattern of Lichen Planus seen among 95 Nigerians seen over a 3 year period is described. They constituted 5% of all skin cases, with females slightly more affected than males. A younger age group is predominantly affected. In most patients (68%), the lesions are widespread all over the body. Large papules, scaly patches and hypertrophic verrucous lesions in the legs are frequent findings. Patients having lesions in the mouth are few. Seasonal variations of the disease do occur, the peak being during the rainy season, April-September (65%). Pruritus is a constant feature. Lesions do heal with marked residual hyperpigmentation. There is no report so far of the \"subtropical\" (actinic) Lichen Planus in West Africa. It is suggested that drugs, weather and the hard native sponge may play a role in the causation of the disease in tropical Africa.", "corpus_id": 77213706, "score": -1, "title": "Lichen planus in tropical Africa." }
{ "abstract": "ABSTRACT The Perceived Ethnic Discrimination Questionnaire-Community Version Brief (PEDQ-CVB) is a widely used brief multidimensional measure of general racial discrimination for both students and community populations. We evaluated the factor structure and measurement equivalency of the PEDQ-CVB across diverse racial/ethnic and gender groups. The groups in the current study were Black (N = 306), Asian (N = 310), Latinx (N = 163), multiracial (N = 108), women (N = 555), and men (N = 372). Confirmatory factor analysis (CFA) and test of competing models suggested that the four-factor and bifactor (with four specific factors and one general factor) models were best fitting and most conceptually meaningful. Based on the bifactor model, the PEDQ-CVB could be represented unidimensionally (total scale score) for applied measurement. Multi-group CFAs found evidence of measurement invariance for configural, metric, and scalar models across racial/ethnic and gender groups, suggesting that men and women, and individuals self-identifying as Black, Asian, Latinx and multiracial, interpreted PEDQ-CVB items in a similar fashion. Our findings substantiate the utility of the PEDQ-CVB as a brief general measure of racial/ethnic discrimination and the validity of results from prior studies that used the PEDQ-CVB. Study limitations and future directions for research are discussed.", "corpus_id": 148946816, "title": "Factor structure and measurement invariance of the Perceived Ethnic Discrimination Questionnaire-Community Version Brief" }
{ "abstract": "Empirical studies examining perceived ethnic discrimination in Latinos of diverse background groups are limited. This study examined prevalence and correlates of discrimination in a diverse sample of U.S. Latinos (N=5,291) from the multi-site Hispanic Community Health Study/Study of Latinos (HCHS/SOL) and HCHS/SOL Sociocultural Ancillary Study. The sample permitted an examination of differences across seven groups (Central American, Cuban, Dominican, Mexican, Puerto Rican, South American, and Other/Multiple Background). Most participants (79.5%) reported lifetime discrimination exposure and prevalence rates ranged from 64.9% to 98% across groups. Structural Equation Models (SEM) indicated that after adjusting for sociodemographic covariates most group differences in reports of discrimination were eliminated. However, Cubans reported the lowest levels of discrimination, overall among all groups. Furthermore, regional effects were more important than group effects. Participants from Chicago reported the highest levels of discrimination in comparison to other regions. Group differences among Latinos appear to be primarily a function of sociodemographic differences in education, income, and acculturation. In addition, differences in exposure to discrimination may be tied to variables associated with both immigration patterns and integration to U.S. culture. Results highlight the importance of considering historical context and the intersection of discrimination and immigration when evaluating the mental health of Latinos.", "corpus_id": 3644610, "title": "Prevalence and Correlates of Perceived Ethnic Discrimination in the Hispanic Community Health Study/Study of Latinos Sociocultural Ancillary Study." }
{ "abstract": "ABSTRACT Although parental distress and child distress have been linked in families of children with cancer, how these associations change over time is unknown. The present study examined how the amount of time elapsed since the child’s diagnosis moderates the associations between self-reported parent and child symptoms of depression, anxiety, and post-traumatic stress in 255 parent-child dyads. Time since diagnosis moderated the associations between parental symptoms and child-reported anxiety and post-traumatic stress. Dyads farther out from diagnosis exhibited stronger associations between parental and child symptoms. Findings suggest the importance of monitoring the psychological adjustment of parents and children over time.", "corpus_id": 88482, "score": -1, "title": "Effects of time since diagnosis on the association between parent and child distress in families with pediatric cancer" }
{ "abstract": "Cement-based materials (CBMs) such as pastes, mortars and concretes are the most frequently used building materials in the present construction industry. Cement hydration, along with the resulting compressive strength in these materials, is dependent on curing temperature, methods and duration. A concrete subjected to an initial higher curing temperature undergoes accelerated hydration by resulting in non-uniform scattering of the hydration products and consequently creating a great porosity at later ages. This phenomenon is called crossover effect (COE). The COE may occur even at early ages between seven to 10 days for Portland cements with various mineral compositions. Compressive strength and other mechanical properties are important for the long life of concrete structures, so any reduction in these properties is of great concern to engineers. This study aims to review existing information on COE phenomenon in CBMs and provide recommendations for future research.", "corpus_id": 199088366, "title": "Crossover Effect in Cement-Based Materials: A Review" }
{ "abstract": "This paper investigates the effect of curing temperature on the hydration, microstructure, compressive strength, and transport of cement pastes modified with TiO2 nanoparticles. These characteristics of cement pastes were studied using non-evaporable water content measurement, X-ray diffraction (XRD), compressive strength test, electrical resistivity and porosity measurements, and scanning electron microscopy (SEM). It was shown that temperature enhanced the early hydration. The cement pastes cured at elevated temperatures generally showed an increase in compressive strength at an early age compared to the cement paste cured at room temperature, but the strength gain decreased at later ages. The electrical resistivity of the cement pastes cured at elevated temperatures was found to decrease more noticeably at late ages compared to that of the room temperature cured cement paste. SEM examination indicated that hydration product was more uniformly distributed in the microstructure of the cement paste cured at room temperature compared to the cement pastes cured at elevated temperatures. It was observed that high temperature curing decreased the compressive strength and electrical resistivity of the cement pastes at late ages in a more pronounced manner when higher levels of TiO2 nanoparticles were added.", "corpus_id": 1804131, "title": "The Effect of Curing Temperature on the Properties of Cement Pastes Modified with TiO2 Nanoparticles" }
{ "abstract": "Abstract In the present study, an electromagnetic (EM) wave absorber was fabricated with a multi-walled carbon nanotube (MWNT)-incorporated cement composite and the absorbing capability of the absorber was assessed. To disperse MWNTs in a cement matrix, composites were fabricated under a low flow condition of the fresh mixture, and silica fume (SF) was added to explore the influence of SF addition on MWNT distribution. The electrical conductivity of the composite was evaluated to examine the MWNT distribution and the complex permittivity was determined to study the EM characteristics of the composite. The conductivity results demonstrated that SF addition of 10 wt% led to the greatest enhancement. Meanwhile, the absorber was designed on the basis of complex permittivity at a frequency point of 9.4 GHz, and SF0-M1.0 type (no SF addition and MWNT content of 1.0 wt%) and SF10-M0.6 type (SF content of 10 wt% and MWNT content of 0.6 wt%) were employed. The experimental assessment of the absorbing capability demonstrated that the −10 dB bandwidths of SF0-M1.0 and SF10-M0.6 type absorbers were 2.5 GHz and 3.2 GHz, respectively. In addition, the absorbing capability derived from the experimental work was compared and validated by means of computational simulation work.", "corpus_id": 139788544, "score": -1, "title": "Fabrication and design of electromagnetic wave absorber composed of carbon nanotube-incorporated cement composites" }
{ "abstract": "Control planes in future mobile core networks face two new challenges. First, they must scale to process the growing control traffic generated by an ever increasing number of mobile devices. Second, they must be flexible and evolvable to support the range of emerging service abstractions and to realize customized network slices to meet the broad range of requirements of these networks. To address these challenges, we propose MobileStream, a scalable, programmable, and evolvable mobile core control plane platform. MobileStream provides a set of refactored basic building blocks, functionally decomposed from existing monolithic control plane components. It leverages realtime streaming frameworks to assemble, execute, and scale these blocks as streaming control plane applications. Moreover, it allows users to add their own functions to customize and optimize streaming control plane applications. We present several streaming control plane applications to showcase the flexibility and generality of MobileStream. We describe our extensive functional testing, with a variety of mobile devices and base stations, to validate the MobileStream prototype, and present the results of large-scale experiments demonstrating its scalability.", "corpus_id": 53443690, "title": "MobileStream: a scalable, programmable and evolvable mobile core control plane platform" }
{ "abstract": "Network functions virtualization (NFV) -- deploying network functions in software on commodity machines -- allows operators to employ rich chains of NFs to realize custom performance, security, and compliance policies, and ensure high performance by dynamically adding instances and/or failing over. Because NFs are stateful, it is important to carefully manage their state, especially during such dynamic actions. Crucially, state management must: (1) offer good performance to match the needs of modern networks; (2) ensure NF chain-wide properties; and (3) not require the operator to manage low-level state management details. We present StreamNF, an NFV framework that satisfies the above requirements. To do so, StreamNF leverages an external state store with novel caching strategies and offloading of state operations, and chain-level logical packet clocks and packet logging/replay. Extensive evaluation of a StreamNF prototype built atop Apache Storm shows that the significant benefits of StreamNF in terms of state management performance and chain-wide properties come at a modest per-packet latency cost.", "corpus_id": 8972695, "title": "StreamNF: Performance and Correctness for Stateful Chained NFs" }
{ "abstract": "High-performance network packet processing benefits greatly from parallel-programming accelerators such as Graphics Processing Units (GPUs). Intel Xeon Phi, a relative newcomer in this market, is a distinguishing platform because its x86-compatible vectorized architecture offers additional optimization opportunities. Its software stack exposes low-level communication primitives, enabling fine-grained control and optimization of offloading processes. Nonetheless, our microbenchmarks show that offloading APIs for Xeon Phi comes in short for combining low latency and high throughput for both I/O and computation. In this work, we exploit Xeon Phi's low-level threading mechanisms to design a new offloading framework, Knapp, and evaluate it using simplified IP routing applications. Knapp lays the ground for full exploitation of Xeon Phi as a packet processing framework.", "corpus_id": 6378913, "score": -1, "title": "Knapp: A Packet Processing Framework for Manycore Accelerators" }
{ "abstract": "This paper studies statistics of riffle shuffles by relating them to random word statistics with the use of inverse shuffles. Asymptotic normality of the number of descents and inversions in riffle shuffles with convergence rates of order $1/\\sqrt{n}$ in the Kolmogorov distance are proven. Results are also given about the lengths of the longest alternating subsequences of random permutations resulting from riffle shuffles. A sketch of how the theory of multisets can be useful for statistics of a variation of top $m$ to random shuffles is presented.", "corpus_id": 119161176, "title": "Descent-inversion statistics in riffle shuffles" }
{ "abstract": "Let asn denote the length of a longest alternating subsequence in a uniformly random permutation of order n. Stanley studied the distribution of asn using algebraic methods, and showed in particular that E(asn) = (4n+1)=6 and Var(asn) = (32n 13)=180. From Stanley's result it can be shown that after rescaling, asn converges in the limit to the Gaussian distribution. In this extended abstract we present a new approach to the study of asn by relating it to the sequence of local extrema of a random permutation, which is shown to form a \"canonical\" longest alternating subsequence. Using this connection we reprove the abovementioned results in a more probabilistic and transparent way. We also study the distribution of the values of the local minima and maxima, and prove that in the limit the joint distribution of successive minimum-maximum pairs converges to the two-dimensional distribution whose density function is given by f(s;t) = 3(1 s)te t s . R´", "corpus_id": 27339, "title": "Local extrema in random permutations and the structure of longest alternating subsequences" }
{ "abstract": "Recently Richard Stanley initiated a study of the distribution of the length as(w) of the longest alternating subsequence in a random permutation w from the symmetric group $S_n$. Among other things he found an explicit formula for the generating function (on n and k) for the probability that as(w) is at most k and conjectured that the distribution, suitably centered and normalized, tended to a Gaussian with variance 8/45. In this note we present a proof of the conjecture based on the generating function.", "corpus_id": 16086181, "score": -1, "title": "On the Limiting Distribution for the Longest Alternating Sequence in a Random Permutation" }
{ "abstract": "Terrain, representing features of an earth surface, plays a crucial role in many applications such as simulations, hazard prevention and mitigation planning, route planning, analysis of surface dynamics, computer graphics-based games, entertainment, films, to name a few. With recent advancements in digital technology, these applications demand the presence of high-resolution details in the terrain. However, currently available public datasets, providing terrain scans in the form of Digital Elevation Models (DEMs) have low resolution compared with the terrain information available in other modalities like aerial images. Publicly available DEM datasets for most parts of the world have a resolution of 30 m whereas the aerial images or satellite images are available at a resolution of 50 cm. The cost involved in capturing of such high-resolution DEMs (HRDEMs) turns out to be a major hurdle for making such high-resolution available in the public domain. This motivates us to provide a software solution for generating high-resolution DEM from the existing low-resolution DEMs (LRDEMs). In natural image domain, super-resolution has set up higher benchmarks by incorporating deep learning based solutions. Despite such tremendous success in image super-resolution task using deep learning solutions, there are very few works that have used these powerful systems on DEMs to generate HRDEMs. A few of them used additional modalities as aerial images or satellite images, temporal sequence of DEMs etc., to generate high-resolution terrains. However, the applicability of these methods is highly subject to the available input formats. In this research effort, we explore a new direction in DEM super-resolution by using feedback neural networks. 
Availing the capability of feedback neural networks to redefine the features learned by shallow layers of the network, we design DSRFB, a DEM super-resolution architecture that generates high-resolution DEM with a super-resolution factor of 8X with minimal input. Our experiments on Pyrenees and Tyrol mountain range datasets show that DSRFB can perform near to the state-of-the-art without using information from any additional modalities like aerial images. Further, by understanding the limitations of DSRFB, which primarily occur in case of highly degraded low-resolution input. In such cases, the major structures are entirely lost and the reconstruction becomes challenging. In such cases, to avail the elevation cues from alternate sources of information becomes necessary. To utilize such information from other modalities, we inherit the attention mechanism from natural language processing (NLP) domain. We integrate the attention mechanism into the feedback network to present Attentional Feedback Module (AFM). Our proposed network, Attentional Feedback", "corpus_id": 234788890, "title": "Super-resolution of Digital Elevation Models With Deep Learning Solutions" }
{ "abstract": "In this paper, we present a simple and efficient method to represent terrains as elevation functions built from linear combinations of landform features (atoms). These features can be extracted either from real world data-sets or procedural primitives, or from any combination of multiple terrain models. Our approach consists in representing the elevation function as a sparse combination of primitives, a concept which we call Sparse Construction Tree, which blends the different landform features stored in a dictionary. The sparse representation allows us to represent complex terrains using combinations of atoms from a small dictionary, yielding a powerful and compact terrain representation and synthesis tool. Moreover, we present a method for automatically learning the dictionary and generating the Sparse Construction Tree model. We demonstrate the efficiency of our method in several applications: inverse procedural modeling of terrains, terrain amplification and synthesis from a coarse sketch.", "corpus_id": 2982835, "title": "Sparse representation of terrains for procedural modeling" }
{ "abstract": "The images taken in low-light environment always lose some details. Given that traditional multi-scale Retinex (MSR) algorithm always appears halo artifact in the edge area of the image, a multiscale Retinex-like is put forward for low-light video image enhancement. The proposed algorithm is operated in HSI color space. For Intensity layer, three Gaussian filters in traditional MSR algorithm are replaced by three guided filters to extract illumination component, then gets the reflection component, the sum of this two components as the new Intensity layer. For the Saturation layer, Gamma correction function is used to enhance Saturation, then the new HSI image is converted to RGB image. Four kinds of numerical methods are adopted to evaluate the images enhanced by different methods. The experimental results show that the proposed algorithm in terms of low-light image, not only makes the picture becomes clearer, but also extracts more image details. Furthermore the algorithm is more efficient.", "corpus_id": 42535406, "score": -1, "title": "Low-light video image enhancement based on multiscale Retinex-like algorithm" }
{ "abstract": "The present invention refers to a device for depositing a textile sliver in a can, comprising a bent delivery pipe, which is adapted to be rotated about a substantially vertical axis and the mouth of which ends in front of a stationary wall projecting into the cross-section of the mouth of the bent delivery pipe. The task to be solved by the present invention is the task of simplifying a device of the type mentioned at the beginning and treating the sliver with care at the same time. In accordance with the present invention, this task is solved by the features that a pressure-exerting member, which is directed towards the stationary wall and which moves together with the belt delivery pipe, is arranged behind the mouth of the bent delivery pipe when seen in the direction of rotation, the coefficient of friction of the wall being higher than the coefficient of friction of the pressure-exerting member, when measured in the direction of rotation of the bent delivery pipe.", "corpus_id": 117198379, "title": "Discrete Logarithms in Finite Fields" }
{ "abstract": "Given a primitive element g of a finite field GF(q), the discrete logarithm of a nonzero element u ? GF(q) is that integer k, 1 ? k ? q-1, for which u = gk. The well-known problem of computing discrete logarithms in finite fields has acquired additional importance in recent years due to its applicability in cryptography. Several cryptographic systems would become insecure if an efficient discrete logarithm algorithm were discovered. This paper surveys and analyzes known algorithms in this area, with special attention devoted to algorithms for the fields GF(2n). It appears that in order to be safe from attacks using these algorithms, the value of n for which GF(2n) is used in a cryptosystem has to be very large and carefully chosen. Due in large part to recent discoveries, discrete logarithms in fields GF(2n) are much easier to compute than in fields GF(p) with p prime. Hence the fields GF(2n) ought to be avoided in all cryptographic applications. On the other hand, the fields GF(p) with p prime appear to offer relatively high levels of security.", "corpus_id": 1199416, "title": "Discrete Logarithms in Finite Fields and Their Cryptographic Significance" }
{ "abstract": "We present an experimental study of the deformation inside a granular material that is progressively tilted. We investigate the deformation before the avalanche with a spatially resolved diffusive wave spectroscopy setup. At the beginning of the inclination process, we first observe localized and isolated events in the bulk, with a density which decreases with the depth. As the angle of inclination increases, series of microfailures occur periodically in the bulk, and finally a granular avalanche takes place. The microfailures are observed only when the tilt angles are larger than a threshold angle much smaller than the granular avalanche angle. We have characterized the density of reorganizations and the localization of microfailures. We have also explored the effect of the nature of the grains, the relative humidity conditions, and the packing fraction of the sample. We discuss those observations in the framework of the plasticity of granular matter. Microfailures may then be viewed as the result of the accumulation of numerous plastic events.", "corpus_id": 22070012, "score": -1, "title": "Experimental investigation of plastic deformations before a granular avalanche." }
{ "abstract": "Large scale brain models encompassing cortico-cortical, thalamo-cortical and basal ganglia processing are fundamental to understand the brain as an integrated system in healthy and disease conditions but are complex to analyze and interpret. Neuronal processes are typically segmented by region and modality in order to explain an experimental observation at a given scale, but integrative frameworks linking scales and modalities are scarce. Here, we present a set of functional requirements used to evaluate the recently developed large-scale brain model against a learning task involving coordinated learning between cortical and sub-cortical systems. The original Information Based Exchange Brain model (IBEx) is decomposed into functionally relevant subsystems, and each subsystem is analyzed and tuned independently and with regard to its relevant functional requirements. Intermediate conclusions are made for each subsystems according to the constraints imposed by these requirements. Subsystems are then re-introduced into the global framework. The relationship between the global framework and phenotypes associated with Huntington’s disease is then discussed and the framework considered in the context of other state-of-the-art integrative brain models.", "corpus_id": 220715589, "title": "Functional requirements of intentional control over the integrated cortico-thalamo-cortical and basal ganglia systems using neural computations" }
{ "abstract": "Here we describe an “information-based exchange” model of brain function that ascribes to neocortex, basal ganglia, and thalamus distinct network functions. The model allows us to analyze whole brain system set point measures, such as the rate and heterogeneity of transitions in striatum and neocortex, in the context of neuromodulation and other perturbations. Our closed-loop model is grounded in neuroanatomical observations, proposing a novel “Grand Loop” through neocortex, and invokes different forms of plasticity at specific tissue interfaces and their principle cell synapses to achieve these transitions. By implementing a system for maximum information-based exchange of action potentials between modeled neocortical areas, we observe changes to these measures in simulation. We hypothesize that similar dynamic set points and modulations exist in the brain's resting state activity, and that different modifications to information-based exchange may shift the risk profile of different component tissues, resulting in different neurodegenerative diseases. This model is targeted for further development using IBM's Neural Tissue Simulator, which allows scalable elaboration of networks, tissues, and their neural and synaptic components toward ever greater complexity and biological realism.", "corpus_id": 727392, "title": "Closed-Loop Brain Model of Neocortical Information-Based Exchange" }
{ "abstract": "Eight healthy male subjects had intra-cortical bone-pins inserted into the proximal tibia and distal femur. Three reflective markers were attached to each bone-pin and four reflective markers were mounted on the skin of the tibia and thigh, respectively. Roentgen-stereophotogrammetric analysis (RSA) was used to determine the anatomical reference frame of the tibia and femur. Knee joint motion was recorded during walking and cutting using infrared cameras sampling at 120Hz. The kinematics derived from the bone-pin markers were compared with that of the skin-markers. Average rotational errors of up to 4.4 degrees and 13.1 degrees and translational errors of up to 13.0 and 16.1mm were noted for the walk and cut, respectively. Although skin-marker derived kinematics could provide repeatable results this was not representative of the motion of the underlying bones. A standard error of measurement is proposed for the reporting of 3D knee joint kinematics.", "corpus_id": 19893099, "score": -1, "title": "Effect of skin movement artifact on knee kinematics during gait and cutting motions measured in vivo." }
{ "abstract": "Lucky imaging is a technique for high resolution astronomical imaging at visible wavelengths, utilising medium sized ground based telescopes in the 2--4m class. The technique uses high speed, low noise cameras to record short exposures which may then be processed to minimise the deleterious effects of atmospheric turbulence upon image quality. \nThe key statement of this thesis is as follows; that lucky imaging is a technique which now benefits from sufficiently developed hardware and analytical techniques that it may be effectively used for a wide range of astronomical imaging purposes at medium sized ground based telescopes. Furthermore, it has proven potential for producing extremely high resolution imaging when coupled with adaptive optics systems on larger telescopes. I develop this argument using new mathematical analyses, simulations, and data from the latest Cambridge lucky imaging instrument.", "corpus_id": 117874749, "title": "Lucky imaging: beyond binary stars" }
{ "abstract": "Using simultaneous observations in X-rays and optical, we have performed a homogeneous analysis of the cross-correlation behaviours of four X-ray binaries: SWIFT J1753.5-0127, GX339-4, Sco X-1 and CygX-2. With high-time-resolution observations using ULTRACAM and RXTE, we concentrate on the short time-scale, Δt < 20 s, variability in these sources. Here we present our data base of observations, with three simultaneous energy bands in both the optical and the X-ray, and multiple epochs of observation for each source, all with ∼second or better time resolution. For the first time, we include a dynamical cross-correlation analysis, i.e. an investigation of how the cross-correlation function changes within an observation. We describe a number of trends which emerge. We include the full data set of results, and pick a few striking relationships from among them for further discussion. \n \nWe find, that the surprising form of X-ray/optical cross-correlation functions, a positive correlation signal preceded by an anticorrelation signal, is seen in all the sources at least some of the time. Such behaviour suggests a mechanism other than reprocessing as being the dominant driver of the short-term variability in the optical emission. This behaviour appears more pronounced when the X-ray spectrum is hard. Furthermore, we find that the cross-correlation relationships themselves are not stable in time, but vary significantly in strength and form. This all hints at dynamic interactions between the emitting components which could be modelled through non-linear or differential relationships.", "corpus_id": 1659817, "title": "High time resolution optical/X-ray cross-correlations for X-ray binaries : anticorrelations and rapid variability" }
{ "abstract": "We study the effects of the mutual interaction of hot plasma and cold medium in black hole binaries in their hard spectral state. We consider a number of different geometries. In contrast to previous theoretical studies, we use a modern energy-conserving code for reflection and reprocessing from cold media. We show that a static corona above an accretion disc extending to the innermost stable circular orbit produces spectra not compatible with those observed. They are either too soft or require a much higher disc ionization than that observed. This conclusion confirms a number of previous findings, but disproves a recent study claiming an agreement of that model with observations. We show that the cold disc has to be truncated in order to agree with the observed spectral hardness. However, a cold disc truncated at a large radius and replaced by a hot flow produces spectra which are too hard if the only source of seed photons for Comptonization is the accretion disc. Our favourable geometry is a truncated disc coexisting with a hot plasma either overlapping with the disc or containing some cold matter within it, also including seed photons arising from cyclo-synchrotron emission of hybrid electrons, i.e. containing both thermal and non-thermal parts.", "corpus_id": 54041059, "score": -1, "title": "Doughnut strikes sandwich: the geometry of hot medium in accreting black hole X-ray binaries" }
{ "abstract": "Small incision lenticule extraction (SMILE), with over 5 million procedures globally performed, will challenge ophthalmologists in the foreseeable future with accurate intraocular lens power calculations in an ageing population. After more than one decade since the introduction of SMILE, only one case report of cataract surgery with IOL implantation after SMILE is present in the peer-reviewed literature. Hence, the scope of the present multicenter study was to compare the IOL power calculation accuracy in post-SMILE eyes between ray tracing and a range of empirically optimized formulae available in the ASCRS post-keratorefractive surgery IOL power online calculator. In our study of 11 post-SMILE eyes undergoing cataract surgery, ray tracing showed the smallest mean absolute error (0.40 D) and yielded the largest percentage of eyes within ±0.50/±1.00 D (82/91%). The next best conventional formula was the Potvin–Hill formula with a mean absolute error of 0.66 D and an ±0.50/±1.00 D accuracy of 45 and 73%, respectively. Analyzing this first cohort of post-SMILE eyes undergoing cataract surgery and IOL implantation, ray tracing showed superior predictability in IOL power calculation over empirically optimized IOL power calculation formulae that were originally intended for use after Excimer-based keratorefractive procedures.", "corpus_id": 251268894, "title": "IOL Power Calculations and Cataract Surgery in Eyes with Previous Small Incision Lenticule Extraction" }
{ "abstract": "PurposeTo evaluate the accuracy of intraocular lens (IOL) power calculations using ray tracing software in eyes after myopic laser in situ keratomileusis (LASIK).MethodsTwenty-four eyes of 17 cataract patients who underwent phacoemulsification and IOL implantation after myopic LASIK were analyzed retrospectively. The IOL power calculation was performed using OKULIX ray tracing software. The axial length was measured using the IOLMaster and keratometry data using TMS2N. The accuracy of the IOL power calculation using OKULIX was compared with those using the Camellin–Calossi, Shammas-PL, Haigis-L formulas and the double-K SRK/T formula using 43.5 diopters (D) for the Kpre.ResultsThe mean values of the arithmetic and absolute prediction errors were 0.63 ± 0.85 and 0.80 ± 0.68 D, respectively. The arithmetic prediction error by OKULIX was a significant hyperopic shift of the distribution of the postoperative refractive errors compared to the Camellin–Calossi, Shammas-PL and Haigis-L formulas (P < 0.05), and the absolute prediction error showed no significant difference with other formulas. The prediction errors using OKULIX were within ±0.5 D in 10 eyes (41.7 %) and within ±1.0 D in 18 eyes (75.0 %). The percentages of eyes within ±1.0 D using OKULIX were comparable to those obtained using the Camellin–Calossi, the Shammas-PL formulas and the double-K SRK/T formula using 43.5 D for the Kpre, and significantly (P < 0.05) higher than that obtained using the Haigis-L formula.ConclusionsIOL power calculations using OKULIX provided predictable outcomes in eyes that had undergone a previous myopic LASIK.", "corpus_id": 345534, "title": "Ray tracing software for intraocular lens power calculation after corneal excimer laser surgery" }
{ "abstract": "Purpose To propose the new anterior–posterior method (A–P method) that does not require historical data to calculate intraocular lens (IOL) power after laser in situ keratomileusis (LASIK) and to compare the accuracy of the method with other IOL formulas after LASIK. Setting Keio University Hospital, Tokyo, Japan. Design Case series. Methods Eyes having phacoemulsification and IOL implantation after myopic LASIK were analyzed retrospectively. The A–P method is a modification of the double‐K method using the SRK/T formula in which the estimated pre‐LASIK keratometry (K) power calculated from the post‐LASIK posterior sagittal power in the 6.0 mm zone is used for the preoperative K value in the double‐K method and the post‐LASIK anterior sagittal power is used for the postoperative K value. The accuracy of the A–P method was compared with that of other formulas that do not require preoperative data and with formulas that require preoperative data. Results The median values of the arithmetic and absolute prediction errors using the A–P method were 0.16 diopter (D) and 0.54 D, respectively. The prediction errors using the A–P method were within ±0.50 D in 46.4% of eyes and within ±1.00 D in 75.0%. The percentage of eyes within ±1.00 D of the prediction errors with the A–P method was highest with the no‐history methods. Conclusion The A–P method may be a good option for calculating IOL power in eyes having cataract surgery after LASIK when preoperative LASIK data are unavailable. Financial Disclosure No author has a financial or proprietary interest in any material or method mentioned.", "corpus_id": 45129837, "score": -1, "title": "Modified double‐K method for intraocular lens power calculation after excimer laser corneal refractive surgery" }
{ "abstract": "Isolated local or regional recurrence of breast cancer (BC) leads to an increased risk of metastases and decreased survival. Ipsilateral breast recurrence can occur at the initial tumor bed or in another quadrant of the breast. Depending on tumor patterns and molecular subtypes, the risk and time to onset of metastatic recurrence differs. HER2-positive and triple-negative (TNG) BC have a risk of locoregional relapse between six and eight times than luminal A. Thus, the management of local and locoregional relapses must take into account the prognostic factors for metastatic disease development. It is important to personalize the overall management, including or not systemic treatment according to the metastatic risk. All isolated recurrence cases should be treated with curative intent. Complete surgical resection is recommended whenever possible. Patients who did not receive postoperative irradiation during their initial management should receive full-dose radiotherapy to the chest wall and to the regional lymph nodes if appropriate. Overall, total mastectomy is the “gold standard” among patients who were previously treated by conservative surgery followed by radiation therapy. In terms of systemic therapy, the benefits of additional treatments are not conclusively proven in cases of isolated recurrence. The beneficial role of chemotherapy has been reported in at least one randomized trial, while endocrine therapy and anti-HER2 are common practice. This review will discuss salvage treatment options of local and locoregional recurrences in the new era of BC molecular subtypes.", "corpus_id": 4896327, "title": "Local and Regional Breast Cancer Recurrences: Salvage Therapy Options in the New Era of Molecular Subtypes" }
{ "abstract": "BACKGROUND\nAdjuvant systemic treatment for patients with isolated locoregional recurrence (ILRR) of breast cancer is based on a single reported randomized trial. The trial, conducted by the Swiss Group for Clinical Cancer Research, compared tamoxifen (TAM) with observation after complete excision of the ILRR and proper radiotherapy. We performed a definitive analysis of treatment outcome at >11 years of follow-up, after the majority of the patients had a subsequent event of interest. Patient and methods One hundred and sixty-seven patients with 'good-risk' characteristics of disease were randomized. 'Good-risk' was defined as estrogen receptor expression in the ILRR, or having a disease-free interval of >12 months and a recurrence consisting of three or less tumor nodules, each ≤3 cm in diameter. Seventy-nine percent of the patients were postmenopausal at randomization.\n\n\nRESULTS\nThe median follow-up time of the surviving patients was 11.6 years. The median post ILRR disease-free survival (DFS) was 6.5 years with TAM and 2.7 years with observation (P = 0.053). The difference was mainly due to reduction of further local relapses (P = 0.011). In postmenopausal patients, TAM led to an increase of DFS from 33% to 61% (P = 0.006). In premenopausal women, 5-year DFS was 60%, independent of TAM medication. For the whole study population, the median post-recurrence overall survival (OS) was 11.2 and 11.5 years in the observation and the TAM group, respectively; premenopausal patients experienced a 5-year OS of 90% for observation compared with 67% for TAM (P = 0.175), while the respective figures for postmenopausal patients were both 75%.\n\n\nCONCLUSIONS\nThese definitive results confirmed that TAM significantly improves the post-recurrence DFS of patients after local treatment for ILRR. This beneficial effect does not translate into a detectable OS advantage.", "corpus_id": 2579835, "title": "Adjuvant therapy after excision and radiation of isolated postmastectomy locoregional breast cancer recurrence: definitive results of a phase III randomized trial (SAKK 23/82) comparing tamoxifen with observation." }
{ "abstract": "BACKGROUND\nThe prognosis of breast cancer in very young women is generally considered to be unfavourable. Therefore, the outcome of adjuvant therapy was analysed in a population of young (<35 years) premenopausal patients treated in four randomised controlled trials.\n\n\nMETHODS\nBetween 1978 and 1993 the International Breast Cancer Study Group (IBCSG) treated 3700 premenopausal and perimenopausal patients with various timing and duration of adjuvant cyclophosphamide, methotrexate, and fluorouracil (CMF with or without low-dose prednisone and oophorectomy). 314 of these women were less than 35 years old at randomisation.\n\n\nFINDINGS\nRelapse and death occurred earlier and more often in younger (<35 years) than in older (≥35 years) patients with a 10 year disease-free survival of 35% (SE 3) versus 47% (1) (hazard ratio 1.41 [95% CI 1.22-1.62], p<0.001) and overall survival of 49% (3) versus 62% (1) (1.50 [1.28-1.77], p<0.001). Younger patients with oestrogen-receptor positive tumours had a significantly worse disease-free survival than younger patients with oestrogen-receptor negative tumours. By contrast, among older patients the disease-free survival was similar irrespective of oestrogen-receptor status.\n\n\nINTERPRETATION\nYoung premenopausal breast cancer patients treated with adjuvant CMF chemotherapy had higher risk of relapse and death than older premenopausal patients, especially if their tumours expressed oestrogen receptors. The endocrine effects of chemotherapy alone are insufficient for the younger age group and these patients should strongly consider additional endocrine therapies (tamoxifen or ovarian ablation) if their tumours express oestrogen receptors.", "corpus_id": 24992844, "score": -1, "title": "Is chemotherapy alone adequate for young women with oestrogen-receptor-positive breast cancer?" }
{ "abstract": "The selection of the best alternative for Enterococcus faecalis infective endocarditis (IE) continuation treatment in the outpatient setting is still challenging. Three databases were searched, reporting antibiotic therapies against E. faecalis IE in or suitable for the outpatient setting. Articles the results of which were identified by species and treatment regimen were included. The quality of the studies was assessed accordingly with the study design. Data were extracted and synthesized narratively. In total, 18 studies were included. The treatment regimens reported were classified regarding the main antibiotic used as regimen, based on Aminoglycosides, dual β-lactam, teicoplanin, daptomycin or dalbavancin or oral therapy. The regimens based on aminoglycosides and dual β-lactam combinations are the treatment alternatives which gather more evidence regarding their efficacy. Dual β-lactam is the preferred option for high level aminoglycoside resistance strains, and for to its reduced nephrotoxicity, while its adaptation to the outpatient setting has been poorly documented. Less evidence supports the remaining alternatives, but many of them have been successfully adapted to outpatient care. Teicoplanin and dalbavancin as well as oral therapy seem promising. Our work provides an extensive examination of the potential alternatives to E. faecalis IE useful for outpatient care. However, the insufficient evidence hampers the attempt to give a general recommendation.", "corpus_id": 222167163, "title": "Enterococcus faecalis Endocarditis and Outpatient Treatment: A Systematic Review of Current Alternatives" }
{ "abstract": "Background International guidelines recommend 4 weeks of treatment with ampicillin plus gentamicin (A+G) for uncomplicated native valve Enterococcus faecalis infective endocarditis (EFIE) and 6 weeks in the remaining cases. Ampicillin plus ceftriaxone (A+C) is always recommended for at least 6w, with no available studies assessing its suitability for 4w. We aimed to investigate differences in the outcome of EFIE according to the duration (4 versus 6 weeks) of antibiotic treatment (A+G or A+C). Methods Retrospective analysis from a prospectively collected cohort of 78 EFIE patients treated with either A+G or A+C. Results 32 cases (41%) were treated with A+G (9 for 4w, 28%) and 46 (59%) with A+C (14 for 4w, 30%). No significant differences were found in 1-year mortality according to the type of treatment (31% and 24% in A+G and A+C, respectively; P = 0.646) or duration (26% and 27% at 4 and 6w, respectively; P = 0.863). Relapses were more frequent among survivors treated for 4w than in those treated for 6w (3/18 [17%] at 4w and 1/41 [2%] at 6w; P = 0.045). Three out of 4 (75%) relapses occurred in cirrhotic patients. Conclusions A 4-week course of antibiotic treatment might not be suitable neither for A+G nor A+C for treating uncomplicated native valve EFIE.", "corpus_id": 3435407, "title": "Outcome of Enterococcus faecalis infective endocarditis according to the length of antibiotic therapy: Preliminary data from a cohort of 78 patients" }
{ "abstract": "Over the last 4 years, increasing evidence has pointed to an unexpectedly close relationship between Gaucher disease (OMIM 606463) and Parkinson disease (PD) (OMIM 168600). Gaucher disease is a lysosomal storage disorder caused by mutations in the glucocerebrosidase gene leading to intracellular buildup of glucosylceramide. Although Gaucher disease is one of the classic, recessive disorders, the genotype–phenotype relationships for the disease are not clear and there is an enormous amount of unexplained phenotypic heterogeneity with some individuals having prominent spleen or liver symptoms and others having a neurodegenerative disease phenotype.1 Although there have been brain neuropathologic examinations of persons dying of Gaucher disease, these have generally been of young patients and the occurrence of α-synuclein positive Lewy bodies (hallmark of PD) has only been assessed in a few elderly cases.2\n\nOne of the first hints that there might be a relationship between the two conditions came with the observation of PD in the relatives of individuals with Gaucher syndrome.3 …", "corpus_id": 37375404, "score": -1, "title": "Gaucher and Parkinson diseases" }
{ "abstract": "There is a scarceness of information on the central nervous system effects of common variable immunodeficiency (CVID). A 30-year-old woman with a history of recurrent upper respiratory infections, vitiligo, and immune thrombocytopenic purpura presented with right-sided numbness. Magnetic resonance imaging (MRI) of the thoracic spine revealed a signal hyperintensity. MRI of the brain demonstrated FLAIR hyperintensity in the right middle frontal gyrus. Cerebral spinal fluid was unremarkable. Serum immunoglobulins revealed hypogammaglobulinemia. Endobronchial and subsequent mediastinum biopsies were all negative for sarcoidosis and malignancy. No infectious etiology was found. She was treated with glucocorticoids and intravenous immunoglobulin (IVIG) replacement therapy for CVID-associated myelitis. Follow-up MRI showed improvement; however, her numbness persisted despite these treatments, which led to an outside physician adding methotrexate for their suspicion of sarcoidosis. Her symptoms remained stable for two years, but when the methotrexate dose was weaned, her numbness worsened. Upon review, the treatment team refuted the diagnosis of sarcoidosis but continued treatment with prednisone, IVIG, and methotrexate for CVID-associated myelitis, from which her symptoms have stabilized. Here, we discuss CVID-associated neurological complications, its similarities to sarcoidosis, and a literature review with treatment regimens and outcomes.", "corpus_id": 204952051, "title": "The Central Nervous System Effects and Mimicry of Common Variable Immunodeficiency (CVID): A Case Report with Literature Review" }
{ "abstract": "Transverse myelitis requires careful investigation, as although many causes respond acutely to immunomodulation, the longer term management depends upon its precise cause. We describe a patient presenting acutely with a corticosteroid-responsive longitudinally extensive transverse myelitis and granulomatous lung lesions with a previous history of recurrent generalised lymphadenopathy, pyogenic infections and idiopathic thrombocytopenic purpura. We initially suspected an underlying primary autoimmune disorder. However, investigations showed granulomatous common variable immunodeficiency (gCVID) and following regular intravenous immunoglobulin, her symptoms did not recur. We discuss this relatively common immunodeficiency disease, frequently misdiagnosed as a systemic autoimmune disease (often sarcoidosis). We also discuss the range of neurological syndromes, including transverse myelitis, that may accompany common variable immunodeficiency (CVID).\n\nA 41-year-old Caucasian woman presented with a 1-week history of progressive bilateral leg weakness, with urinary hesitancy and frequency. She had an intriguing past medical history of multiple episodes of corticosteroid-responsive widespread tender lymphadenopathy over 10 years, splenectomy for refractory idiopathic thrombocytopenic purpura in 2005, frequent respiratory tract infections and multiple buttock abscesses.\n\nOn examination at presentation, there was increased tone in both lower limbs with mild bilateral proximal weakness. Her reflexes were pathologically brisk in both legs and plantars were extensor. There were no sensory abnormalities. Upper limb and cranial nerve examinations were normal.\n\nRoutine blood tests were normal, including full blood count, renal function, liver function, clotting function and inflammatory markers. Her MR scan of brain was normal, but an MR scan of spine showed longitudinally extensive cord signal change between T2 and the conus medullaris (figure 1) with focal contrast enhancement at T6/7 (figure …", "corpus_id": 1986504, "title": "Longitudinally extensive transverse myelitis: a rare association with common variable immunodeficiency" }
{ "abstract": "Yazdani R, Habibi S, Sharifi L, Azizi G, Abolhassani H, Olbrich P, Aghamohammadi A Research Center for Immunodeficiencies, Pediatrics Center of Excellence, Children's Medical Center, Tehran University of Medical Science, Tehran, Iran. Uro-Oncology Research Center, Tehran University of Medical Sciences, Tehran, Iran Non-Communicable Diseases Research Center, Alborz University of Medical Sciences, Karaj, Iran. 4 Division of Clinical Immunology, Department of Laboratory Medicine, Karolinska Institute at Karolinska University Hospital Huddinge, Stockholm, Sweden. Sección de Infectología e Inmunopatología, Unidad de Pediatría, Hospital Virgen del Rocío/Instituto de Biomedicina de Sevilla (IBiS), Seville, Spain", "corpus_id": 167214924, "score": -1, "title": "Common Variable Immunodeficiency : Epidemiology , Pathogenesis , Clinical manifestations , Diagnosis , Classification and Management Running title : Common Variable Immunodeficiency" }
{ "abstract": "Neonatal sepsis is a major concern with maternal and neonatal risk factors greatly being associated with development of neonatal sepsis. In this study, we sought to determine the associated risk factors and microbial profiles at Kitale County Hospital (KCH) new-born unit in Western Kenya. Data was collected from 181 eligible preterm neonates and cultured using standard protocols. A prevalence of 22.7% was found with majority of Gram positive 35 (85.4%) while Gram negative were 6 (14.6%). Coagulase Negative Staphylococcus (CoNS) were 31 (75.6%) with Staphylococcus epidermidis 19 (46.3%) being the majority. Mode of delivery, Prolonged Rupture of Membranes (PROM), foetal distress, low birth weight and poor breast feeding were major risk factors associated with neonatal sepsis at KCH. There is therefore need to assess the correlation between the specific maternal and neonatal risk factors with common circulating bacterial profiles at KCH new born unit.", "corpus_id": 240997857, "title": "Preterm Neonatal Sepsis: Associated Risk Factors and Microbial Profiles at Kitale County Hospital New-born Unit, Western Kenya" }
{ "abstract": "In the face of recent improvements in neonatal care, the influences of neonatal sepsis remain a public health problem in developing countries. Thus, identifying the determinants of neonatal sepsis is an indispensable matter of enhancing neonatal care. Therefore, this study intends to identify the determinants of neonatal sepsis among neonates admitted to the Neonatal Intensive Care Unit at Jinka General Hospital in Southern Ethiopia. An institution-based case-control study was conducted from September to October 2017. A total of 335 neonates who were admitted at Jinka General Hospital were incorporated. Cases (n=112) were neonates who were with sepsis and their mother. Controls (n=223) were neonates who were not with neonatal sepsis and their mother. Study participants were selected using the simple random sampling technique. Bi-variable and multivariable logistic regression analyses were performed to identify determinants of neonatal sepsis. A total of 335 (112 cases and 223 controls) medical charts of neonates with their index mother were reviewed. History of urinary tract infection during the index pregnancy [AOR= 4.47, 95% CI (2.06, 9.71)], prolonged rupture of membrane [AOR= 2.2, 95% CI (1.24, 3.92)], birth weight of neonate less than 2.5kg [AOR= 1.68, 95% CI (1.25, 3.75)] and birth asphyxia [AOR= 2.34, 95% CI (1.14, 4.81)] were identified as determinants of neonatal sepsis. This study concludes that history of urinary tract infection, prolonged rupture of membrane, birth weight of neonate and birth asphyxia were the independent determinants of neonatal sepsis. Therefore, preventive efforts of neonatal sepsis should focus on high-risk neonates such as neonates born from mothers who have urinary tract infection and prolonged rupture of membranes, a neonate with low birth weight and a neonate who developed neonatal asphyxia by careful monitoring and follow-up as well as by prudently treating the victims.\n\nKey words: Neonatal sepsis, determinants of neonatal sepsis, neonatal intensive care unit.", "corpus_id": 155798222, "title": "Determinants of neonatal sepsis among neonates admitted in a neonatal intensive care unit at Jinka General Hospital, Southern Ethiopia" }
{ "abstract": "Karen Edmond and Anita Zaidi highlight new approaches that could reduce the burden of neonatal sepsis worldwide.", "corpus_id": 5723662, "score": -1, "title": "New Approaches to Preventing, Diagnosing, and Treating Neonatal Sepsis" }
{ "abstract": "Deep Convolutional Neural Networks (CNN) are expanding their territory to many applications, including vision processing algorithms. This is because CNNs achieve higher accuracy compared to traditional signal processing algorithms. For real-time vision processing, however, their high demand for computational power and data movement limits their applicability to battery-powered devices. For such applications that require both real-time processing and power efficiency, hardware accelerators are inevitable in meeting the requirements. Recent CNN frameworks, such as SqueezeNet and GoogLeNet, necessitate a re-design of hardware accelerators, because their irregular architectures cannot be supported efficiently by traditional hardware accelerators. In this paper, we propose a novel hardware accelerator for advanced CNNs aimed at realizing real-time vision processing with high accuracy. The proposed design employs data-driven scheduling that enables support for irregular CNN architectures without run-time reconfiguration, and it offers high scalability through its modular design concept. Specifically, the design’s on-chip memory management and on-chip communication fabric are tailored to CNNs. As a result, the new accelerator completes all layers of SqueezeNet and GoogLeNet in 14.30 ms and 27.12 ms at 2.47 W and 2.51 W, respectively, with 64 processing elements. The performance offered by the proposed accelerator is comparable to high-performance FPGA-based approaches (that achieve 1.06 to 262.9 ms at 25 to 58 W), albeit with significantly lower power consumption. If the hardware budget allows, these latencies can be further reduced to 6.71 ms and 11.70 ms, respectively, with 256 processing elements. In comparison, the latency reported by existing architectures executing large-scale deep CNNs ranges from 115.3 ms to 4309.5 ms.", "corpus_id": 251887341, "title": "A Low-Latency Power-Efficient Convolutional Neural Network Accelerator for Vision Processing Algorithms" }
{ "abstract": "The high accuracy of deep neural networks (NNs) has led to the development of NN accelerators that improve performance by two orders of magnitude. However, scaling these accelerators for higher performance with increasingly larger NNs exacerbates the cost and energy overheads of their memory systems, including the on-chip SRAM buffers and the off-chip DRAM channels. This paper presents the hardware architecture and software scheduling and partitioning techniques for TETRIS, a scalable NN accelerator using 3D memory. First, we show that the high throughput and low energy characteristics of 3D memory allow us to rebalance the NN accelerator design, using more area for processing elements and less area for SRAM buffers. Second, we move portions of the NN computations close to the DRAM banks to decrease bandwidth pressure and increase performance and energy efficiency. Third, we show that despite the use of small SRAM buffers, the presence of 3D memory simplifies dataflow scheduling for NN computations. We present an analytical scheduling scheme that matches the efficiency of schedules derived through exhaustive search. Finally, we develop a hybrid partitioning scheme that parallelizes the NN computations over multiple accelerators. Overall, we show that TETRIS improves mthe performance by 4.1x and reduces the energy by 1.5x over NN accelerators with conventional, low-power DRAM memory systems.", "corpus_id": 5793764, "title": "TETRIS: Scalable and Efficient Neural Network Acceleration with 3D Memory" }
{ "abstract": "The \"CNN-RNN\" design pattern is increasingly widely applied in a variety of image annotation tasks including multi-label classification and captioning. Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. This leaves the RNN overstretched with two jobs: predicting the visual concepts and modelling their correlations for generating structured annotation output. Importantly this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back propagating gradients through the RNN to train the CNN. We propose a simple modification to the design pattern that makes learning more effective and efficient. Specifically, we propose to use a semantically regularised embedding layer as the interface between the CNN and RNN. Regularising the interface can partially or completely decouple the learning problems, allowing each to be more effectively trained and jointly training much more efficient. Extensive experiments show that state-of-the art performance is achieved on multi-label classification as well as image captioning.", "corpus_id": 18214854, "score": -1, "title": "Semantic Regularisation for Recurrent Image Annotation" }
{ "abstract": "With the proliferation of shape-change research in affective computing, there is a need to deepen understandings of affective responses to shape-change display. Little research has focused on affective reactions to tactile experiences in shape-change, particularly in the absence of visual information. It is also rare to study response to the shape-change as it unfolds, isolated from a final shape-change outcome. We report on two studies on touch-affect associations, using the crossmodal “Bouba-Kiki” paradigm, to understand affective responses to shape-change as it unfolds. We investigate experiences with a shape-change gadget, as it moves between rounded (“Bouba”) and spiky (“Kiki”) forms. We capture affective responses via the circumplex model, and use a motion analysis approach to understand the certainty of these responses. We find that touch-affect associations are influenced by both the size and the frequency of the shape-change and may be modality-dependent, and that certainty in affective associations is influenced by association-consistency.", "corpus_id": 248419724, "title": "It’s Touching: Understanding Touch-Affect Association in Shape-Change with Kinematic Features" }
{ "abstract": "In this paper, we explore how shape changing interfaces might be used to communicate emotions. We present two studies, one that investigates which shapes users might create with a 2D flexible surface, and one that studies the efficacy of the resulting shapes in conveying a set of basic emotions. Results suggest that shape parameters are correlated to the positive or negative character of an emotion, while parameters related to movement are correlated with arousal level. In several cases, symbolic shape expressions based on clear visual metaphors were used. Results from our second experiment suggest participants were able to recognize emotions given a shape with a good accuracy within 28% of the dimensions of the Circumplex Model. We conclude that shape and shape changes of a 2D flexible surface indeed appear able to convey emotions in a way that is worthy of future exploration.", "corpus_id": 18354526, "title": "An Evaluation of Shape Changes for Conveying Emotions" }
{ "abstract": "Emotional Intelligence (EI) has been an important and controversial topic during the last few decades. Its significance and its correlation with many domains of life has made it the subject of expert study. EI is the rudder for feeling, thinking, learning, problem-solving, and decision-making. In this article, we present an emotional–cognitive based approach to the process of gaining emotional intelligence and thus, we suggest a nine-layer pyramid of emotional intelligence and the gradual development to reach the top of EI.", "corpus_id": 19221671, "score": -1, "title": "A New Layered Model on Emotional Intelligence" }
{ "abstract": "A 9.1ENOB 200 MS/s asynchronous SAR analog-to-digital converter (ADC) dealing with image sensing signals is presented in this paper. The proposed ADC, designed in 55-nm CMOS process technology, can work in both single-ended and differential modes. To guarantee the ADC linearity, a symmetrical switching method is adopted. To ensure the reference accuracy when the ADC is in operation, a 1.2-V on-chip quick-response LDO, which works under a 2.5-V supply, is also integrated. To overcome the need for a clock-generation circuit and avoid a high-speed interface circuit design, asynchronous clock logic is chosen. In addition, the comparison cycle is adjusted dynamically to guarantee the DAC settling and improve the ADC throughput. In the differential operation mode, the proposed SAR ADC exhibits a peak signal-to-noise-and-distortion ratio (SNDR) of 56.5 dB and a peak spurious-free-dynamic-range (SFDR) of 68 dB with the Nyquist input, at a 200-MS/s output rate. The ADC consumes 2.74 mW, resulting in a figure-of-merit (FOM) of 25-fJ/conversion-step. Meanwhile, in the single-ended mode, the SNDR and SFDR of the ADC are 53and 64 dB, respectively. And the power consumption is 2.25 mW, resulting in an FOM of 31-fJ/conversion-step. The area of the proposed ADC is $280\\times 380\\,\\,\\mu \\text{m}^{2}$ and the ADC core occupies an active area of only $140\\times 250\\,\\,\\mu \\text{m}^{2}$ .", "corpus_id": 52008451, "title": "A 9.1ENOB 200MS/s Asynchronous SAR ADC With Hybrid Single-Ended/Differential DAC in 55-nm CMOS for Image Sensing Signals" }
{ "abstract": "High-resolution video has rapidly integrated into our daily life in the context of progress in camera, display, signal processing, and communication technologies. The uppermost video parameters standardized at this moment include 8K, 120-fps, 12b RGB, wide-color-gamut, and HDR. Although a camera that fulfills all these parameters has been reported based on 1.7-inch 33-Mpixel CMOS imagers [1], achieving a smaller form factor while also maintaining image quality is required from the standpoint of mobility, lens design, and depth of focus. In general, miniaturization of the imager causes degradation of the image quality metrics such as sensitivity, dynamic range, and resolution. We deliberated on these difficulties, and set a target optical format of 1.25 inch.", "corpus_id": 3864310, "title": "A 33Mpixel CMOS imager with multi-functional 3-stage pipeline ADC for 480fps high-speed mode and 120fps low-noise mode" }
{ "abstract": "PBDTTPD is one of the best conjugated polymers for solar cell applications (up to 8.5% efficiency). We have investigated the dynamics of charge generation in the blend with fullerene (PCBM) and addressed highly relevant topics such as the role of bulk heterojunction structure, fullerene excitation, and excess energy. We show that there are multiple charge separation pathways. These include electron transfer from photoexcited polymer, hole transfer from photoexcited PCBM, prompt (<100 fs) charge generation in intimately mixed polymer:fullerene regions (which can occur from hot states), as well as slower electron and hole transfer from excitons formed in pure PBDTTPD or PCBM domains (diffusion to an interface is necessary). Very interestingly, all the charge separation pathways are highly efficient. For example, the yield of long-lived carriers is not significantly affected by the excitation wavelength, although this changes the fraction of photons absorbed by PCBM and the amount of excess energy brought to the system. Overall, the favorable properties of the PBDTTPD:PCBM blend in terms of morphology and exciton delocalization allow excellent charge generation in all circumstances and strongly contribute to the high photovoltaic performance of the blend.", "corpus_id": 6974837, "score": -1, "title": "Charge separation pathways in a highly efficient polymer: fullerene solar cell material." }
{ "abstract": "Wound healing is an essential biological process which involves tissue repair and recovery and includes the action of a complex system of blood cells, cytokines, and growth factors. It is a process attended by integrated cellular and biochemical events and characterized by four phases: haemostasis, inflammation, proliferation, and remodelling. Medicinal plants which have healing applications, continue to play a central role in the healthcare system of a large proportion of the world’s population. Numerous studies have shown that several Australian plant species used medicinally, contain biologically active extracts and compounds which have enormous potential for the treatment and management of a wound. Medicinal plants contain a wide range of chemical compounds as the unique flora of Australia offers an array of diverse bioactives, which elicit antibacterial, antioxidant, anti-inflammatory and wound healing abilities. Such properties are not limited to the edible sections of plants as the roots, bark, sap, leaves and seeds from a vast array of plants have demonstrated similar effects. In this review article, various Australian native plants which are scientifically proven to have antibacterial and anti-inflammatory properties that support wound healing, are discussed. This review also briefly discusses the general wound healing process, wound-colonizing bacteria, factors that affect wound healing, and costs involved in the treatment and management of chronic wounds.", "corpus_id": 228830645, "title": "Antimicrobial and anti-inflammatory activities of australian native plants in the context of wound healing: A review" }
{ "abstract": "ETHNOPHARMACOLOGICAL RELEVANCE\nExtracts of the medicinal plant species Dodonaea polyandra were investigated as part of a collegial research partnership between Northern Kaanju traditional owners represented by Chuulangun Aboriginal Corporation (centred on the Wenlock and Pascoe Rivers, Cape York Peninsula, Queensland, Australia) and university-based researchers. D. polyandra, known as \"Uncha\" in Kaanju language, is used in Northern Kaanju Traditional Medicine for relief from pain associated with toothache and related ailments. The species has a restricted distribution in Cape York Peninsula and there has been no previous Western scientific investigation of its pharmacology or chemistry.\n\n\nAIM OF THE STUDY\nThe current study investigates the anti-inflammatory effects of several extracts from D. polyandra.\n\n\nMATERIALS AND METHODS\nPhytochemical screening was conducted using TLC. Anti-inflammatory effects of leaf extracts were determined using an acute mouse ear oedema model induced by croton oil and 12-o-tetradecanoylphorbol-13-acetate (TPA) chemical irritants.\n\n\nRESULTS\nFlavonoid and terpenoid secondary compounds were detected in leaf extracts of D. polyandra. Non-polar hexane and methylene chloride/methanol extracts showed potent inhibition of inflammation in TPA-induced mouse ear oedema by 72.12 and 79.81%, respectively, after 24 h at 0.4 mg/ear.\n\n\nCONCLUSION\nIn a mouse model of acute inflammation, this study revealed that leaf extracts of D. polyandra possess significant anti-inflammatory potential. These results contribute to a Western scientific understanding of the ethnopharmacological use of the plant in Northern Kaanju Medicine for reducing tooth-related pain.", "corpus_id": 3336432, "title": "Evaluation of the anti-inflammatory properties of Dodonaea polyandra, a Kaanju traditional medicine." }
{ "abstract": "Hodgkin's lymphoma is a potentially curable malignancy of the lymphatic system characterized by a variable number of scattered and large mononucleated and multinucleated tumor cells, the Hodgkin and Reed‐Sternberg cells residing in an abundant heterogeneous admixture of non‐neoplastic inflammatory cells. It represents approximately 30% of all lymphomas according to the World Health Organization (WHO). Patients with Hodgkin's lymphoma typically present with painless peripheral adenopathy, fever, night sweats, and weight loss. We report a rare case of Hodgkin's lymphoma presented as a breast mass in a 23‐year‐old woman diagnosed on fine needle aspiration (FNA). At presentation, she had no B symptoms, or palpable lymphadenopathy. Diagn. Cytopathol. 2010;38:663–668. © 2010 Wiley‐Liss, Inc.", "corpus_id": 20705607, "score": -1, "title": "Reed‐sternberg cells in breast FNA of a patient with left breast mass" }
{ "abstract": "The fluoride concentration in cows’ milk has been reported to vary with the fluoride levels in drinking water but it seldom exceeds 0.5 μg/ml. This raised a question as to whether any caries-protective effect could be attributed to the intrinsic fluoride of milk. Two samples of cows’ milk with intrinsic fluoride concentrations of 0.03 and 0.3 μg/ml, respectively, were assessed for their protective effect on enamel in an in vitro demineralization model at relatively severe and mild acidic challenges (pH 4.6 and 5.0, respectively). Polished enamel discs were incubated individually in 5.0 ml of demineralization solution for 20 h per day alternated with 1-hour incubations in 1.0 ml of milk or control buffers: group 1, demineralization solution only (negative control); group 2, milk with 0.03 μg/ml fluoride; group 3, milk with 0.03 μg/ml fluoride; supplemented with NaF to 0.3 μg/ml fluoride; group 4, milk with 0.3 μg/ml fluoride; group 5, 0.3 μg/ml fluoride in 20 mM HEPES, pH 6.7; group 6, milk with 0.03 μg/ml fluoride supplemented with NaF to 5.0 μg/ml fluoride (positive control). The solutions were renewed each day and the calcium concentration in the demineralization solutions was followed during 4 days. The results showed that the protective effect of intrinsic milk fluoride on enamel is limited by the severity of the acidic challenge: There was a significant inhibition of the demineralization in groups 3–6 compared to groups 1 and 2, but only at pH 5.0 (p < 0.0001) and not at pH 4.6 (p = 0.2). The organic components of milk had limited protection against demineralization because milk and HEPES with the same fluoride concentration gave similar results. The 36% reduction in calcium loss at pH 5.0 by treatment with milk with only 0.3 μg/ml fluoride is an indication that intrinsic milk fluoride has some caries-protective properties.", "corpus_id": 46884191, "title": "The Effect of Intrinsic Fluoride in Cows’ Milk on in vitro Enamel Demineralization" }
{ "abstract": "The concentrations of diffusible and total fluoride in cows' milk samples from areas with widely different fluoride levels in drinking water were determined using a fluoride electrode. The diffusible fluoride was determined by direct hexamethyldisiloxane microdiffusion while for total fluoride, samples were subjected to either open ashing or digestion with proteolytic enzymes before microdiffusion. Magnesium nitrate was studied as a new fixative for milk during open ashing and compared with magnesium acetate. Diffusible fluoride ranged from 0.024 to 0.28 microgram ml-1 while total fluoride ranged from 0.05 to 0.31 microgram ml-1. The use of proteolytic enzymes before microdiffusion resulted in total fluoride measurement. It was concluded that all fluoride in milk is inorganic in nature with the bound fluoride being physically or chemically sequestered in the milk proteins. The proposed method is convenient for total fluoride analysis in milk.", "corpus_id": 179511, "title": "Enzymatic release of sequestered cows' milk fluoride for analysis by the hexamethyldisiloxane microdiffusion method." }
{ "abstract": "Zearalenone (ZEN) contamination of corn and cereal products is a serious health hazard throughout the world and its elimination by microbial methods is now being widely examined. In this study, an Aspergillus niger strain, FS10, isolated from Chinese fermented soybean, was shown to reduce levels of ZEN in corn steep liquor (CSL). Spores, mycelium and culture filtrate of the strain FS10 were tested for their ability to remove ZEN. The results indicated that strain FS10 could remove 89.56% of ZEN from potato dextrose broth (PDB) medium. Mycelium and culture filtrate decreased the ZEN content by 43.10% and 68.16%, respectively. The contaminated corn steep liquor initially contained ZEN 29 μg/ml, 60.01% of which could be removed by strain FS10. To demonstrate the loss of toxicity in vivo, the culture filtrate incubated with the contaminated corn steep liquor for 48 h was administered to rats. The results indicated that the contaminated corn steep liquor severely damaged liver and kidney tissue. Rats administered with contaminated corn steep liquor treated with the strain FS10 culture filtrate showed significantly less severe liver and kidney damage, and organ index values were comparable to the non-ZEN-exposed control (p<0.05). Our study suggests an effective approach to reduce the hazards of ZEN in corn steep liquor.", "corpus_id": 2709513, "score": -1, "title": "Biological detoxification of zearalenone by Aspergillus niger strain FS10." }
{ "abstract": "The Prolactin-inducible-protein (PIP)/Gross Cystic Disease Fluid Protein-15 (GCDFP-15) gene is highly expressed in salivary, lacrimal and sweat glands and the protein abundantly found in the secretions that originate from these glands; saliva, tears and sweat. PIP is thus considered to be strategically located at sites viewed as the first port of entry for invading organisms. PIP is also found over-expressed under abnormal and pathological conditions of the breast and prostate. The function of PIP has yet to be defined but it has been implicated to play a role in immunity, with respect to bacterial and viral infection, cancer and fertility. Despite such predictive functions, there is still no clear demonstration of an immunoregulatory role for PIP. In this review we will focus on accumulating evidence that suggests a role for PIP in both innate and adaptive immunity. Moreover, we will discuss recent evidence that defines a modulatory role for PIP with regards to a CD4+ T cell immune response, identifying for the first time, a critical role for PIP in effective cell-mediated immunity against an intracellular pathogen.", "corpus_id": 38463610, "title": "The prolactin-inducible-protein (PIP): A regulatory molecule in adaptive and innate immunity" }
{ "abstract": "Chiu WWC, Chamley LW. Antibody‐binding proteins in human seminal plasma. AJRI 2002; 48:269–274 © Blackwell Munksgaard, 2002", "corpus_id": 2834097, "title": "Antibody‐Binding Proteins in Human Seminal Plasma" }
{ "abstract": "Infertility affects approximately 10% of couples at reproductive age. Semen constituents may be potential immunogenic structures for women. The aim of our work is to detect and identify sperm proteins interacting with serum IgG antibodies from women with fertility disorders. The biochemical characterization of sperm antigens was performed using one and two dimensional gel electrophoresis, both of which were followed by immunoblotting. The IgG-binding proteins of interest were identified using mass spectrometry. From the serum pool of 30 infertile women, we detected sperm antigens within a relative molecular mass range between 30–80 kDa with an isoelectric point of 4–7. No antigens were detected using the serum pool from 10 fertile women (control group). Heat shock proteins (HSP 70) were identified as major sperm antigens associated with female immune infertility. Additionally, we report for the first time that alpha-enolase is a significant sperm antigen from the serum pool of infertile women. We suggest that the IgG-binding proteins identified in our study are related to immune infertility in the case of certain women with abnormally high levels of IgG antibodies linked to sperm proteins. Our results might be useful in the diagnoses of female immune infertility and may provide potential targets for further therapeutic treatment.", "corpus_id": 16586244, "score": -1, "title": "Immunodominant semen proteins I: New patterns of sperm proteins related to female immune infertility" }
{ "abstract": "At present, cloud computing has attracted a serious deal of research interest and attention in multiple domains. One of the core challenges in this environment is to achieve interoperability among heterogeneous cloud service providers (heterogeneous resources, APIs (Application Programming Interface), SLA(Service-level agreement) policy, etc.) to keep up with the increasing demand of cloud services and the growing requirements of user’s applications. For that, we provide in this paper an overview of the existing approaches and proposed solutions. In this setting, we aim to clarify: Who has posed the Cloud Computing Interoperability (CCI) problem? What does CCI mean? When and Why CCI is needed? Where does CCI problem arise? And the key question that is: How to resolve CCI problem? For this latter, we propose a taxonomy where we distinguish between the considered factors before resolving the CCI problem, and the obtained characteristics of the proposed solutions after resolving the CCI problem. Then we study existing works of CCI according to this proposed taxonomy, where we have generated three graphs allowing us to discuss CCI solution approach VS consumer-centric, CCI solution architecture VS consumer-centric, and CCI solution approach VS CCI solution type. 
We have concluded that: 1) the application service model is more highlighted in the literature then the management and platform levels, 2) the provider-centric solutions use generally model based approaches and are deployed as middleware or brokers, 3) the user-centric solutions are based on the adapting methodologies and deployed as brokers, 4) the hybrid solutions are based on the adapting methodologies and offer standard or broker architectures, 5) the type of CCI solution in model based approaches is mainly corresponding to framework products, 6) the final product of adapting methodologies can be a service or a library type.", "corpus_id": 258259988, "title": "Cloud Computing Interoperability : An overview" }
{ "abstract": "This article discusses the areas in which semantic models can support cloud computing. Semantic models are helpful in three aspects of cloud computing. The first is functional and nonfunctional definitions. The ability to define application functionality and quality-of-service details in a platform-agnostic manner can immensely benefit the cloud community. The second aspect is data modeling. Semantic modeling of data to provide a platform-independent data representation would be a major advantage in the cloud space. The third aspect is service description enhancement.", "corpus_id": 252692, "title": "Semantic Modeling for Cloud Computing, Part 2" }
{ "abstract": "Web penetration testing embodies both the understanding of attack and defense philosophies. By learning malicious hacking activities, students will understand the perspectives of attackers and realize how to defend a Web application system. To foster information security education, it is important to introduce the attack understanding philosophy. Using student group projects, this study aims to measure student learning effectiveness in Web application security and to discover how students perceive learning given the attack understanding philosophy. In support of triangulation, this research will employ pre-test and post-test study along with the grounded theory approach. The future research findings will propose a framework to improve student learning effectiveness and student learning perception in Web application security.", "corpus_id": 32570408, "score": -1, "title": "Work in progress — Web penetration testing: Effectiveness of student learning in Web application security" }
{ "abstract": "Grand Rapids, Michigan, USA is a medium-sized city located within the Lake Michigan watershed, one the five North American Great Lakes. Like many cities, Grand Rapids spends considerable money managing stormwater. Impervious surfaces collect and concentrate volumes of water and associated sediments and pollutants. This creates flooding, erosion, and pollution problems especially for downstream communities. However, stormwater quantity can be reduced and quality can be improved by, for example, mimicking natural hydrology, enhancing biodiversity, linking ecological and economic sustainability, taking an integrated approach at manageable scales, and viewing stormwater as a resource. Evidence is mounting that onsite stormwater management systems can be cost-effective, but the detailed benefit-cost analyses are still lacking. Therefore the West Michigan Environmental Action Council, together with researchers from Grand Valley State University, estimated the economic benefits and costs of various “green infrastructure” (GI) practices. Each GI practice was standardized to treat 3,000 ft of stormwater per 1.0-inch event plus the first inch of stormwater from larger events. This equates to about 113,000 ft of stormwater per year. The economic analysis used a benefit transfer approach to estimate the net present value (NPV) of capital, operations, and maintenance costs as well as the direct and indirect benefits. The suite of benefits varied for each GI practice and included flood risk reduction ; reductions in stormwater volume, phosphorus, total suspended solids (TSS), and air pollution; scenic amenity value; and CO2 storage. A 3.5 percent discount rate was applied to all costs and benefits, and each practice was analyzed over 50 years. Conserved natural areas had the largest net present value at $3.10/ft, followed by street tree planters at $1.48/ft, rain gardens at $1.12/ft, porous asphalt at $0.68/ft, and infiltration bioretention basins at $0.03/ft. 
Green roofs had a negative net present values of $-1.12/ft suggesting their lifetime costs exceed their benefits, at least in Grand Rapids where ground-level open space is plentiful. If the green roof is used to attain certification such as Leadership in Energy and Environmental Design (LEED), which has a high amenity value, then the net benefits turn positive ($0.16/ft). Rain barrels are another small-scale green infrastructure practice that can be useful and cost-effective at the household scale ($1.06/ft). However, there is a lot of variability in the costs and benefits associated with each of these GI practices, which will affect the net present value; we utilized likely values for the region. No one GI practice is appropriate for all situations. Rather the choice of GI practice will be driven by the site and budget. This benefit-cost analysis of GI practices has policy implications for Grand Rapids and other small to mid-size Midwestern cities. With the array of options available to manage stormwater on site, municipalities like Grand Rapids are well-positioned to adopt the GI practices that are most appropriate.", "corpus_id": 162172471, "title": "Benefit-cost analysis of stormwater green infrastructure for Grand Rapids , Michigan" }
{ "abstract": "The perception of the maintenance demands of Low Impact Development (LID) systems represents a significant barrier to the acceptance of LID technologies. Despite the increasing use of LID over the past two decades, stormwater managers still have minimal documentation in regards to the frequency, intensity, and costs associated with LID operations and maintenance. Due to increasing requirements for more effective treatment of runoff and the proliferation of total maximum daily load (TMDL) requirements, there is greater need for more documented maintenance information for planning and implementation of stormwater control measures (SCMs). This study examined seven different types of SCMs for the first 2-4 years of operations and studied maintenance demands in the context of personnel hours, costs, and system pollutant removal. The systems were located at a field facility designed to distribute stormwater in parallel, in order to normalize watershed characteristics including pollutant loading, sizing, and rainfall. System maintenance demand was tracked for each system and included materials, labor, activities, maintenance type, and complexity. Annualized maintenance costs ranged from $2,280/ha/yr for a vegetated swale to $7830/ha/yr for a wet pond. In terms of mass pollutant load reductions, marginal maintenance costs ranged from $4-$8 per kg/yr TSS removed", "corpus_id": 7162404, "title": "Comparison of Maintenance Cost, Labor Demands, and System Performance for LID and Conventional Stormwater Management" }
{ "abstract": "The popularity of dedicated microwave reactors in many academic and industrial laboratories has produced a plethora of synthetic protocols that are based on this enabling technology. In the majority of examples, transformations that require several hours when performed using conventional heating under reflux conditions reach completion in a few minutes or even seconds in sealed-vessel, autoclave-type, microwave reactors. However, one severe drawback of microwave chemistry is the difficulty in scaling this technology to a production-scale level. This Concept article demonstrates that this limitation can be overcome by translating batch microwave chemistry to scalable continuous-flow processes. For this purpose, conventionally heated micro- or mesofluidic flow devices fitted with a back-pressure regulator are employed, in which the high temperatures and pressures attainable in a sealed-vessel microwave chemistry batch experiment can be mimicked.", "corpus_id": 46323067, "score": -1, "title": "The microwave-to-flow paradigm: translating high-temperature batch microwave chemistry to scalable continuous-flow processes." }
{ "abstract": "\n Background Surgical management of cervical kyphosis in patients with NF-1 is a challenging task. Presently, anterior-only (AO), posterior-only (PO) and combined anterior-posterior (AP) spinal fusion are common surgical strategies. However, the choice of surgical strategy and application of Halo traction remain controversial. Few studies have shown and recommended posterior-only approach for cervical kyphosis correction in patients with NF-1. The aim of this study is to evaluate the safety and the effectiveness of Continuous-Incremental-Heavy Halo Traction (CIH-HT) combined with posterior-only approach for treatment of cervical kyphosis with NF-1.\nMethods 19 patients with severe cervical kyphosis due to NF-1 were reviewed retrospectively between January 2010 and April 2017. All the cases underwent CIH-HT combined with posterior instrumentation and fusion surgery. Correction result, neurologic status and complications were analyzed.\nResults In this study, cervical kyphosis Cobb angle decreased from initial 63.0 ± 21.0 degrees to postoperative 10.8 ± 4.0 degrees(P<0.01),with total correction rate of 92%, which consist of 44% from CIH-HT and 48% from surgical correction. JOA scores were improved from preoperative 13.6±1.6 to postoperative 16.0±1.0(P<0.01). Neurological status was also improved. There was no correction loss and the neurological status was stable in mean 3.7 years follow-up. The incidence of complications was 36.8% (7/19). Six patients underwent local complications and one patient underwent a second surgery.\nConclusion CIH-HT combined PO approach is safe and effective method for cervical kyphosis correction in patients with NF-1. A satisfied correction result, and successful bone fusion can be achieved via this procedure, even improvement of neurological deficits can also be obtained. 
Our study suggested that CIH-HT combined PO approach is another consideration for cervical kyphosis correction in patients with NF-1.\nKey words : Neurofibromatosis-1; Cervical kyphosis; Continuous-Incremental-Heavy Halo Traction; posterior-only approach;", "corpus_id": 241478606, "title": "Continuous Incremental Heavy Halo Traction Combined with Posterior-only Approach for Severe Cervical Kyphosis with Neurofibromatosis-1." }
{ "abstract": "OBJECT\nthe goal in this study was to retrospectively investigate the clinical efficacy of surgical treatment for cervical dystrophic kyphotic deformity due to neurofibromatosis Type 1.\n\n\nMETHODS\nbetween January 1998 and July 2008, 8 patients with cervical dystrophic kyphotic deformity due to neurofibromatosis Type 1 (mean Cobb angle of 58.5°) were surgically treated in the authors' department. The mean age at surgery was 19 years (range 12-38 years). Among these patients, 1 with a Cobb angle of 52° and good flexibility underwent single anterior correction, whereas the other 7 patients with severe deformity and poor flexibility received combined anterior and posterior cervical osteotomy. Motor-evoked potential studies were used intraoperatively for spinal cord monitoring. Radiographic assessment and Japanese Orthopaedic Association scoring were used to evaluate the clinical outcome.\n\n\nRESULTS\nno severe neurological complications were noted. Two patients complained of persistent neck and shoulder pain after combined anterior and posterior correction, which alleviated after conservative treatment half a year later. All patients were followed up for a mean of 21.1 months (range 6-36 months). All patients had a solid bone fusion at the latest follow-up, with Japanese Orthopaedic Association scoring improving from 11.5 preoperatively to 14.1 postoperatively (p < 0.01) at the final follow-up. The kyphotic deformities improved significantly, with average Cobb angles of 2.5° postoperatively and 4.1° at final follow-up.\n\n\nCONCLUSIONS\nthe deformity of neurofibromatosis with cervical kyphosis is severe, and surgery carries a high risk of failure. Although premature fusion may be performed, the deformity may still progress, and this situation may lead to failure of surgery. The successful management of this disease requires early recognition and a more aggressive and reliable intervention to prevent disastrous worsening of the deformity. 
Meticulous preoperative evaluation, appropriate surgical strategy, and skilled technique were essential for successful surgical treatment and good clinical results.", "corpus_id": 2237694, "title": "Surgical treatment of severe cervical dystrophic kyphosis due to neurofibromatosis Type 1: a review of 8 cases." }
{ "abstract": "The results of vascularised rib graft transfers are analysed in 25 patients followed up for more than two years (average 34 months). Radiographs showed early and rapid incorporation of the grafts in 4 to 16 weeks (average 8.5 weeks); external immobilisation averaged 11 weeks (range 5 to 24 weeks). The technique seems a useful alternative to allografts or homografts employing an avascular rib or fibula since it promotes rapid healing without needing microsurgical techniques.", "corpus_id": 34558862, "score": -1, "title": "Vascularised rib grafts for stabilisation of kyphosis." }
{ "abstract": "Study of the Temperature the Muscles Abstract Temporomandibular dysfunctions (TMD) consist of a collection of clinical signs and symptoms involving the temporomandibular joint and the muscles of chewing, with pain being the most constant symptom. Infrared thermography is a noninvasive, painless, and non-radioactive analytical instrument capable of examining physiological functions under body temperature control. Objective: To compare the temperature of the masseter and temporal muscles of people with and without temporomandibular dysfunctions. Methodology: This is an experimental cross-sectional field research, in which 25 individuals were selected, with ages ranging between 18 and 40 years and applied the anamnetic self-application questionnaire of Silveira et al. where the presence and severity of TMD was determined and then the temperature of the masseter and temporal muscle was analyzed through thermography. Results and Discussion: They indicate that there is a statistical difference of temperature between individuals without TMD in relation to individuals with severe TMD. There was no significant statistical difference between individuals with mild and moderate TMD compared to those without TMD. Conclusion: Therefore, even though there are not many studies regarding the use of infrared thermography in temporomandibular dysfunctions, the results of this study were positive regarding the use of thermography to detect the increase of temperature of the masseter and temporal muscles in individuals with severe TMD when compared to healthy individuals.", "corpus_id": 236588158, "title": "Comparative Study of the Temperature of the Masseter and Temporal Muscles of Patients with and Without Temporomandibular Dysfunctions Through Thermography" }
{ "abstract": "Aim The purpose of the present study was to correlate the degree of temporomandibular disorder (TMD) severity and skin temperatures over the temporomandibular joint (TMJ) and masseter and anterior temporalis muscles. Materials and methods This blind cross-sectional study involved 60 women aged 18–40 years. The volunteers were allocated to groups based on Fonseca anamnestic index (FAI) score: no TMD, mild TMD, moderate TMD, and severe TMD (n = 15 each). All volunteers underwent infrared thermography for the determination of skin temperatures over the TMJ, masseter and anterior temporalis muscles. The Shapiro–Wilk test was used to determine the normality of the data. The Kruskal–Wallis test, followed by Dunn’s test, was used for comparisons among groups according to TMD severity. Spearman’s correlation coefficients were calculated to determine the strength of associations among variables. Results Weak, positive, significant associations were found between FAI score and skin temperatures over the left TMJ (rs = 0.195, p = 0.009) and right TMJ (rs = 0.238, p = 0.001). Temperatures over the right and left TMJ were significantly higher in groups with more severe TMD (p < 0.05). Conclusion FAI score was associated with skin temperature over the TMJ, as determined by infrared thermography, in this sample. Women with more severe TMD demonstrated a bilateral increase in skin temperature.", "corpus_id": 1534577, "title": "Women with more severe degrees of temporomandibular disorder exhibit an increase in temperature over the temporomandibular joint" }
{ "abstract": "Thirteen patients with transient or permanent homonymous visual field defects experienced formed hallucinations localized to the affected part of the visual field. The lesion was occipital in 8 instances (infarction 7, porencephalic cyst 1), parietooccipital in 3 (infarction 2, angioma 1) and probably parietal in 2 (epilepsy 1, encephalitis 1). The disorder involved the right hemisphere in 9 cases, the left hemisphere in 3 cases and both hemispheres sequentially in one patient. Hallucinations were accompanied by palinopsia in 2 cases, metamorphopsia in one case and constriction of one pupil in another case. This particular type of hallucination is considered as an irritative phenomenon of the visual association cortex which can be symptomatic of a parieto-occipital lesion and does not necessarily implicate the temporal lobes. Distinctive features about the visions were that they consisted of people, animals or objects. There was no auditory accompaniment and any action that took place was stereotyped and did not tell a story. In most cases, the hallucinations were not clearly related to any visual memory. It is suggested that the visual association cortex amy be responsible for the organization of visual percepts into broad categories of which people, animals and objects are representative. The occurrence of such hallucinations with a visual field defect suggests that the cells of the association cortex are more likely to discharge spontaneously once they are deprived of their normal afferent inflow from the calcarine cortex.", "corpus_id": 20127290, "score": -1, "title": "Simple formed hallucinations confined to the area of a specific visual field defect." }
{ "abstract": "Mode‐Selective Enhanced Surveillance (Mode‐S EHS) aircraft reports can be collected at a low cost and are readily available around busy airports. The new work presented here demonstrates that observations derived from Mode‐S EHS reports can be used to study the evolution of temperature inversions since the data have a high spatial and temporal frequency. This is illustrated by a case study centred around London Heathrow airport for the period January 4–5, 2015. Using Mode‐S EHS reports from multiple aircraft and after applying quality control criteria, vertical temperature profiles are constructed by aggregating these reports at discrete intervals between the surface and 3,000 m. To improve these derived temperatures, four smoothing methods using low‐pass filters are evaluated. The effect of smoothing reduces the variance in the aircraft derived temperature by approximately half. After smoothing, the temperature variance between the altitudes 3,000 and 1,000 m is 1–2 K; below 1,000 m, it is 2–4 K. Although the differences between the four smoothing methods are small, exponential smoothing is favoured because it uses all available Mode‐S EHS reports. The resulting vertical profiles may be useful in operational meteorology for identifying elevated temperature inversions above 1,000 m. However, below 1,000 m they are less useful because of the reduced precision of the reported Mach number. A better source of in situ temperature observations would be for aircraft to use the meteorological reporting function of their automatic dependent surveillance system.", "corpus_id": 86694523, "title": "Towards operational use of aircraft‐derived observations: a case study at London Heathrow airport" }
{ "abstract": "Mode Selective Enhanced Surveillance (Mode‐S EHS) reports are aircraft‐based observations that have value in numerical weather prediction (NWP). These reports contain the aircraft's state vector in terms of its speed, direction, altitude and Mach number. Using the state vector, meteorological observations of temperature and horizontal wind can be derived. However, Mode‐S EHS processing reduces the precision of the state vector from 16‐bit to 10‐bit binary representation. We use full precision data from research‐grade instruments, on board the UK's Facility for Atmospheric Airborne Measurements, to emulate Mode‐S EHS reports and to compare with derived observations. We aim to understand the observation errors due to the reduced precision of Mode‐S EHS reports. We derive error models to estimate these observation errors. The temperature error increases from 1.25 to 2.5 K between an altitude of 10 km and the surface due to its dependency on Mach number and also Mode‐S EHS precision. For the cases studied, the zonal wind error is around 0.50 m s−1 and the meridional wind error is 0.25 m s−1. The wind is also subject to systematic errors that are directionally dependent. We conclude that Mode‐S EHS‐derived horizontal winds are suitable for data assimilation in high‐resolution NWP. Temperature reports may be usable when aggregated from multiple aircraft. While these reduced precision, high‐frequency data provide useful, albeit noisy, observations, direct reports of the higher‐precision data would be preferable.", "corpus_id": 3530249, "title": "Comparison of aircraft‐derived observations with in situ research aircraft measurements" }
{ "abstract": "Abstract Single- and multiple-Doppler radar systems are increasingly being used to monitor circulations within the clear-air boundary layer where the scatterers may be gradients of the refractive index or biota or a combination of both. When insects are the primary source of returned radar power, it must be assumed that the insects are either small and are being carried passively in the air, or are flying randomly so that the bulk velocity of all the insects contained within a pulse volume is zero relative to the air. This study presents dual-polarization radar observations of the interaction between a gust flow and a deep cloud of insects within a relatively unstable air mass over North Dakota on 4 July 1987. These data are unique in that they reveal several meteorological conditions for which the preceding assumption is not valid. The boundary layer was not capped, and circulations rose above an apparent threshold altitude above which these insects were not flying. Temperatures near the threshold altitu...", "corpus_id": 122247454, "score": -1, "title": "The Use of Insects as Tracers for “Clear-Air” Boundary-Layer Studies by Doppler Radar" }
{ "abstract": "Contour representation is an important application in image compression, template matching, object detection and recognition. However, it is far from meeting the current requirement due to the expensive computational cost and complex noise in the real-world application. In order to make contour representation more practical, we propose a novel approach of abstracting contours of the objects in an image. In our approach, we firstly find the salient points on the target contour by combining an ellipse model and Chord-to-point distance accumulation techniques. Then, based on the salient points, we adopt the least square method to fit a planar arc representing the target contour. The extensive experiments show that our approach has the lower computation cost, better robustness and more exact approximation to the original target contour. Our work provides more selections for the practical application of contour abstraction.", "corpus_id": 18801862, "title": "Contour abstraction based on salient points" }
{ "abstract": "Abstraction in imagery results from the strategic simplification and elimination of detail to clarify the visual structure of the depicted shape. It is a mainstay of artistic practice and an important ingredient of effective visual communication. We develop a computational method for the abstract depiction of 2D shapes. Our approach works by organizing the shape into parts using a new synthesis of holistic features of the part shape, local features of the shape boundary, and global aspects of shape organization. Our abstractions are new shapes with fewer and clearer parts.", "corpus_id": 6399740, "title": "Abstraction of 2D shapes in terms of parts" }
{ "abstract": "In this paper, we propose a novel approach to simplify sketch drawings. The core problem is how to group sketchy strokes meaningfully, and this depends on how humans understand the sketches. The existing methods mainly rely on thresholding low-level geometric properties among the strokes, such as proximity, continuity and parallelism. However, it is not uncommon to have strokes with equal geometric properties but different semantics. The lack of semantic analysis will lead to the inability in differentiating the above semantically different scenarios. In this paper, we point out that, due to the gestalt phenomenon of closure, the grouping of strokes is actually highly influenced by the interpretation of regions. On the other hand, the interpretation of regions is also influenced by the interpretation of strokes since regions are formed and depicted by strokes. This is actually a chicken-or-the-egg dilemma and we solve it by an iterative cyclic refinement approach. Once the formed stroke groups are stabilized, we can simplify the sketchy strokes by replacing each stroke group with a smooth curve. We evaluate our method on a wide range of different sketch styles and semantically meaningful simplification results can be obtained in all test cases.", "corpus_id": 10751145, "score": -1, "title": "Closure-aware sketch simplification" }
{ "abstract": "Microgrid technology enables reliable control and distribution of electricity on a small scale which can have a major impact on developing communities and can help to meet the UN's initiative to provide Universal Access to Modern Energy Services by 2030. Since the distributed energy resources (DER) of microgrids in rural areas will likely be dependent on renewable energy sources (RES), system stability and reliability will be a challenge. However, implementing a microgrid without proper experimentation of the dynamics of the DER and load variations is impractical. A microgrid test bench has been constructed at the University of Wisconsin - Madison which will allow for thorough experimentation. The experimentation will focus on RES using the wind turbine and solar emulator available in the lab. Additionally, other appropriate technologies that were developed at UW-Madison, like the recycled E-waste “Microformer” for micropower distribution, will be evaluated. Experimentation will justify the implementation of microgrid technologies for reliable energy distribution for the developing world and rural communities.", "corpus_id": 28598557, "title": "Microgrid test bench for small-scale renewable microsources" }
{ "abstract": "Microgrids are highly compatible with photovoltaic (PV) sources because of their ability to internally aggregate and balance multiple PV sources without imposing restrictions on the penetration of such intermittent power sources. There are two major types of inverter control configurations that are used in photovoltaic inverters to provide an interface to a CERTS microgrid. These control configurations exhibit important duality characteristics, and both are capable of tracking maximum input power while abiding by the CERTS droop algorithms. This paper investigates and demonstrates the comparative performance characteristics of these two major controller types: 1) a grid-forming droop-style controller similar to those used for controlling distributed generators; and 2) a current-regulated grid-follower controller. It is shown that only the grid-forming controller allows a PV source to operate alone in an islanded CERTS microgrid, but the grid-follower controller enjoys some inherent advantages with regard to faster dynamic response.", "corpus_id": 2777660, "title": "Comparison of PV inverter controller configurations for CERTS microgrid applications" }
{ "abstract": "Effective control strategies for reliable photovoltaic (PV) grid-connected systems are needed to efficiently use solar energy, an abundant and clean renewable energy source. A space vector pulse width modulation (SVPWM) has been widely applied in the current control of three-phase voltage source inverters (VSIs). The control strategy combines a constant voltage tracking method for the variable photovoltaic power with SVPWM-based proptional-intergral (PI) current controller in a single stage three-phase PV grid-connected system. The controller mimics deadbeat control in the synchronous d-q reference frame, and is very simple and robust to implement. With the necessary grid voltage detection in Photovoltaic systems for protection, grid harmonics disturbance is effectively suppressed through feed-forward compensation. The simulation results show that the current controller makes the PV grid-connected VSI output current to be in phase with the grid voltage and has an excellent steady-state response as well as an extremely fast dynamic response.", "corpus_id": 24330142, "score": -1, "title": "Three-phase grid-connected photovoltaic system with SVPWM current controller" }
{ "abstract": "Abstract A newly developed facility at the 3 MV Tandem Accelerator at Dhaka for measurement of proton induced reaction cross sections in the energy region below 5 MeV is outlined and tests for the beam characterization are described. The results were validated by comparison with the well-known excitation function of the 64Ni(p, n)64Cu reaction. Excitation functions of the reactions natNi(p, x)60,61Cu, natNi(p, x)55,57,58m+gCo and natNi(p, x)57Ni were also measured from threshold to 16 MeV using the stacked-foil technique, whereby irradiations were performed with 5 MeV protons available at the Tandem Accelerator and 16.7 MeV protons at the BC 1710 cyclotron at Jülich, Germany. The radioactivity was measured using HPGe γ-ray detectors. A few results are new, the others strengthen the database. In particular, the results of the reaction natNi(p, x)61Cu below 3 MeV could serve as beam monitor.", "corpus_id": 100681998, "title": "Experimental determination of proton induced reaction cross sections on natNi near threshold energy" }
{ "abstract": "Production cross-sections of the (nat)Ni(p,x)(60,61)Cu, (56,57)Ni, (55,56,57,58)Co nuclear reactions were measured in five experiments up to 65MeV by using a stacked foil activation technique. The results were compared with the available literature values, predictions of the nuclear reaction model codes ALICE-IPPE, TALYS-1.4, and extracted data from the TENDL-2012 library. Spline fits were made on the basis of selected data, from which physical yields were calculated and compared with the literature values. The applicability of the (nat)Ni(p,x)(57)Ni, (57)Co reactions for thin layer activation (TLA) was investigated. The production rate for (55)Co was compared for proton and deuteron induced reactions on Ni.", "corpus_id": 9159008, "title": "Activation cross-sections of proton induced reactions on natural Ni up to 65MeV." }
{ "abstract": "The isotopic composition of cosmic ray Be, B, C, and N has been studied using a new range versus total light technique. Special emphasis has been placed on the Be isotopes and, in particular, on the radioactive isotope /sup 10/Be whose mean lifetime against decay (tau/sub d/=2.2 x 10/sup 6/ yr) makes it an ideal ''clock'' with which to measure the cosmic-ray age. The experiment was carried by balloon on 1973 August 15 to atmospheric depths ranging from 3.5 to 5.0 g cm/sup -2/ residual atmosphere for a total exposure time of 23 hr. The mass resolution achieved by the experiment is given by sigma/sub A/=0.047 A. The importance of correcting the data for interactions in the detector, for varying energy windows (roughly 150 to 450 MeV per nucleon) for different isotopes, and most importantly for production and destruction of nuceli in nuclear interactions in the atmosphere is discussed. The data are compared with those of other experimenters. The results indicate the survival of (55 +- 20) % of the /sup 10/Be in the arriving cosmic rays. In the leaky box model, this is interpreted in terms of a mean cosmic-ray confinement time given by tau/sub e/=5(+6, -3) x 10/supmore » 6/ yr, which corresponds to a mean density, n=0.7 (+1.0, -0.4) atoms cm/sup -3/, of matter in the confinement volume.« less", "corpus_id": 121549759, "score": -1, "title": "Be-10 abundance and the age of cosmic rays - A balloon measurement" }
{ "abstract": "Liquid biopsy, including both circulating tumor cells and circulating tumor DNA, is gaining momentum as a diagnostic modality adopted in the clinical management of breast cancer. Prospective studies testing several technologies demonstrated clinical validity and, in some cases, achieved the United States Food and Drug Administration approval. The initial testing and clinical application of liquid biopsy focused primarily on the diagnosis, while molecular characterization and monitoring of metastatic disease, with larger data from prospective studies, came in the last two decades. Although its role in metastatic setting is thus widely recognized, the current evidence does not provide support for the routine clinical use of liquid biopsy methods for the earlier stage of this disease. Considering the relevance of early detection, characterization, and management of breast cancer in the early-stage, this clinical setting is the most suitable to increase the chances for effective treatment selection and improved prognosis, and a better understanding of the main application of liquid biopsy tools in the earlier stage of breast cancer is therefore crucial. The aim of this review is to provide an overview of the clinical evidence and subsequent potential applications of liquid biopsy in early breast cancer, identifying the main existing caveats and the possible future scenarios. Page 2 of 16 D’Amico et al. J Cancer Metastasis Treat 2021;7:3 I http://dx.doi.org/10.20517/2394-4722.2020.93", "corpus_id": 231814819, "title": "The use of liquid biopsy in early breast cancer: clinical evidence and future perspectives" }
{ "abstract": "Background\nWe present a pooled analysis of predictive and prognostic values of circulating tumour cells (CTC) and circulating endothelial cells (CEC) in two prospective trials of patients with inflammatory breast cancer (IBC) treated with neoadjuvant chemotherapy combined with neoadjuvant and adjuvant bevacizumab.\n\n\nPatients and methods\nNonmetastatic T4d patients were enrolled in two phase II multicentre trials, evaluating bevacizumab in combination with sequential neoadjuvant chemotherapy of four cycles of FEC followed by four cycles of docetaxel in HER2-negative tumour (BEVERLY-1) or docetaxel and trastuzumab in HER2-positive tumour (BEVERLY-2). CTC and CEC were detected in 7.5 and 4 ml of blood, respectively, with the CellSearch System.\n\n\nResults\nFrom October 2008 to September 2010, 152 patients were included and 137 were evaluable for CTC and CEC. At baseline, 55 patients had detectable CTC (39%). After four cycles of chemotherapy, a dramatic drop in CTC to a rate of 9% was observed (P < 0.01). Pathological complete response (pCR) rate was 40%. No correlation was found between CTC or CEC levels and pCR rate. Median follow-up was 43 months. CTC detection (≥1 CTC/7.5 ml) at baseline was associated with shorter 3-year disease-free survival (39% versus 70% for patients without CTC, P < 0.01, HR 2.80) and shorter 3-year overall survival (OS) (P < 0.01). In multivariate analysis, independent prognostic parameters for shorter survival were absence of hormonal receptors, no pCR and CTC detection at baseline. CEC level at baseline or variations during treatment had no prognostic value.\n\n\nConclusion\nIn this pooled analysis of two prospective trials in nonmetastatic IBC, detection rate of CTC was 39% with a strong and independent prognostic value for survival. 
Combination of pCR after neoadjuvant treatment with no CTC detection at baseline isolated a subgroup of IBC with excellent OS (94% 3-year OS), suggesting that CTC count could be part of IBC stratification in prospective trials.", "corpus_id": 3792271, "title": "Circulating tumour cells and pathological complete response: independent prognostic factors in inflammatory breast cancer in a pooled analysis of two multicentre phase II trials (BEVERLY-1 and -2) of neoadjuvant chemotherapy combined with bevacizumab" }
{ "abstract": "Abstract. One of the steps during the implantation of hearing aids and cochlea implants is the creation of a shallow implant bed in the thin skull bone. Correct placement of the implant is paramount but difficult to achieve preoperatively. The profile of the calotte, which can be determined through e.g. 3D imaging, is important for this task. Additionally, in a robot-based surgical operation, the fi-nal position and orientation of the implant must be known before milling starts to ensure optimal fitting. We present an algorithm which allows semi-automatic determination of this optimal placement through both the planning system and the surgeon. The computer supports the surgeons with his speed and accuracy, while the robot aids in the precise execution. 1 Motivation Case numbers for hearing loss are rising steadily. There are several kinds of hearing loss which can be handled with different types of hearing aids. Implantable hearing aids help with defective auditory canals. The ossicles are excited mechanically", "corpus_id": 15776095, "score": -1, "title": "First System for Interactive Position Planning of Implant Components" }
{ "abstract": "Innovacio, emprenedoria i subcontractacio son els pilars tematics de la present tesi doctoral. Lus del coneixement per part de les empreses representa lelement comu daquestes tematiques. Les evidencies empiriques provenen principalment del mon empresarial, pero es complementen amb les del mon academic com a principal proveidor de coneixement a la societat actual, la finalitat de la qual es crear riquesa i benestar socioeconomic. Lobjectiu principal daquesta tesi doctoral es contribuir a les diferents arees de recerca. La primera, gestio de la innovacio, mes concretament innovacio organitzativa, reflexionant sobre la seva importancia i monitoritzacio a traves denquestes. Tot seguit, un exemple dinnovacio organitzativa treball en equip- sanalitza en profunditat, aixi com tambe els seus determinants. La segona, analitza el proces de transicio duna universitat tradicional cap a una universitat emprenedora, comencant per la fase de disseny fins a lactualitat contemplant la seva funcionalitat i eficiencia en el marc de les institucions publiques de recerca dEuropa. I la tercera, descriu les barreres que les empreses han de fer front a lhora de cooperar, en general, i amb universitats, en particular. La mateixa mostra dempreses gasela tambe serveix per analitzar la decisio de fer-o-comprar. El resum daquests resultats, les conclusions, aixi com les futures linies de recerca finalitzen el treball.", "corpus_id": 130724638, "title": "Innovation, entrepreneurship and outsourcing: essays on the use of knowledge in business environments" }
{ "abstract": "This paper studies the initial resources on which new organizations are based and how these resources interact with the institutional origin and market characteristics. Using a unique hand-collected data set of research-based start-ups (RBSUs), we empirically test how technological, financial and human resources relate to each other to form distinct starting resource configurations. We find four different start- ing configurations: “venture capital-backed start-ups,” “prospectors,” “product start-ups” and “transitional start-ups”. The results show that VC-backed start-ups are a minority while half of the firms start as prospectors. Market complexity and growth prospects influence the probability of starting with venture capital. The unclearness of the product market at founding characterizes prospectors, while product start-ups mostly have an almost market-ready product targeted at an international niche market. Transitional starters initially commercialize technical know-how through consulting and become product oriented later on. This discussion contributes to the debate concerning the interplay of environment and firm resources.", "corpus_id": 154032962, "title": "How and Why do Research-Based Start-Ups Differ at Founding? A Resource-Based Configurational Perspective" }
{ "abstract": "The literature argues that research spin-offs (RSOs)—enterprises originating from a university or research institute—appear to have higher innovative potential and capabilities than other start-ups, at least in the early stages of their development. Yet, little is known about the innovative performance of these companies at later development phases. Thus, the main goal of this study is to investigate whether there are any differences in research and development (R&D) and innovation behavior between established and/or mature RSOs and otherwise created firms and, if so, to what extent they are driven by networking and cooperation activities as suggested by some scholars. To this end, we employ probit regression analysis and a matching approach using survey data on more than 6,000 East German firms, among which are 179 RSOs. Our first findings suggest that established RSOs engage in R&D and innovation activities more frequently than companies whose genesis was of another type. Nevertheless, the results obtained when accounting for collaboration measures show that the precedence of RSOs in further development stages over otherwise created firms in terms of innovation outputs is related to their higher intensity of cooperation activity and close, face-to-face interactions with universities, and not to type of firm creation. Moreover, our findings reveal that cooperating in various fields may be of different importance for specific inputs and outputs of the innovation activity. Finally, based on our results, we draw some implications for both practicing managers and public policymakers.", "corpus_id": 675109, "score": -1, "title": "How innovative are spin-offs at later stages of development? Comparing innovativeness of established research spin-offs and otherwise created firms" }
{ "abstract": "We demonstrate the feasibility of long lasting underwater networking by proposing the smart exploitation of the energy harvesting capabilities of underwater sensor nodes. We define a data routing framework that allows senders to select the best forwarding relay taking into account both residual energy and foreseeable harvestable energy. Our forwarding method, named HyDRO, for Harvesting-aware Data ROuting, is also configured to consider channel conditions and route-wide residual energy, performing network wide optimization via local information sharing. The performance of our protocol is evaluated via simulations in scenarios modeled to include realistic underwater settings as well as energy harvesting based on recorded traces. HyDRO is compared to state-of-the-art forwarding protocols for underwater networks. Our results show that jointly considering residual and predicted energy availability is key to achieve lower energy consumption and latency, while obtaining much higher packet delivery ratio.", "corpus_id": 49349362, "title": "Harnessing HyDRO: Harvesting-aware Data ROuting for Underwater Wireless Sensor Networks" }
{ "abstract": "This paper concerns the implementation of an efficient underwater acoustic network suitable for long lasting environmental monitoring in fish farming. Several hardware and software solutions have been designed and implemented to extend the network lifetime and to make the system autonomous and suitable for such an application scenario. The proposed system is composed of different components. The SUNSET Software Defined Communication Stack (SDCS) is used to provide networking capabilities to underwater nodes communicating acoustically through AppliCon SeaModem modems. The Hydrolab Series 5 probes are used to monitor the water quality. Lifetime of underwater nodes is extended through the use of a novel device that allows to harvest energy from underwater water currents via suitable propellers. In addition, novel sleep and wake up mechanisms have been designed and implemented into the underwater nodes to minimize the energy consumption of the system during the idle periods. The performance of the proposed system has been extensively evaluated in field by monitoring the water quality in three fish farming cages located in the Mediterranean Sea, Italy. The system has been connected to the Internet infrastructure allowing the users to easy interact with the underwater system in real-time. Our results confirm that the proposed system is suitable for long term monitoring providing a reliable and robust data collection scheme with an extended network life time.", "corpus_id": 2366382, "title": "Long lasting underwater wireless sensors network for water quality monitoring in fish farms" }
{ "abstract": "Abstract Energy-intensive industries account for almost 51% of energy consumption in China. A continuous improvement in energy efficiency is important for energy-intensive industries. Cleaner production has proven itself as an effective way to improve energy efficiency and reduce energy consumption. However, there is a lack of manufacturing data due to the difficult implementation of sensors in harsh production environment, such as high temperature, high pressure, high acid, high alkali, and smoky environment which hinders the implementation of the cleaner production strategy. Thanks to the rapid development of the Internet of Things, many data can be sensed and collected in the manufacturing processes. In this paper, a big data driven analytical framework is proposed to reduce the energy consumption and emission for energy-intensive manufacturing industries. Then, two key technologies of the proposed framework, namely energy big data acquisition and energy big data mining, are utilized to implement energy big data analytics. Finally, an application scenario of ball mills in a pulp workshop of a partner company is presented to demonstrate the proposed framework. The results show that the energy consumption and energy costs are reduced by 3% and 4% respectively. These improvements can promote the implementation of cleaner production strategy and contribute to the sustainable development of energy-intensive manufacturing industries.", "corpus_id": 158506353, "score": -1, "title": "A big data driven analytical framework for energy-intensive manufacturing industries" }
{ "abstract": "A novel technique for efficient computation of global light propagation in interactive DVR is presented in this paper. The approach is based on a combination of local shadows from the vicinity of each voxel with global shadows calculated at high resolution but stored in a sparser grid. The resulting intensities are then used as the initial illumination for an additional pass that computes first order scattering effects. The method captures global shadowing effects with enhanced shadows of near structures. A GPU framework is used to evaluate the illumination updates at interactive frame rates, using incremental refinements of the in-scattered light.", "corpus_id": 29257471, "title": "Interactive Global Light Propagation in Direct Volume Rendering using Local Piecewise Integration" }
{ "abstract": "Realistic rendering of participating media like clouds requires multiple anisotropic light scattering. This paper presents a propagation approximation for light scattered into M direction bins, which reduces the “ray effect” problem in the traditional “discrete ordinates” method. For a regular grid volume of n^3 elements, it takes O(Mn^3 log n + M^2 n^3) time and O(Mn^3 + M^2) space.", "corpus_id": 2017548, "title": "Efficient light propagation for multiple anisotropic volume scattering" }
{ "abstract": "Advances in technology and instrumentation have now opened up virtually the entire radio spectrum to the study of stars. An international workshop, “Radio Stars: From kHz to THz”, was held at the Massachusetts Institute of Technology Haystack Observatory on 2017 November 1–3 to discuss the progress in solar and stellar astrophysics enabled by radio wavelength observations. Topics covered included the Sun as a radio star; radio emission from hot and cool stars (from the pre- to post-main-sequence); ultracool dwarfs; stellar activity; stellar winds and mass loss; planetary nebulae; cataclysmic variables; classical novae; and the role of radio stars in understanding the Milky Way. This article summarizes meeting highlights along with some contextual background information.", "corpus_id": 119372146, "score": -1, "title": "Radio Stars: From kHz to THz" }
{ "abstract": "Identification of a virus in the family Herpesviridae is based on the morphology of the virus particle. Viewed through an electron microscope, the virions of different members of the Herpesviridae family are indistinguishable and consist of four distinct components: the core, capsid, tegument, and envelope (Fig. 1) (1). The core contains a double-stranded DNA genome arranged in an unusual torus shape that is located inside an icosadeltahedral capsid that is approx 100 nm in size and contains 162 capsomeres (2). Located between the capsid and the viral envelope is an amorphous structure termed the tegument that contains numerous proteins. The tegument structure is generally asymmetrical, although some virus members (such as human herpesvirus 6 [HHV6] and human herpesvirus 7 [HHV-7]) have been shown to have well-defined tegument structures (3,4). Presumably, the tegument is responsible for connecting the capsid to the envelope and acting as a reservoir for viral proteins that are required during the initial stages of viral infection (5,6). The outermost structure of the herpes virion is the envelope, which is derived from cell nuclear membranes and contains several viral glycoproteins. The size of mature herpesviruses ranges from 120 to 300 nm owing to differences in the size of the individual viral teguments (1). The life cycle of all herpesviruses in their natural host can be divided into lytic (resulting in the production of infectious progeny) and latent (dormant) infections. During a lytic infection the virus is replicated and newly synthesized particles are released into the surrounding medium. During a latent infection viral replication is suppressed, resulting in the formation of a quiescent state. The establishment of viral latency is a hallmark of all known herpesviruses. As described below, the sites of lytic and latent infections differ among the various members of the human herpesvirus family.", "corpus_id": 201112018, "title": "2 Overview of Herpesviruses" }
{ "abstract": "Herpes simplex virus (HSV) has been a focus of research in many laboratories during the last 30-35 years, with the majority centered on the virus' replication, molecular biology and pathogenesis. Recently, HSV has begun to receive considerable attention in the field of neuroscience, where scientists have begun to use the virus as a tool or model for several areas of investigation. These areas include the construction and development of HSV-based vectors for gene therapy and the use of HSV as a neuronal tracer, as a model for demyelinating disease and to study interactions between the nervous, immune and endocrine systems. The goal of this paper is to review these different roles for HSV in the broad field of neuroscience.", "corpus_id": 2058998, "title": "The roles of herpes simplex virus in neuroscience." }
{ "abstract": "The goal of experiments reported here was to identify the genes that encode capsid proteins VP21 and VP24 of herpes simplex virus type 1 (HSV-1). Capsids were isolated from infected cells and the proteins were separated by SDS-PAGE. N-terminal amino acid sequence analysis of partial CNBr digestion products, and of intact VP21, showed that it is encoded within the UL26 open reading frame (ORF) of HSV-1 beginning with codon 248 and probably extending to the end of the ORF (codon 635). Similar analysis of digestion products confirmed that VP24 is specified by codons 1 to 247 at the 5' end of the UL26 ORF. Each of the seven known capsid proteins has now been assigned to an ORF.", "corpus_id": 45570401, "score": -1, "title": "Herpes simplex virus type 1 capsid protein, VP21, originates within the UL26 open reading frame." }
{ "abstract": "In vivo methods used to study human body composition continue to be developed, along with more advanced reference models that utilize the information obtained with these technologies. Some methods are well established, with a strong physiological basis for their measurement, whereas others are much more indirect. This review has been structured from the methodological point of view to help the reader understand what can be examined with each technique. The associations between the various in vivo methods (densitometry, dilution, bioelectrical impedance and conductance, whole body counting, neutron activation, X-ray absorptiometry, computer tomography, and magnetic resonance imaging) and the five-level multicompartment model of body composition are described, along with the limitations and advantages of each method. This review also provides an overview of the present status of this field of research in human biology, including examples of reference body composition data for infants, children, adolescents, and adults.", "corpus_id": 6658133, "title": "Human body composition: in vivo methods." }
{ "abstract": "ABSTRACT. Total body electrical conductivity (TOBEC) has been introduced as a rapid, safe, and noninvasive method suitable for the estimation of fat-free mass. The instrument (EMME or TOBEC) operates on the principle that organisms placed in an electromagnetic field perturb the field to a degree that depends on the amount and volume of distribution of electrolytes present. A study was designed to measure body composition in infants by the TOBEC method and to compare the results with those obtained using the isotope dilution technique. Sixteen infants (age range, 2 days to 9.7 months; weight range, 2 to 8.7 kg) were enrolled. Total body water (TBW) was determined by the isotope dilution technique using H218O. There was a good correlation between the natural logarithm of the TOBEC number and TBW, with a linear correlation coefficient of 0.949. The fat-free body mass of the infants was calculated by TBW (fat-free body mass = TBW/0.82) and by the TOBEC method using the standard previously derived from mature rabbits. TBW measurements by H218O dilution appeared to overestimate fat-free mass, which was greater than TBW in five of the 16 infants. Measured by the TOBEC method, fat-free mass ranged from 51 to 91% of total body weight. The TOBEC method is highly suitable for use with human infants and appears to determine body composition as accurately as other available methods.", "corpus_id": 1622569, "title": "Total Body Electrical Conductivity Used to Determine Body Composition in Infants" }
{ "abstract": "This study investigated the basic mechanical and microscopic properties of cement produced with metakaolin and quantified the production of residual white efflorescence. Cement mortar was produced at various replacement ratios of metakaolin (0, 5, 10, 15, 20, and 25% by weight of cement) and exposed to various environments. Compressive strength and efflorescence quantity (using Matrix Laboratory (MATLAB) image analysis and the curettage method), scanning electron microscopy, and X-ray diffraction analysis were reported in this study. Specimens with metakaolin as a replacement for Portland cement present higher compressive strength and greater resistance to efflorescence; however, the addition of more than 20% metakaolin has a detrimental effect on strength and efflorescence. This may be explained by the microstructure and hydration products. The quantity of efflorescence determined using MATLAB image analysis is close to the result obtained using the curettage method. The results demonstrate that replacing Portland cement with metakaolin is most effective at a 15% replacement ratio by weight.", "corpus_id": 9518312, "score": -1, "title": "Effect of Metakaolin on Strength and Efflorescence Quantity of Cement-Based Composites" }
{ "abstract": "HIV Prevalence Determinants Among Young People in Zimbabwe: Sexual Practices Analysis, by Joyce Caroline Mphaya. Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Public Health, Walden University, August 2017. Abstract: A decline in Human Immunodeficiency Virus (HIV) prevalence rates has been observed among females ages 15 to 19 years and 20 to 24 years in Zimbabwe between 2005 and 2010. However, for males 15 to 19 years, rising trends were observed, whereas for males ages 20 to 24 years, rates fluctuated between 2005 and 2011. The purpose of this cross-sectional study was to examine relationships between sexual behaviors and practices and HIV prevalence among young males and females ages 15 to 24 years in Zimbabwe. Guided by constructs of the proximate determinants framework, extracted data from two National Demographic Health Surveys of 2005/06 and 2010/11 were analyzed using chi-square and binary logistic regression. This study revealed that sexual practices, relationship status, and education status increase the odds of being HIV positive differently among 15 to 19-year-olds and 20 to 24-year-olds based on gender and changes through time. A significant relationship existed between HIV-positive serostatus and total number of lifetime partners among females 15 to 19 years and 20 to 24 years; lack of condom use among males 20 to 24 years in 2005/06; early sexual debut and lower education status among females 20 to 24 years; and being widowed, separated, or divorced among males and females 20 to 24 years in 2010/11. The odds of being HIV positive for males ages 15 to 19 years were not predicted by sexual practice, creating a need for future study. This study can contribute to positive social change by providing information about the associations between HIV serostatus and the assessed risk factors, which may help promote awareness about HIV infection risk, thereby helping develop and implement targeted public health interventions to reduce the burden of HIV.", "corpus_id": 79699250, "title": "HIV Prevalence Determinants Among Young People in Zimbabwe: Sexual Practices Analysis" }
{ "abstract": "Objectives In 2001 the United Nations (UN) Declaration of Commitment was signed by 189 countries with a goal to reduce HIV prevalence among young people by 25% by 2010. Progress towards this target is assessed. In addition, changes in reported sexual behaviour among young people aged 15–24 years are investigated. Methods Thirty countries most affected by HIV were invited to participate in the study. Trends in HIV prevalence among young antenatal clinic (ANC) attendees were analysed using data from sites that were consistently included in surveillance between 2000 and 2008. Regression analysis was used to determine if the UN target had been reached. Trends in prevalence data from repeat national population-based surveys were also analysed. Trends in sexual behaviour were analysed using data from repeat standardised national population-based surveys between 1990 and 2008. Results Seven countries showed a statistically significant decline of 25% or more in HIV prevalence among young ANC attendees by 2008, in rural or urban areas or in both: Botswana, Côte d'Ivoire, Ethiopia, Kenya, Malawi, Namibia and Zimbabwe. Three further countries showed a significant decline in HIV prevalence among young women (Zambia) or men (South Africa, Tanzania) in national surveys. Seven other countries are on track, whereas four are unlikely to reach the goal by 2010. Nine countries did not have adequate data to assess prevalence trends. Indications suggestive of changes towards less risky sexual behaviour were observed in the majority of countries. In eight countries with significant declines in HIV prevalence, significant changes were also observed in sexual behaviour in either men or women for at least two of the three sexual behaviour indicators. Conclusions Declines in HIV prevalence among young people were documented in the majority of countries with adequate data and in most cases were accompanied by changes in sexual behaviour. Further data, research and more rigorous analysis at country level are needed to understand the associations between programmatic efforts, reported behavioural changes and changes in prevalence and incidence of HIV.", "corpus_id": 2812738, "title": "Trends in HIV prevalence and sexual behaviour among young people aged 15–24 years in countries most affected by HIV" }
{ "abstract": "Injection of epinephrine or acetylcholine in atropinized cats, faradic stimulation of the stellate ganglia in cats with the adrenals tied, and faradic stimulation of the splanchnic nerves were found to be followed both by an accumulation of epinephrine and epinephrine-like catechol compounds (probably sympathin) in the heart muscle, and by cardiac acceleration. Simultaneous intravenous administration of nitroglycerine, papaverine, priscol and dibenamine hydrochloride abolished those types of adreno-sympathetic cardiac acceleration against which they were tested, either partially or completely. Nitroglycerine and priscol interfered only to a moderate extent with the accumulation of epinephrine-like material in the myocardium; papaverine inhibited it markedly; dibenamine hydrochloride prevented it completely during stimulation of the stellate ganglia. Thus, the mode of antagonistic action against adreno-sympathetic cardiac acceleration appears to be of a different nature in these drugs. The anti-epinephrine-sympathin effects of nitroglycerine and papaverine regarding myocardial function suggest that the therapeutic action of these drugs in angina pectoris is not due to coronary dilatation alone but also to a specific counteraction against the myocardial metabolic effects of an excess influx of adreno-sympathogenic epinephrine in the heart.", "corpus_id": 34486557, "score": -1, "title": "Drug action upon myocardial epinephrine-sympathin concentration and heart rate (nitroglycerine, papaverine, priscol, dibenamine hydrochloride)." }
{ "abstract": "Solving hard exploration tasks with sparse rewards is notoriously challenging in reinforcement learning (RL), which needs to address two key issues simultaneously: exploiting past successful experiences and exploring the unknown environment. Many prior works take expert demonstrations as successful experiences and learn to imitate them directly. However, these demonstrations are often not available in practice. Recently, curiosity-driven RL methods provide intrinsic rewards, encouraging the agent to explore states with high novelty. Nonetheless, they lack a mechanism for leveraging past good experiences effectively. This work presents a Pseudo Value Network Distillation (PVND) framework to balance the RL agent's exploitative and exploratory behaviors effectively and automatically. In particular, PVND learns to set high exploitation bonuses to the critical states in rewarded trajectories from past experiences and high exploration bonuses to the novel states that agents rarely visit during exploration. We theoretically demonstrate that PVND gives larger positive intrinsic rewards to more critical states. Furthermore, PVND automatically finds meaningful and critical hierarchical sub-tasks for agents to accomplish the final goal progressively. Competitive results in several hard exploration sparse reward problems have verified its effectiveness and efficiency.", "corpus_id": 260386482, "title": "Pseudo Value Network Distillation for High-Performance Exploration" }
{ "abstract": "Reinforcement learning algorithms struggle when the reward signal is very sparse. In these cases, naive random exploration methods essentially rely on a random walk to stumble onto a rewarding state. Recent works utilize intrinsic motivation to guide the exploration via generative models, predictive forward models, or discriminative modeling of novelty. We propose EMI, which is an exploration method that constructs embedding representation of states and actions that does not rely on generative decoding of the full observation but extracts predictive signals that can be used to guide exploration based on forward prediction in the representation space. Our experiments show competitive results on challenging locomotion tasks with continuous control and on image-based exploration tasks with discrete actions on Atari. The source code is available at this https URL .", "corpus_id": 59222749, "title": "EMI: Exploration with Mutual Information" }
{ "abstract": "One of the most important issues in machine learning is whether one can improve the performance of a supervised learning algorithm by including unlabeled data. Methods that use both labeled and unlabeled data are generally referred to as semi-supervised learning. Although a number of such methods are proposed, at the current stage, we still don't have a complete understanding of their effectiveness. This paper investigates a closely related problem, which leads to a novel approach to semi-supervised learning. Specifically we consider learning predictive structures on hypothesis spaces (that is, what kind of classifiers have good predictive power) from multiple learning tasks. We present a general framework in which the structural learning problem can be formulated and analyzed theoretically, and relate it to learning with unlabeled data. Under this framework, algorithms for structural learning will be proposed, and computational issues will be investigated. Experiments will be given to demonstrate the effectiveness of the proposed algorithms in the semi-supervised learning setting.", "corpus_id": 13650160, "score": -1, "title": "A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data" }
{ "abstract": "Pressurized artificial muscles are reviewed. These actuators consist of stiff reinforcing fibers surrounding an elastomeric bladder and operate using a pressurized internal fluid. The pressurized artificial muscles, known as McKibben actuators or flexible matrix composite actuators, can be applied to a wide array of applications, including prosthetics/orthotics, robots, morphing wing technologies, and variable stiffness structures. Analytical models for predicting the response behavior have used both virtual work methods and continuum mechanics. Various nonlinear control algorithms have been developed, including sliding mode control (SMC), adaptive control, neural networks, etc. In addition to traditional fluid-driving methods, innovative techniques such as chemical and electrical driving techniques are reviewed. With improved manufacturing techniques, the operational life of pressurized artificial muscles has been significantly extended, thus making them suitable for a vast range of potential applications.", "corpus_id": 111123385, "title": "Pressurized artificial muscles" }
{ "abstract": "Kikuuwe and Fujimoto have introduced proxy-based sliding mode control. It combines responsive and accurate tracking during normal operation with smooth, slow recovery from large position errors that can sometimes occur after abnormal events. The method can be seen as an extension to both conventional PID control and sliding mode control. In this paper, proxy-based sliding mode control is used to control a 2-DOF planar manipulator actuated by pleated pneumatic artificial muscles (PPAMs). The principal advantage of this control method is increased safety for people interacting with the manipulator. Two different forms of proxy-based sliding mode control were implemented on the system, and their performance was experimentally evaluated. Both forms performed very well with respect to safety. Good tracking was also obtained, especially with the second form.", "corpus_id": 2119080, "title": "Proxy-Based Sliding Mode Control of a Manipulator Actuated by Pleated Pneumatic Artificial Muscles" }
{ "abstract": "ABSTRACT We report the 3.78-Mbp high-quality draft assembly of the genome from a clinical isolate of Acinetobacter nosocomialis called strain M2 (previously known as Acinetobacter baumannii strain M2).", "corpus_id": 656126, "score": -1, "title": "Draft Genome Sequence of the Clinical Isolate Acinetobacter nosocomialis Strain M2" }
{ "abstract": "Ground water pollution and indiscriminate use of antibiotics in food animals and in treatment predispose consumers to risks of antibiotic resistance. The aim of this research work is to determine the antibacterial susceptibility profile of Escherichia coli O157:H7 from shallow wells in some parts of Katsina State, Nigeria. The presence or absence of well covers and distance from pit latrines were observed at collection points. Most of the wells were uncovered or partially covered with old rusted roofing sheets; the distance of wells from pit latrines ranged from 3-9 m, all below the limit of 30 m set by the WHO and the Nigerian Environmental Protection Agency and 15.24 m (50 ft) set by the United States Environmental Protection Agency (USEPA). The organism was isolated by cultural method using selective media, Gram stained, and subjected to a series of biochemical tests; the isolates were further confirmed serologically using a latex agglutination kit (Oxoid, UK). Antibiotic susceptibility testing was performed using commercial gram-negative disks, and the multidrug resistance pattern and multiple antibiotic resistance indices of the isolates were also determined. Out of the 300 well water samples analysed, 246 wells were positive for presumptive E. coli and 7 were serologically confirmed to be E. coli O157:H7. All the isolates were resistant to multiple antibiotics; the highest resistance was to tetracycline and Augmentin. However, 100% of the isolates were sensitive to fluoroquinolones and nitrofurantoin. Absence of pipe-borne water, poor sanitary habits, and indiscriminate use of drugs predispose the inhabitants of the study area to the dangers of multidrug-resistant organisms. Provision of adequate potable water, improved sanitation, and restricting the illegal use of drugs in food animals and in the treatment of infections are recommended to overcome the problems of multidrug-resistant organisms. Citation: Fatima M, Mukhtar GL, Inabo HI, Jatau ED, Shitu AM (2016). Antibacterial susceptibility profile of Escherichia coli O157:H7 from shallow wells in some parts of Katsina State, Nigeria. Biosciences Research in Today’s World 2: 1-7.", "corpus_id": 55943370, "title": "Antibacterial susceptibility profile of Escherichia coli O157:H7 from shallow wells in some parts of Katsina State, Nigeria" }
{ "abstract": "In 1992, a large outbreak of bloody diarrhea caused by Escherichia coli O157 infections occurred in southern Africa. In Swaziland, 40,912 physician visits for diarrhea in persons ages >5 years were reported during October through November 1992. This was a sevenfold increase over the same period during 1990-91. The attack rate was 42% among 778 residents we surveyed. Female gender and consuming beef and untreated water were significant risks for illness. E. coli O157:NM was recovered from seven affected foci in Swaziland and South Africa; 27 of 31 patient and environmental isolates had indistinguishable pulsed-field gel electrophoresis patterns. Compared with previous years, a fivefold increase in cattle deaths occurred in October 1992. The first heavy rains fell that same month (36 mm), following 3 months of drought. Drought, carriage of E. coli O157 by cattle, and heavy rains with contamination of surface water appear to be important factors contributing to this outbreak.", "corpus_id": 1916655, "title": "Factors contributing to the emergence of Escherichia coli O157 in Africa." }
{ "abstract": "Moral distress is a concept used to date in clinical literature to describe the experience of staff in circumstances in which they are prevented from delivering the kind of bedside care they believe is expected of them, professionally and ethically. Our research objective was to determine if this concept has relevance in terms of key health care managerial functions, such as priority setting and resource allocation. We conducted interviews and focus groups with mid- and senior-level managers in two British Columbia (Canada) health authorities. Transcripts were analyzed qualitatively using constant comparison to identify key themes related to moral distress. Both mid- and senior-level managers appear to experience moral distress, with both similarities and differences in how their experiences manifest. Several examples of this concept were identified including the obligation to communicate or ‘sell’ organizational decisions or policies with which a manager personally may disagree and situations where scarce resources compel managers to place staff in situations where they meet with predictable and potentially avoidable risks. Given that moral distress appears to be a relevant issue for at least some health care managers, further research is warranted into its exact nature, prevalence, and possible organizational and personal responses.", "corpus_id": 43584650, "score": -1, "title": "Moral Distress Among Health System Managers: Exploratory Research in Two British Columbia Health Authorities" }
{ "abstract": "Bolometer arrays on large antennas at high, dry sites have unveiled a dusty population of massive, luminous galaxies - submillimetre galaxies, or SMGs - which make a significant contribution to the star formation rate density at z > 1. The most crucial piece of information required to derive the history of obscured star formation is the redshift distribution of this galaxy population, N(z), which breaks degeneracies in the models and allows the mass and dynamics of the galaxies to be explored via high-resolution three-dimensional imaging in CO and by determining their level of clustering. Many SMGs are extremely faint, optically; some have no plausible counterparts, even in the infrared (IR), making the determination of an unbiased N(z) very difficult. The arrival of the Herschel Space Observatory and next-generation ground-based submm cameras will likely exacerbate this so-called redshift deadlock. Here, we report the first test of a new method for determining redshifts, based on the observed dependence of maser and IR luminosities. We have searched the dusty, lensed, hyperluminous quasar, APM 08279+5255, for the 1612-, 1665- and 1667-MHz hydroxyl lines as well as the 22-GHz water line. At z = 3.9, these are shifted to 329, 340 and 4538 MHz. Our relatively shallow test data reveal no convincing maser activity, but we set a meaningful constraint on the OH maser luminosity and we approach the expected thermal noise levels, meaning progress is possible. As an aside, we present deep new submm and radio imaging of this field. Using a simple shift-and-add technique, we uncover a new submm galaxy, conceivably at the redshift of APM 08279+5255.", "corpus_id": 17265092, "title": "Searching for a gigamaser in APM 08279+5255, and other short stories" }
{ "abstract": "Submillimeter (and in some cases millimeter) wavelength continuum measurements are presented for a sample of 40 active galactic nuclei (probably all quasars) lensed by foreground galaxies. The object of this study is to use the lensing boost, anywhere from ~3 to 20 times, to detect dust emission from more typical active galactic nuclei (AGNs) than the extremely luminous ones currently accessible without lensing. The sources are a mix of radio-loud and radio-quiet quasars, and after correction for synchrotron radiation (in the few cases where necessary), 23 of the 40 (58%) are detected in dust emission at 850 μm; 11 are also detected at 450 μm. Dust luminosities and masses are derived after correction for lensing magnification, and luminosities are plotted against redshift from z = 1 to z = 4.4, the redshift range of the sample. The main conclusions are (1) monochromatic submillimeter luminosities of quasars are, on average, only a few times greater than those of local IRAS galaxies; (2) radio-quiet and radio-loud quasars do not differ significantly in their dust luminosity; (3) mean dust luminosities of quasars and radio galaxies over the same redshift range are comparable; and (4) quasars and radio galaxies alike show evidence for more luminous and massive dust sources toward higher redshift, consistent with an early epoch of formation and possibly indicating that the percentage of obscured AGNs increases with redshift.", "corpus_id": 5989818, "title": "Accepted for publication in the Astrophysical Journal Preprint typeset using L ATEX style emulateapj v. 19/02/01 A SUBMILLIMETER SURVEY OF GRAVITATIONALLY LENSED QUASARS" }
{ "abstract": "A new magnetospheric phenomenon called a cusp energetic particle (CEP) event has been discovered by the POLAR spacecraft in 1996. The events were detected in the dayside polar cusp near the apogee of POLAR and could last for hours, in which the measured helium ions had energies up to 8 MeV. All of these events were associated with a dramatic decrease in the magnitude of the local magnetic field. A fundamental question is where do the cusp MeV ions come from? To answer this question, we have compared the ion flux in the September 18, 1996 CEP events with that in the upstream from the bow shock and found that bow shock acceleration cannot explain the measured ion flux in the CEP events. We have further determined the parallel power spectra of the local magnetic field turbulence calculated over the CEP event periods for fluctuations in the ultra-low frequency (ULF) ranges, corresponding to periods of about 0.33-500s. It is found that the mirror parameter, defined as the ratio of the square root of the integration of the parallel turbulent spectral component over the ULF ranges to the local mean field, is correlated with the intensity of the MeV helium flux. These new results represent a discovery that the high-altitude dayside cusp is a new acceleration region of the magnetosphere.", "corpus_id": 118302024, "score": -1, "title": "CUSP: A New Acceleration Region of the Magnetosphere" }
{ "abstract": "Nuclear factor kappa B (NFκB) is an inflammatory transcription factor that plays an important role in the host immune response to infection. The potential for chlamydiae to activate NFκB has been an area of interest, however most work has focused on chlamydiae impacting human health. Given that inflammation characteristic of chlamydial infection may be associated with severe disease outcomes or contribute to poor overall fitness in farmed animals, we evaluated the ability of porcine chlamydiae to induce NFκB activation in vitro. C. pecorum infection induced both NFκB nuclear translocation and activation at 2 hours post infection (hpi), an effect strongly enhanced by suppression of host de novo protein synthesis. C. suis and C. trachomatis showed less capacity for NFκB activation compared to C. pecorum, suggesting a species-specific variation in NFκB activation. At 24 hpi, C. pecorum induced significant NFκB activation, an effect not abolished by penicillin (beta lactam)-induced chlamydial stress. C. pecorum-dependent secretion of interleukin 6 was also detected in the culture supernatant of infected cells at 24 hpi, and this effect, too, was unchanged by penicillin-induced chlamydial stress. Taken together, these results suggest that NFκB participates in the early inflammatory response to C. pecorum and that stressed chlamydiae can promote inflammation.", "corpus_id": 3608602, "title": "Productive and Penicillin-Stressed Chlamydia pecorum Infection Induces Nuclear Factor Kappa B Activation and Interleukin-6 Secretion In Vitro" }
{ "abstract": "Chlamydia pecorum causes asymptomatic infection and pathology in ruminants, pigs, and koalas. We characterized the antichlamydial effect of the beta lactam penicillin G on Chlamydia pecorum strain 1710S (porcine abortion isolate). Penicillin-exposed and mock-exposed infected host cells showed equivalent inclusions numbers. Penicillin-exposed inclusions contained aberrant bacterial forms and exhibited reduced infectivity, while mock-exposed inclusions contained normal bacterial forms and exhibited robust infectivity. Infectious bacteria production increased upon discontinuation of penicillin exposure, compared to continued exposure. Chlamydia-induced cell death occurred in mock-exposed controls; cell survival was improved in penicillin-exposed infected groups. Similar results were obtained both in the presence and in the absence of the eukaryotic protein translation inhibitor cycloheximide and at different times of initiation of penicillin exposure. These data demonstrate that penicillin G induces the chlamydial stress response (persistence) and is not bactericidal, for this chlamydial species/strain in vitro, regardless of host cell de novo protein synthesis.", "corpus_id": 848700, "title": "Penicillin G-Induced Chlamydial Stress Response in a Porcine Strain of Chlamydia pecorum" }
{ "abstract": "The Earth is about 4.6 billion years old. In terms of its thermal regime, the planet is in the process of cooling. However, to have reached its current state, the Earth and the other objects making up the Solar System went through a number of stages such as the accretion of the planet from dust of the solar nebula, the formation of the magma-ocean, stratification of matter by density, solidification of the magma-ocean, formation of the lithosphere which is taking place today, periods of increased volcanic and metamorphic activity, numerous tectonic processes with global and regional significance (obduction, subduction, orogeny, etc.), heat production by short-lived and long-lived radioisotopes, and numerous other features and processes related to thermodynamic and temperature conditions. In this Chapter are analyzed such fundamental phenomena as sources of the thermal energy in the Earth's interior, geothermal gradient, density of heat flow, heat flow and geological age, mantle heat flow, temperature distribution inside the Earth and other Earth-thermal interactions.", "corpus_id": 126999098, "score": -1, "title": "The Thermal Field of the Earth" }
{ "abstract": "PURPOSE\nThe purpose of this study was to compare the safety and efficacy of DPP-4 inhibitors versus sulfonylurea as adjunctive second-line therapy in patients with type 2 diabetes mellitus, inadequately controlled with metformin mono-therapy.\n\n\nSOURCES\nA systematic review of published randomized controlled trials (RCTs) was performed in MEDLINE, EMBASE, PubMed and Cochrane library. Two reviewers independently selected the studies, extracted the data and assessed the risk of bias. Clinical outcomes were cardiovascular events, HbA1c % change from baseline, body weight and hypoglycemic event rate. A direct comparison meta-analysis using a random effect model was conducted to calculate mean differences in treatment effects and risk ratio between DPP-4 inhibitors and sulfonylurea.\n\n\nPRINCIPLE FINDINGS\nTen RCTs on adult patients with type 2 diabetes and inadequate glycemic control were included in the final analysis. DPP-4 inhibitors compared to sulfonylureas produced a non-significant difference in HbA1c% change in 10,139 subjects, whereas a significant decrease in the rate of hypoglycemic events was observed in favor of DPP-4 inhibitors (RR= 0.12; P<0.00001) involving 10,616 patients, with at least one hypoglycemic event during the follow-up period (12-104 weeks). Body weight decreased by 2.2 kg (95% CI 1.7-2.7) with DPP-4 inhibitors, compared with sulfonylureas. There were insufficient data to assess a difference in the risk for cardiovascular events.\n\n\nCONCLUSION\nThe review shows that, in terms of clinical efficacy, there is no significant difference between DPP4-inhibitors and sulfonylurea when either is added to metformin mono-therapy. 
In contrast, the safety assessment analysis showed a significant decrease in the risk of hypoglycemic events in patients using DPP4-inhibitors.", "corpus_id": 45519274, "title": "Safety and efficacy of dipeptidyl peptidase-4 inhibitors vs sulfonylurea in metformin-based combination therapy for type 2 diabetes mellitus: Systematic review and meta-analysis." }
{ "abstract": "Our review analyses the studies that have specifically compared the association iDPP4/metformin with glimepiride/metformin, both in second line pharmacotherapy of type 2 diabetes mellitus (DM2).", "corpus_id": 1445292, "title": "Effectiveness and safety of glimepiride and iDPP4, associated with metformin in second line pharmacotherapy of type 2 diabetes mellitus: systematic review and meta‐analysis" }
{ "abstract": "Classic neurosurgical teaching holds that once the Rolandic fissure (Rf) has been located, there are distinct differentiated primary motor and sensory functional units confined within a narrow cortical strip: Brodmann's Areas 4 and 6 for primary motor units in front of the Rf and 3, 1, and 2 for sensory units behind the Rf. To test this assumption, we examined in detail the records of cortical mapping done by electrical stimulation of the cerebral cortex via implanted subdural electrode grids in 35 patients with seizure disorders. Of 1381 stimulations of the electrode sites, 346 (25.1%) produced primary motor or motor-arrest and sensory responses in contralateral body parts: 56.8% were primary motor responses; 16.2% were motor-arrest; 22.5% were sensory; and the remaining 4.5% were mixed motor and sensory responses. Two-thirds (65.9%) of the primary motor responses were located within 10 mm of the Rf, and the remaining one-third (34.1%) were more than 10 mm anterior to the Rf or were posterior to the Rf. Furthermore, in the patient group with brain lesions, fewer than one-third (28.1%) of the responses were within the 10-mm narrow anterior strip. Our study reconfirmed that a significant number--at least one-third--of motor responses are distributed outside the classic narrow cortical strip. In patients with brain lesions, the motor representation is further displaced outside the narrow strip. This finding indicates that primary motor cortex may extend beyond the gyrus immediately anterior to the Rf.", "corpus_id": 22928353, "score": -1, "title": "Motor and sensory cortex in humans: topography studied with chronic subdural stimulation." }
{ "abstract": "ABSTRACT Application of sustainable computing based advanced intelligent power electronic technology for smart grid systems is presented in this paper. Virtualization technology is an effective way to solve the low efficiency of the energy consumption. The basic idea is that the physical resources of the data center are provided to the application for deployment in the way of virtual machines, so multiple virtual machines can be configured on a physical server, so as to improve the server utilization rate and energy efficiency. With the combination of the idea, this paper has the three major novelties. (1) According to the sparse storage technology of the nodal-branch association matrix model of the distribution network, the observability of the nodes covered by harmonic measurement points can be analyzed. (2) Dynamic droop control method considering generation cost is adopted to realize reasonable distribution of power fluctuation, avoid unbalance of the power distribution caused by power overrun of tie line or uncoordinated control coefficient after interconnection, and enhance system operation stability. (3) Sustainable computing model is revised to consider the complex scenarios. The experiment results show that the proposed system can undertake the smart grid system well and the robustness is verified.", "corpus_id": 86402327, "title": "Application of sustainable computing based advanced intelligent power electronic technology for smart grid systems" }
{ "abstract": "In this paper, in order to research the dynamic characteristics of soft soil under metro vibration loads, the mathematical expression of metro vibration loads is obtained. According to the loading form, drainage requirement and vibration frequencies of the actual situation, the corresponding experiment is conducted through indoor dynamic triaxial equipment. Then, the dynamic characteristics of experimental results are analyzed. An empirical formula is proposed to compute the dynamic characteristics of soft soil. Then, the computational results obtained by empirical formula are compared with those of the experimental. They are consistent with each other, and the results show that empirical formula is reliable to compute the dynamic characteristics of soft soil. Then, based on the verified empirical formula, the dynamic characteristics such as the vertical strain and pore water pressure with different CSR are also computed and compared. With the increase in CSR, the dynamic characteristics will be larger when the other parameters are consistent. However, empirical formula can only predict the dynamic characteristics of the simple model. In order to realize virtual reality of the dynamic characteristics of the complex model more accurately, the BP neural network and finite element are adopted, respectively. Then, the computational results are also compared with those of the experimental to verify their reliabilities. In the future, the BP neural network and finite element method can be also used to realize virtual reality of the dynamic characteristics of the more complex model.", "corpus_id": 3719753, "title": "Virtual reality research of the dynamic characteristics of soft soil under metro vibration loads based on BP neural networks" }
{ "abstract": "The paper explores the use of discrete element simulations to model granular soil response to monotonic and cyclic loading. Two- and three-dimensional random arrays of quartz spheres of various diameters are used that crudely represent rounded uniform quartz sand. Computer program CONBAL, developed by the writers from existing code TRUBAL, is used. All simulated granular specimens are first isotropically consolidated, and are then subjected to monotonic drained loading or constant volume (undrained) cyclic “simple shear” simulations. The monotonic results exhibit similar pressure-dependent shear strength and dilation behavior to that found in actual sands, but with the simulated specimens being stiffer and failing at a smaller strain. The simulated cyclic loading results closely resemble the “pore water pressure” buildup to initial liquefaction, hysteresis loop formation and degradation, “banana loop” shapes, and lines of phase transformation observed in sand experiments. The effect of intergranular friction coefficient μ\\N\\I\\ds\\N = tan φ\\N\\I\\du\\N and particle rotation on the results of simulated material is also studied, and they are found to be very important.", "corpus_id": 128456706, "score": -1, "title": "NUMERICAL SIMULATIONS OF MONOTONIC AND CYCLIC LOADING OF GRANULAR SOIL" }
{ "abstract": "In this brief, a pMOS pass gate (PPG) local bitline static random access memory (LB SRAM) architecture is proposed to reduce the read delay and resolve the half-select issue with a small area overhead. Virtual <inline-formula> <tex-math notation=\"LaTeX\">$V_{\\mathrm {SS}}$ </tex-math></inline-formula> write assist is included in the architecture to improve write ability. In 22-nm fin-shaped FET (FinFET) technology, the proposed PPG LB architecture achieves an improved read delay and reduced total operation energy by 44% and 65%, respectively, at 0.4 V, compared to the full-swing LB (FSLB) SRAM architecture.", "corpus_id": 213338521, "title": "pMOS Pass Gate Local Bitline SRAM Architecture With Virtual $V_{\\mathrm{SS}}$ for Near-Threshold Operation" }
{ "abstract": "The previously proposed average-8T static random access memory (SRAM) has a competitive area and does not require a write-back scheme. In the case of an average-8T SRAM architecture, a full-swing local bitline (BL) that is connected to the gate of the read buffer can be achieved with a boosted wordline (WL) voltage. However, in the case of an average-8T SRAM based on an advanced technology, such as a 22-nm FinFET technology, where the variation in threshold voltage is large, the boosted WL voltage cannot be used, because it degrades the read stability of the SRAM. Thus, a full-swing local BL cannot be achieved, and the gate of the read buffer cannot be driven by the full supply voltage ( $V_{\\textrm {DD}}$ ), resulting in a considerably large read delay. To overcome the above disadvantage, in this paper, a differential SRAM architecture with a full-swing local BL is proposed. In the proposed SRAM architecture, full swing of the local BL is ensured by the use of cross-coupled pMOSs, and the gate of the read buffer is driven by a full $V_{\\textrm {DD}}$ , without the need for the boosted WL voltage. Various configurations of the proposed SRAM architecture, which stores multiple bits, are analyzed in terms of the minimum operating voltage and area per bit. The proposed SRAM that stores four bits in one block can achieve a minimum voltage of 0.42 V and a read delay that is 62.6 times lesser than that of the average-8T SRAM based on the 22-nm FinFET technology.", "corpus_id": 13719983, "title": "Full-Swing Local Bitline SRAM Architecture Based on the 22-nm FinFET Technology for Low-Voltage Operation" }
{ "abstract": "We present, in this paper, a new 10T static random access memory cell having single ended decoupled read-bitline (RBL) with a 4T read port for low power operation and leakage reduction. The RBL is precharged at half the cell’s supply voltage, and is allowed to charge and discharge according to the stored data bit. An inverter, driven by the complementary data node (QB), connects the RBL to the virtual power rails through a transmission gate during the read operation. RBL increases toward the $V_{\\text {DD}}$ level for a read-1, and discharges toward the ground level for a read-0. Virtual power rails have the same value of the RBL precharging level during the write and the hold mode, and are connected to true supply levels only during the read operation. Dynamic control of virtual rails substantially reduces the RBL leakage. The proposed 10T cell in a commercial 65 nm technology is $2.47\\times $ the size of 6T with $\\beta = 2$ , provides $2.3\\times $ read static noise margin, and reduces the read power dissipation by 50% than that of 6T. The value of RBL leakage is reduced by more than 3 orders of magnitude and $({I_{\\mathrm{\\scriptscriptstyle ON}}}/{I_{\\mathrm{\\scriptscriptstyle OFF}}})$ is greatly improved compared with the 6T BL leakage. The overall leakage characteristics of 6T and 10T are similar, and competitive performance is achieved.", "corpus_id": 20305545, "score": -1, "title": "10T SRAM Using Half- $V_{\\text {DD}}$ Precharge and Row-Wise Dynamically Powered Read Port for Low Switching Power and Ultralow RBL Leakage" }
{ "abstract": "In this paper, we propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network. The frames in the first layer are compressed by an image compression method with the highest quality. Using these frames as references, we propose the Bi-Directional Deep Compression (BDDC) network to compress the second layer with relatively high quality. Then, the third layer frames are compressed with the lowest quality, by the proposed Single Motion Deep Compression (SMDC) network, which adopts a single motion map to estimate the motions of multiple frames, thus saving bits for motion information. In our deep decoder, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, which takes both compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame information for enhancement. In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at encoder and decoder sides, respectively. Finally, the experiments validate that our HLVC approach advances the state-of-the-art of deep video compression methods, and outperforms the \"Low-Delay P (LDP) very fast\" mode of x265 in terms of both PSNR and MS-SSIM. The project page is at https://github.com/RenYang-home/HLVC.", "corpus_id": 211990027, "title": "Learning for Video Compression With Hierarchical Quality and Recurrent Enhancement" }
{ "abstract": "When lossy video compression algorithms are applied, compression artifacts often appear in videos, making decoded videos unpleasant for human visual systems. In this paper, we model the video artifact reduction task as a Kalman filtering procedure and restore decoded frames through a deep Kalman filtering network. Different from the existing works using the noisy previous decoded frames as temporal information in the restoration problem, we utilize the less noisy previous restored frame and build a recursive filtering scheme based on the Kalman model. This strategy can provide more accurate and consistent temporal information, which produces higher quality restoration results. In addition, the strong prior information of prediction residual is also exploited for restoration through a well designed neural network. These two components are combined under the Kalman framework and optimized through the deep Kalman filtering network. Our approach can well bridge the gap between the model-based methods and learning-based methods by integrating the recursive nature of the Kalman model and highly non-linear transformation ability of deep neural network. Experimental results on the benchmark dataset demonstrate the effectiveness of our proposed method.", "corpus_id": 52967320, "title": "Deep Kalman Filtering Network for Video Compression Artifact Reduction" }
{ "abstract": "In video super-resolution, the spatio-temporal coherence between, and among the frames must be exploited appropriately for accurate prediction of the high resolution frames. Although 2D convolutional neural networks (CNNs) are powerful in modelling images, 3D-CNNs are more suitable for spatio-temporal feature extraction as they can preserve temporal information. To this end, we propose an effective 3D-CNN for video super-resolution, called the 3DSRnet that does not require motion alignment as preprocessing. Our 3DSRnet maintains the temporal depth of spatio-temporal feature maps to maximally capture the temporally nonlinear characteristics between low and high resolution frames, and adopts residual learning in conjunction with the sub-pixel outputs. It outperforms the most state-of-the-art method with average 0.45 and 0.36 dB higher in PSNR for scales 3 and 4, respectively, in the Vidset4 benchmark. Our 3DSRnet first deals with the performance drop due to scene change, which is important in practice but has not been previously considered.", "corpus_id": 56657918, "score": -1, "title": "3DSRnet: Video Super-resolution using 3D Convolutional Neural Networks" }
{ "abstract": "Development of a grid-based clustering mechanism to improve LEACH performance in the Wireless Sensor Network environmentLow Energy Adaptive Clustering Hierarchy (LEACH) merupakan algoritma routing pada Wireless Sensor Network (WSN) berbasis cluster. LEACH memilih sebuah node sebagai cluster head (CH) yang tugasnya untuk melakukan komunikasi dengan sink maupun guna mengumpulkan data dari member node. Persebaran CH pada LEACH yang dikatakan acak, kadang mengalami masalah mengingat rumus probabilitas pada tiap round. Hal ini akan menyebabkan CH yang terpilih bisa berada di tepi area, juga terjadinya pemborosan energi karena jalur yang terbentuk akan menjadi panjang. Oleh karena itu, kami ingin mengembangkan routing protocol G-LEACH menggunakan teknik merge CH dalam suatu area (grid) disertai beberapa parameter yang relevan, seperti posisi node, node dengan sisa energi terbesar, dan jarak yang dihitung dalam tiga jarak yaitu jarak node menuju cluster center, jarak node menuju merge CH, dan jarak merge CH menuju sink. Hasil pengujian menunjukan bahwa dengan menggabungkan cluster (merge CH) pada transmisi data menuju sink pada protokol G-LEACH dapat menghasilkan masa hidup jaringan yang lebih lama pada seluruh operasi node, energi yang dibutuhkan pada semua node lebih rendah, dan lebih banyak paket data yang dikirim dan diterima oleh sink. Low Energy Adaptive Clustering Hierarchy (LEACH) is a routing algorithm in a cluster-based Wireless Sensor Network (WSN). LEACH selects a node as a cluster head (CH) whose responsibility is for communicating with sinks and collect data from the node members. The distribution of CH on LEACH, which is basically random, sometimes has a problem in remembering the probability formula on each round. This may make the selected CH on the edge of the area as well as generate energy waste because the pathway formed will be lengthy. 
Therefore, we would like to develop the G-LEACH routing protocol using a merge CH technique in one area (grid) with several relevant parameters, such as the position of the node, the node with the largest remaining energy, and the distance calculated in three distances: the distance of the node to the clustercenter, the distance of the node to the merge CH, and the distance of the merge CH to the sink. The test result showed that combining clusters (merge CH) in the data transmission to the sink in the G-LEACH protocol could produce a longer network life on all node operations, lower energy required for all nodes, and more data package sent and received by the sink.", "corpus_id": 203154816, "title": "Pengembangan mekanisme grid based clustering untuk peningkatan kinerja LEACH pada lingkungan Wireless Sensor Network" }
{ "abstract": null, "corpus_id": 16494711, "title": "A Survey on LEACH-Based Energy Aware Protocols for Wireless Sensor Networks" }
{ "abstract": "Wireless sensor network is a wireless network consisting of independent sensor, communicating with each other in distributed fashion to monitor the environment. Sensors are usually attached to microcontroller and are powered by battery. The goal of Wireless sensor network is to have long life time and high reliability with maximum coverage. Routing techniques are the most important issue for networks where resources are limited. LEACH is one of the first hierarchical routing approaches for sensor networks. Most of the clustering algorithms are derived from this algorithm. In this paper we propose an improvement on the LEACH Protocol. In our proposed algorithm, every cluster divided into 7 subsections that are called cells. Also every cell has a cell-head. Cell-heads communicate with cluster-heads directly. They aggregate their cell information and therefore they prevent sensors from communicating. In addition, we have made some changes in computation of the threshold value for a cluster-head selection formula. Something that was mentioned, cause efficiency reduce energy consumption and extend the network lifetime. We evaluate LEACH, LEACH-C and Cell-LEACH through extensive simulations using JSIM simulator which shows that Cell-LEACH performs better than LEACH and LEACH-C protocols.", "corpus_id": 30441853, "score": -1, "title": "An improvement on LEACH protocol (Cell-LEACH)" }
{ "abstract": "Evidence‐based therapy that target hyperlipidemia, hypertension, smoking cessation, and weight loss have demonstrated significant benefits in reducing cardiovascular risks and related events. Although the benefit of intensively lowering blood glucose is unclear, newer antidiabetic drugs (glucagon‐like peptide‐1 receptor agonists and sodium‐glucose cotransporter‐2 inhibitors) have shown cardiovascular benefits in addition to their antihyperglycemic effect. Yet, studies suggest that recent use of evidence‐based therapy and management of cardiovascular risk among individuals with type 2 diabetes (T2D) and cardiovascular disease (CVD) remains largely suboptimal. The following narrative review first identifies barriers to translating research evidence to clinical practice at the levels of provider, health system, patient, and cost. Then it synthesizes previous implementation strategies that addressed multifaceted barriers and attempted to improve care for patients with T2D and CVD. In conclusion, team‐based care coordination, reminding systems in combination to pharmacist consultation and patient education, provider education compatible with clinical workflow, and coupled incentives between providers and patients appeared to be effective in reducing cardiovascular risks for patients with T2D and CVD, though the scalability and long‐term clinical effect of these strategies as well as the possibility of interventions involving payers and health systems remain uncertain.", "corpus_id": 201748947, "title": "Opportunities for improving use of evidence‐based therapy in patients with type 2 diabetes and cardiovascular disease" }
{ "abstract": "BACKGROUND\nCigarette smoking is a well-known cardiovascular risk factor and its impact on cardiovascular disease is even greater among people with diabetes. The aim of this study is to compare the prevalence and determinants of smoking among US adults with diabetes or impaired fasting glucose, and those without diabetes or impaired fasting glucose.\n\n\nMETHODS\nWe analyzed data from the National Health and Nutrition Examination Surveys (1999-2008). Age-adjusted prevalence of smoking was calculated, and we used logistic regression models to identify the correlates of smoking among people with diabetes, impaired fasting glucose, and normal glucose metabolism.\n\n\nRESULTS\nAmong 24,649 participants ≥20 years old, age-adjusted smoking prevalence was 25.7% in 3111 individuals with diabetes, 24.2% in 3557 individuals with impaired fasting glucose, and 24.1% in 17,981 individuals without diabetes. Smoking prevalence did not differ across groups or change over time (1999-2008) in any group. Younger age, less education, more alcohol consumption, less physical activity, and major depression symptoms were associated with smoking in people with diabetes, impaired fasting glucose, and normal glucose metabolism.\n\n\nCONCLUSIONS\nIn the US, smoking prevalence among people with diabetes and impaired fasting glucose has not changed and is comparable with the nondiabetic population. Tobacco control efforts should be intensified among this population at high risk for complications and mortality.", "corpus_id": 305154, "title": "Smoking behavior among US adults with diabetes or impaired fasting glucose." }
{ "abstract": "Among end‐stage renal disease patients maintained by hemodialysis, anemia has been managed primarily through erythropoiesis‐stimulating agents (ESAs) and intravenous (IV) iron. Following concerns about the cardiovascular (CV) safety of ESAs and changes in the reimbursement policies in Medicare's ESRD program, the use of IV iron has increased. IV iron supplementation promotes hemoglobin production and reduces ESA requirements, yet there exists relatively little evidence on the long‐term safety of iron supplementation in hemodialysis patients. Labile iron can induce oxidative stress and is also essential in bacterial growth, leading to concerns about IV iron use and risk of CV events and infections in hemodialysis patients. Existing randomized controlled trials provide little evidence about safety due to insufficient power and short follow‐up; recent observational studies have been inconsistent, but some have associated iron exposure with increased risk of infections and CV events. Given the widespread use and potential safety concerns related to IV iron, well‐designed large prospective studies are needed to assess to identify optimal strategies for iron administration that maximize its benefits while avoiding potential risks.", "corpus_id": 4350332, "score": -1, "title": "Safety of intravenous iron in hemodialysis patients" }
{ "abstract": "The spotted fever rickettsia, Rickettsia helvetica, is an endemic tick-borne bacteria in Sweden. It causes infections in humans, manifested as aneruptive fever, headache, arthralgia and myalgia, an ...", "corpus_id": 53479547, "title": "Studies of Spotted Fever Rickettsia - Distribution, Detection, Diagnosis and Clinical Context : With a Focus on Vectors and Patients in Sweden" }
{ "abstract": "Ticks are monophyletic and composed of the hard (Ixodidae) and soft (Argasidae) tick families, as well as the Nuttalliellidae, a family with a single species, Nuttalliella namaqua. Significant biological differences in lifestyle strategies for hard and soft ticks suggest that various blood-feeding adaptations occurred after their divergence. The phylogenetic relationships between the tick families have not yet been resolved due to the lack of molecular data for N. namaqua. This tick possesses a pseudo-scutum and apical gnathostoma as observed for ixodids, has a leathery cuticle similar to argasids and has been considered the evolutionary missing link between the two families. Little knowledge exists with regard to its feeding biology or host preferences. Data on its biology and systematic relationship to the other tick families could therefore be crucial in understanding the evolution of blood-feeding behaviour in ticks. Live specimens were collected and blood meal analysis showed the presence of DNA for girdled lizards from the Cordylid family. Feeding of ticks on lizards showed that engorgement occurred rapidly, similar to argasids, but that blood meal concentration occurs via malpighian excretion of water. Phylogenetic analysis of the 18S nuclear and 16S mitochondrial genes indicate that N. namaqua grouped basal to the main tick families. The data supports the monophyly of all tick families and suggests the evolution of argasid-like blood-feeding behaviour in the ancestral tick lineage. Based on the data and considerations from literature we propose an origin for ticks in the Karoo basin of Gondwanaland during the late Permian. The nuttalliellid family almost became extinct during the End Permian event, leaving N. 
namaqua as the closest living relative to the ancestral tick lineage and the evolutionary missing link between the tick families.", "corpus_id": 924113, "title": "Nuttalliella namaqua: A Living Fossil and Closest Relative to the Ancestral Tick Lineage: Implications for the Evolution of Blood-Feeding in Ticks" }
{ "abstract": "BACKGROUND\nAnaphylactic reactions caused by bites of the European pigeon tick Argas reflexus are repeatedly reported. This soft-backed tick is a parasite of wild pigeons colonizing urban buildings and houses. Occasionally the ticks can bite human beings, inducing anaphylactic reactions in sensitized patients.\n\n\nOBJECTIVE\nOur aim was to characterize the major allergen implicated in a series of anaphylactic reactions caused by Argas bites and to produce the allergen as recombinant protein for diagnostic purposes.\n\n\nMETHODS\nProtein extracts were prepared from whole A reflexus bodies, and IgE immunoblots were performed with sera from 13 patients who had an anaphylactic reaction with pigeon tick bites. A cDNA expression library was constructed from whole ticks and screened with a polyclonal rabbit antiserum raised against the major allergen.\n\n\nRESULTS\nThe cDNA coding for the dominant allergen Arg r 1 could be isolated. It encodes a protein belonging to the lipocalin family. Allergenicity of the recombinant Arg r 1 was confirmed by immunoblot, ELISA, and intradermal skin tests.\n\n\nCONCLUSION\nThe dominant allergen of A reflexus has been isolated and the corresponding cDNA cloned. The recombinant protein, a lipocalin, was expressed in Escherichia coli and was shown to be immunoreactive in vitro and in vivo. Recombinant Arg r 1 was used as a diagnostic tool in a series of anaphylactic reactions caused by pigeon tick bites.", "corpus_id": 37917274, "score": -1, "title": "IgE-mediated anaphylaxis caused by bites of the pigeon tick Argas reflexus: cloning and expression of the major allergen Arg r 1." }
{ "abstract": "An underwater manipulator is essential for underwater robotic sampling and other service operations. Conventional rigid body underwater manipulators generally required substantial size and weight, leading to hindered general applications. Pioneering soft robotic underwater manipulators have defied this by offering dexterous and lightweight arms and grippers, but still requiring substantial actuation and control components to withstand the water pressure and achieving the desired dynamic performance. In this work, we propose a novel approach to underwater manipulator design and control, exploiting the unique characteristics of soft robots, with a hybrid structure (rigid frame+soft actuator) for improved rigidity and force output, a uniform actuator design allowing one compact hydraulic actuation system to drive all actuators, and a novel fully customizable soft bladder design that improves performances in multiple areas: (1) force output of the actuator is decoupled from the working depth, enabling wide working ranges; (2) all actuators are connected to the main hydraulic line without actuator-specific control loop, resulting in a very compact actuation system especially for high-dexterity cases; (3) dynamic responses were improved significantly compared with the counter system without bladder. A prototype soft manipulator with 4-DOFs, dual bladders, and 15 N payload was developed; the entire system (including actuation, control, and batteries) could be mounted onto a consumer-grade remotely operated vehicle, with depth-independent performances validated by various laboratory and field test results across various climatic and hydrographic conditions. Analytical models and validations of the proposed soft bladder design were also presented as a guideline for other applications.", "corpus_id": 211565549, "title": "An Underwater Robotic Manipulator with Soft Bladders and Compact Depth-Independent Actuation" }
{ "abstract": null, "corpus_id": 2506098, "title": "A soft stretchable bending sensor and data glove applications" }
{ "abstract": "In the past few years, Reddit -- a community-driven platform for submitting, commenting and rating links and text posts -- has grown exponentially, from a small community of users into one of the largest online communities on the Web. To the best of our knowledge, this work represents the most comprehensive longitudinal study of Reddit's evolution to date, studying both (i) how user submissions have evolved over time and (ii) how the community's allocation of attention and its perception of submissions have changed over 5 years based on an analysis of almost 60 million submissions. Our work reveals an ever-increasing diversification of topics accompanied by a simultaneous concentration towards a few selected domains both in terms of posted submissions as well as perception and attention. By and large, our investigations suggest that Reddit has transformed itself from a dedicated gateway to the Web to an increasingly self-referential community that focuses on and reinforces its own user-generated image- and textual content over external sources.", "corpus_id": 23217, "score": -1, "title": "Evolution of reddit: from the front page of the internet to a self-referential community?" }
{ "abstract": "Brain surface analysis is essential to neuroscience, however, the complex geometry of the brain cortex hinders computational methods for this task. The difficulty arises from a discrepancy between 3D imaging data, which is represented in Euclidean space, and the non-Euclidean geometry of the highly-convoluted brain surface. Recent advances in machine learning have enabled the use of neural networks for non-Euclidean spaces. These facilitate the learning of surface data, yet pooling strategies often remain constrained to a single fixed-graph. This paper proposes a new learnable graph pooling method for processing multiple surface-valued data to output subject-based information. The proposed method innovates by learning an intrinsic aggregation of graph nodes based on graph spectral embedding. We illustrate the advantages of our approach with in-depth experiments on two large-scale benchmark datasets. The ablation study in the paper illustrates the impact of various factors affecting our learnable pooling method. The flexibility of the pooling strategy is evaluated on four different prediction tasks, namely, subject-sex classification, regression of cortical region sizes, classification of Alzheimer’s disease stages, and brain age regression. Our experiments demonstrate the superiority of our learnable pooling approach compared to other pooling techniques for graph convolutional networks, with results improving the state-of-the-art in brain surface analysis.", "corpus_id": 208248316, "title": "Learnable Pooling in Graph Convolutional Networks for Brain Surface Analysis" }
{ "abstract": "The study of brain functions using fMRI often requires an accurate alignment of cortical data across a population. Particular challenges are surface inflation for cortical visualizations and measurements, and surface matching or alignment of functional data on surfaces for group-level analyses. Present methods typically treat each step separately and can be computationally expensive. For instance, smoothing and matching of cortices often require several hours. Conventional methods also rely on anatomical features to drive the alignment of functional data between cortices, whereas anatomy and function can vary across individuals. To address these issues, we propose BrainTransfer, a spectral framework that unifies cortical smoothing, point matching with confidence regions, and transfer of functional maps, all within minutes of computation. Spectral methods decompose shapes into intrinsic geometrical harmonics, but suffer from the inherent instability of eigenbasis. This limits their accuracy when matching eigenbasis, and prevents the spectral transfer of functions. Our contributions consist of, first, the optimization of a spectral transformation matrix, which combines both, point correspondence and change of eigenbasis, and second, focused harmonics, which localize the spectral decomposition of functional data. BrainTransfer enables the transfer of surface functions across interchangeable cortical spaces, accounts for localized confidence, and gives a new way to perform statistics directly on surfaces. Benefits of spectral transfers are illustrated with a variability study on shape and functional data. Matching accuracy on retinotopy is increased over conventional methods.", "corpus_id": 1890155, "title": "Brain Transfer: Spectral Analysis of Cortical Surfaces and Functional Maps" }
{ "abstract": "Recent advances in mapping cortical areas in the human brain provide a basis for investigating the significance of their spatial arrangement. Here we describe a dominant gradient in cortical features that spans between sensorimotor and transmodal areas. We propose that this gradient constitutes a core organizing axis of the human cerebral cortex, and describe an intrinsic coordinate system on its basis. Studying the cortex with respect to these intrinsic dimensions can inform our understanding of how the spectrum of cortical function emerges from structural constraints.", "corpus_id": 2800600, "score": -1, "title": "Large-Scale Gradients in Human Cortical Organization" }
{ "abstract": "This paper discusses a novel design for MIMO IEEE802.11n WLAN deinterleaver which is able to omit the need of reordering after FFT process. It combines reordering process and deinterleaving process in the deinterleaver. Thus RAM and latency cost for reordering can be eliminated. In this way, overall system implementation area and latency cost can be minimized. Proposed deinterleaver employs combination of LUT-AGU technique. The proposed deinterleaver is able to minimize the LUT cost in the price of higher design complexity and higher logic area cost. The design is implemented on 90nm CMOS ASIC technology. The logic area cost is 0.0929mm2 and correspond gates count is 16.79 KGE. This area result is four times lower than other LUT technique. The power consumption is 2.43mW at 160MHz. This is the lowest compare to the other works.", "corpus_id": 40733270, "title": "Design and implementation of MIMO WLAN deinterleaver with bit-reversed input" }
{ "abstract": "The proliferation and rapid evolution of wireless communication systems have forced the birth of products that includes several standards into a single device. One of the approaches in the development of such multi-standard devices is to design specialized blocks with reconfigurable capabilities. In this work we propose a new reconfigurable interleaver block, capable to implement any of the interleaving operation defined in the 802.11a, 802.11n, 802.16e & Digital Video Broadcasting (DVB) wireless standards. The proposed Multi Standard Interleaver architecture achieves a reduction of silicon area and power consumption when compared with current approaches. The proposed architecture has also the capability to embrace new interleaver definitions that might be defined in upcoming new communication standards.", "corpus_id": 759397, "title": "Design and Implementation of a Multi-standard Interleaver for 802.11a, 802.11n, 802.16e & DVB Standards" }
{ "abstract": "Model based predictive control (MBPC) has been extensively investigated and is widely used in industry. Besides this, interest in non-linear systems has motivated the development of MBPC formulations for non-linear systems. Moreover, the importance of security and reliability in industrial processes is in the origin of the fault tolerant strategies developed in the last two decades. In this paper a MBPC based on support vector machines (SVM) able to cope with faults in the plant itself is presented. The fault tolerant capability is achieved by means of the accurate on-line support vector regression (AOSVR) which is capable of training an SVM in an incremental way. Thanks to AOSVR is possible to train a plant model when a fault is detected and to change the nominal model by the new one, that models the faulty plant. Results obtained under simulation are presented.", "corpus_id": 5311602, "score": -1, "title": "Fault tolerance in the framework of support vector machines based model predictive control" }
{ "abstract": "Studies of diseases associated with pathological irreversible aggregation of proteins have become of special relevance and attracted the attention of researchers through‐ out the world because of the appearance of a new conceptual model based on the capacity of some proteins to self-assemble by the prion mechanism. Along with direct prion diseases, such as bovine rabies and Creutzfeldt-Jakob disease in humans, a great number of neurodegenerative disorders associated with the formation of aggregates through the prion mechanism are revealed. These disorders include Alzheimer’s and Parkinson’s diseases, amyotrophic lateral sclerosis, Huntington disease, and mucovis‐ cidosis, some types of diabetes and hereditary cataracts. The listed diseases are caused by transition of a “healthy” protein or peptide molecule from the native conformation to a very stable “pathological” form. In this case, molecules in the “pathological” conformation aggregate specifically, forming amyloid fibrils that can multiply infinitely. An important result of studying the molecular mechanisms of prion diseases and different proteinopathies, associated with the formation of pathological aggrega‐ tions by the prion mechanism, is the discovery of protein chain regions responsible for their aggregation. The ability to regulate aggregation (fibrillation) of proteins can be the focal tool for the drug development. Herein by the example of 29 RNA-binding proteins with prion-like domains, we demonstrate what role the amino acid repeats in prion-like domains can play. For these proteins, quite different repeats are revealed in the disordered part of the protein chain predicted with bioinformatics methods. Ten proteins of the 29 RNA-binding proteins are involved in the development of some diseases. The prion-like domains of FUS, TAF15, and EWS are critical for the aggrega‐ tion of proteins associated with human neurodegenerative diseases. 
Proteins of this family are involved not only in neurodegenerative diseases, such as amyotrophic lateral sclerosis pathological aggregates, which are crucibles of amyotrophic lateral sclerosis (ALS) pathogenesis.", "corpus_id": 56235050, "title": "Influence of Repeats in the Protein Chain on its Aggregation Capacity for ALS-Associated Proteins" }
{ "abstract": "A combination of yeast genetics and protein biochemistry define how the fused in sarcoma (FUS) protein might contribute to Lou Gehrig's disease.", "corpus_id": 2838505, "title": "Molecular Determinants and Genetic Modifiers of Aggregation and Toxicity for the ALS Disease Protein FUS/TLS" }
{ "abstract": "Ferrets have long been used as a disease model for the study of influenza vaccines, but a more recent use has been for the study of human monoclonal antibodies directed against influenza viruses. Published data suggest that human antibodies are cleared unusually quickly from the ferret and that immune responses may be partially responsible. This immunogenicity increases variability within groups and may present an obstacle to long‐term studies.", "corpus_id": 9726289, "score": -1, "title": "Chimeric antibodies with extended half-life in ferrets" }
{ "abstract": "Mirizzi Syndrome (MS) is an uncommon complication of chronic gallstone disease defined as a common bile duct (CBD) obstruction secondary to gallstone impaction in the cystic duct or gallbladder neck. MS is still a challenging clinical situation: preoperative diagnosis of MS is complex and can be made in 18-62.5% of patients. Over 50% of patients with MS is diagnosed during surgery. In most of cases, laparotomy is the preferred surgical approach. We report the case of a 70-year-old woman with a history of asthenia, jaundice, abdominal pain and preoperative imaging that suggest the presence of biliary stones with a choledocal stenosis. Intraoperatively, a MS with cholecysto-biliary fistula involving less than two-thirds of the circumference of the bile duct was diagnosed and successfully treated.", "corpus_id": 201837114, "title": "Mirizzi syndrome: a challenging diagnosis. Case report." }
{ "abstract": "BackgroundThis article reviews the feasibility of the laparoscopic treatment of Mirizzi syndrome and determines the associated risks and complications of this technique.MethodsAn electronic search of the literature between 1989 and 2008 was undertaken to identify relevant articles. Studies comprising at least four patients treated by laparoscopy and reporting on the preoperative diagnosis rate and analytical conversion and complication data were considered for inclusion.ResultsFrom 66 abstracts reviewed, 10 eligible studies were identified. Conversion, complication, and reoperation rates were 41%, 20%, and 6%, respectively. The risks for open conversion and procedure-related complications were similar for patients with type I and type II Mirizzi syndrome. However, patients of studies reporting a high preoperative diagnosis rate had a significantly lower risk for conversion (p < 0.05), procedure-related complications (p < 0.05), and reoperation (p < 0.05), when compared with studies with a low preoperative diagnosis rate.ConclusionCurrent evidence suggests that laparoscopic treatment of Mirizzi syndrome cannot be recommended as a standard procedure. Preoperative diagnosis of the syndrome seems an important predicting factor of technical success.", "corpus_id": 1228435, "title": "Laparoscopic treatment of Mirizzi syndrome: a systematic review" }
{ "abstract": "In the current era of rapid industrialization, the foremost challenge is the management of industrial wastes. Activities such as mining and industrialization spill over a large quantity of toxic waste that pollutes soil, water and air. This poses a major environmental and health challenge. The toxic heavy metals present in the soil and water are entering the food chain, which in turn causes severe health hazards. Environmental clean-up and reclamation of heavy metal contaminated soil and water are very important, and it necessitates efforts of environmentalists, industrialists, scientists and policymakers. Phytoremediation is a plant-based approach to remediate heavy metal/organic pollutant contaminated soil and water in an eco-friendly, cost-effective and permanent way. This review covers the effect of heavy metal toxicity on plant growth and physiological process, the concept of heavy metal accumulation, detoxification, and the mechanisms of tolerance in plants. Based on plants' ability to uptake heavy metals and metabolize them within tissues, phytoremediation techniques have been classified into six types: phytoextraction, phytoimmobilization, phytovolatilization, phytodegradation, rhizofiltration and rhizodegradation. The development of research in this area led to the identification of metal hyper-accumulators, which could be utilized for reclamation of contaminated soil through phytomining. Concurrently, breeding and biotechnological approaches can enhance the remediation efficiency. Phytoremediation technology, combined with other reclamation technologies/practices, can provide clean soil and water to the ecosystem. This article is protected by copyright. All rights reserved.", "corpus_id": 233290898, "score": -1, "title": "Insights into decontamination of soils by phytoremediation: a detailed account on heavy metal toxicity and mitigation strategies." }
{ "abstract": "Sepelvaltimotauti on syovan ohella yksi merkittavimmista kansantaudeista Suomessa. Sydamen asialla -hankkeen tavoitteena oli sepelvaltimoiden varjoainekuvaukseen paasyn sujuvoittamisen ohella kehittaa sepelvaltimotautipotilaiden ohjausta seka parantaa potilaiden valmiuksia vaikuttaa omilla valinnoillaan terveyteensa. \n \nTama kirjallisuuskatsaus kuvaa sepelvaltimotautia sairastavien potilaiden kokemuksia saamastaan potilasohjauksesta seka siihen vaikuttavista tekijoista. Aineisto haettiin Medic- ja CINAHL-tietokannoista. Aineiston analyysiin valikoitui kuusi alkuperaisartikkelia. Sepelvaltimotautipotilaan kokemus saadusta potilasohjauksesta muodostui ohjauksen sisallosta, potilaan yksilollisyyden huomioimisesta seka ohjausosaamisesta. Potilaan kokemuksiin vaikuttivat potilaan sosioekonomiset taustatekijat seka sairauteen ja ohjaukseen liittyvat tekijat. Saatuja tuloksia voidaan hyodyntaa potilasohjauksen asiakaslahtoisessa kehittamisessa. Sepelvaltimotautipotilaiden lisaksi tuloksia voidaan hyodyntaa myos muiden potilasryhmien ohjauksessa seka neuvonnassa.", "corpus_id": 212917658, "title": "Potilaslähtöistä, yksilöllistä ohjausta kehittämässä – sepelvaltimotautipotilaiden kokemuksia saamastaan potilasohjauksesta" }
{ "abstract": "Background: Patient delay in seeking treatment for acute coronary syndrome symptoms remains a problem. Thus, it is vital to test interventions to improve this behavior, but at the same time it is essential that interventions not increase anxiety. Purpose: To determine the impact on anxiety and perceived control of an individual face-to-face education and counseling intervention designed to decrease patient delay in seeking treatment for acute coronary syndrome symptoms. Methods: This was a multicenter randomized controlled trial of the intervention in which anxiety data were collected at baseline, 3-months and 12-months. A total of 3522 patients with confirmed coronary artery disease were enrolled; data from 2597 patients with anxiety data at all time points are included. The intervention was a 45 min education and counseling session, in which the social, cognitive and emotional responses to acute coronary syndrome symptoms were discussed as were barriers to early treatment seeking. Repeated measures analysis of covariance was used to compare anxiety and perceived control levels across time between the groups controlling for age, gender, ethnicity, education level, and comorbidities. Results: There were significant differences in anxiety by group (p = 0.03). Anxiety level was stable in patients in the control group, but decreased across time in the intervention group. Perceived control increased across time in the intervention group and remained unchanged in the control group (p = 0.01). Conclusion: Interventions in which cardiac patients directly confront the possibility of an acute cardiac event do not cause anxiety if they provide patients with appropriate strategies for managing symptoms.", "corpus_id": 1198752, "title": "The impact on anxiety and perceived control of a short one-on-one nursing intervention designed to decrease treatment seeking delay in people with coronary heart disease" }
{ "abstract": "Abstract An experimental study of the axial mixing of dry, powders in batch ball mills has been carried out. For a system of silicon carbide and garnet particles in a laboratory mill containing small plastic balls, the observed mixing is in good agreement with a simple diffusion model. Empirical expressions are presented which relate the diffusion coefficient to the ball and particle loading in the mill. The expressions seem to be valid over most of the practical range of operating conditions except when the filing of either particles or balls is very low. Under these conditions, segregation of balls and particles occurs, apparently leading to anomalous values of the diffusion coefficient.", "corpus_id": 96045443, "score": -1, "title": "Axial mixing of particles in batch ball mills" }
{ "abstract": "Endometriosis is a benign gynecological condition characterized by specific histological, molecular, and clinical findings. It affects 5%–10% of premenopausal women, is a cause of infertility, and has been implicated as a precursor for certain types of ovarian cancer. Advances in technology, primarily the ability for whole genome sequencing, have led to the discovery of new mutations and a better understanding of the function of previously identified genes and pathways associated with endometriosis associated ovarian cancers (EAOCs) that include PTEN, CTNNB1 (β-catenin), KRAS, microsatellite instability, ARID1A, and the unique role of inflammation in the development of EAOC. Clinically, EAOCs are associated with a younger age at diagnosis, lower stage and grade of tumor, and are more likely to occur in premenopausal women when compared with other ovarian cancers. A shift from screening strategies adopted to prevent EAOCs has resulted in new recommendations for clinical practice by national and international governing bodies. In this paper, we review the common histologic and molecular characteristics of endometriosis and ovarian cancer, risks associated with EAOCs, clinical challenges and give recommendations for providers.", "corpus_id": 2796788, "title": "Endometriosis and ovarian cancer: links, risks, and challenges faced" }
{ "abstract": "OBJECTIVE\nThe aim of this investigation was to compare outcomes of patients with clear cell carcinoma (CCC) and endometrioid carcinoma (EC) of the ovary associated with endometriosis to patients with ovarian papillary serous carcinoma (PSC).\n\n\nMETHODS\nPatients with CCC and EC of the ovary associated with endometriosis were identified and matched by age and stage to PSC controls. Student's t test and chi square test were used to analyze continuous and categorical data. The Kaplan-Meier method was used for survival analysis.\n\n\nRESULTS\n67 cases associated with endometriosis were identified, of which 45 were arising in endometriosis. Cases were matched to 134 PSC controls. 27 patients with tumors associated with endometriosis presented at stage I (40.3%), 27 at stage II (40.3%), ten at stage III (14.9%) and three at stage IV (4.5%). There was no difference in rate of optimal cytoreduction or response to chemotherapy in cases vs. PSC controls. There was a significant increase in synchronous endometrial cancer in tumors associated with endometriosis compared to PSC (25.4% vs. 3.7%; P<0.001). 18 cases (26.9%) had recurrent disease vs. 55 (41%) controls (P=0.03). The 5-year disease-free survival (DFS) and overall survival (OS) of patients with tumors associated with endometriosis compared to PSC controls were 75% vs. 55% (P=0.03) and 85% vs. 77% (P=0.2), respectively.\n\n\nCONCLUSIONS\nPatients with tumors associated with endometriosis had a higher rate of synchronous endometrial cancer. Cases also demonstrated a lower rate of recurrence and improved 5 year DFS; however, this did not translate into a difference in OS.", "corpus_id": 741953, "title": "Comparison of clinical outcomes of patients with clear cell and endometrioid ovarian cancer associated with endometriosis to papillary serous carcinoma of the ovary." }
{ "abstract": "OBJECTIVE\nAn ability to predict survival is of crucial importance in determining the need for cancer therapy. Recent advances in tumor typing of ovarian carcinomas lead to a classification which is more reproducible and reflects underlying biology more accurately than grade. We tested whether updated tumor type predicts outcome for patients with low-stage ovarian carcinoma.\n\n\nMETHODS\nFrom a population-based cohort of 1326 women diagnosed with stage I-II ovarian carcinoma between 1984 and 2003, 652 cases were available for central pathological slide review using contemporary criteria. Six hundred thirty cases were confirmed as ovarian carcinoma. Twenty-five ovarian carcinomas of rare types were excluded leaving 605 cases for this study. Recursive partitioning analysis and univariate models were used to identify subsets with an excellent outcome, i.e., disease-specific survival at 10 years (DSS10y) ≥95%.\n\n\nRESULTS\nSeventy-seven ovarian carcinomas of endometrioid and mucinous type, stage Ia or Ib, were associated with an excellent outcome [DSS10y=95%]. No subset of the high-grade serous type with an excellent outcome could be identified. Clear cell carcinomas of stage Ia or Ib had a favorable outcome [DSS10y=87%] compared to stage Ic-II [DSS10y=66%].\n\n\nCONCLUSIONS\nA subset of ovarian carcinoma patients with an excellent outcome can be identified based on tumor type (endometrioid or mucinous) and stage (Ia or Ib). Type is more reproducibly assigned than grade and identifies a larger cohort of women with stage I/II ovarian carcinoma with favorable outcomes (12.2% vs. 6.5%), and therefore is superior to grade in estimating risk of death from ovarian carcinoma.", "corpus_id": 44922794, "score": -1, "title": "Tumor type and substage predict survival in stage I and II ovarian carcinoma: insights and implications." }
{ "abstract": "Heart rate variability (HRV), the beat-to-beat variation in either heart rate or the duration of the R-R interval, has become a popular clinical and investigational tool (Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology, 1996; Billman, 2011). Indeed, the term “heart rate variability” yields nearly 18,000 “hits” when placed in the pubmed search engine. These temporal fluctuations in heart rate exhibit a marked synchrony with respiration (increasing during inspiration and decreasing during expiration—the so called respiratory sinus arrhythmia) and are widely believed to reflect changes in cardiac autonomic regulation (Billman, 2011). Although the exact contributions of the parasympathetic and the sympathetic divisions of the autonomic nervous system to this variability are controversial and remain the subject of active investigation and debate, a number of time and frequency domain techniques have been developed to provide insight into cardiac autonomic regulation in both health and disease (Billman, 2011). It is the purpose of this book to provide a comprehensive assessment of the strengths and limitations of HRV techniques. Particular emphasis will be placed on the application of HRV techniques in the clinic and on the interaction between prevailing heart rate and HRV. This book contains both state-of-the art review and original research articles that have been grouped into two main sections: Methodological Considerations and Clinical Application. A brief summary of the chapters contained in each section follows below.", "corpus_id": 11578177, "title": "An introduction to heart rate variability: methodological considerations and clinical applications" }
{ "abstract": "Although physiological (e.g., exercise) and pathological (e.g., infection) stress affecting the cardiovascular system have both been documented to be associated with a reduction in overall heart rate variability (HRV), it remains unclear if loss of HRV is ubiquitously similar across different domains of variability analysis or if distinct patterns of altered HRV exist depending on the stressor. Using Continuous Individualized Multiorgan Variability Analysis (CIMVA™) software, heart rate (HR) and four selected measures of variability were measured over time (windowed analysis) from two datasets, a set (n = 13) of patients who developed systemic infection (i.e., sepsis) after bone marrow transplant (BMT), and a matched set of healthy subjects undergoing physical exercise under controlled conditions. HR and the four HRV measures showed similar trends in both sepsis and exercise. The comparison through Wilcoxon sign-rank test of the levels of variability at baseline and during the stress (i.e., exercise or after days of sepsis development) showed similar changes, except for LF/HF, ratio of power at low (LF) and high (HF) frequencies (associated with sympathovagal modulation), which was affected by exercise but did not show any change during sepsis. Furthermore, HRV measures during sepsis showed a lower level of correlation with each other, as compared to HRV during exercise. In conclusion, this exploratory study highlights similar responses during both exercise and infection, with differences in terms of correlation and inter-subject fluctuations, whose physiologic significance merits further investigation.", "corpus_id": 3025336, "title": "Do physiological and pathological stresses produce different changes in heart rate variability?" }
{ "abstract": "Numerous symptoms have been associated with the overtraining syndrome (OT), including changes in autonomic function. Heart rate variability (HRV) provides non‐invasive data about the autonomic regulation of heart rate in real‐life conditions. The aims of the study were to: (i) characterize the HRV profile of seven athletes (OA) diagnosed as suffering from OT, compared with eight healthy sedentary (C) and eight trained (T) subjects during supine rest and 60° upright, and (ii) compare the traditional time‐ and frequency‐domain analysis assessment of HRV with the non‐linear Poincaré plot analysis. In the latter each R‐R interval is plotted as a function of the previous one, and the standard deviations of the instantaneous (SD1) and long‐term R‐R interval variability are calculated. Total power was higher in T than in C and OA both in supine (1158 ± 1137, 6092 ± 3554 and 2970 ± 2947 ms2 for C, T and OA, respectively) and in upright (640 ± 499, 1814 ± 806 and 1092 ± 712 ms2 for C, T and OA, respectively; P<0·05) positions. In supine position, indicators of parasympathetic activity to the sinus node were higher in T compared with C and OA (high‐frequency power: 419·1 ± 381·2, 1105·3 ± 781·4 and 463·7 ± 715·8 ms2 for C, T and OA, respectively; P<0·05; SD1: 29·5 ± 18·5, 75·2 ± 17·2 and 37·6 ± 27·5 for C, T and OA, respectively; P<0·05). OA had a marked predominance of sympathetic activity regardless of the position (LF/HF were 0·47 ± 0·35, 0·47 ± 0·50 and 3·96 ± 5·71 in supine position for C, T and OA, respectively, and 2·09 ± 2·17, 7·22 ± 6·82 and 12·04 ± 10·36 in upright position for C, T and OA, respectively). The changes in HRV indexes induced by the upright posture were greater in T than in OA. The shape of the Poincaré plots allowed the distinction between the three groups, with wide and narrow shapes in T and OA, respectively, compared with C. As Poincaré plot parameters are easy to compute and associated with the ‘width’ of the scattergram, they corroborate the traditional time‐ and frequency‐domain analysis. We suggest that they could be used to indicate fatigue and/or prevent OT.", "corpus_id": 17892490, "score": -1, "title": "Decrease in heart rate variability with overtraining: assessment by the Poincaré plot analysis" }
{ "abstract": "In this work, we design and propose an improved fuzzy membership function to detect anomalies and intrusions. The objective of the present approach is to achieve an optimal transformation matrix which can improve classifier accuracies. The transformation matrix is aimed at mapping the original process onto a new fuzzy space, so that the resultant representation is free from noise data and facilitates improving the overall accuracy as well as individual class accuracies. Experimental results show that the accuracies obtained using our approach are better than those of other approaches. In particular, U2R and R2L accuracies are very promising. This research presents an approach that improves overall accuracy as well as R2L and U2R attack detection accuracies. In future work, we plan to extend this work by applying new measures for dimensionality reduction and classification to improve U2R and R2L attack classification accuracies.", "corpus_id": 23848256, "title": "Evolutionary approach for intrusion detection" }
{ "abstract": "Intrusion is one of the major threats to an organization. Intrusion detection using text processing is a research area that has been gaining significant attention from researchers. In the text-mining-based approach to intrusion detection, system calls serve as the source for mining and predicting the possibility of an intrusion or attack. When an application runs, several system calls may be initiated in the background. These system calls form the strong basis and the deciding factor for intrusion detection. In this paper, we mainly discuss an approach to intrusion detection based on a distance measure that takes the conventional Gaussian function and modifies it to suit the need for a similarity function. A framework for intrusion detection is also discussed as part of this research.", "corpus_id": 1547404, "title": "Intrusion Detection: A Text Mining Based Approach" }
{ "abstract": "A specific, sensitive and reliable radioligand assay for plasma dehydroepiandrosterone (DHA) and its sulfate has been developed. Antisera were obtained by immunizing rabbits with a DHA-17-albumin conjugate. DHA was separated from cross-reacting Δ5-steroids by thin layer chromatography. DHA-sulfate was solvolyzed prior to chromatography. Separation of antibody bound and free steroid was achieved with γ-globulin-dextran-coated charcoal. The standard curve was linear on a logit-log plot from 0.1 to 10 ng.", "corpus_id": 37997466, "score": -1, "title": "Radioligand assay for Δ5-3β-hydroxysteroids. I. 3β-Hydroxy-5-androstene-17-one and its 3-sulfate." }
{ "abstract": "DNA sensing is critical in various applications such as the early diagnosis of diseases and the investigation of forensic evidence, food processing, agriculture, environmental protection, etc. As a wide-bandgap semiconductor with excellent chemical, physical, electrical, and biocompatible properties, silicon carbide (SiC) is a promising material for DNA sensors. In recent years, a variety of SiC-based DNA-sensing technologies have been reported, such as nanoparticles and quantum dots, nanowires, nanopillars, and nanowire-based field-effect-transistors, etc. This article aims to provide a review of SiC-based DNA sensing technologies, their functions, and testing results.", "corpus_id": 260674200, "title": "Silicon Carbide-Based DNA Sensing Technologies" }
{ "abstract": "This work reports on the label-free electrical detection of DNA molecules for the first time, using silicon carbide (SiC) as a novel material for the realization of nanowire field effect transistors (NWFETs). SiC is a promising semiconductor for this application due to its specific characteristics such as chemical inertness and biocompatibility. Non-intentionally n-doped SiC NWs are first grown using a bottom-up vapor–liquid–solid (VLS) mechanism, leading to the NWs exhibiting needle-shaped morphology, with a length of approximately 2 μm and a diameter ranging from 25 to 60 nm. Then, the SiC NWFETs are fabricated and functionalized with DNA molecule probes via covalent coupling using an amino-terminated organosilane. The drain current versus drain voltage (Id–Vd) characteristics obtained after the DNA grafting and hybridization are reported from the comparative and simultaneous measurements carried out on the SiC NWFETs, used either as sensors or references. As a representative result, the current of the sensor is lowered by 22% after probe DNA grafting and by 7% after target DNA hybridization, while the current of the reference does not vary by more than ±0.6%. The current decrease confirms the field effect induced by the negative charges of the DNA molecules. Moreover, the selectivity, reproducibility, reversibility and stability of the studied devices are emphasized by de-hybridization, non-complementary hybridization and re-hybridization experiments. This first proof of concept opens the way for future developments using SiC-NW-based sensors.", "corpus_id": 2783020, "title": "A silicon carbide nanowire field effect transistor for DNA detection" }
{ "abstract": "We represent behaviorally relevant information in different spatial reference frames in order to interact effectively with our environment. For example, we need an egocentric (e.g., body-centered) reference frame to specify limb movements and an allocentric (e.g., world-centered) reference frame to navigate from one location to another. Posterior parietal cortex (PPC) is vital for performing transformations between these different coordinate systems. Here, we review evidence for multiple pathways in the human brain, from PPC to motor, premotor, and supplementary motor areas, as well as to structures in the medial temporal lobe. These connections are important for transformations between egocentric reference frames to facilitate sensory-guided action, or from egocentric to allocentric reference frames to facilitate spatial navigation.", "corpus_id": 1028848, "score": -1, "title": "Human fronto-parietal and parieto-hippocampal pathways represent behavioral priorities in multiple spatial reference frames" }
{ "abstract": "Aim and Objective: The present study aimed to evaluate and compare the distribution of biopsy-confirmed oral and maxillofacial lesions in the adult and geriatric population of the Central Kerala region. Materials and Methods: This retrospective study was conducted using the past 11 years of biopsy records of oral lesions of patients who were treated at Government Dental College, Kottayam. The data was retrieved, categorized into an adult group and an elderly group, and was retrospectively analyzed and compared. Descriptive statistical analysis was performed using Statistical Package for Social Sciences (SPSS)", "corpus_id": 264422313, "title": "Pattern of Distribution of Biopsy Confirmed Oral and Maxillofacial Lesions in Adult and Geriatric Age Groups of Central Kerala Population – An Institutional Retrospective Study of 11 Years" }
{ "abstract": "Background Oral health is important to individuals of all age groups. Previous epidemiologic studies of the oral health status of the general population in India provided very little information about oral mucosal lesions in the elderly. Hence, the purpose of the present study was to determine the prevalence of oral lesions in a geriatric Indian population. Methods 5,100 patients were clinically evaluated, with age ranging from 60 to 98 years. There were 3,100 males and 2,000 females, with a mean age of 69 ± 6.3 yrs. The statistical analysis was done using the SPSS software, where p < .05 was considered to be significant. Results 64% of the patients presented with one or more oral lesions, associated with tobacco, betel nut consumption, and lesions secondary to trauma and prosthesis. Males were more affected than females, and this difference was not statistically significant (p > .05). The lesions were more frequently observed between 65 to 70 yrs. The most common alterations observed were smoker’s palate (43%), denture stomatitis (34%), oral submucous fibrosis (30%), frictional keratosis (23%), leukoplakia (22%), and pyogenic granuloma (22%). Hard palate was the most commonly affected site (23.1%). Conclusions The findings of the present study provide important information when clinically evaluating the oral cavity in the elderly. Close follow-up and systematic evaluation are required in the elderly population to plan future treatment needs.", "corpus_id": 1276994, "title": "Prevalence and Distribution of Oral Mucosal Lesions in a Geriatric Indian Population" }
{ "abstract": "The objective of this study was to highlight the frequency and prevalence of oral pathological lesions. Four hundred and twenty five patients visiting the Department of Oral and Maxillofacial Surgery/Oral Medicine of Qurayyat Specialized Dental Center, Al-Qurayyat, Saudi Arabia, were included in the study. The study was conducted from 2011 to 2015. Frequency of patients was noted. Males were 260/425 (61.2%) and females 165/425 (38.8%). Age range was 6-77 years with mean 38.4 ± 13.65. Reactive lesions were the most common occurrences, diagnosed in 425 cases (8.94%), followed by fungal infections (7.8%), lichen planus (7.1%) and pulp and periapical lesions (6.82%). The most common malignant lesion was squamous cell carcinoma, 4.7% (30/425). The most common salivary gland pathology was mucoepidermoid carcinoma, 3.1% (13/425).", "corpus_id": 26206578, "score": -1, "title": "A 5-YEARS RETROSPECTIVE STUDY OF ORAL PATHOLOGICAL LESIONS IN 425 SAUDI PATIENTS" }
{ "abstract": "ABSTRACT Alzheimer disease (AD) pathology includes the accumulation of poly-ubiquitylated (also known as poly-ubiquitinated) proteins and failures in proteasome-dependent degradation. Whereas the distribution of proteasomes and its role in synaptic function have been studied, whether proteasome activity regulates the axonal transport and metabolism of the amyloid precursor protein (APP), remains elusive. By using live imaging in primary hippocampal neurons, we showed that proteasome inhibition rapidly and severely impairs the axonal transport of APP. Fluorescence cross-correlation analyses and membrane internalization blockage experiments showed that plasma membrane APP does not contribute to transport defects. Moreover, by western blotting and double-color APP imaging, we demonstrated that proteasome inhibition precludes APP axonal transport by enhancing its endo-lysosomal delivery, where β-cleavage is induced. Taken together, we found that proteasomes control the distal transport of APP and can re-distribute Golgi-derived vesicles to the endo-lysosomal pathway. This crosstalk between proteasomes and lysosomes regulates the intracellular APP dynamics, and defects in proteasome activity can be considered a contributing factor that leads to abnormal APP metabolism in AD. This article has an associated First Person interview with the first author of the paper. Summary: Proteasome inhibition induces rapid and significant reductions of APP axonal transport towards synapses due to increased cell body amyloidogenic processing of APP in lysosomes.", "corpus_id": 23207228, "title": "Proteasome stress leads to APP axonal transport defects by promoting its amyloidogenic processing in lysosomes" }
{ "abstract": "The rate of lateral diffusion of proteins over micron-scale distances in the plasma membrane (PM) of mammalian cells is much slower than in artificial membranes [1, 2]. Different models have been advanced to account for this discrepancy. They invoke either effects on the apparent viscosity of cell membranes through, for example, protein crowding [3, 4], or a role for cortical factors such as actin or spectrin filaments [1]. Here, we use photobleaching to test specific predictions of these models [5]. Neither loss of detectable cortical actin nor knockdown of spectrin expression has any effect on diffusion. Disruption of the PM by formation of ventral membrane sheets or permeabilization induces aggregation of membrane proteins, with a concomitant increase in rates of diffusion for the nonaggregated fraction. In addition, procedures that directly increase or decrease the total protein content of the PM in live cells cause reciprocal changes in lateral diffusion rates. Our data imply that slow diffusion over micron-scale distances is an intrinsic property of the membrane itself and that the density of proteins within the membrane is a significant parameter in determining rates of lateral diffusion.", "corpus_id": 2475431, "title": "Modulation of Lateral Diffusion in the Plasma Membrane by Protein Density" }
{ "abstract": "This is the report of the Ultraviolet-Optical Working Group (UVOWG) commissioned by NASA to study the scientific rationale for new missions in ultraviolet/optical space astronomy approximately ten years from now, when the Hubble Space Telescope (HST) is de-orbited. The UVOWG focused on a scientific theme, The Emergence of the Modern Universe, the period from redshifts z = 3 to 0, occupying over 80% of cosmic time and beginning after the first galaxies, quasars, and stars emerged into their present form. We considered high-throughput UV spectroscopy (10-50x throughput of HST/COS) and wide-field optical imaging (at least 10 arcmin square). The exciting science to be addressed in the post-HST era includes studies of dark matter and baryons, the origin and evolution of the elements, and the major construction phase of galaxies and quasars. Key unanswered questions include: Where is the rest of the unseen universe? What is the interplay of the dark and luminous universe? How did the IGM collapse to form the galaxies and clusters? When were galaxies, clusters, and stellar populations assembled into their current form? What is the history of star formation and chemical evolution? Are massive black holes a natural part of most galaxies? A large-aperture UV/O telescope in space (ST-2010) will provide a major facility in the 21st century for solving these scientific problems. The UVOWG recommends that the first mission be a 4m aperture, SIRTF-class mission that focuses on UV spectroscopy and wide-field imaging. In the coming decade, NASA should investigate the feasibility of an 8m telescope, by 2010, with deployable optics similar to NGST. No high-throughput UV/Optical mission will be possible without significant NASA investments in technology, including UV detectors, gratings, mirrors, and imagers.", "corpus_id": 117862001, "score": -1, "title": "The Emergence of the Modern Universe: Tracing the Cosmic Web" }
{ "abstract": "A survey of alumni perceptions of a post graduate programme in Information and Library Science, the B.Bibl. Honours, at the University of Natal, South Africa, is described. Module content and appropriateness are reviewed in relation to demands of the workplace. Alumni views on delivery and assessment methods are interrogated, as are requirements in terms of continuing education. Critical issues in ILS education are identified, for example, balancing a human-centred approach with Information and Communication Technology competencies in the networked age. Reference is made to Information Management and Knowledge Management. Findings suggest that the Programme has broadly attained its anticipated outcomes in preparing alumni for the workplace and that to some extent a balance between the various considerations outlined in the literature had been achieved.", "corpus_id": 55294172, "title": "Alumni perceptions of a post graduate Information and Library Science Education programme at the University of Natal, South Africa." }
{ "abstract": "Recent trends in UK information and library education are usually portrayed as innovations in response to changes in the labour market for library and information workers. This paper presents an alternative analysis which relates key developments to the growth of a \"New Vocationalism\" in British higher education. The elements of this new orthodoxy are identified and its influence on current programmes of Information and Library Studies assessed. Important consequences have, it is argued, ensued: a decline of specialised contextual study; an erosion of a social model of vocational preparation and an accompanying change of emphasis in the focus of professional information work. In conclusion, these trends are held to amount to an \"instrumental drift\" in UK information and library education. 1. Innovation? It is commonplace to characterise the current state of information and library education in the United Kingdom (UK) in terms of innovation and restructuring. The main dimensions of change are well known: the decline of traditional \"librarianship\" and its replacement by more pluralistic programmes of study based on various definitions of information \"handling\"; a refocusing of information \"science\" to incorporate paradigms of information \"management\" and technology; an erosion of \"specialisation\" in favour of \"generic\" theory and development of student competencies. Such shifts have tended to sacrifice specific academic and vocational outcomes in favour of a broad and flexible \"skills mix\", which educators usually contend brings benefits to employers, students and services alike. Wilson, for example, reviewing these trends, sees them in terms of creative innovation. UK educators are, he claims, responding in \"vigorous and dynamic fashion\" to the pressures for change (1, p.225). Within the information and library community, such trends are usually seen as the result of the changing demands of the labour market. Curriculum reformers habitually", "corpus_id": 153506136, "title": "Innovation ... or instrumental drift? The “New Vocationalism” and information and library education in the United Kingdom" }
{ "abstract": "The rapid evolution in methods for information processing and sources of information supply is producing a climate of activity which makes it appropriate to consider the future of the traditional information/library department.", "corpus_id": 62672669, "score": -1, "title": "Expanding information functions and horizons" }
{ "abstract": "Genetic algorithms and artificial neural networks are two widely-used techniques that can be combined with each other to produce evolved neural networks. Some research has also looked at the use of diploidy in genetic algorithms for possible advantages over the haploid genetic representation usually used, most notably in the form of better adaptation to changing environments. This paper proposes a diploid representation for evolving neural networks, used in an agent-based simulation. Two versions of the diploid representation were tested with a haploid version in one static and two changing environments. All three genetic types performed differently in different environments.", "corpus_id": 49668016, "title": "Diploidy for evolving neural networks" }
{ "abstract": "In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments. We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer better environmental change; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings.", "corpus_id": 2694119, "title": "Two is better than one: A diploid genotype for neural networks" }
{ "abstract": "We present an individual‐based model that uses artificial evolution to predict fit behavior and life‐history traits on the basis of environmental data and organism physiology. Our main purpose is to investigate whether artificial evolution is a suitable tool for studying life history and behavior of real biological organisms. The evolutionary adaptation is founded on a genetic algorithm that searches for improved solutions to the traits under scrutiny. From the genetic algorithm’s “genetic code,” behavior is determined using an artificial neural network. The marine planktivorous fish Müller’s pearlside (Maurolicus muelleri) is used as the model organism because of the broad knowledge of its behavior and life history, by which the model’s performance is evaluated. The model adapts three traits: habitat choice, energy allocation, and spawning strategy. We present one simulation with, and one without, stochastic juvenile survival. Spawning pattern, longevity, and energy allocation are the life‐history traits most affected by stochastic juvenile survival. Predicted behavior is in good agreement with field observations and with previous modeling results, validating the usefulness of the presented model in particular and artificial evolution in ecological modeling in general. The advantages, possibilities, and limitations of this modeling approach are further discussed.", "corpus_id": 8610199, "score": -1, "title": "Artificial Evolution of Life History and Behavior" }
{ "abstract": "Guanylyl cyclase-activating proteins (GCAPs) and recoverin are retina-specific Ca(2+)-binding proteins involved in phototransduction. We provide here evidence that in spite of structural similarities GCAPs and recoverin differently change their overall hydrophobic properties in response to Ca(2+). Using native bovine GCAP1, GCAP2 and recoverin we show that: i) the Ca(2+)-dependent binding of recoverin to Phenyl-Sepharose is distinct from such interactions of GCAPs; ii) fluorescence intensity of 1-anilinonaphthalene-8-sulfonate (ANS) is markedly higher at high [Ca(2+)](free) (10 microM) than at low [Ca(2+)](free) (10 nM) in the presence of recoverin, while an opposing effect is observed in the presence of GCAPs; iii) fluorescence resonance energy transfer from tryptophane residues to ANS is more efficient at high [Ca(2+)](free) in recoverin and at low [Ca(2+)](free) in GCAP2. Such different changes of hydrophobicity evoked by Ca(2+) appear to be the precondition for possible mechanisms by which GCAPs and recoverin control the activities of their target enzymes.", "corpus_id": 6417252, "title": "Ca2+ differently affects hydrophobic properties of guanylyl cyclase-activating proteins (GCAPs) and recoverin." }
{ "abstract": "Recoverin is an EF-hand calcium-binding protein reportedly involved in the transduction of light by vertebrate photoreceptor cells. It also is an autoantigen in a cancer-associated degenerative disease of the retina. Measurements by circular dichroism presented here demonstrate that the binding of calcium to recoverin causes large structural changes, increasing the alpha-helical content of the protein and decreasing its beta-turn, beta-sheet and 'other' structures. The maximum helical content (67%) was observed at 100 microM free calcium and, unlike calmodulin, decreased as the calcium concentration was modulated in either direction from this value. Fluorescence measurements indicated that recoverin may aggregate or undergo structural changes independent of calcium binding as the calcium concentration is increased above 100 microM. EGTA also appeared to affect the structure of recoverin independent of its chelation of calcium. While calcium-induced conformational changes have been proposed to alter the membrane binding of recoverin through association of its myristoylated amino terminus, in the experiments presented here the partitioning of recoverin between the cytoplasmic and membrane compartments of the rod photoreceptor outer segment was unaffected by the concentration of calcium, therefore it appears unlikely that a calcium-myristoyl switch acts alone to anchor recoverin directly to the membrane. These experiments were conducted with native recoverin which is heterogeneously acylated, but mass spectrometry confirmed that simple chromatographic methods could be devised to isolate the different forms of recoverin for further studies.", "corpus_id": 2280656, "title": "Calcium binding to recoverin: implications for secondary structure and membrane association." }
{ "abstract": "NCS (neuronal Ca2+ sensor) proteins belong to a family of calmodulin-related EF-hand Ca2+-binding proteins which, in spite of a high degree of structural similarity, are able to selectively recognize and regulate individual effector enzymes in a Ca2+-dependent manner. NCS proteins vary at their C-termini, which could therefore serve as structural control elements providing specific functions such as target recognition or Ca2+ sensitivity. Recoverin, an NCS protein operating in vision, regulates the activity of rhodopsin kinase, GRK1, in a Ca2+-dependent manner. In the present study, we investigated a series of recoverin forms that were mutated at the C-terminus. Using pull-down assays, surface plasmon resonance spectroscopy and rhodopsin phosphorylation assays, we demonstrated that truncation of recoverin at the C-terminus significantly reduced the affinity of recoverin for rhodopsin kinase. Site-directed mutagenesis of single amino acids in combination with structural analysis and computational modelling of the recoverin-kinase complex provided insight into the protein-protein interface between the kinase and the C-terminus of recoverin. Based on these results we suggest that Phe3 from the N-terminal helix of rhodopsin kinase and Lys192 from the C-terminal segment of recoverin form a cation-π interaction pair which is essential for target recognition by recoverin. Taken together, the results of the present study reveal a novel rhodopsin-kinase-binding site within the C-terminal region of recoverin, and highlights its significance for target recognition and regulation.", "corpus_id": 1391768, "score": -1, "title": "Involvement of the recoverin C-terminal segment in recognition of the target enzyme rhodopsin kinase." }
{ "abstract": "Proteins frequently accomplish their biological function by collective atomic motions. Yet the identification of collective motions related to a specific protein function from, e.g., a molecular dynamics trajectory is often non-trivial. Here, we propose a novel technique termed “functional mode analysis” that aims to detect the collective motion that is directly related to a particular protein function. Based on an ensemble of structures, together with an arbitrary “functional quantity” that quantifies the functional state of the protein, the technique detects the collective motion that is maximally correlated to the functional quantity. The functional quantity could, e.g., correspond to a geometric, electrostatic, or chemical observable, or any other variable that is relevant to the function of the protein. In addition, the motion that displays the largest likelihood to induce a substantial change in the functional quantity is estimated from the given protein ensemble. Two different correlation measures are applied: first, the Pearson correlation coefficient that measures linear correlation only; and second, the mutual information that can assess any kind of interdependence. Detecting the maximally correlated motion allows one to derive a model for the functional state in terms of a single collective coordinate. The new approach is illustrated using a number of biomolecules, including a polyalanine-helix, T4 lysozyme, Trp-cage, and leucine-binding protein.", "corpus_id": 11020106, "title": "Detection of Functional Modes in Protein Dynamics" }
{ "abstract": "A COMPLETE description of an enzyme requires a knowledge of its structure and the dynamics of its function. From the crystal structures of enzymes and enzyme–inhibitor complexes and the known chemistry of model systems, it has been possible in some cases to draw reasonable inferences concerning the mechanisms of enzyme-catalysed reactions. Little has been done so far, however, to supplement such essentially static results by an investigation of the reaction dynamics. This requires an understanding of the internal motions of the enzyme, as well as those of the substrate, since both are likely to be essential to the function. Here we present a theoretical study of a low frequency vibration involving the two globular lobes of lysozyme between which the cleft containing the active site is located1. Any motion involving this cleft could play a part in the catalytic activity; in fact, atom displacements of up to 0.75 Å found in a comparison of the free enzyme and the enzyme-inhibitor complex indicate that the cleft has closed down in the latter2. The force constant for the low frequency bending vibration corresponding to the opening and closing of the cleft is obtained from empirical energy functions3. Because the protein surface moves appreciably during the vibration, damping effects resulting from the viscous dissipation in the solvent are included in the calculation4,5.", "corpus_id": 4164924, "title": "The hinge-bending mode in lysozyme" }
{ "abstract": "In this essay, we aim to illustrate how Martin Karplus and his research group effectively set in motion the engine of molecular dynamics (MD) simulations of biomolecules. This process saw its prodromes between 1969 and the early 1970s with Karplus’ landing in biology, a transition that came to fruition with the treatment of 11- cis -retinal photoisomerization and the development of an allosteric model to account for the mechanism of cooperativity in hemoglobin. In 1977, J. Andrew McCammon, Bruce Gelin, and Martin Karplus published an article in Nature reporting the MD simulation of bovine pancreatic trypsin inhibitor (BPTI). This publication helped initiate the merger of computational statistical mechanics and biochemistry, a process that Karplus undertook at a later stage and whose beginnings we propose to reconstruct in this article through unpublished accounts of the key people who participated in this endeavor.", "corpus_id": 252878782, "score": -1, "title": "The emergence of protein dynamics simulations: how computational statistical mechanics met biochemistry" }
{ "abstract": "This work demonstrates the usefulness of 3D printing for optical imaging applications. Progress in developing optical imaging for biomedical applications requires customizable and often complex objects for testing and evaluation. There is therefore high demand for what have become known as tissue-simulating \"phantoms.\" We present a new optical phantom fabricated using inexpensive 3D printing methods with multiple materials, allowing for the placement of complex inhomogeneities in complex or anatomically realistic geometries, as opposed to previous phantoms, which were limited to simple shapes formed by molds or machining. We use diffuse optical imaging to reconstruct optical parameters in 3D space within a printed mouse to show the applicability of the phantoms for developing whole animal optical imaging methods. This phantom fabrication approach is versatile, can be applied to optical imaging methods besides diffusive imaging, and can be used in the calibration of live animal imaging data.", "corpus_id": 8311448, "title": "Fabrication and application of heterogeneous printed mouse phantoms for whole animal optical imaging." }
{ "abstract": "The sensitivity to surface profile of non-contact optical imaging, such as spatial frequency domain imaging, may lead to incorrect measurements of optical properties and consequently erroneous extrapolation of physiological parameters of interest. Previous correction methods have focused on calibration-based, model-based, and computation-based approached. We propose an experimental method to correct the effect of surface profile on spectral images. Three-dimensional (3D) phantoms were built with acrylonitrile butadiene styrene (ABS) plastic using an accurate 3D imaging and an emergent 3D printing technique. In this study, our method was utilized for the correction of optical properties (absorption coefficient μa and reduced scattering coefficient μs′) of objects obtained with a spatial frequency domain imaging system. The correction method was verified on three objects with simple to complex shapes. Incorrect optical properties due to surface with minimum 4 mm variation in height and 80 degree in slope were detected and improved, particularly for the absorption coefficients. The 3D phantom-based correction method is applicable for a wide range of purposes. The advantages and drawbacks of the 3D phantom-based correction methods are discussed in details.", "corpus_id": 5640458, "title": "Three-dimensional phantoms for curvature correction in spatial frequency domain imaging" }
{ "abstract": "Evaluation of image quality (IQ) in Computed Tomography (CT) is important to ensure that diagnostic questions are correctly answered, whilst keeping radiation dose to the patient as low as is reasonably possible. The assessment of individual aspects of IQ is already a key component of routine quality control of medical x-ray devices. These values together with standard dose indicators can be used to give rise to 'figures of merit' (FOM) to characterise the dose efficiency of the CT scanners operating in certain modes. The demand for clinically relevant IQ characterisation has naturally increased with the development of CT technology (detectors efficiency, image reconstruction and processing), resulting in the adaptation and evolution of assessment methods. The purpose of this review is to present the spectrum of various methods that have been used to characterise image quality in CT: from objective measurements of physical parameters to clinically task-based approaches (i.e. model observer (MO) approach) including pure human observer approach. When combined together with a dose indicator, a generalised dose efficiency index can be explored in a framework of system and patient dose optimisation. We will focus on the IQ methodologies that are required for dealing with standard reconstruction, but also for iterative reconstruction algorithms. With this concept the previously used FOM will be presented with a proposal to update them in order to make them relevant and up to date with technological progress. The MO that objectively assesses IQ for clinically relevant tasks represents the most promising method in terms of radiologist sensitivity performance and therefore of most relevance in the clinical environment.", "corpus_id": 2403655, "score": -1, "title": "Image quality in CT: From physical measurements to model observers." }
{ "abstract": "Although power conversion efficiencies of organic-inorganic lead halide perovskite solar cells (PSCs) are approaching those of single-crystal silicon solar cells, the working device stability due to internal and external factors, such as light, temperature, and moisture, is still a key issue to address. The current world-record efficiency of PSCs is based on organic hole transport materials, which are usually susceptible to degradation from heat and diffusion of dopants. A simple solution would be to replace the generally used organic hole transport layers (HTLs) with a more stable inorganic material. This review article summarizes recent contributions of inorganic hole transport materials to PSC development, focusing on aspects of device performance and long-term stability. Future research directions of inorganic HTLs in the progress of PSC research and challenges still remaining will also be discussed.", "corpus_id": 245619863, "title": "Efficient and Stable Perovskite Solar Cells Based on Inorganic Hole Transport Materials" }
{ "abstract": "UNLABELLED\nInverted planar heterojunction perovskite solar cells with poly (3,4-ethylenedioxythiophene):poly (styrenesulfonate) sulfonic acid (\n\n\nPEDOT\nPSS) as the hole transport layer (HTL) have attracted significant attention during recent years. However, these devices suffer from a serious stability issue due to the acidic and hygroscopic characteristics of\n\n\nPEDOT\nPSS. In this work, we demonstrate a room-temperature and solution-processed CuI film which is used as the HTL for inverted perovskite solar cells. As a result, an impressive PCE of 16.8% is achieved by the device based on the CuI HTL. Moreover, the unsealed CuI-based device displays enhanced air stability compared to the\n\n\nPEDOT\nPSS-based device. In addition, the fabrication of the CuI HTL is a simple and time-saving procedure without any post-treatment, thus making it a promising candidate as the HTL in inverted perovskite solar cells and a potential target for efficient flexible and tandem solar cells.", "corpus_id": 20342846, "title": "Room-temperature and solution-processed copper iodide as the hole transport layer for inverted planar perovskite solar cells." }
{ "abstract": "As a first phase in the investigation of the feasibility of storing light water reactor spent fuel in air, oxidation tests were performed on nonirradiated UO2 pellets over the temperature range of 150 to 345°C. The objective of the tests was to determine the important independent variables that affect the oxidation behavior of fuel. Pellets tested at the high end of the temperature range (>230°C) oxidized very rapidly from the standpoint of projected storage periods in air. These results suggest that acceptable spent-fuel storage temperatures should be <230°C. The tests also revealed that the oxidation was initially retarded by the presence of a coating, probably a higher oxide, that formed on pellets during the period of air storage before they were tested. The oxide coating became increasingly semiprotective after longer storage periods. Other variables identified as important to oxidation behavior of fuel were temperature, radiolysis of a static air atmosphere, fuel microstructure, gadolinia content, a...", "corpus_id": 92620901, "score": -1, "title": "Oxidation Behavior of Nonirradiated UO2" }
{ "abstract": "Eukaryotic RNAs typically contain 5′ cap structures that have been primarily studied in yeast and metazoa. The only known RNA cap structure in unicellular protists is the unusual Cap4 on Trypanosoma brucei mRNAs. We have found that T. vaginalis mRNAs are protected by a 5′ cap structure, however, contrary to that typical for eukaryotes, T. vaginalis spliceosomal snRNAs lack a cap and may contain 5′ monophophates. The distinctive 2,2,7-trimethylguanosine (TMG) cap structure usually found on snRNAs and snoRNAs is produced by hypermethylation of an m7G cap catalyzed by the enzyme trimethylguanosine synthase (Tgs). Here, we biochemically characterize the single T. vaginalis Tgs (TvTgs) encoded in its genome and demonstrate that TvTgs exhibits substrate specificity and amino acid requirements typical of an RNA cap-specific, m7G-dependent N2 methyltransferase. However, recombinant TvTgs is capable of catalysing only a single round of N2 methylation forming a 2,7-dimethylguanosine cap (DMG) as observed previously for Giardia lamblia. In contrast, recombinant Entamoeba histolytica and Trypanosoma brucei Tgs are capable of catalysing the formation of a TMG cap. These data suggest the presence of RNAs with a distinctive 5′ DMG cap in Trichomonas and Giardia lineages that are absent in other protist lineages.", "corpus_id": 18430343, "title": "The divergent eukaryote Trichomonas vaginalis has an m7G cap methyltransferase capable of a single N2 methylation" }
{ "abstract": "A scheme of eukaryotic phylogeny has been suggested based on the structure and physical linkage of the RNA triphosphatase and RNA guanylyltransferase enzymes that catalyze mRNA cap formation. Here we show that the unicellular pathogen Giardia lamblia encodes an mRNA capping apparatus consisting of separate triphosphatase and guanylyltransferase components, which we characterize biochemically. We also show that native Giardia mRNAs have blocked 5′-ends and that 7-methylguanosine caps promote translation of transfected mRNAs in Giardia in vivo. The Giardia triphosphatase belongs to the tunnel family of metal-dependent phosphohydrolases that includes the RNA triphosphatases of fungi, microsporidia, and protozoa such as Plasmodium and Trypanosoma. The tunnel enzymes adopt a unique active-site fold and are structurally and mechanistically unrelated to the cysteine-phosphatase-type RNA triphosphatases found in metazoans and plants, which comprise part of a bifunctional triphosphataseguanylyltransferase fusion protein. All available evidence now points to the separate tunnel-type triphosphatase and guanylyltransferase as the aboriginal state of the capping apparatus. We identify a putative tunnel-type triphosphatase and a separate guanylyltransferase encoded by the red alga Cyanidioschyzon merolae. These findings place fungi, protozoa, and red algae in a common lineage distinct from that of metazoa and plants.", "corpus_id": 1418076, "title": "Yeast-like mRNA Capping Apparatus in Giardia lamblia*" }
{ "abstract": "Personalized/individualized/tailored therapy for each patient is an important goal for improving the outcome of patients with colorectal adenocarcinoma and includes the intention to maximize efficacy and minimize toxicity of chemotherapeutic agents. Numerous barriers must be overcome to reach this goal because outcome is affected by an unholy trinity of tumor characteristics that include somatic alterations at the DNA, RNA, and protein level; patient characteristics that include germline genetic differences such as polymorphisms in enzymes affecting the metabolism of chemotherapeutic agents; and environmental exposures and factors that include diet and physical activity. At present, evaluation of epidermal growth factor receptor (EGFR) expression by immunohistochemistry in colorectal adenocarcinoma is generally required for treatment with one of the monoclonal antibody therapies directed against that target, despite the absence of evidence for predictive value of the assay, whereas EGFR fluorescent in situ hybridization (FISH) may be predictive. In addition, the Food and Drug Administration of the United States now requires a ‘black box’ warning on the packaging of irinotecan for evaluation of germline polymorphism in UGT1A1, the gene mutated in Gilbert's syndrome, for potential reduction of drug dosage in patients with the UGT1A1*28 polymorphism. Numerous other potential markers have been identified but have not yet reached levels of evidence that support their routine usage. For example, KRAS gene mutation appears to preclude improved survival after therapy with monoclonal antibody therapy directed at EGFR, and extensive DNA methylation is associated with lack of efficacy of 5-fluorouracil (5-FU)-based chemotherapy. Additional markers will come into routine usage as reports of research studies continue to appear in the literature. 
Clinical trials driven by molecular targets and agents directed against them, and understanding of the conflicting data on utility of markers reported in the literature, are needed to advance the field.", "corpus_id": 9716905, "score": -1, "title": "Targeted therapy of cancer: new roles for pathologists in colorectal cancer" }
{ "abstract": "Models describing the thermodynamic stability of soft-matter quasicrystals are reviewed and expanded. New analytical methods for treating them are presented, and a number of new stable quasicrystalline structures are reported.", "corpus_id": 13707961, "title": "Multiple-scale structures: from Faraday waves to soft-matter quasicrystals" }
{ "abstract": "Over the past decade, quasicrystalline order has been observed in many soft-matter systems: in dendritic micelles, in star and tetrablock terpolymer melts and in diblock copolymer and surfactant micelles. The formation of quasicrystals from such a broad range of ‘soft’ macromolecular micelles suggests that they assemble by a generic mechanism rather than being dependent on the specific chemistry of each system. Indeed, micellar softness has been postulated and shown to lead to quasicrystalline order. Here we theoretically explore this link by studying two-dimensional hard disks decorated with step-like square-shoulder repulsion that mimics, for example, the soft alkyl shell around the aromatic core in dendritic micelles. We find a family of quasicrystals with 10-, 12-, 18- and 24-fold bond orientational order which originate from mosaics of equilateral and isosceles triangles formed by particles arranged core-to-core and shoulder-to-shoulder. The pair interaction responsible for these phases highlights the role of local packing geometry in generating quasicrystallinity in soft matter, complementing the principles that lead to quasicrystal formation in hard tetrahedra. Based on simple interparticle potentials, quasicrystalline mosaics may well find use in diverse applications ranging from improved image reproduction to advanced photonic materials.", "corpus_id": 4461629, "title": "Mosaic two-lengthscale quasicrystals" }
{ "abstract": "We derive estimates for the characteristics of gravitational radiation from stellar collapse, using recent models of the core collapse of Chandrasekhar-massed white dwarfs (accretion-induced collapse), core-collapse supernovae and collapsars, and the collapse of very massive stars (≳300 M☉). We study gravitational wave emission mechanisms using several estimation techniques, including two-dimensional numerical computation of quadrupole wave emission, estimates of bar-mode strength, estimates of r-mode emission, and estimates of waves from black hole ringing. We also review the rate at which the relevant collapses are believed to occur, which has a major impact on their relevance as astrophysical sources. Although the latest supernova progenitor simulations produce cores rotating much slower than those used in the past, we find that bar-mode and r-mode instabilities from core-collapse supernovae remain among the leading candidate sources for second-generation detectors at the Laser Interferometer Gravitational-Wave Observatory (LIGO II). Accretion-induced collapse (AIC) of a white dwarf could produce gravitational wave signals similar to those from core collapse. In the models that we examine, such collapses are not unstable to bar modes; we note that models recently examined by Liu & Lindblom, which have slightly more angular momentum, are certainly unstable to bar formation. Because AIC events are probably 1000 times less common than core-collapse supernovae, the typical AIC event will be much farther away, and thus the observed waves will be much weaker. In the most optimistic circumstances, we find that it may be possible to detect gravitational waves from the collapse of 300 M☉ Population III stars.", "corpus_id": 5651041, "score": -1, "title": "Gravitational Wave Emission from Core Collapse of Massive Stars" }
{ "abstract": "We study the paradoxical aspects of closed time-like curves and their impact on the theory of computation. After introducing the $\\text{TM}_\\text{CTC}$, a classical Turing machine benefiting CTCs for backward time travel, Aaronson et al. proved that $\\text{P} = \\text{PSPACE}$ and the $\\Delta_2$ sets, such as the halting problem, are computable within this computational model. Our critical view is the physical consistency of this model, which leads to proposing the strong axiom, explaining that every particle rounding on a CTC will be destroyed before returning to its starting time, and the weak axiom, describing the same notion, particularly for Turing machines. We claim that in a universe containing CTCs, the two axioms must be true; otherwise, there will be an infinite number of any particle rounding on a CTC in the universe. An immediate result of the weak axiom is the incapability of Turing machines to convey information for a full round on a CTC, leading to the proposed $\\text{TM}_\\text{CTC}$ programs for the aforementioned corollaries failing to function. We suggest our solution for this problem as the data transferring hypothesis, which applies another $\\text{TM}_\\text{CTC}$ as a means for storing data. A prerequisite for it is the existence of the concept of Turing machines throughout time, which makes it appear infeasible in our universe. Then, we discuss possible physical conditions that can be held for a universe containing CTCs and conclude that if returning to an approximately equivalent universe by a CTC was conceivable, the above corollaries would be valid.", "corpus_id": 256358733, "title": "Turing Machines Equipped with CTC in Physical Universes" }
{ "abstract": "We ask, and answer, the question of what's computable by Turing machines equipped with time travel into the past: that is, closed timelike curves or CTCs (with no bound on their size). We focus on a model for CTCs due to Deutsch, which imposes a probabilistic consistency condition to avoid grandfather paradoxes. Our main result is that computers with CTCs can solve exactly the problems that are Turing-reducible to the halting problem, and that this is true whether we consider classical or quantum computers. Previous work, by Aaronson and Watrous, studied CTC computers with a polynomial size restriction, and showed that they solve exactly the problems in PSPACE, again in both the classical and quantum cases. ::: Compared to the complexity setting, the main novelty of the computability setting is that not all CTCs have fixed-points, even probabilistically. Despite this, we show that the CTCs that do have fixed-points suffice to solve the halting problem, by considering fixed-point distributions involving infinite geometric series. The tricky part is to show that even quantum computers with CTCs can be simulated using a Halt oracle. For that, we need the Riesz representation theorem from functional analysis, among other tools. ::: We also study an alternative model of CTCs, due to Lloyd et al., which uses postselection to \"simulate\" a consistency condition, and which yields BPP_path in the classical case or PP in the quantum case when subject to a polynomial size restriction. With no size limit, we show that postselected CTCs yield only the computable languages if we impose a certain finiteness condition, or all languages nonadaptively reducible to the halting problem if we don't.", "corpus_id": 8465307, "title": "Computability Theory of Closed Timelike Curves" }
{ "abstract": "Quantum gravity may have as much to tell us about the foundations and interpretation of quantum mechanics as it does about gravity. The Copenhagen interpretation of quantum mechanics and Everett's Relative State Formulation are complementary descriptions which in a sense are dual to one another. My purpose here is to discuss this duality in the light of the of ER=EPR conjecture.", "corpus_id": 13896453, "score": -1, "title": "Copenhagen vs Everett, Teleportation, and ER=EPR" }
{ "abstract": "High-$Z$impurities in magnetic-confinement devices are prone to develop density variations on the flux surface, which can significantly affect their transport. In this paper, we generalize earlier analytic stellarator calculations of the neoclassical radial impurity flux in the mixed-collisionality regime (collisional impurities and low-collisionality bulk ions) to include the effect of such flux-surface variations. We find that only in the homogeneous density case is the transport of highly collisional impurities (in the Pfirsch–Schlüter regime) independent of the radial electric field. We study these effects for a Wendelstein 7-X (W7-X) vacuum field, with simple analytic models for the potential perturbation, under the assumption that the impurity density is given by a Boltzmann response to a perturbed potential. In the W7-X case studied, we find that larger amplitude potential perturbations cause the radial electric field to dominate the transport of the impurities. In addition, we find that classical impurity transport can be larger than the neoclassical transport in W7-X.", "corpus_id": 59335103, "title": "Collisional transport of impurities with flux-surface varying density in stellarators" }
{ "abstract": "The control of impurity accumulation is one of the main challenges for future stellarator fusion reactors. The standard argument to explain this accumulation relies on the, in principle, large inward pinch in the neoclassical impurity flux caused by the typically negative radial electric field in stellarators. This simplified interpretation was proven to be flawed by Helander et al (2017 Phys. Rev. Lett. 118 155002), who showed that in a relevant regime (low-collisionality main ions and collisional impurities) the radial electric field does not drive impurity transport. In that reference, the effect of the component of the electric field that is tangent to the magnetic surface was not included. In this letter, an analytical calculation of the neoclassical radial impurity flux incorporating such effect is given, showing that it can be very strong for highly charged impurities and that, once it is taken into account, the dependence of the impurity flux on the radial electric field reappears. Realistic examples are provided in which the inclusion of the tangential electric field leads to impurity expulsion.", "corpus_id": 4899873, "title": "Stellarator impurity flux driven by electric fields tangent to magnetic surfaces" }
{ "abstract": "BACKGROUND AND AIMS\nNumerous studies suggest n -3 polyunsaturated fatty acids (n -3 PUFA) and oleic acid intake have beneficial effects on health including risk reduction of coronary heart disease. The purpose of this study was to evaluate the effect of a commercially available skimmed milk supplemented with n -3 PUFA, oleic acid, and vitamins E, B(6), and folic acid (Puleva Omega3) on risk factors for cardiovascular disease. (CVD).\n\n\nMETHODS\nThirty volunteers were given 500 ml/day of semi-skimmed milk for 4 weeks and then 500 ml/day of the n -3 enriched milk for 8 further weeks. Plasma and LDL lipoproteins were obtained from volunteers at the beginning of the study (T(pre)), and at 4, 8 and 12 weeks.\n\n\nRESULTS\nThe consumption of n -3 enriched milk produced a significant decrease in plasma concentration of total and LDL cholesterol accompanied by a reduction in plasma levels of homocysteine. Plasma and LDL oxidability and vitamin E concentration remained unchanged throughout the study. A significant reduction in plasma levels of vascular cell adhesion molecule 1, and an increase in plasma concentration of folic acid were also observed.\n\n\nCONCLUSION\nDaily intake of n -3 PUFA and oleic acid supplemented skimmed milk plus folic acid and B-type vitamins has favourable effects on risk factors for CVD.", "corpus_id": 6385687, "score": -1, "title": "n-3 Fatty acids plus oleic acid and vitamin supplemented milk consumption reduces total and LDL cholesterol, homocysteine and levels of endothelial adhesion molecules in healthy humans." }
{ "abstract": "Speech Emotion Recognition is one of the most recent topic in the Human Computer Interaction field. Now a day's natural communication plays an important role in our daily life, so in natural communication interface between human and computer, a computer have become integral part of our lives. For improving the interaction between human and computer currently work is going on.To accomplish this goal, a computer would have to be capable of differentiate its present situation and act in response differently depending on that observation. Some part of this process involves understanding a user's emotional state.(2) To make human computer interaction more natural, computer should be able to recognize the emotional states same as that human does. For getting efficient system. The System depends on classifiers and type of feature extracted which are used for detection of Emotion. Most of the system's basic objective is to detect the emotions like anger, happy, neutral, sadness. Selection of a good database is also important task.While classifying Emotional states MFCC and Energy is used for feature extraction .", "corpus_id": 12424687, "title": "A Survey on Different Classifier in Speech Recognition Techniques." }
{ "abstract": "In human machine interface application, emotion recognition from the speech signal has been research topic since many years. To identify the emotions from the speech signal, many systems have been developed. In this paper speech emotion recognition based on the previous technologies which uses different classifiers for the emotion recognition is reviewed. The classifiers are used to differentiate emotions such as anger, happiness, sadness, surprise, neutral state, etc. The database for the speech emotion recognition system is the emotional speech samples and the features extracted from these speech samples are the pitch, energy, linear prediction cepstrum coefficient (LPCC), Mel frequency cepstrum coefficient (MFCC). The classification performance is based on extracted features. Inference about the performance and limitation of speech emotion recognition system based on the different classifiers are also discussed. Keywords—Feature Selection, Emotion recognition, Classifier,Feature extraction.", "corpus_id": 15954651, "title": "Speech Emotion Recognition" }
{ "abstract": "In this study, modulation spectral features (MSFs) are proposed for the automatic recognition of human affective information from speech. The features are extracted from an auditory-inspired long-term spectro-temporal representation. Obtained using an auditory filterbank and a modulation filterbank for speech analysis, the representation captures both acoustic frequency and temporal modulation frequency components, thereby conveying information that is important for human speech perception but missing from conventional short-term spectral features. On an experiment assessing classification of discrete emotion categories, the MSFs show promising performance in comparison with features that are based on mel-frequency cepstral coefficients and perceptual linear prediction coefficients, two commonly used short-term spectral representations. The MSFs further render a substantial improvement in recognition performance when used to augment prosodic features, which have been extensively used for emotion recognition. Using both types of features, an overall recognition rate of 91.6% is obtained for classifying seven emotion categories. Moreover, in an experiment assessing recognition of continuous emotions, the proposed features in combination with prosodic features attain estimation performance comparable to human evaluation.", "corpus_id": 9615949, "score": -1, "title": "Automatic speech emotion recognition using modulation spectral features" }
{ "abstract": "The frequently employed spatial join processing over two large layers of polygonal datasets to detect cross-layer polygon pairs (CPP) satisfying a join-predicate faces challenges common to ill-structured sparse problems, namely, that of identifying the few intersecting cross-layer edges out of the quadratic universe. The algorithmic engineering challenge is compounded by GPGPU SIMT architecture. Spatial join involves lightweight filter phase typically using overlap test over minimum bounding rectangles (MBRs) to discard majority of CPPs, followed by refinement phase to rigorously test the join predicate over the edges of the surviving CPPs. In this dissertation, we develop new techniques algorithms, data structure, i/o, load balancing and system implementation to accelerate the two-phase spatial-join processing. We present a new filtering technique, called Common MBR Filter (CMF ), which changes the overall characteristic of the spatial join algorithms wherein the refinement phase is no longer the computational bottleneck. CMF is designed based on the insight that intersecting cross-layer edges must lie within the rectangular intersection of the MBRs of CPPs, their common MBRs (CMBR). We also address a key limitation of CMF for class of spatial datasets with either large or dense active CMBRs by extended CMF, called CMF-grid, that effectively employs both CMBR and grid techniques by embedding a uniform grid over CMBR of each CPP, but of suitably engineered sizes for different CPPs. To show efficiency of CMF-based filters, extensive mathematical and experimental analysis is provided. Then, two GPU-based spatial join systems are proposed based on two CMF versions including four components: 1) sort-based MBR filter, 2) CMF/CMF-grid, 3) point-in-polygon test, and, 4) edge-intersection test. The systems show two orders of magnitude speedup over the optimized sequential GEOS C++ library. 
Furthermore, we present a distributed system of heterogeneous compute nodes to exploit GPU-CPU computing in order to scale up the computation. A load balancing model based on Integer Linear Programming (ILP) is formulated for this system. We also provide three heuristic algorithms to approximate the ILP. Finally, we develop the MPI-cuda-GIS system based on this heterogeneous computing model by integrating our CUDA-based GPU system into a newly designed distributed platform based on the Message Passing Interface (MPI). Experimental results show good scalability and performance of the MPI-cuda-GIS system. INDEX WORDS: Spatial data, Spatial join, GPU computing, Parallel algorithm, Colocation pattern mining, Distributed systems, MPI, Heterogeneous systems, Load balancing, ILP optimization", "corpus_id": 86531581, "title": "A Heterogeneous High Performance Computing Framework For Ill-Structured Spatial Join Processing" }
{ "abstract": "We summarize the need and present our vision for accelerating geo-spatial computations and analytics using a combination of shared and distributed memory parallel platforms, with general-purpose Graphics Processing Units (GPUs) with 100s to 1000s of processing cores in a single chip forming a key architecture to parallelize over. A GPU can yield one-to-two orders of magnitude speedups and will become increasingly more affordable and energy efficient due to mass marketing for gaming. We also survey the current landscape of representative geo-spatial problems and their parallel, GPU-based solutions.", "corpus_id": 14389594, "title": "A vision for GPU-accelerated parallel computation on geo-spatial datasets" }
{ "abstract": "k-Means is a versatile clustering algorithm widely used in practice. To cluster large data sets, state-of-the-art implementations use GPUs to shorten the data to knowledge time. These implementations commonly assign points on a GPU and update centroids on a CPU.", "corpus_id": 53245249, "score": -1, "title": "Efficient and Scalable k‑Means on GPUs" }
{ "abstract": "We study the known techniques for designing Matrix Multiplication algorithms. The two main approaches are the Laser method of Strassen, and the Group theoretic approach of Cohn and Umans. We define a generalization based on zeroing outs which subsumes these two approaches, which we call the Solar method, and an even more general method based on monomial degenerations, which we call the Galactic method. We then design a suite of techniques for proving lower bounds on the value of omega, the exponent of matrix multiplication, which can be achieved by algorithms using many tensors T and the Galactic method. Some of our techniques exploit 'local' properties of T, like finding a sub-tensor of T which is so 'weak' that T itself couldn't be used to achieve a good bound on omega, while others exploit 'global' properties, like T being a monomial degeneration of the structural tensor of a group algebra. Our main result is that there is a universal constant ℓ>2 such that a large class of tensors generalizing the Coppersmith-Winograd tensor CW_q cannot be used within the Galactic method to show a bound on omega better than ell, for any q. We give evidence that previous lower-bounding techniques were not strong enough to show this. We also prove a number of complementary results along the way, including that for any group G, the structural tensor of C[G] can be used to recover the best bound on omega which the Coppersmith-Winograd approach gets using CW_|G|-2 as long as the asymptotic rank of the structural tensor is not too large.", "corpus_id": 53046999, "title": "Limits on All Known (and Some Unknown) Approaches to Matrix Multiplication" }
{ "abstract": "We consider the techniques behind the current best algorithms for matrix multiplication. Our results are threefold. \n(1) We provide a unifying framework, showing that all known matrix multiplication running times since 1986 can be achieved from a single very natural tensor - the structural tensor $T_q$ of addition modulo an integer $q$. \n(2) We show that if one applies a generalization of the known techniques (arbitrary zeroing out of tensor powers to obtain independent matrix products in order to use the asymptotic sum inequality of Sch\\\"{o}nhage) to an arbitrary monomial degeneration of $T_q$, then there is an explicit lower bound, depending on $q$, on the bound on the matrix multiplication exponent $\\omega$ that one can achieve. We also show upper bounds on the value $\\alpha$ that one can achieve, where $\\alpha$ is such that $n\\times n^\\alpha \\times n$ matrix multiplication can be computed in $n^{2+o(1)}$ time. \n(3) We show that our lower bound on $\\omega$ approaches $2$ as $q$ goes to infinity. This suggests a promising approach to improving the bound on $\\omega$: for variable $q$, find a monomial degeneration of $T_q$ which, using the known techniques, produces an upper bound on $\\omega$ as a function of $q$. Then, take $q$ to infinity. It is not ruled out, and hence possible, that one can obtain $\\omega=2$ in this way.", "corpus_id": 1132095, "title": "Further Limitations of the Known Approaches for Matrix Multiplication" }
{ "abstract": "We give a condition for a \"classical\" Goppa (1970) code to have a cyclic extension. This condition follows from the properties of the semilinear automorphism group of the corresponding generalized Reed-Solomon codes. Using this condition, we can construct not only all the known Goppa codes with a cyclic extension, but also new families.", "corpus_id": 31835162, "score": -1, "title": "New Classes of Cyclic Extended Goppa Codes" }
{ "abstract": "The purpose of this paper is to explain how auditors’ professional and organizational identities are associated with commercialization in audit firms. Unlike previous studies exploring the consequences of commercialization in the firms, the study directs its attention toward the potential driver of commercialization, which the authors argue to be the identities of the auditors.,The paper is based on 374 responses to a survey distributed to 3,588 members of FAR, the professional association of accountants, auditors and advisors in Sweden. The study used established measures of organizational and professional identity and introduced market, customer and firm process orientation as aspects of commercialization. The study explored the data through descriptive statistics, principle component analysis and correlation analysis and tested the hypotheses with multiple linear regression analysis.,The findings indicated that the organizational identity of auditors has a positive association with three aspects of commercialization: market orientation, customer orientation and firm process orientation. Contrary to the arguments based on prior literature, the study has found that the professional identity of auditors is also a positively associated with commercialization. This indicates a change of the role of professional identity vis-a-vis commercialization of audit firms. The positive association between professional identity and commercial orientation could indicate the development of “organizational professionalism.” The study also found differences between the association between professional identity and commercialization in Big 4 and non-Big 4 firms. 
While in Big 4 firms, professional identity is positively associated only with the firm’s process orientation, in non-Big 4 firms, professional identity has a positive association with all three aspects of commercialization.,The paper provides insight into how auditors’ identities have influenced commercialization of audit firms and into the normalizing of commercialization within auditing. The study also developed a new instrument for measuring commercialization, one based on market, customer and firm process orientation concepts. This paper suggests that this instrument is an alternative to the observation through proxies.", "corpus_id": 4941712, "title": "Auditors professional and organizational identities and commercialization in audit firms" }
{ "abstract": "Auditing is an important activity in today’s society and is characterised by several dilemmas. The assumption underlying this thesis is that auditing is influenced in prac-tice by the thought patte ...", "corpus_id": 153119609, "title": "Perspektiv på revision : tankemönster, förväntningsgap och dilemman" }
{ "abstract": "Den har studien handlar om varfor sma foretag valjer att inte anlita en revisor och vad som skulle fa dem att gora det. Revisionsplikten avskaffades i november 2010 och detta gjorde att sma aktiebo ...", "corpus_id": 108274697, "score": -1, "title": "Vilka faktorer påverkar valet att anlita eller inte anlita en revisor" }
{ "abstract": "Initially, computers were invented as devices to speed-up computations and facilitate the performance of repetitive mathematical operations. Their wider application in different areas upgraded this basic role and converted computers from calculating machines into devices for processing large amounts of data, with processing understood in a very general sense. The wonderfully large and ever increasing variety of applications based on various forms of data and information processing imposes various challenges, and computer hardware is supposed to provide the appropriate answers. This is the motivation for the surprisingly fast evolution of computing power in the relatively short history of contemporary computers. It seems that presently the opening of graphics processing units (GPU) for general purpose computations (GPGPU) is an answer that can meet the high demands for large computing power in certain applications. Their efficient use can provide a short break in the inevitable race for entirely new computing technologies that are necessary and offer some extended time for the underlying research work. This chapter discusses first the rationales that lead to the development of computing resulting in the appearance of the GPGPU and the GPU as the underlying technological platform. Then, we present the foundations of GPU architecture and the GPU programming frameworks up to the extent that is necessary for understanding the applications of GPUs discussed in this book. 1 Stanislav Stanković Rovio Entertainment Ltd., Tampere, Finland, e-mail: stanislav.stankovic@gmail.com Dušan Gajić Dept. of Computer Science, Faculty of Electronic Engineering, University of Niš, Niš, Serbia, e-mail: dule.gajic@gmail.com Radomir S. Stanković Dept. of Computer Science, Faculty of Electronic Engineering, University of Niš, Niš, Serbia, e-mail: Radomir.Stankovic@gmail.com 1 The work of Stanislav Stanković and Radomir S. 
Stanković was supported by the Academy of Finland, Finnish Center of Excellence Programme, Grant No. 44876.", "corpus_id": 64619363, "title": "GPU Computing with Applications in Digital Logic" }
{ "abstract": "Evolution of quantum circuits faces two major challenges: complex and huge search spaces and the high costs of simulating quantum circuits on conventional computers. In this paper we analyze different selection strategies, which are applied to the Deutsch-Jozsa problem and the 1-SAT problem using our GP system. Furthermore, we show the effects of adding randomness to the selection mechanism of a (1,10) selection strategy. It can be demonstrated that this boosts the evolution of quantum algorithms on particular problems.", "corpus_id": 1096035, "title": "Comparison of Selection Strategies for Evolutionary Quantum Circuit Design" }
{ "abstract": "Distributed quantum computing has been well-known for many years as a system composed of a number of small-capacity quantum circuits. Limitations in the capacity of monolithic quantum computing systems can be overcome by using distributed quantum systems which communicate with each other through known communication links. In our previous study, an algorithm with an exponential complexity was proposed to optimize the number of qubit teleportations required for the communications between two partitions of a distributed quantum circuit (DQC). In this work, a genetic algorithm is used to solve the optimization problem in a more efficient way. The results are compared with the previous study and we show that our approach works almost the same with a remarkable speed-up. Moreover, the comparison of the proposed approach based on GA with a random search over the search space verifies the effectiveness of GA.", "corpus_id": 254572743, "score": -1, "title": "An Evolutionary Approach to Optimizing Teleportation Cost in Distributed Quantum Computation" }
{ "abstract": "Automatically evaluating the coherence of summaries is of great significance both to enable cost-efficient summarizer evaluation and as a tool for improving coherence by selecting high-scoring candidate summaries. While many different approaches have been suggested to model summary coherence, they are often evaluated using disparate datasets and metrics. This makes it difficult to understand their relative performance and identify ways forward towards better summary coherence modelling. In this work, we conduct a large-scale investigation of various methods for summary coherence modelling on an even playing field. Additionally, we introduce two novel analysis measures, _intra-system correlation_ and _bias matrices_, that help identify biases in coherence measures and provide robustness against system-level confounders. While none of the currently available automatic coherence measures are able to assign reliable coherence scores to system summaries across all evaluation metrics, large-scale language models fine-tuned on self-supervised tasks show promising results, as long as fine-tuning takes into account that they need to generalize across different summary lengths.", "corpus_id": 252220240, "title": "How to Find Strong Summary Coherence Measures? A Toolbox and a Comparative Study for Summary Coherence Measure Evaluation" }
{ "abstract": "We propose a computationally efficient graph-based approach for local coherence modeling. We evaluate our system on three tasks: sentence ordering, summary coherence rating and readability assessment. The performance is comparable to entity grid based approaches though these rely on a computationally expensive training phase and face data sparsity problems.", "corpus_id": 1851389, "title": "Graph-based Local Coherence Modeling" }
{ "abstract": "This paper investigates the automatic evaluation of text coherence for machine-generated texts. We introduce a fully-automatic, linguistically rich model of local coherence that correlates with human judgments. Our modeling approach relies on shallow text properties and is relatively inexpensive. We present experimental results that assess the predictive power of various discourse representations proposed in the linguistic literature. Our results demonstrate that certain models capture complementary aspects of coherence and thus can be combined to improve performance.", "corpus_id": 8893038, "score": -1, "title": "Automatic Evaluation of Text Coherence: Models and Representations" }
{ "abstract": "PHAX (phosphorylated adaptor for RNA export) promotes nuclear export of short transcripts of RNA polymerase II such as spliceosomal U snRNA precursors, as well as intranuclear transport of small nucleolar RNAs (snoRNAs). However, it remains unknown whether PHAX has other critical functions. Here we show that PHAX is required for efficient DNA damage response (DDR) via regulation of phosphorylated histone variant H2AX (γH2AX), a key factor for DDR. Knockdown of PHAX led to a significant reduction of H2AX mRNA levels, through inhibition of both transcription of the H2AX gene and nuclear export of H2AX mRNA, one of the shortest mRNAs in the cell. As a result, PHAX-knockdown cells become more sensitive to DNA damage due to a shortage of γH2AX. These results reveal a novel function of PHAX, which secures efficient DDR and hence genome stability.", "corpus_id": 221037553, "title": "The RNA transport factor PHAX is required for proper histone H2AX expression and DNA damage response" }
{ "abstract": "Replication-dependent histone mRNAs are the only metazoan mRNAs that are not polyadenylated, ending instead in a conserved stem-loop sequence. Histone pre-mRNAs lack introns and are processed in the nucleus by a single cleavage step, which produces the mature 3' end of the mRNA. We have systematically examined the requirements for the nuclear export of a mouse histone mRNA using the Xenopus oocyte system. Histone mRNAs were efficiently exported when injected as mature mRNAs, demonstrating that the process of 3' end cleavage is not required for export factor binding. Export also does not depend on the stem-loop binding protein (SLBP) since mutations of the stem-loop that prevent SLBP binding and competition with a stem-loop RNA did not affect export. Only the length of the region upstream of the stem-loop, but not its sequence, was important for efficient export. Histone mRNA export was blocked by competition with constitutive transport element (CTE) RNA, indicating that the mRNA export receptor TAP is involved in histone mRNA export. Consistent with this observation, depletion of TAP from Drosophila cells by RNAi resulted in the restriction of mature histone mRNAs to the nucleus.", "corpus_id": 757125, "title": "Nuclear export of metazoan replication-dependent histone mRNAs is dependent on RNA length and is mediated by TAP." }
{ "abstract": "RNA export is the process by which RNAs are transported to the cytoplasm after synthesis, processing, and RNP assembly within the nucleus. The primary focus of this review is mRNA export with particular attention paid to the yeast Saccharomyces cerevisiae. Because there is rather little known about mRNA export in general and even less about yeast mRNA export, our thinking about the problem is influenced by information from other transport processes. These include not only mRNA export in vertebrate systems but also studies on the export of other RNA substrates and even studies on protein import. Several of these areas of investigation have recently intersected in gratifying ways.", "corpus_id": 8102695, "score": -1, "title": "Nuclear RNA export." }
{ "abstract": "The essay critically examines measurement of R&D. R&D capital and cited patents are used in the literature to measure R&D intensity and investigate market returns for R&D. The results are ambiguous. Previous literature suggests that patents are distinct from R&D expenditure, and R&D is influenced by competition. We suggest eight new measures based on the interplay between R&D and competition. We empirically test these measures on pharmaceutical and computer software industries which have the highest R&D intensities of all industries. The new measures are more significant than R&D capital and offer further insights on R&D in these industries. These measures help in capital allocation for R&D at firm level which maximizes stock returns.", "corpus_id": 55454934, "title": "Measuring Corporate R&D" }
{ "abstract": "Purpose – This paper sets out to test the effects of firms’ and industry's R&D intensity on persistence of abnormal earnings. Design/methodology/approach – Ohlson's valuation model is used with pooled regressions along with Fama–Macbeth methodology on yearly regressions and partitioning on Herfindahl index to conduct the tests. Findings – It was found that firms’ and industries’ R&D intensities are both positively correlated with persistence of abnormal earnings. The evidence suggests that the positive effect on earnings persistence caused by R&D's effectiveness in mitigating competition dominates the negative effect brought by more risk from R&D projects Practical implications – The fact that the firm's own R&D investment leads to incremental earnings persistence beyond that of the industry suggests the importance of incorporating both industry and firm's R&D intensity in earnings persistence. While industry R&D investment leads to competition mitigation via creation of entry barriers, a firm's own investment in R&D differentiates its products from those of its competitors, and thereby results in further competition mitigation by creating replacement barriers. Originality/value – Finally, since R&D intensity is correlated with earnings persistence, inclusion of R&D intensity in future earnings persistence studies may lead to better model specification by reducing the problem of correlated omitted variables.", "corpus_id": 154170778, "title": "Effect of R&D investments on persistence of abnormal earnings" }
{ "abstract": "This research has attempted to examine the effect of Research and Development (R&D) expenditures on firms value. Criteria that have been considered for firms value include operational profit, dividend, operational assets, book value and other information. For this purpose the market value of firms (pharmaceutical firms) which have R&D has been compared with those lacking that, using the Jones model (Jones, 2000). Results from testing the hypotheses have reflected that R&D increases sales and expenses as well; moreover the persistence of abnormal earnings shall increase but would have no impact on market value.", "corpus_id": 5518266, "score": -1, "title": "The Effect of Research and Development(R & D) Expenditures on Firms Value" }
{ "abstract": "Using immunofluorescent labeling and laser-scanning confocal microscopy, we show that isoforms of histone H4 acetylated on lysine 5, 8 and/or 12 (H4.Ac5-12), as well as RNA polymerase II, become enriched at the nuclear periphery around the time of zygotic gene activation, i.e., the 2-cell stage, in the preimplantation mouse embryo. In contrast, DNA and H4 acetylated on lysine 16 are uniformly distributed throughout the cytoplasm. Culture of embryos with inhibitors of histone deacetylase trichostatin A and trapoxin results in an increase in the (1) amount of acetylated histone H4 detected by immunoblotting, (2) intensity and sharpness of the peripheral staining for H4.Ac5-12, and (3) relative rate of synthesis of proteins that are markers for zygotic gene activation. The enhanced staining for H4.Ac5-12 at the nuclear periphery seems to require DNA replication, but appears independent of cytokinesis or transcription, since its development is inhibited by aphidicolin but not by either cytochalasin D or alpha-amanitin. Lastly, the restricted localization of H4.Ac 5-12 is not observed in the 4-cell embryo or at later stages of preimplantation development. These results suggest that changes in chromatin structure underlie, at least in part, zygotic gene activation in the mouse.", "corpus_id": 24790841, "title": "Temporally restricted spatial localization of acetylated isoforms of histone H4 and RNA polymerase II in the 2-cell mouse embryo." }
{ "abstract": "Genetic and biochemical approaches have recently been used to demonstrate the pivotal role of chromatin structure in gene regulation at two levels of organization. The three-dimensional folding of DNA mediated by chromatin structural proteins over several hundred base pairs has been shown to be critical for the local control of both transcriptional activation and repression. Nuclear domains also exist in which the further long-range organization of chromatin over 5-50 kb exerts a global control on the transcription process.", "corpus_id": 2340554, "title": "The transcription of chromatin templates." }
{ "abstract": "Abstract Parkinson's disease, spinal muscular atrophy and amyotrophic lateral sclerosis are caused by genetically unstable (CAG) n microsatellites, resulting in genetic anticipation and meiotic instability.", "corpus_id": 1932364, "score": -1, "title": "Parkinson's disease, amyotrophic lateral sclerosis and spinal muscular atrophy are caused by an unstable (CAG)n trinucleotide repeat microsatellite." }
{ "abstract": "ABSTRACT The endothelium plays an important role in cancer metastasis, but the mechanisms involved are still not clear. In the present work, we characterised the changes in endothelial function at early and late stages of breast cancer progression in an orthotopic model of murine mammary carcinoma (4T1 cells). Endothelial function was analysed based on simultaneous microflow liquid chromatography–tandem mass spectrometry using multiple reaction monitoring (microLC/MS-MRM) quantification of 12 endothelium-related biomarkers, including those reflecting glycocalyx disruption – syndecan-1 (SDC-1), endocan (ESM-1); endothelial inflammation – vascular cell adhesion molecule 1 (VCAM-1), intercellular adhesion molecule 1 (ICAM-1), E-selectin (E-sel); endothelial permeability – fms-like tyrosine kinase 1 (FLT-1), angiopoietin 2 (Angpt-2); and haemostasis – von Willebrand factor (vWF), tissue plasminogen activator (t-PA), plasminogen activator inhibitor 1 (PAI-1), as well as those that are pathophysiologically linked to endothelial function – adrenomedullin (ADM) and adiponectin (ADN). The early phase of metastasis in mouse plasma was associated with glycocalyx disruption (increased SDC-1 and ESM-1), endothelial inflammation [increased soluble VCAM-1 (sVCAM-1)] and increased vascular permeability (Angpt-2). During the late phase of metastasis, additional alterations in haemostasis (increased PAI-1 and vWF), as well as a rise in ADM and substantial fall in ADN concentration, were observed. In conclusion, in a murine model of breast cancer metastasis, we identified glycocalyx disruption, endothelial inflammation and increased endothelial permeability as important events in early metastasis, while the late phase of metastasis was additionally characterised by alterations in haemostasis. 
Summary: A microLC/MS-MRM-based approach for simultaneous determination of endothelium-related biomarkers identified glycocalyx disruption, endothelial inflammation and increased endothelial permeability as important events in early pulmonary metastasis in a murine model of breast cancer metastasis.", "corpus_id": 59274199, "title": "Early and late endothelial response in breast cancer metastasis in mice: simultaneous quantification of endothelial biomarkers using a mass spectrometry-based method" }
{ "abstract": "Urokinase plasminogen activator (uPA) is an extracellular matrix-degrading protease involved in cancer invasion and metastasis, interacting with plasminogen activator inhibitor-1 (PAI-1), which was originally identified as a blood-derived endogenous fast-acting inhibitor of uPA. At concentrations found in tumor tissue, however, both PAI-1 and uPA promote tumor progression and metastasis. Consistent with the causative role of uPA and PAI-1 in cancer dissemination, several retrospective and prospective studies have shown that elevated levels of uPA and PAI-1 in breast tumor tissue are statistically independent and potent predictors of poor patient outcome, including adverse outcome in the subset of breast cancer patients with lymph node-negative disease. In addition to being prognostic, high levels of uPA and PAI-1 have been shown to predict benefit from adjuvant chemotherapy in patients with early breast cancer. The unique clinical utility of uPA/PAI-1 as prognostic biomarkers in lymph node-negative breast cancer has been confirmed in two independent level-of-evidence-1 studies (that is, in a randomized prospective clinical trial in which the biomarker evaluation was the primary purpose of the trial and in a pooled analysis of individual data from retrospective and prospective studies). Thus, uPA and PAI-1 are among the best validated prognostic biomarkers currently available for lymph node-negative breast cancer, their main utility being the identification of lymph node-negative patients who have HER-2-negative tumors and who can be safely spared the toxicity and costs of adjuvant chemotherapy. Recently, a phase II clinical trial using the low-molecular-weight uPA inhibitor WX-671 reported activity in metastatic breast cancer.", "corpus_id": 1019588, "title": "uPA and PAI-1 as biomarkers in breast cancer: validated for clinical use in level-of-evidence-1 studies" }
{ "abstract": "The GPI-anchored urokinase plasminogen activator receptor (uPAR) does not internalize free urokinase (uPA). On the contrary, uPAR-bound complexes of uPA with its serpin inhibitors PAI-1 (plasminogen activator inhibitor type-1) or PN-1 (protease nexin-1) are readily internalized in several cell types. Here we address the question whether uPAR is internalized as well upon binding of uPA-serpin complexes. Both LB6 clone 19 cells, a mouse cell line transfected with the human uPAR cDNA, and the human U937 monocytic cell line, express in addition to uPAR also the endocytic alpha 2-macroglobulin receptor/low density lipoprotein receptor-related protein (LRP/alpha 2-MR) which is required to internalize uPAR-bound uPA-PAI-1 and uPA-PN-1 complexes. Downregulation of cell surface uPAR molecules in U937 cells was detected by cytofluorimetric analysis after uPA-PAI-1 and uPA-PN-1 incubation for 30 min at 37 degrees C; this effect was blocked by preincubation with the ligand of LRP/alpha 2-MR, RAP (LRP/alpha 2-MR-associated protein), known to block the binding of the uPA complexes to LRP/alpha 2-MR. Downregulation correlated in time with the intracellular appearance of uPAR as assessed by confocal microscopy and immuno-electron microscopy. After 30 min incubation with uPA-PAI-1 or uPA-PN-1 (but not with free uPA), confocal microscopy showed that uPAR staining in permeabilized LB6 clone 19 cells moved from a mostly surface associated to a largely perinuclear position. This effect was inhibited by the LRP/alpha 2-MR RAP. Perinuclear uPAR did not represent newly synthesized nor a preexisting intracellular pool of uPAR, since this fluorescence pattern was not modified by treatment with the protein synthesis inhibitor cycloheximide, and since in LB6 clone 19 cells all of uPAR was expressed on the cell surface. Immuno-electron microscopy confirmed the plasma membrane to intracellular translocation of uPAR, and its dependence on LRP/alpha 2-MR in LB6 clone 19 cells only after binding to the uPA-PAI-1 complex. After 30 min incubation at 37 degrees C with uPA-PAI-1, 93% of the specific immunogold particles were present in cytoplasmic vacuoles vs 17.6% in the case of DFP-uPA. We conclude therefore that in the process of uPA-serpin internalization, uPAR itself is internalized, and that internalization requires the LRP/alpha 2-MR.", "corpus_id": 11642637, "score": -1, "title": "alpha-2 Macroglobulin receptor/Ldl receptor-related protein(Lrp)-dependent internalization of the urokinase receptor" }
{ "abstract": "This thesis aims to investigate the formation of deposits from thermally degraded biodiesel on a hot metal surface under the influence of sodium or copper contaminations. Biodiesel or Fatty Acid Methyl Esters (FAMEs) is a widely utilized biofuel with the potential to replace fossil fuels, however, issues regarding the thermal and oxidative stability prevent the progress of biodiesel for utilization as vehicle fuel. The thermal degradation of biodiesel causes formation of deposits often occurring in the fuel injectors, which could result in reduced engine efficiency, increased emissions and engine wear. However, still have no standard method for evaluation of a fuels’ tendency to form deposits been developed. In this study biodiesel deposits have been formed on aluminum test tubes utilizing a Hot Liquid Process Simulator (HLPS), an instrument based on the principle of the Jet Fuel Thermal Oxidation Tester (JFTOT). Quantitative and qualitative analyses have been made utilizing an array of techniques including Scanning Electron Microscopy (SEM), Gas Chromatography Mass Spectrometry (GCMS) and Attenuated Total Reflectance Fourier Transform Infrared Spectrometry (ATR-FTIR). A multi-factorial trial investigating the effects of sodium hydroxide and copper contaminations at trace levels and the impact of a paraffin inhibitor copolymer additive on three different FAME products, two derived from rapeseed oil and one from waste cooking oil as well as a biodiesel blend with mineral diesel, was conducted. The results exhibited that FAMEs are the major precursor to deposit formation in diesel fuel. The SEM analyses exploited the nature of FAME deposits forming porous structures on hot metal surfaces. Sodium hydroxide proved to participate in the deposit formation by forming carboxylic salts. However, the copper contamination exhibited no enhancing effect on the deposits, possibly due to interference of the blank oil in which copper was received. The paraffin inhibitor functioning as a crystal modifier had significant reducing effect on the deposit formation for all biodiesel samples except for the FAME product derived from waste cooking oil. Further studies are needed in order to investigate the influence of glycerin and water residues to the biodiesel deposit formation. Mechanisms involving oxidative or thermal peroxide formation, polymerization and disintegration have been suggested as degradation pathways for biodiesel. The involvement of oxidation intermediates, peroxides, was confirmed by the experiments performed in this thesis. However, the mechanisms of biodiesel deposit formation are complex and hard to study as the deposits are seemingly insoluble. Nevertheless, ATR-FTIR in combination with JFTOT-processing has potential as standard method for evaluation of deposit forming tendencies of biodiesel.", "corpus_id": 92916243, "title": "Qualitative and Quantitative Analysis of Biodiesel Deposits Formed on a Hot Metal Surface" }
{ "abstract": "In face of recent changes in the edaphoclimatic conditions (climate and soil) occurring worldwide, it has been necessary reflections on the need to exploit natural resources in a sustainable manner. Sustainability is a systemic concept, relating to the continuity of economic, social, cultural and environmental aspects of human society. It proposes to be a means of configuring the civilization and human activity so that the society, its members and its economies can fulfill its needs and express its greatest potential at present, while preserving biodiversity and natural ecosystems, planning and acting to achieve pro-efficiency in maintaining undefined these ideals [1].", "corpus_id": 1246641, "title": "Biodiesel: Production, Characterization, Metallic Corrosion and Analytical Methods for Contaminants" }
{ "abstract": "This thesis aims to investigate the formation of deposits from thermally degraded biodiesel on a hot metal surface under the influence of sodium or copper contaminations. Biodiesel or Fatty Acid Methyl Esters (FAMEs) is a widely utilized biofuel with the potential to replace fossil fuels, however, issues regarding the thermal and oxidative stability prevent the progress of biodiesel for utilization as vehicle fuel. The thermal degradation of biodiesel causes formation of deposits often occurring in the fuel injectors, which could result in reduced engine efficiency, increased emissions and engine wear. However, still have no standard method for evaluation of a fuels’ tendency to form deposits been developed. In this study biodiesel deposits have been formed on aluminum test tubes utilizing a Hot Liquid Process Simulator (HLPS), an instrument based on the principle of the Jet Fuel Thermal Oxidation Tester (JFTOT). Quantitative and qualitative analyses have been made utilizing an array of techniques including Scanning Electron Microscopy (SEM), Gas Chromatography Mass Spectrometry (GCMS) and Attenuated Total Reflectance Fourier Transform Infrared Spectrometry (ATR-FTIR). A multi-factorial trial investigating the effects of sodium hydroxide and copper contaminations at trace levels and the impact of a paraffin inhibitor copolymer additive on three different FAME products, two derived from rapeseed oil and one from waste cooking oil as well as a biodiesel blend with mineral diesel, was conducted. The results exhibited that FAMEs are the major precursor to deposit formation in diesel fuel. The SEM analyses exploited the nature of FAME deposits forming porous structures on hot metal surfaces. Sodium hydroxide proved to participate in the deposit formation by forming carboxylic salts. However, the copper contamination exhibited no enhancing effect on the deposits, possibly due to interference of the blank oil in which copper was received. The paraffin inhibitor functioning as a crystal modifier had significant reducing effect on the deposit formation for all biodiesel samples except for the FAME product derived from waste cooking oil. Further studies are needed in order to investigate the influence of glycerin and water residues to the biodiesel deposit formation. Mechanisms involving oxidative or thermal peroxide formation, polymerization and disintegration have been suggested as degradation pathways for biodiesel. The involvement of oxidation intermediates, peroxides, was confirmed by the experiments performed in this thesis. However, the mechanisms of biodiesel deposit formation are complex and hard to study as the deposits are seemingly insoluble. Nevertheless, ATR-FTIR in combination with JFTOT-processing has potential as standard method for evaluation of deposit forming tendencies of biodiesel.", "corpus_id": 92916243, "score": -1, "title": "Qualitative and Quantitative Analysis of Biodiesel Deposits Formed on a Hot Metal Surface" }
{ "abstract": "Received: Revised: Accepted: 2013–06–1", "corpus_id": 52994785, "title": "Effect of Weaning Age and Sex on Meat Quality Traits of Pigs Raised under Intensive System and Slaughtered at 70 Kg Body Weight" }
{ "abstract": "Concern over the environmental effect of P excretion from pig production has led to reduced dietary P supplementation. To examine how genetics influence P utilization, 94 gilts sired by 2 genetic lines (PIC337 and PIC280) were housed individually and fed either a P-adequate diet (PA) or a 20% P-deficient diet (PD) for 14 wk. Initially and monthly, blood samples were collected and BW recorded after an overnight fast. Growth performance and plasma indicators of P status were determined monthly. At the end of the trial, carcass traits, meat quality, bone strength, and ash percentage were determined. Pigs fed the PD diet had decreased (P < 0.05) plasma P concentrations and poorer G:F (P < 0.05) over the length of the trial. After 4 wk on trial, pigs fed the PD diet had increased (P < 0.05) plasma 1,25(OH)(2)D(3) and decreased (P < 0.05) plasma parathyroid hormone compared with those fed the PA diet. At the end of the trial, pigs fed the PD diet had decreased (P < 0.05) BW, HCW, and percentage fat-free lean and tended to have decreased LM area (P = 0.06) and marbling (P = 0.09) and greater (P = 0.12) 10th-rib backfat than pigs fed the PA diet. Additionally, animals fed the PD diet had weaker bones and also decreased (P < 0.05) ash percentage and increased (P < 0.05) concentrations of 1alpha-hydroxylase and parathyroid hormone receptor mRNA in kidney tissue. Regardless of dietary treatment, PIC337-sired pigs consumed more feed and gained more BW than their PIC280-sired counterparts (P < 0.05) during the study. The PIC337-sired pigs also had greater (P < 0.05) HCW, larger (P < 0.01) LM area, and tended to have (P = 0.07) greater dressing percentage. Meat from the PIC337-sired pigs also tended to have greater (P = 0.12) concentrations of lactate but decreased (P = 0.07) concentrations of total glucose units 24 h postslaughter. Although plasma 1,25(OH)(2)D(3) concentrations were elevated (P < 0.05) in all the animals fed the PD diet, this elevation due to P deficiency tended (P = 0.09) to be greater in the PIC337-sired pigs after 12 wk on the treatment. The PIC337-sired pigs had stronger (P < 0.01) bones with greater ash percentage than the PIC280-sired pigs. The difference in the strength of the radii between the PIC337-sired pigs fed the PA and PD diets was greater than their PIC280-sired counterparts, which resulted in sire line x treatment interactions (P < 0.05). These data indicate differing mechanisms of P utilization between these genetic lines. Elucidating these mechanisms may lead to strategies to increase efficiency of growth in a more environmentally friendly manner.", "corpus_id": 8310695, "title": "Response to dietary phosphorus deficiency is affected by genetic background in growing pigs." }
{ "abstract": "Dietary phosphorus frequently exceeds age-specific requirements and pig manure often contains high phosphorus load which causes environmental burden at regional scales. Therefore, feeding strategies towards improved phosphorus efficiency and reduced environmental phosphorus load have to be developed. A 5-week feeding trial was conducted: piglets received medium, lower (−25%), or higher (+25%) amounts of phosphorus and calcium. Dietary responses were reflected by performance parameters, bone characteristics, and molecular data retrieved from serum, intestinal mucosa, and kidney cortex (p < 0.05). Transcripts associated with vitamin D hydroxylation (Cyp24A1, Cyp27A1, Cyp27B1) were regulated by diet at local tissue sites. Low-fed animals showed attempts to maintain mineral homoeostasis via intrinsic mechanisms, whereas the high-fed animals adapted at the expense of growth and development. Results suggest that a diet containing low phosphorus and calcium levels might be useful to improve resource efficiency and to reduce phosphorus losses along the agricultural value chain.", "corpus_id": 733190, "score": -1, "title": "Lower dietary phosphorus supply in pigs match both animal welfare aspects and resource efficiency" }
{ "abstract": "Geometrical patterns of Paleoproterozoic dyke swarms in the Superior craton, North America, and paleomagnetic studies of those dykes, both indicate relative motion across the Kapuskasing Structural Zone (KSZ) that divides the craton into eastern and western sectors. Previous work has optimized the amount of vertical-axis rotation necessary to bring the dyke trends and paleomagnetic remanence declinations into alignment, yet such calculations are not kinematically viable in a plate-tectonic framework. Here we subdivide the Superior craton into two internally rigid subplates and calculate Euler parameters that optimally group the paleomagnetic remanence data from six dyke swarms with ages between 2470 and 2070 Ma. Our dataset includes 59 sites from the Matachewan dykes for which directional results are reported for the first time. Our preferred restoration of the eastern Superior subprovince relative to the western subprovince is around an Euler pole at 51°N, 85°W, with a rotation angle of 14° CCW. Although we do not include data from the KSZ in our rigid-subplate calculations, we can align its dyke strikes by applying a 23° CCW distributed shear that preserves line length of all dykes pinned to the western margin. Our model predicts approximately 90 km of dextral transpressional displacement at ca. 1900 Ma, about half of which is accommodated by distributed strain within the KSZ, and the other half by oblique lateral thrusting (with NE-vergence) across the Ivanhoe Lake shear zone. We produce a combined apparent polar wander path for the early Paleoproterozoic Superior craton that incorporates data from both western and eastern subplates, and that can be rotated to either of the subplates’ reference frames for the purposes of Archean-Paleoproterozoic studies.", "corpus_id": 43750095, "title": "Restoring Proterozoic deformation within the Superior craton" }
{ "abstract": "SEISMIC imaging of Precambrian cratons holds the promise of deep structural mapping and interpretation of fundamental crustal construction processes. Deep structures may be identified by tracing reflections to the surface1, particularly in exposed crustal cross-sections2 elevated along deeply penetrating faults, but many such cross-sections have structural orientations too steep for successful reflection imaging (like the Ivrea zone3) or occur in plate-boundary settings not representative of continental interiors. By contrast, the Archaean greenstone terrain exposed in the Kapuskasing uplift of Canada4, 5 has gently dipping structures, and an intracratonic setting within the Superior Province, thus providing an opportunity to examine the third dimension of continental crust down to 25–30 km palaeodepth6. Here we present new seismic reflection data which, although generally supportive of structural models based on surface geology4, gravity modeling5 and seismic-refraction studies7,8, also indicate that the Kapuskasing structure is a relatively thin thrust sheet containing seismically reflective sequences of high-grade rock which may be analogues of seismically reflective in situ lower crust.", "corpus_id": 4355774, "title": "Seismic reflection profiles across deep continental crust exposed in the Kapuskasing uplift structure" }
{ "abstract": "The Belt basin represents a slowly sinking reentrant on the North American craton that began to form about 1,500 m.y. ago and persisted for more than 600 m.y. This sinking block somewhat resembles an aulacogen, but the basin is not a true grabenlike trough extending into the craton at a plate separation. The sinking block was at times almost triangular in shape; a central platform was bounded by the North American craton and associated narrow troughs on the south and northeast, and on the northwest by the Cordilleran miogeocline that extended past several reentrants along the North American craton. Following the end of Belt sedimentation and the onset of the East Kootenay orogeny about 850 m.y. ago, the Belt basin has been subjected to a variety of stresses. Within the basin the strain appears to reflect the inhomogeneities of the platform and troughs formed in Belt time. During and after the East Kootenay orogeny the central platform acted as a somewhat rigid block and contains gentle folds and vertical block faults; the southern trough contains a series of tear faults and tight folds of the Lewis and Clark line; the northeastern trough and old cratonic edge are the site of the Montana disturbed belt; the intersection of the two troughs forms an embayment that contains the Boulder batholith and related volcanics; and the old miogeocline on the northwest is the site of the Kootenay arc mobile belt, which contains gneiss domes and thrusts that rode eastward up over the platform", "corpus_id": 55223146, "score": -1, "title": "Tectonic features of the Precambrian Belt Basin and their influence on post-Belt structures" }
{ "abstract": "The genotype–phenotype (GP) map is a central concept in evolutionary biology as it describes the mapping of molecular genetic variation onto phenotypic trait variation. Our understanding of that mapping remains partial, especially when trying to link functional clustering of pleiotropic gene effects with patterns of phenotypic trait co-variation. Only on rare occasions have studies been able to fully explore that link and tend to show poor correspondence between modular structures within the GP map and among phenotypes. By dissecting the structure of the GP map of the replicative capacity of HIV-1 in 15 drug environments, we provide a detailed view of that mapping from mutational pleiotropic variation to phenotypic co-variation, including epistatic effects of a set of amino-acid substitutions in the reverse transcriptase and protease genes. We show that epistasis increases the pleiotropic degree of single mutations and provides modularity to the GP map of drug resistance in HIV-1. Moreover, modules of epistatic pleiotropic effects within the GP map match the phenotypic modules of correlated replicative capacity among drug classes. Epistasis thus increases the evolvability of cross-resistance in HIV by providing more drug- and class-specific pleiotropic profiles to the main effects of the mutations. We discuss the implications for the evolution of cross-resistance in HIV.", "corpus_id": 3696389, "title": "Epistasis and Pleiotropy Affect the Modularity of the Genotype–Phenotype Map of Cross-Resistance in HIV-1" }
{ "abstract": "Development and physiology translate genetic variation into phenotypic variation and determine the genotype-phenotype map, such as which gene affects which character (pleiotropy). Any genetic change in this mapping reflects a change in development. Here, we discuss evidence for variation in pleiotropy and propose the selection, pleiotropy and compensation model (SPC) for adaptive evolution. It predicts that adaptive change in one character is associated with deleterious pleiotropy in others and subsequent selection to compensate for these pleiotropic effects. The SPC model provides a unifying perspective for a variety of puzzling phenomena, including developmental systems drift and character homogenization. The model suggests that most adaptive signatures detected in genome scans could be the result of compensatory changes, rather than of progressive character adaptations.", "corpus_id": 1735774, "title": "A model of developmental evolution: selection, pleiotropy and compensation." }
{ "abstract": "Mycobacteria propelled modulation of host responses is of considerable interest in the face of emerging drug resistance. Although it is known that Abl tyrosine kinases affect entry and persistence of mycobacteria, mechanisms that couple c-Abl to proximal signaling pathways during immunity are poorly understood. Loss-of-function of c-Abl through Imatinib, in a mouse model of tuberculosis or RNA interference, identified bone morphogenesis protein (BMP) signaling as its cellular target. We demonstrate that c-Abl promotes mycobacterial survival through epigenetic modification brought about by KAT5-TWIST1 at Bmp loci. c-Abl-BMP signaling deregulated iNOS, aggravating the inflammatory balance. Interestingly, BMP signaling was observed to have far-reaching effects on host immunity, as it attenuated TLR3 pathway by engaging miR27a. Significantly, these events were largely mediated via WhiB3 and DosR/S/T but not SecA signaling pathway of mycobacteria. Our findings suggest molecular mechanisms of host pathways hijacked by mycobacteria and expand our understanding of c-Abl inhibitors in potentiating innate immune responses.", "corpus_id": 3352784, "score": -1, "title": "c-Abl-TWIST1 Epigenetically Dysregulate Inflammatory Responses during Mycobacterial Infection by Co-Regulating Bone Morphogenesis Protein and miR27a" }
{ "abstract": "Hepatectomy is the main curative strategy for patients with colorectal liver metastasis (CRLM). In recent years, laparoscopic hepatectomy (LH) has gradually been adopted for the treatment of CRLM. However, in most cases reported in previous studies, CRLM was located in the anterolateral segments. The aim of the current analysis was to compare the short- and long-term outcomes of LH for CRLM in the posterosuperior segments. Clinical and follow-up data of patients with CRLM, undergoing LH at our hospital from March 2009 to October 2016 were retrospectively analyzed. Patients were divided into the posterosuperior group (38 cases) and the anterolateral group (81 cases) based on the location of CRLM. Compared with the anterolateral group, the posterosuperior group had longer operative time, greater intraoperative blood loss, and higher rate of conversion. There was no statistical difference in the rate and severity of postoperative 30-day complications, postoperative 30-day mortality, length of hospital stay, pathological results, 5-year overall survival, and disease-free survival. In summary, although LH for CRLM in the posterosuperior segments has shortcomings such as long operative time, high intraoperative blood loss, and high rate of conversion, the incidence of postoperative complications, severity of complications, postoperative 30-day mortality, and long-term survival outcomes in the PS group were not different from those in the anterolateral segments.", "corpus_id": 53606096, "title": "Comparison of short- and long-term outcomes of laparoscopic hepatectomy for colorectal liver metastasis in the posterosuperior and anterolateral segments" }
{ "abstract": "PURPOSE\nOver the last decade, laparoscopic liver surgery has significantly evolved. The aim of this study was to analyse the outcomes of Laparoscopic Left Lateral Hepatectomy (LLLH) for colorectal cancer (CRC) metastases in a tertiary referral hepato-pancreato-biliary centre.\n\n\nMETHODS\nA consecutive series of patients undergoing LLLH between January 2009 and April 2013 were analysed using prospectively collected data in a tertiary referral HPB centre. In particular, the study focused on patients who had LLLH for colorectal liver metastasis (CRLM). The following features were analysed: operative time, intraoperative blood loss, number and size of tumours, resection margins, complication rates, follow up period and recurrence rates.\n\n\nRESULTS\nA total of 17 patients were finally included. There were no bile leaks or collections and no postoperative bleeding. The median hospital stay was 4 days (range 2-10). The median size of the metastatic lesions was 28.1 mm (range 8-56). The resection was R0 in all except 2 patients (11%) where the margin was less than 1 mm. The mean resection margin was 14.6 mm (range 1-50). Eight patients (47%) did not develop any recurrence till latest follow up. Seven patients (41%) developed recurrence in the liver or lungs. The median time to recurrence was 11 months (range 2-12). There was only one death in the follow up period (22-77 months). Sixteen patients (94%) were alive at the latest follow up.\n\n\nCONCLUSION\nLLLH for CRLM is safe and can be performed with low complication rates, adequate resection margins, short hospital stay, and oncologic outcomes similar to those of open surgery.", "corpus_id": 1505483, "title": "Laparoscopic left lateral hepatectomy for colorectal metastasis is the standard of care." }
{ "abstract": "OBJECTIVES\nThis study sought to assess the consequences of switching prasugrel to clopidogrel on platelet inhibition and clinical outcomes after an acute coronary syndrome (ACS).\n\n\nBACKGROUND\nMany ACS patients are switched from prasugrel to clopidogrel within the recommended 1-year duration of treatment.\n\n\nMETHODS\nPlatelet reactivity was measured with the VerifyNow P2Y(12) assay (Accumetrics, San Diego, California) in 300 ACS patients treated for 15 days with prasugrel 10 mg. Patients displaying low on-treatment platelet reactivity (LPR) and/or at high risk of bleeding were switched to clopidogrel 75 mg and tested again 15 days later. The rate of patients with high on-treatment platelet reactivity (HPR), P2Y(12) reaction units (PRU) >208, and LPR (PRU <0) were evaluated before and after the switch. Bleeding and ischemic events were also recorded.\n\n\nRESULTS\nOn a regimen of prasugrel 10 mg, the rate of patients with LPR was 45.6% (n = 137), whereas 4.3% (n = 13) had HPR. A group of 31 patients (10.3%) was switched to clopidogrel 75 mg, of whom 29 had LPR (93.5%) on a regimen of prasugrel. On-treatment platelet reactivity (PRU) increased from 14 ± 4 on a regimen of prasugrel to 155 ±15 on a regimen of clopidogrel (p = 0.0001), resulting in a much lower rate of patients with LPR (9.7%). The rate of patients with HPR increased from 0% with prasugrel to 29% (n = 9) with clopidogrel. The rate of minor bleeding decreased after the switch from 32.2% to 9.7%; p = 0.03.\n\n\nCONCLUSIONS\nAn LPR is frequent in patients treated with prasugrel 10 mg. Early switching from prasugrel 10 mg to clopidogrel 75 mg reduces the number of patients with LPR and minor bleeding events but unmasks a group of nonresponders to clopidogrel with unknown consequences on clinical outcomes.", "corpus_id": 24559935, "score": -1, "title": "Switching acute coronary syndrome patients from prasugrel to clopidogrel." }
{ "abstract": "Working memory (WM) allows information to be stored and manipulated over short time scales. Performance on WM tasks is thought to be supported by the frontoparietal system (FPS), the default mode system (DMS), and interactions between them. Yet little is known about how these systems and their interactions relate to individual differences in WM performance. We address this gap in knowledge using functional MRI data acquired during the performance of a 2-back WM task, as well as diffusion tensor imaging data collected in the same individuals. We show that the strength of functional interactions between the FPS and DMS during task engagement is inversely correlated with WM performance, and that this strength is modulated by the activation of FPS regions but not DMS regions. Next, we use a clustering algorithm to identify two distinct subnetworks of the FPS, and find that these subnetworks display distinguishable patterns of gene expression. Activity in one subnetwork is positively associated with the strength of FPS-DMS functional interactions, while activity in the second subnetwork is negatively associated. Further, the pattern of structural linkages of these subnetworks explains their differential capacity to influence the strength of FPS-DMS functional interactions. To determine whether these observations could provide a mechanistic account of large-scale neural underpinnings of WM, we build a computational model of the system composed of coupled oscillators. Modulating the amplitude of the subnetworks in the model causes the expected change in the strength of FPS-DMS functional interactions, thereby offering support for a mechanism in which subnetwork activity tunes functional interactions. Broadly, our study presents a holistic account of how regional activity, functional interactions, and structural linkages together support individual differences in WM in humans.", "corpus_id": 58981478, "title": "Multiscale and multimodal network dynamics underpinning working memory" }
{ "abstract": "The dorsolateral prefrontal cortex (DLPFC) plays a crucial role in working memory. Notably, persistent activity in the DLPFC is often observed during the retention interval of delayed response tasks. The code carried by the persistent activity remains unclear, however. We critically evaluate how well recent findings from functional magnetic resonance imaging studies are compatible with current models of the role of the DLFPC in working memory. These new findings suggest that the DLPFC aids in the maintenance of information by directing attention to internal representations of sensory stimuli and motor plans that are stored in more posterior regions. Working memory refers to the temporary representation of information that was just experienced or just retrieved from long-term memory. These active representations are short-lived, but can be maintained for longer periods of time through active rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goaldirected behavior. Most definitions of working memory include both storage and (executive) control components [1]. Cognitive neuroscientists are searching for ways to disassociate the separate components of working memory in attempts to localize and clearly characterize their neural implementation. The prefrontal cortex (PFC) is thought to be the most important substrate for working memory (Fig. 1). Two key findings from studies of monkeys performing delayed response tasks suggest a crucial role for the PFC in working memory. First, experimental lesions of the principal sulcus in the dorsolateral prefrontal cortex (DLPFC) cause delay-dependent impairments [2‐4]. That is, forgetting increases not only when a delay is imposed but increases with the length of the delay. Second, neurophysiological unit recordings from the DLPFC often show persistent, sustained levels of neuronal firing during the retention interval of delayed response tasks. Fig. 1. Lateral surface of (a) macaque and (b) human brain. The PFC is composed of lateral, medial, and orbital sectors that are believed to be functionally distinct given the selective effects of damage and distribution of afferent and efferent projections. The tinted areas correspond to those defined by Petrides and Pandya [71] based on cytoarchitecture and connectivity. Notably, the mid-DLPFC comprises areas 46 and 9/46 and the mid-VLPFC comprises areas 45 and 47/12. Note that much of area 46 lies in the depths of the principal sulcus of the monkey and the intermediate frontal sulcus of the human. Frontal premotor regions are also highlighted. The frontal eye field (F) in the macaque lies in the anterior bank of the arcuate sulcus in area 8A. In the human, F is found in the vicinity of the precentral sulcus and superior frontal sulcus junction (area 6 and maybe the caudal-most portion of 8A). The frontal eye field is a premotor region involved in the control of eye movements. Broca’s area (B, area 44) is also a premotor area that is involved in the production of speech. The dotted line represents the principal sulcus in the macaque and the inferior frontal sulcus in the human. TRENDS in Cognitive Sciences", "corpus_id": 15763406, "title": "Persistent activity in the prefrontal cortex during working memory" }
{ "abstract": "This paper proposes a new design of wideband oval-shaped antenna with ±4° dipoles suitable for the base station of mobile communication systems. The designed antennas cover bands of 700–960 MHz (the lower band antenna) and 1700–2700 MHz (the upper band antenna) for cellular 2G/3G/LTE technologies. The design enjoys the advantages of stable directivity and beamwidth within frequency bands and a simple feeding structure with a compact size and low profile. The oval-shaped structure makes the antenna smaller in size than other polygon shapes. The antenna in the lower band has a beamwidth of 85.4°± 1.3° and directivity of 7.2±0.13 dBi with the reflection coefficient ≤ −17 dB and the isolation between the ports ≥ 19 dB while the antenna in the upper band has a beamwidth of 84.15°±3.9° and directivity of 6.94±0.38 dBi with the reflection coefficient ≤ −10.5 dB and the isolation between the ports ≥ 20.5dB. High cross polarization discrimination ratios were also achieved within the desired frequency bands.", "corpus_id": 36846014, "score": -1, "title": "Design of broadband dual-polarized oval-shaped base station antennas for mobile systems" }
{ "abstract": "Objective: To measure the association between the degree of knowledge on how to solve problems associated with the use of contraceptive methods and the presence of unplanned pregnancies in women using short-acting contraception; to determine the prevalence of unplanned pregnancies; and to describe attitudes, perceptions and characteristics in relation to family planning activities. Materials and methods: Cross-sectional analytical study conducted in women between 14 and 49 years of age coming for a pregnancy test to a public, low-complexity healthcare institution in Medellin, Colombia, that provides care to a population covered under a state-subsidized healthcare system. Before delivering the result of the pregnancy test they were given a structured survey and a test to measure their knowledge for solving situations that might affect the effectiveness of contraception methods under the woman’s control. Results: Of the 471 women surveyed, 75.2 % were not planning to become pregnant and 57 % had an unplanned pregnancy. The median knowledge level was 50% (p25: 37.5%, p75: 62.5%). The prevalence ratio of unplanned pregnancy with an intermediate or high level of knowledge was 0.56 (95% CI 0.34-0.92). Conclusion: Knowledge about how to solve problems regarding contraceptive methods that depend for their effectiveness on the appropriate use by the woman is associated with a lower frequency of unplanned pregnancies.", "corpus_id": 74175289, "title": "Asociación entre conocimientos de anticoncepción y embarazo no planeado: Estudio de corte transversal" }
{ "abstract": "Purpose Compare the relationship between childbearing intentions, maternal behaviors, and pregnancy outcomes in a group of early/middle adolescents versus a group of late adolescents (specifically high school seniors, high school graduates, and GED certificate recipients). Methods The reasons given by a racially/ethnically diverse group of 1,568 pregnant 13–18 year olds for not using contraception were used to classify their pregnancies as intended or unintended. Proportion comparison tests and stepwise logistic regression analyses were used to study the relationship between childbearing intentions, maternal behaviors, and pregnancy outcomes. Results Regardless of age, adolescents who intended to become pregnant conceived in an objectively more hospitable and supportive childbearing milieu than those who conceived unintentionally. This is evidenced by their greater likelihood of having goals compatible with adolescent childbearing, cohabitation with the father of the child, and living in a non-chaotic environment. However, pregnancy planning was not associated with improved compliance with preventive health care recommendations during gestation nor with infant outcomes. As such, the consequences among adolescents with intended pregnancies were negative, as evidenced by a higher rate of smoking, STDs late in gestation, school dropout, and repeat conception. Conclusions Like adults, adolescents with intended pregnancies conceived in an objectively more supportive environment than their counterparts with unintended pregnancies. However, this advantage did not translate into better support, healthier maternal behavior during gestation, or improved pregnancy outcomes.", "corpus_id": 581572, "title": "Reasons for Ineffective Contraceptive Use Antedating Adolescent Pregnancies: Part 2: A Proxy for Childbearing Intentions" }
{ "abstract": "OBJECTIVE\nTo describe the perceived quality of life of teen mothers.\n\n\nSTUDY DESIGN\nThe Medical Outcomes Survey-Short Form 36, version 2 is a scale that measures a subject's perception of 8 health dimensions. The Medical Outcomes Survey-Short Form 36, version 2 and a demographics survey were completed by women during obstetric or gynecologic visits to a resident continuity clinic. Mean scores were compared between women with children and those without.\n\n\nRESULTS\nThere was no significant difference between adults or teens, with or without children, in any health component scale with the exception of social functioning. When compared with the normative population age, all groups in our population scored significantly on physical functioning and role-physical subscales. In addition, teens with children scored lower on the role-emotional subscale.\n\n\nCONCLUSION\nPerceived quality of life in teen mothers does not appear to be lower than quality of life in teens without children or adult women.", "corpus_id": 39657937, "score": -1, "title": "The effect of parenthood on perceived quality of life in teens." }
{ "abstract": "Currently, three generations of workers coexist in organizations; the baby boomer generation is about to leave the labor market, making way for the generation of millennial workers. The objective of this article is to present a systematic review of the literature on the millennial generation, highlighting: first, the characterization of millennials, covering psychological, family, and social elements shown to have an effect in the workplace; and second, the extraction of practices that some authors have proposed for retaining them and avoiding the high costs of lost productivity and of hiring and training new employees. This review concludes that millennials are those born between 1980 and 2000 who, in the workplace, seek work-life balance, feedback, contact with leaders, and rapid advancement to senior positions. They also prefer to work in and for organizations that provide learning and challenge them, which is why they change jobs frequently. Consequently, it is important to understand that retention practices do not require large monetary investments, but rather the improvement and strengthening of human resources processes.", "corpus_id": 245977938, "title": "Caracterización de la generación del milenio en el contexto laboral: una revisión de la literatura" }
{ "abstract": "Purpose – This paper aims to explore the influence of career expectations on job satisfaction of Generation Y, as well as the mediating effect of career expectations on the relationship between hotel career management (HCM) and job satisfaction. Design/methodology/approach – Data were collected from the main tourist cities in China with Generation Y employees working in the hospitality industry as the target population. A total of 442 valid questionnaires were obtained, and structural equation modeling was used to examine the relationships among the constructs. Findings – HCM contributed positively to employees’ career expectation and job satisfaction. Career expectation was positively related to job satisfaction, as well as mediated the relationship between HCM and job satisfaction. Research limitations/implications – This study is limited by the use of self-reported data in the cross-sectional design because all participants filled out the questionnaires by themselves. The use of convenience sampling me...", "corpus_id": 154070360, "title": "Meeting career expectation: can it enhance job satisfaction of Generation Y?" }
{ "abstract": "PurposeThis qualitative study deals with the career longevity phenomenon in the hospitality sector of Pakistan and aimed at exploring the factors which become the reason for continuing services in this sector for a longer period despite the prevailing perception of the short-term and unsatisfactory hospitality careers.Design/methodology/approachThe study has taken up an interpretive social constructivism approach to carry out the research. The purposive sampling technique is used to solicit expert insights into the dynamics of the hospitality career. A thematic analysis was employed to identify the common themes, extract the meaning from the discussion patterns of the respondents, and outline viewpoints and ideas of the respondents.FindingsThe findings of the study are discussed at three levels of career, i.e. entry level, development level, and consolidation level. Long careers in the hospitality sector are a product of dedication and commitment to the job, professionalism, variety, complexity of the job, and healthy relationship with coworkers, supervisors, and guests.Originality/valueThe study links the belief of belonging and socialization attributes to the retention of employees in the hospitality sector jobs. Secondly, the study uses a qualitative approach to provide a diverse perspective of employee–industry loyalty rather than employee–organization loyalty. Thirdly, the study brings forth practical implications for personnel managers in the hospitality sector and proposes that the management should systematically stimulate the socialization of the workers to hold the talent despite providing workers with the opportunity to join another sector. Finally, the study informs about research limitations and directions for future research.", "corpus_id": 252229759, "score": -1, "title": "Career development in the hospitality sector: an exploratory study from Pakistan" }
{ "abstract": "In this paper, a method for locating and validating different load devices placed on a variable-phase CET desktop is presented. The CET desktop consists of multiple embedded primary coils, which have the ability to power small electronic devices, such as cellular phones, music players, and PDA's laying on its surface. Here, only the three primary coils closest to the load device are used to transfer power, and it is thus important for the system to correctly locate the positions of the load devices laying on the table. The presented method can detect the positions, and distinguish between three different types of objects placed on the table, namely, valid resonant load devices, conductive materials and ferrites. The objects are detected through the process of “scanning” which involves energizing each coil for a short duration of time, while measuring its impedance. Each object type influences the coil impedance in a different manner, which can be detected by the CET system. In this way, the object positions and types can be accurately estimated. The system is implemented, and various load detection experiments are performed on the prototype. The results show measurable and predictable changes in the coil impedances, and the system is able to correctly locate and identify all the objects placed on the CET desktop.", "corpus_id": 5629050, "title": "Load position detection and validation on variable-phase contactless energy transfer desktops" }
{ "abstract": "In this paper, an equivalent circuit model of a multilayer planar winding array structure that can be used as a universal contactless battery charging platform is presented. This model includes the mutual-inductive effects of partial overlaps of planar windings in the multilayer structure. It has been successfully simulated with PSpice and practically verified with measurements obtained from three prototypes. This circuit model forms the basis of an overall system model of the planar charging platform. It is demonstrated that model parameters can be derived from the geometry of the winding structure. Errors between the calculated and the measured results are found to be within a tolerance of 5%", "corpus_id": 19122524, "title": "Equivalent Circuit Modeling of a Multilayer Planar Winding Array Structure for Use in a Universal Contactless Battery Charging Platform" }
{ "abstract": "A single crystal YAG: Ce3+ annular scintillator axially placed in a movable light guide forms the essential part of a new BSE detector. Comparison of properties of this detector with those of a semiconductor detector is made. The bandwidth, signal-to-noise ratio, capacitance effects, and relative efficiency are parameters which favour the scintillation detector. Its disadvantage is that it must be equipped with a photomultiplier and a light guide. The position of the scintillator above the specimen permits efficient detection at a large collection angle of BSE. For normal beam incidence, the signal homogeneity from any area of the scintillator ensures that images are obtained without shadow effects due to signal loss in the scintillator or due to detector geometry. The same probe current as for other detection modes can be used. Resolution of details is as high as for an SE image.", "corpus_id": 123662800, "score": -1, "title": "A BSE scintillation detector in the (S)TEM" }
{ "abstract": "2. Treatment of chronic acalculous cholecystitis (with pain syndrome), and prevention of chronic acalculous cholecystitis with biliary sludge, atrophic gastritis of the gastric antrum (duodenogastric reflux), chronic biliary pancreatitis (biliopancreatic reflux), and chronic spastic pancreatitis (pancreatic type III sphincter of Oddi dysfunction): Celecoxib, 100 mg twice daily after meals for 5-7 days, followed by ursodeoxycholic acid, 750 mg once daily at bedtime for 1 month. Efficacy: 95%.", "corpus_id": 199745465, "title": "Использование продуктов ЗАО «Биофит» в комплексной терапии заболеваний желчевыводящих путей" }
{ "abstract": "OBJECTIVE:In about 30% of cases, the etiology of acute recurrent pancreatitis remains unexplained, and the term “idiopathic” is currently used to define such disease. We aimed to evaluate the long-term outcome of patients with idiopathic recurrent pancreatitis who underwent endoscopic cholangiopancreatography (ERCP) followed by either endoscopic biliary (and seldom pancreatic) sphincterotomy or ursodeoxycholic acid (UDCA) treatment, in a prospective follow-up study.METHODS:A total of 40 consecutive patients with intact gallbladder entered the study protocol after a 24-month observation period during which at least two episodes of pancreatitis occurred. All patients underwent diagnostic ERCP, followed by biliary or minor papilla sphincterotomy in cases of documented or suspected bile duct microlithiasis and sludge, type 2 sphincter of Oddi dysfunction, or pancreas divisum with dilated dorsal duct. Patients with no definite anatomical or functional abnormalities received long-term treatment with UDCA. After biliary sphincterotomy, patients with further episodes of pancreatitis underwent main pancreatic duct stenting followed by pancreatic sphincterotomy if the stent had proved to be effective.RESULTS:ERCP found an underlying cause of pancreatitis in 70% of cases. Patients were followed-up for a period ranging from 27 to 73 months. 
Effective therapeutic ERCP or UDCA oral treatment proved that occult bile stone disease and type 2 or 3 sphincter of Oddi dysfunction (biliary or pancreatic segment) had been etiological factors in 35 of the 40 cases (87.5%). After therapeutic ERCP or UDCA, only three patients still continued to have episodes of pancreatitis.CONCLUSIONS:Diagnostic and therapeutic ERCP and UDCA were effective in 92.5% of our cases, over a long follow-up, indicating that the term “idiopathic” was justified only in a few patients with acute recurrent pancreatitis.", "corpus_id": 3215380, "title": "Idiopathic recurrent pancreatitis: long-term results after ERCP, endoscopic sphincterotomy, or ursodeoxycholic acid treatment" }
{ "abstract": "This study is the first to explore the quality of care based on the outlier or the inlier status of patients for a large heterogeneous General Medicine (GM) service at a busy public hospital. The study compared the quality of care between ward outliers and ward inliers based on a homogenous group of patients using Two-step clustering method. Contrary to common perception, ward outliers had overall shorter Length of Stay (LOS) than ward inliers. The study also was unable to support the perception of shorter LOS in the outlier group being associated with higher in-hospital mortality. The study confirmed that overall the outliers received inferior quality of care as discharge summaries for the outliers were delayed and more outliers were re-admitted within 7 days of discharge in comparison to the inliers.", "corpus_id": 56030533, "score": -1, "title": "Analysing homogenous patient journeys to assess quality of care for patients admitted outside of their 'home-ward'" }
{ "abstract": "Exploratory search of scientific literature plays an essential part of a researcher's work. Efforts to provide interfaces supporting this task accomplished significant progress, but the field is open for further evolution. In this paper I present four basic design concepts identified in exploratory search interfaces: relevance, diversity, relationships and categories, and propose a novel browsing layout featuring a unique combination of these concepts.", "corpus_id": 5921643, "title": "Exploratory search interfaces: blending relevance, diversity, relationships and categories" }
{ "abstract": "We present TopicNets, a Web-based system for visual and interactive analysis of large sets of documents using statistical topic models. A range of visualization types and control mechanisms to support knowledge discovery are presented. These include corpus- and document-specific views, iterative topic modeling, search, and visual filtering. Drill-down functionality is provided to allow analysts to visualize individual document sections and their relations within the global topic space. Analysts can search across a dataset through a set of expansion techniques on selected document and topic nodes. Furthermore, analysts can select relevant subsets of documents and perform real-time topic modeling on these subsets to interactively visualize topics at various levels of granularity, allowing for a better understanding of the documents. A discussion of the design and implementation choices for each visual analysis technique is presented. This is followed by a discussion of three diverse use cases in which TopicNets enables fast discovery of information that is otherwise hard to find. These include a corpus of 50,000 successful NSF grant proposals, 10,000 publications from a large research center, and single documents including a grant proposal and a PhD thesis.", "corpus_id": 481381, "title": "TopicNets: Visual Analysis of Large Text Corpora with Topic Modeling" }
{ "abstract": "As a tremendous number of mobile applications (apps) are readily available, users have difficulty in identifying apps that are relevant to their interests. Recommender systems that depend on previous user ratings (i.e., collaborative filtering, or CF) can address this problem for apps that have sufficient ratings from past users. But for apps that are newly released, CF does not have any user ratings to base recommendations on, which leads to the cold-start problem. In this paper, we describe a method that accounts for nascent information culled from Twitter to provide relevant recommendation in such cold-start situations. We use Twitter handles to access an app's Twitter account and extract the IDs of their Twitter-followers. We create pseudo-documents that contain the IDs of Twitter users interested in an app and then apply latent Dirichlet allocation to generate latent groups. At test time, a target user seeking recommendations is mapped to these latent groups. By using the transitive relationship of latent groups to apps, we estimate the probability of the user liking the app. We show that by incorporating information from Twitter, our approach overcomes the difficulty of cold-start app recommendation and significantly outperforms other state-of-the-art recommendation techniques by up to 33%.", "corpus_id": 1958151, "score": -1, "title": "Addressing cold-start in app recommendation: latent user models constructed from twitter followers" }
{ "abstract": "In this paper we propose a methodology for assessing the accuracy of techniques of wave field rendering through loudspeaker arrays. In order to measure the rendered wave field we adopt a solution based on a circular harmonic analysis of the sound field captured by a virtual microphone array. As a result of this analysis stage, we are able to compare the target, the theoretical and the measured wave fields, which may differ due to the non-ideality in the loudspeaker array or in the environment that generates some spurious reverberations. Moreover, in order to quantify the error between target, theoretical and measured wave fields, we define some evaluation metrics, based on RMSE and modal analysis of the acquired wave fields. We show some experimental results on real data.", "corpus_id": 6964126, "title": "A methodology for evaluating the accuracy of wave field rendering techniques" }
{ "abstract": "Wave field analysis of stationary sound fields can be performed with high spatial resolution when sequential measurements on a circle are employed. Applying a circular harmonics decomposition to wideband audio data requires a careful setup of the measurement process to avoid noise amplification at certain frequencies. To this end, measurements with several microphones at multiple radii are considered here. Three mathematically well-founded strategies to combine their measurement signals are compared regarding the resulting noise amplification.", "corpus_id": 8090650, "title": "Wave field analysis using multiple radii measurements" }
{ "abstract": "With the recent emergence of surround sound technology, renewed interest has been shown in the problem of sound field reproduction. However, in practical acoustical environments, the performance of sound reproduction techniques are significantly degraded by reverberation. In this paper, we develop a method of sound field reproduction for reverberant environments. The key to this method is an efficient parametrization of the acoustic transfer function over a region of space. Using this parametrization, a practical method has been provided for determining the transfer function between each loudspeaker and every point in the reproduction region. Through several simulation examples, the reverberant field designs have been shown to yield a reproduction accuracy as good as conventional free-field designs, and better than multipoint least squares designs when loudspeaker numbers are limited. The successful reproduction of sound over a wide frequency range has also been demonstrated. This approach reveals the appropriate choices for fundamental design parameters.", "corpus_id": 18812682, "score": -1, "title": "Theory and design of sound field reproduction in reverberant rooms." }
{ "abstract": "Context. The frequency of brown-dwarf companions in close orbit around Sun-like stars is low compared to the frequency of planetary and stellar companions. There is presently no comprehensive explanation of this lack of brown-dwarf companions. Aims. By combining the orbital solutions obtained from stellar radial-velocity curves and Hipparcos astrometric measurements, we attempt to determine the orbit inclinations and therefore the masses of the orbiting companions. By determining the masses of potential brown-dwarf companions, we improve our knowledge of the companion mass-function. Methods. The radial-velocity solutions revealing potential brown-dwarf companions are obtained for stars from the CORALIE and HARPS planet-search surveys or from the literature. The best Keplerian fit to our radial-velocity measurements is found using the Levenberg-Marquardt method. The spectroscopic elements of the radial-velocity solution constrain the fit to the intermediate astrometric data of the new Hipparcos reduction. The astrometric solution and the orbit inclination are found using non-linear χ²-minimisation on a two-parameter search grid. The statistical confidence of the adopted orbital solution is evaluated based on the distribution-free permutation test. Results. The discovery of nine new brown-dwarf candidates orbiting stars in the CORALIE and HARPS radial-velocity surveys is reported. New CORALIE radial velocities yielding accurate orbits of six previously-known hosts of potential brown-dwarf companions are presented. Including the literature targets, 33 hosts of potential brown-dwarf companions are examined. Employing innovative methods, we use the new reduction of the Hipparcos data to fully characterise the astrometric orbits of six objects, revealing M-dwarf companions of masses between 90 MJ and 0.52 M⊙. In addition, the masses of two companions can be restricted to the stellar domain. The companion to HD 137510 is found to be a brown dwarf. 
At 95% confidence, the companion of HD 190228 is also a brown dwarf. Twenty-three companions remain brown-dwarf candidates. On the basis of the CORALIE planet-search sample, we obtain an upper limit of 0.6% for the frequency of brown-dwarf companions around Sun-like stars. We find that the companion-mass distribution function increases toward the lower end of the brown-dwarf mass range, suggesting that we detect the high-mass tail of the planetary distribution. Conclusions. Our findings agree with the results of previous similar studies and confirm the pronounced paucity of brown-dwarf companions around Sun-like stars. They are affected by the Hipparcos astrometric precision and mission duration, which limits the minimum detectable companion mass, and some of the remaining candidates are probably brown-dwarf companions.", "corpus_id": 119276951, "title": "Search for brown-dwarf companions of stars" }
{ "abstract": "The combination of Hipparcos astrometric data with the spectroscopic data of putative extrasolar planets seems to indicate that a significant fraction of these low-mass companions could be brown or M dwarfs (Han et al. 2001). We show that this is due to the adopted reduction procedure, and consequently that the Hipparcos data do not reject the planetary mass hypothesis in all but one cases. Additional companions, undetected so far, might also explain the large astrometric residuals of some of these stars.", "corpus_id": 378792, "title": "Screening the Hipparcos-based astrometric orbits of sub-stellar objects" }
{ "abstract": "The difference in formation process between binary stars and planetary systems is reflected in their composition, as well as orbital architecture, particularly in their orbital eccentricity as a function of orbital period. It is suggested here that this difference can be used as an observational criterion to distinguish between brown dwarfs and planets. Application of the orbital criterion suggests that, with three possible exceptions, all of the recently discovered substellar companions may be brown dwarfs and not planets. These criterion may be used as a guide for interpretation of the nature of substellar-mass companions to stars in the future.", "corpus_id": 3608784, "score": -1, "title": "Possible Observational Criteria for Distinguishing Brown Dwarfs from Planets" }
{ "abstract": "ABSTRACT Mindfulness-based approaches that promote health, improve quality of life, and reduce the impact of comorbidities are key aspects in chronic diseases management. We aimed to verify the impact of a short-term meditation protocol on psychosocial and physiological parameters in chronic hemodialysis patients. We enrolled twenty-two patients, median age of 69.5 years old, into a 12-week meditation protocol that occurred during each hemodialysis session for 10–20 minutes, 3x/week, in a private tertiary hospital. We then evaluated clinical, psychological, and laboratorial parameters pre- and post-meditation. Patients exhibited a better control of serum phosphorus (−0.72 mg/dL; P = 0.002), a decrease in systolic blood pressure (−1.90 mmHg; P = 0.009), a 23% decrease in depressive symptoms (P = 0.014), and an increase of 7% in the self-compassion scale (P = 0.048) after meditation. To note, we observed an increase in 13% of the mindfulness score (P = 0.019). Our preliminary study describes the effects of a short-term meditation protocol in chronic hemodialysis setting. We observed a decrease in depressive symptoms and in blood pressure values, an improvement in self-compassion and serum phosphorous levels. In conjunction with the promising results of meditation in chronic kidney disease setting, this encouraging preliminary study supports the need for additional clinical trials.", "corpus_id": 231622898, "title": "The effects of a short-term meditation-based mindfulness protocol in patients receiving hemodialysis" }
{ "abstract": "Some evidence from previous randomized controlled trials and systematic reviews has demonstrated a positive association between hypertension and transcendental meditation (TM). However, other trials and reviews showed the effect of TM on blood pressure (BP) was unclear but did not use subgroup analysis to rigorously investigate this relationship. The American Heart Association has stated that TM is potentially beneficial but did not give a standard indication. The present study explored several subgroup analyses in systematic reviews to investigate the effect of TM on BP. Medline, Embase, Cochrane Library, Web of Science and Chinese BioMedical Literature Database were searched through August 2014. Randomized controlled trials of TM as a primary intervention for BP were included. Two reviewers independently used the Cochrane Collaboration’s quality assessment tool to assess each study’s quality. Twelve studies with 996 participants indicated an approximate reduction of systolic and diastolic BP of −4.26 mm Hg (95% CI=−6.06, −2.23) and −2.33 mm Hg (95% CI=−3.70, −0.97), respectively, in TM groups compared with control groups. Results from subgroup analysis suggested that TM had a greater effect on systolic BP among older participants, those with higher initial BP levels, and women, respectively. In terms of diastolic BP, it appears that TM might be more efficient in a short-term intervention and with individuals experiencing higher BP levels. However, some biases may have influenced the results, primarily a lack of information about study design and methods of BP measurement in primary studies.", "corpus_id": 22261, "title": "Investigating the effect of transcendental meditation on blood pressure: a systematic review and meta-analysis" }
{ "abstract": "CLCWeb: Comparative Literature and Culture, the peer-reviewed, full-text, and open-access learned journal in the humanities and social sciences, publishes new scholarship following tenets of the discipline of comparative literature and the field of cultural studies designated as \"comparative cultural studies.\" Publications in the journal are indexed in the Annual Bibliography of English Language and Literature (Chadwyck-Healey), the Arts and Humanities Citation Index (Thomson Reuters ISI), the Humanities Index (Wilson), Humanities International Complete (EBSCO), the International Bibliography of the Modern Language Association of America, and Scopus (Elsevier). The journal is affiliated with the Purdue University Press monograph series of Books in Comparative Cultural Studies. Contact:", "corpus_id": 144179206, "score": -1, "title": "Bibliography of Publications in Media and (Inter)mediality Studies" }
{ "abstract": "Need for cognition (NFC) refers to stable individual differences in the intrinsic motivation to engage in and enjoy effortful cognitive endeavors and has been a useful predictor of dispositional differences in information processing. Although cognitive resource allocation conceptualized as cognitive effort is assumed to be the key mediator of NFC-specific processing, to date no research has systematically addressed its underpinnings. Using a neurocognitive paradigm and recording event-related potentials associated with bottom-up and top-down-driven aspects of attention, the present research contributes to filling this gap. In Study 1, high-NFC individuals showed larger P3a amplitudes to contextually novel events, indicating greater involuntary (automatic) attention allocation. This effect was replicated in Study 2, where NFC also was positively correlated with the P3b to target stimuli, indicating voluntary (controlled) processes of attention allocation. Thus, our findings provide first evidence for neurophysiological correlates of NFC and can improve the understanding of NFC-specific processing.", "corpus_id": 34770785, "title": "Neurophysiological Measures of Involuntary and Voluntary Attention Allocation and Dispositional Differences in Need for Cognition" }
{ "abstract": "OBJECTIVE\nTo determine how specific methodological choices affect \"data-driven\" simplifications of event-related potentials (ERPs) using principal components analysis (PCA). The usefulness of the extracted component measures can be evaluated by knowledge about the variance distribution of ERPs, which are characterized by the removal of baseline activity. The variance should be small before and at stimulus onset (across and within cases), but large near the end of the recording epoch and at ERP component peaks. These characteristics are preserved with a covariance matrix, but lost with a correlation matrix, which assigns equal weights to each sample point, yielding the possibility that small but systematic variations may form a factor.\n\n\nMETHODS\nVarimax-rotated PCAs were performed on simulated and real ERPs, systematically varying extraction criteria (number of factors) and method (correlation/covariance matrix, using unstandardized/standardized loadings before rotation).\n\n\nRESULTS\nConservative extraction criteria changed the morphology of some components considerably, which had severe implications for inferential statistics. Solutions converged and stabilized with more liberal criteria. Interpretability (more distinctive component waveforms with narrow and unambiguous loading peaks) and statistical conclusions (greater effect stability across extraction criteria) were best for unstandardized covariance-based solutions. In contrast, all standardized covariance- and correlation-based solutions included \"high-variance\" factors during the baseline, confirming findings for simulated data.\n\n\nCONCLUSIONS\nUnrestricted, unstandardized covariance-based PCA solutions optimize ERP component identification and measurement.", "corpus_id": 2247397, "title": "Optimizing PCA methodology for ERP component identification and measurement: theoretical rationale and empirical evaluation" }
{ "abstract": "This proof-of-concept study investigated whether a time-frequency EEG approach could be used to examine vection (i.e., illusions of self-motion). In the main experiment, we compared the event-related spectral perturbation (ERSP) data of 10 observers during and directly after repeated exposures to two different types of optic flow display (each was 35° wide by 29° high and provided 20 s of motion stimulation). Displays consisted of either a vection display (which simulated constant velocity forward self-motion in depth) or a control display (a spatially scrambled version of the vection display). ERSP data were decomposed using time-frequency Principal Components Analysis (t–f PCA). We found an increase in 10 Hz alpha activity, peaking some 14 s after display motion commenced, which was positively associated with stronger vection ratings. This followed decreases in beta activity, and was also followed by a decrease in delta activity; these decreases in EEG amplitudes were negatively related to the intensity of the vection experience. After display motion ceased, a series of increases in the alpha band also correlated with vection intensity, and appear to reflect vection- and/or motion-aftereffects, as well as later cognitive preparation for reporting the strength of the vection experience. Overall, these findings provide support for the notion that EEG can be used to provide objective markers of changes in both vection status (i.e., “vection/no vection”) and vection strength.", "corpus_id": 694928, "score": -1, "title": "Identifying Objective EEG Based Markers of Linear Vection in Depth" }
{ "abstract": "A complex chemical cocktail, with unknown composition and concentrations, is present in marine waters. Although awareness of the vulnerability of marine ecosystems to pollution-induced changes has increased, the ecotoxicological effects of chemical pollutants on marine ecosystems are poorly understood. Even in intensively monitored regions such as the North Sea, current knowledge of the ecotoxicological effects of chemicals is limited to a few (priority) substances and a few (model) species (discussed in chapter I). To partly address this knowledge gap, the present work uses phytoplankton to assess how marine ecosystems respond to the presence of organic chemicals. By analyzing existing data and performing laboratory experiments, ecotoxicological effects of organic chemicals on marine organisms and ecosystem functions are quantified. Specific aims of this work are: (1) to infer spatiotemporal trends of concentrations of organic chemicals; (2) to investigate the impact of primary and secondary emissions on the spatiotemporal trends of organic chemicals; (3) to examine the partitioning of organic chemicals in different environmental compartments; (4) to assess the potential ecotoxicological effect of realistic mixtures of organic chemicals along environmental gradients; and (5) to quantify the relative contribution of organic chemicals to phytoplankton growth dynamics. Spatiotemporal trends of polychlorinated biphenyl (PCB) concentrations are inferred based on an extensive set of concentrations monitored between 1991 and 2010 in sediments of the Belgian Coastal Zone (BCZ) and the Western Scheldt estuary in chapter II. The time trends reveal two- to three-fold PCB concentration decreases in the BCZ during the last 20 years. In the Western Scheldt estuary, time trends are spatially heterogeneous and not significantly decreasing. 
These results demonstrate that international efforts to cut down emissions of PCBs have been effective in reducing concentrations in open-water ecosystems like the BCZ but have had little effect in the urbanized and industrialized area of the Scheldt estuary. Most likely, estuaries are subject to secondary emissions from historical pollution. In chapter III, the trends found for the BCZ (chapter II) are confirmed at larger spatiotemporal scales: multidecadal field observations (1979–2012) in the North Sea and Celtic Sea are analyzed to infer spatiotemporal concentration trends of PCBs in mussels (Mytilus edulis) and in sediments. Decreasing interannual PCB concentrations are found in North Sea sediments and mussels, although the decreasing trends are less pronounced in sediments than in mussels. In addition, in chapter III, interannual changes of PCB concentrations are separated from seasonal variability. By doing so, superimposed on the generally decreasing interannual trends, seasonally variable PCB concentrations are observed. These seasonal variations are tightly coupled with seasonally variable chlorophyll a concentrations and organic carbon concentrations. Indeed, the timing of phytoplankton blooms in spring and autumn corresponds to the annual maxima of the organic carbon content and the PCB concentrations in sediments. These results demonstrate the role of seasonal phytoplankton dynamics (the biological pump) in the environmental fate of PCBs at large spatiotemporal scales. The latter is a novel result, since the working of the biological pump had never before been assessed based on field data collected at the scale of a regional sea over multiple decades. 
Despite the generally decreasing spatiotemporal trends of PCBs found in chapters II and III, it is not clear whether current concentrations (still) pose a risk to marine ecosystems. 
In chapter IV, the spatiotemporal trends inferred in chapter III are used to assess the ecological risk of PCBs in North Sea and Celtic Sea sediments and mussels. To do so, PCB concentrations are compared with environmental assessment criteria (EAC). It is found that the potential ecotoxicological risk of PCBs changes considerably over time and in space. Risk quotients (RQs) of PCBs in marine sediments primarily depend on the location of the monitoring site, i.e. the closer to the coast, the higher the RQ. Especially in summer, when PCB concentrations in sediments are high, PCBs present in marine coastal sediments may pose an environmental risk. By contrast, RQs in mussels depend first on the interannual changes of PCB concentrations. At present, in the Celtic Sea, RQs in mussels are below the value of 1, suggesting no potential environmental risk. In the North Sea, however, PCBs in mussels may still exceed the prescribed environmental quality criteria. Overall, the results shown in chapter IV demonstrate that the spatiotemporal variability in PCB concentrations should be considered in future environmental risk assessments. 
Comparing concentrations of chemicals with quality thresholds (as in chapter IV) only suggests a potential ecological risk; therefore, if risk quotients exceed the value of 1, additional assessments are recommended. Considering the results obtained in chapter IV, in chapter V additional experimental studies are performed in which a marine diatom is exposed to a realistic mixture of organic contaminants. To do so, passive samplers are used to achieve exposure to realistic mixtures of organic chemicals close to ambient concentrations. The main conclusion is that organic chemicals present in Belgian marine waters do not affect the intrinsic growth rate of Phaeodactylum tricornutum. In this context, caution is needed when extrapolating these results to field conditions. 
In the present research, results were obtained under laboratory-controlled conditions with a single species, thus neglecting possible species interactions. Therefore, prior to extrapolating these results to other diatoms and other groups of phytoplankton species, it is suggested to assess their validity in a mesocosm experiment (including multiple species and different trophic levels) or under field conditions. In addition, in chapter V, the relative contribution of organic chemicals to the growth of a marine diatom is examined. Natural drivers such as nutrient regime, light intensity and temperature explain about 85% of the observed variability in the experimental data. 
Although the methodology used in chapter V is a standard way to assess the toxicity of chemicals, it is not realistic to use just one algal species to represent the ecotoxicological effects of an entire phytoplankton community. Therefore, in chapter VI, an ecosystem model is used to assess the potential adverse effects of organic contaminants on total primary production. To do so, we model phytoplankton dynamics using four classical drivers (light and nutrient availability, temperature and zooplankton grazing) and test whether extending this model with a POP-induced phytoplankton growth limitation term improves model fit. As inclusion of monitored concentrations of PCBs and pesticides did not lead to a better model fit, it is suggested that POP-induced growth limitation of marine phytoplankton in the North Sea and the Kattegat is small compared to the limitations caused by the classical drivers. The inferred contribution of POPs to phytoplankton growth limitation is about 1% in Belgian coastal waters, but in the Kattegat POPs explain about 10% of the phytoplankton growth limitation. These results suggest that there are regional differences in the contribution of POPs to phytoplankton growth limitation. 
The validity of these conclusions should be further assessed for other substances, other species and higher trophic levels.", "corpus_id": 127242319, "title": "Potential risk of organic micropollutants on marine phytoplankton in the greater North Sea: integration of modelling and experimental approaches" }
{ "abstract": "Equilibrium partitioning between different environmental media is one of the main driving forces that govern the environmental fate of organic chemicals. In the global environment, equilibrium partitioning is in competition with long-range transport, advective phase transfer processes such as wet deposition, and degradation. Here we investigate under what conditions equilibrium partitioning is strong enough to control the global distribution of organic chemicals. We use a global multimedia mass-balance model to calculate the Globally Balanced State (GBS) of organic chemicals. The GBS is the state where equilibrium partitioning is in balance with long-range transport; it represents the maximum influence of thermodynamic driving forces on the global distribution of a chemical. Next, we compare the GBS with the Temporal Remote State, which represents the long-term distribution of a chemical in the global environment when the chemical's distribution is influenced by all transport and degradation processes in combination. This comparison allows us to identify the chemical properties required for a substance to reach the GBS as a stable global distribution. We find that thermodynamically controlled distributions are rare and do not occur for most Persistent Organic Pollutants. They are only found for highly volatile and persistent substances, such as chlorofluorocarbons. Furthermore, we find that the thermodynamic cold-trap effect (i.e., accumulation of pollutants at the poles because of reduced vapor pressure at low temperatures) is often strongly attenuated by atmospheric and oceanic long-range transport.", "corpus_id": 5329586, "title": "Do persistent organic pollutants reach a thermodynamic equilibrium in the global environment?" }
{ "abstract": "SQL injection is one of the biggest challenges for web application security. Based on studies by OWASP, SQL injection has the highest rank among web-based vulnerabilities. In the case of a successful SQL injection attack, the attacker can gain access to the web application database. With the rapid rise of SQL injection based attacks, researchers have started to provide different security solutions to protect web applications against them. One of the most common solutions is the use of web application firewalls. Usually these firewalls use a signature-based technique as the main core of detection. In this technique the firewall checks each packet against a list of predefined SQL injection attacks known as signatures. The problem with this technique is that an attacker with a good knowledge of the SQL language can change the look of SQL queries in a way that the firewall cannot detect, while the queries still lead to the same malicious results. In this paper we first describe the nature of the SQL injection attack, then analyze current SQL injection detection evasion techniques and how they can bypass detection filters, and afterward propose a combination of solutions that helps to mitigate the risk of SQL injection attacks.", "corpus_id": 16777679, "score": -1, "title": "SQL Injection Is Still Alive: A Study on SQL Injection Signature Evasion Techniques" }
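Signature filters, as the abstract notes, can be bypassed by rewriting a query's surface form. A complementary structural mitigation is the parameterized query, which keeps the payload out of the SQL parse tree regardless of how it is obfuscated. A minimal sketch with Python's built-in sqlite3 module (an illustration of the general idea, not the specific solution set the paper proposes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic tautology payload; a signature filter may miss an obfuscated
# variant of it, but parameter binding treats any variant as inert data.
payload = "alice' OR '1'='1"

# Vulnerable pattern: string concatenation lets the payload rewrite the query.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + payload + "'"
).fetchall()

# Safe pattern: the ? placeholder binds the payload as a literal value.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (payload,)
).fetchall()

print(len(vulnerable), len(safe))  # -> 1 0: concatenation matched, binding did not
```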
{ "abstract": "Amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration (FTLD) are characterized by intraneuronal deposition of the nuclear TAR DNA-binding protein 43 (TDP-43) caused by unknown mechanisms. Here, we studied TDP-43 in primary neurons under different stress conditions and found that only proteasome inhibition by MG-132 or lactacystin could induce significant cytoplasmic accumulation of TDP-43, a histopathological hallmark in disease. This cytoplasmic accumulation was accompanied by phosphorylation, ubiquitination and aggregation of TDP-43, recapitulating major features of disease. Proteasome inhibition produced similar effects in both hippocampal and cortical neurons, as well as in immortalized motor neurons. To determine the contribution of TDP-43 to cell death, we reduced TDP-43 expression using small interfering RNA (siRNA), and found that reduced levels of TDP-43 dose-dependently rendered neurons more vulnerable to MG-132. Taken together, our data suggests a role for the proteasome in subcellular localization of TDP-43, and possibly in disease.", "corpus_id": 9294886, "title": "Cytoplasmic Accumulation and Aggregation of TDP-43 upon Proteasome Inhibition in Cultured Neurons" }
{ "abstract": "The TAR DNA-binding protein 43 (TDP-43) has been identified as the major disease protein in amyotrophic lateral sclerosis (ALS) and frontotemporal lobar degeneration with ubiquitin inclusions (FTLD-U), defining a novel class of neurodegenerative conditions: the TDP-43 proteinopathies. The first pathogenic mutations in the gene encoding TDP-43 (TARDBP) were recently reported in familial and sporadic ALS patients, supporting a direct role for TDP-43 in neurodegeneration. In this study, we report the identification and functional analyses of two novel and one known mutation in TARDBP that we identified as a result of extensive mutation analyses in a cohort of 296 patients with variable neurodegenerative diseases associated with TDP-43 histopathology. Three different heterozygous missense mutations in exon 6 of TARDBP (p.M337V, p.N345K, and p.I383V) were identified in the analysis of 92 familial ALS patients (3.3%), while no mutations were detected in 24 patients with sporadic ALS or 180 patients with other TDP-43–positive neurodegenerative diseases. The presence of p.M337V, p.N345K, and p.I383V was excluded in 825 controls and 652 additional sporadic ALS patients. All three mutations affect highly conserved amino acid residues in the C-terminal part of TDP-43 known to be involved in protein-protein interactions. Biochemical analysis of TDP-43 in ALS patient cell lines revealed a substantial increase in caspase cleaved fragments, including the ∼25 kDa fragment, compared to control cell lines. Our findings support TARDBP mutations as a cause of ALS. 
Based on the specific C-terminal location of the mutations and the accumulation of a smaller C-terminal fragment, we speculate that TARDBP mutations may cause a toxic gain of function through novel protein interactions or intracellular accumulation of TDP-43 fragments leading to apoptosis.", "corpus_id": 333085, "title": "Novel Mutations in TARDBP (TDP-43) in Patients with Familial Amyotrophic Lateral Sclerosis" }
{ "abstract": "Null mutations in the progranulin gene (PGRN) were recently reported to cause tau-negative frontotemporal dementia linked to chromosome 17. We assessed the genetic contribution of PGRN mutations in an extended population of patients with frontotemporal lobar degeneration (FTLD) (N=378). Mutations were identified in 10% of the total FTLD population and 23% of patients with a positive family history. This mutation frequency dropped to 5% when analysis was restricted to an unbiased FTLD subpopulation (N=167) derived from patients referred to Alzheimer's Disease Research Centers (ADRC). Among the ADRC patients, PGRN mutations were equally frequent as mutations in the tau gene (MAPT). We identified 23 different pathogenic PGRN mutations, including a total of 21 nonsense, frameshift and splice-site mutations that cause premature termination of the coding sequence and degradation of the mutant RNA by nonsense-mediated decay. We also observed an unusual splice-site mutation in the exon 1 5' splice site, which leads to loss of the Kozac sequence, and a missense mutation in the hydrophobic core of the PGRN signal peptide. Both mutations revealed novel mechanisms that result in loss of functional PGRN. One mutation, c.1477C>T (p.Arg493X), was detected in eight independently ascertained familial FTLD patients who were shown to share a common extended haplotype over the PGRN genomic region. Clinical examination of patients with PGRN mutations revealed highly variable onset ages with language dysfunction as a common presenting symptom. Neuropathological examination showed FTLD with ubiquitin-positive cytoplasmic and intranuclear inclusions in all PGRN mutation carriers.", "corpus_id": 15398043, "score": -1, "title": "Mutations in progranulin are a major cause of ubiquitin-positive frontotemporal lobar degeneration." }
{ "abstract": "Despite the success of liver transplantation, long-term complications remain, including de novo malignancies, metabolic syndrome, and the recurrence of hepatitis C virus (HCV) and hepatocellular carcinoma (HCC). The current mainstay of treatment, calcineurin inhibitors (CNIs), can also worsen posttransplant renal dysfunction, neurotoxicity, and diabetes. Clearly there is a need for better immunosuppressive agents that maintain similar rates of efficacy and renal function whilst minimizing adverse effects. The mammalian target of rapamycin (mTOR) inhibitors, with a mechanism of action that is different from other immunosuppressive agents, have the potential to address some of these issues. In this review we surveyed the literature for reports of the use of mTOR inhibitors in adult liver transplantation with respect to renal function, efficacy, safety, neurological symptoms, de novo tumors, and the recurrence of HCC and HCV. The results of our review indicate that mTOR inhibitors are associated with efficacy comparable to CNIs while having benefits on renal function in liver transplantation. We also consider newer dosing schedules that may limit side effects. Finally, we discuss evidence that mTOR inhibitors may have benefits in the oncology setting and in relation to HCV-related allograft fibrosis, metabolic syndrome, and neurotoxicity.", "corpus_id": 7851865, "title": "The Role of mTOR Inhibitors in Liver Transplantation: Reviewing the Evidence" }
{ "abstract": "Hepatitis C virus (HCV) causes progressive liver fibrosis in liver transplant recipients and is the principal cause of long‐term allograft failure. The antifibrotic effects of sirolimus are seen in animal models but have not been described in liver transplant recipients. We reviewed 1274 liver recipients from 2002 to 2010 and identified a cohort of HCV recipients exposed to sirolimus as primary immunosuppression (SRL Cohort) and an HCV Control Group of recipients who had never received sirolimus. Yearly protocol biopsies were done recording fibrosis stage (METAVIR score) with biopsy compliance of >80% at both year one and two. In an intent‐to‐treat analysis, the SRL Cohort had significantly less advanced fibrosis (stage ≥2) compared to the HCV Control Group at year one (15.3% vs. 36.2%, p < 0.0001) and year two (30.1% vs. 50.5%, p = 0.001). Because sirolimus is sometimes discontinued for side effects, the SRL Cohort was subgroup stratified for sirolimus duration, showing progressively less fibrosis with longer sirolimus duration. Multivariate analysis demonstrated sirolimus as an independent predictor of minimal fibrosis at year one, and year two. This is the first study among liver transplant recipients with recurrent HCV to describe the positive impact of sirolimus in respect of reduced fibrosis extent and rate of progression.", "corpus_id": 1458258, "title": "Limiting Hepatitis C Virus Progression in Liver Transplant Recipients Using Sirolimus‐Based Immunosuppression" }
{ "abstract": "Rapamycin is an immunosuppressant with antiproliferative properties. We investigated whether rapamycin treatment of bile duct-ligated (BDL) rats is capable of inhibiting liver fibrosis and thereby affecting hemodynamics. Following BDL, rats were treated for 28 days with rapamycin (BDL SIR). BDL animals without drug treatment (BDL CTR) and sham-operated animals served as controls. After 28 days, hemodynamics were measured, and livers were harvested for histology/immunohistochemistry. Liver mRNA levels of transforming growth factor (TGF)-β1, connective tissue growth factor (CTGF), platelet-derived growth factor (PDGF)-β, cyclin-dependent kinase inhibitor p27kip (p27), and cyclin-dependent kinase inhibitor p21WAF1/CIP1 (p21) were quantified by real-time polymerase chain reaction. Liver protein levels of p27, p21, p70 S6 kinase (p70s6k), phosphorylated p70s6k (p-p70s6k), eukaryotic initiation factor 4E-binding protein (4E-BP1), p-4E-BP1 (Thr37/46), and p-4E-BP1 (Ser65/Thr70) were determined by Western blotting. Portal vein pressure was lower in BDL SIR than in BDL CTR animals. Volume fractions of connective tissue, bile duct epithelial, and desmin- and actin-positive cells were lower in BDL SIR than in BDL CTR rats. On the mRNA level, TGF-β1, CTGF, and PDGF were decreased by rapamycin. p27 and p21 mRNA did not differ. On the protein level, rapamycin increased p27 and decreased p21 levels. Levels of nonphosphorylated p70s6k and 4E-BP1 did not vary between groups, but levels of p-p70s6k were decreased by rapamycin. Rapamycin had no effect on p-4E-BP1 (Thr37/46) and p-4E-BP1 (Ser65/Thr70) levels. In BDL rats, rapamycin inhibits liver fibrosis and ameliorates portal hypertension. This is paralleled by decreased levels of TGF-β1, CTGF, and PDGF. 
Rapamycin influences the cell cycle by up-regulation of p27, down-regulation of p21, and inhibition of p70s6k phosphorylation.", "corpus_id": 6945018, "score": -1, "title": "Long-Term Treatment of Bile Duct-Ligated Rats with Rapamycin (Sirolimus) Significantly Attenuates Liver Fibrosis: Analysis of the Underlying Mechanisms" }
{ "abstract": "We study the problem of language variant identification, approximated by the problem of labeling tweets from Spanish speaking countries by the country from which they were posted. While this task is closely related to “pure” language identification, it comes with additional complications. We build a balanced collection of tweets and apply techniques from language modeling. A simplified version of the task is also solved by human test subjects, who are outperformed by the automatic classification. Our best automatic system achieves an overall F-score of 67.7% on 5-class classification.", "corpus_id": 5310075, "title": "Language variety identification in Spanish tweets" }
{ "abstract": "There are many accurate methods for language identification of long text samples, but identification of very short strings still presents a challenge. This paper studies a language identification task, in which the test samples have only 5–21 characters. We compare two distinct methods that are well suited for this task: a naive Bayes classifier based on character n-gram models, and the ranking method by Cavnar and Trenkle (1994). For the n-gram models, we test several standard smoothing techniques, including the current state-of-the-art, the modified Kneser-Ney interpolation. Experiments are conducted with 281 languages using the Universal Declaration of Human Rights. Advanced language model smoothing techniques improve the identification accuracy and the respective classifiers outperform the ranking method. The higher accuracy is obtained at the cost of larger models and slower classification speed. However, there are several methods to reduce the size of an n-gram model, and our experiments with model pruning show that it provides an easy way to balance the size and the identification accuracy. We also compare the results to the language identifier in Google AJAX Language API, using a subset of 50 languages.", "corpus_id": 8216769, "title": "Language Identification of Short Text Segments with N-gram Models" }
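The naive Bayes character n-gram approach compared in this abstract can be sketched in a few lines of Python. This toy version uses character bigrams with add-one (Laplace) smoothing rather than the modified Kneser-Ney interpolation the paper tests, and two tiny hand-picked training strings, so it only illustrates the scoring scheme:

```python
import math
from collections import Counter

def char_ngrams(text, n=2):
    # Pad with spaces so word boundaries contribute their own n-grams.
    padded = f" {text} "
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def train(samples):
    # samples: {language: training text} -> per-language bigram counts
    return {lang: Counter(char_ngrams(text)) for lang, text in samples.items()}

def identify(models, text):
    # Naive Bayes: sum of log-probabilities of the test string's bigrams,
    # with add-one smoothing over the joint bigram vocabulary.
    vocab = {g for counts in models.values() for g in counts}
    best, best_lp = None, -math.inf
    for lang, counts in models.items():
        total = sum(counts.values()) + len(vocab)
        lp = sum(math.log((counts[g] + 1) / total) for g in char_ngrams(text))
        if lp > best_lp:
            best, best_lp = lang, lp
    return best

models = train({
    "en": "the quick brown fox jumps over the lazy dog and then there were none",
    "fi": "kaikki ihmiset syntyvat vapaina ja tasavertaisina arvoltaan ja oikeuksiltaan",
})
print(identify(models, "there is a dog"))  # -> en, despite the short test string
```

Real systems train on far larger corpora (e.g. the Universal Declaration of Human Rights, as in the paper) and benefit from higher-order n-grams and better smoothing.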
{ "abstract": "We present a new methodology for proving security of encryption systems using what we call Dual System Encryption. Our techniques result in fully secure Identity-Based Encryption (IBE) and Hierarchical Identity-Based Encryption (HIBE) systems under the simple and established decisional Bilinear Diffie-Hellman and decisional Linear assumptions. Our IBE system has ciphertexts, private keys, and public parameters each consisting of a constant number of group elements. These results are the first HIBE system and the first IBE system with short parameters under simple assumptions. In a Dual System Encryption system both ciphertexts and private keys can take on one of two indistinguishable forms. A private key or ciphertext will be normal if they are generated respectively from the system's key generation or encryption algorithm. These keys and ciphertexts will behave as one expects in an IBE system. In addition, we define semi-functional keys and ciphertexts. A semi-functional private key will be able to decrypt all normally generated ciphertexts; however, decryption will fail if one attempts to decrypt a semi-functional ciphertext with a semi-functional private key. Analogously, semi-functional ciphertexts will be decryptable only by normal private keys. Dual System Encryption opens up a new way to prove security of IBE and related encryption systems. We define a sequence of games where we change first the challenge ciphertext and then the private keys one by one to be semi-functional. We finally end up in a game where the challenge ciphertext and all private keys are semi-functional at which point proving security is straightforward.", "corpus_id": 4983202, "score": -1, "title": "Dual System Encryption: Realizing Fully Secure IBE and HIBE under Simple Assumptions" }
{ "abstract": "Abstract Malnutrition among adolescents is often associated with inadequate dietary diversity (DD). We aimed to explore the prevalence of inadequate DD and its socio-economic determinants among adolescent girls and boys in Bangladesh. A cross-sectional survey was conducted during the 2018–19 round of national nutrition surveillance in Bangladesh. Univariate and multivariable logistic regression was performed to identify the determinants of inadequate DD among adolescent girls and boys separately. This population-based survey covered eighty-two rural, non-slum urban and slum clusters from all divisions of Bangladesh. A total of 4865 adolescent girls and 4907 adolescent boys were interviewed. The overall prevalence of inadequate DD was higher among girls (55⋅4 %) than the boys (50⋅6 %). Moreover, compared to boys, the prevalence of inadequate DD was higher among the girls for almost all socio-economic categories. Poor educational attainment, poor maternal education, female-headed household, household food insecurity and poor household wealth were associated with increased chances of having inadequate DD in both sexes. In conclusion, more than half of the Bangladeshi adolescent girls and boys consumed an inadequately diversified diet. The socio-economic determinants of inadequate DD should be addressed through context-specific multisectoral interventions.", "corpus_id": 244958318, "title": "Prevalence and socio-economic determinants of inadequate dietary diversity among adolescent girls and boys in Bangladesh: findings from a nationwide cross-sectional survey" }
{ "abstract": "Bangladesh has a high prevalence of adolescent pregnancy, but little is known about the nutritional status and dietary practices of Bangladeshi adolescents in early pregnancy or associated factors. We used the baseline data of 1552 pregnant adolescents from a longitudinal, cluster‐randomized effectiveness trial conducted in northwest Bangladesh. Forty‐four percent of the adolescents were short for their age, 36% had low body mass index, 28% were anemic, 10% had iron deficiency, and 32% had vitamin A deficiency. The mean consumption of animal‐source foods was 10.3 times/week. In multivariate analysis, socioeconomic status, education, and food security were generally positively associated with anthropometric indicators and dietary practices but not with iron or vitamin A status. Our findings confirm that there is a high burden of undernutrition among these Bangladeshi adolescents in early pregnancy. Understanding factors related to undernutrition can help to identify adolescent pregnant women at higher risk and provide appropriate counseling and care.", "corpus_id": 3355438, "title": "Factors associated with nutritional status and dietary practices of Bangladeshi adolescents in early pregnancy" }
{ "abstract": "Background: Peruvian adolescents are at high nutritional risk, facing issues such as overweight and obesity, anemia, and pregnancy during a period of development. Research seeking to understand contextual factors that influence eating habits to inform the development of public health interventions is lacking in this population. This study aimed to understand socio-cultural influences on eating among adolescents in periurban Lima, Peru using qualitative methods. Methods: Semi-structured interviews and pile sort activities were conducted with 14 adolescents 15–17 years. The interview was designed to elicit information on influences on eating habits at four levels: individual (intrapersonal), social environmental (interpersonal), physical environmental (community settings), and macrosystem (societal). The pile sort activity required adolescents to place cards with food images into groups and then to describe the characteristics of the foods placed in each group. Content analysis was used to identify predominant themes of influencing factors in interviews. Multidimensional scaling and hierarchical clustering analysis was completed with pile sort data. Results: Individual influences on behavior included lack of financial resources to purchase food and concerns about body image. Nutrition-related knowledge also played a role; participants noted the importance of foods such as beans for anemia prevention. At the social environmental level, parents promoted healthy eating by providing advice on food selection and home-cooked meals. The physical environment also influenced intake, with foods available in schools being predominantly low-nutrient energy-dense. 
Macrosystem influences were evident, as adolescents used the Internet for nutrition information, which they viewed as credible. Conclusions: To address nutrition-related issues such as obesity and iron-deficiency anemia in Peruvian adolescents, further research is warranted to elucidate the roles of certain factors shaping behavior, particularly that of family, cited numerous times as having a positive influence. Addressing nutrition-related issues such as obesity and iron-deficiency anemia in this population requires consideration of the effect of social and environmental factors in the context of adolescent lifestyles on behavior. Nutrition education messages for adolescents should consider the cultural perceptions and importance of particular foods, taking into account the diverse factors that influence eating behaviors.", "corpus_id": 16395775, "score": -1, "title": "Influences on eating: a qualitative study of adolescents in a periurban area in Lima, Peru" }