query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "The availability of good quality water resources is essential to ensure healthy crops and livestock. The objective of this study was to evaluate the level of pollution in Bustillos Lagoon in northern Mexico. Physical-chemical parameters like sodium, chloride, sulfate, electrical conductivity, nitrates, and the pesticide dichlorodiphenyltrichloroethane (DDT) were analyzed to determine the water quality available in the lagoon. Although DDT has been banned in several countries, it is still used for agricultural purposes in Mexico and its presence in this area had not been analyzed previously. Bustillos Lagoon was divided into three zones for the evaluation: (1) industrial; (2) communal lands; and (3) agricultural. The highest concentrations of sodium (2360 mg/L) and SAR (41 meq/L), reported in the industrial zone, exceed the United Nations Food and Agriculture Organization (FAO) irrigation water quality guidelines. DDT and its metabolites were detected in all of the 21 sites analyzed; in the agricultural zone ∑DDTs = 2804 ng/mL, a level much higher than those reported for other water bodies in Mexico and around the world where DDT has been used heavily. The water in the communal zone is the least contaminated, but can only be recommended for irrigation of plants with high stress tolerance and not for crops.",
"corpus_id": 7077286,
"title": "Levels and Distribution of Pollutants in the Waters of an Aquatic Ecosystem in Northern Mexico"
} | {
"abstract": "In this chapter, we analyzed the historical findings related to pollution in water bodies in Latin America, focusing on the presence and effects caused by the main pollutants that are present in water bodies such as heavy metals, pesticides, hydrocarbon compounds, plastics, organic compounds, and others, with emphasis on their ecotoxicological impact. We reviewed works carried out in countries such as Mexico, Argentina, Brazil, Chile, and Colombia, as well as other Latin American countries. Besides physicochemical parameters, bioindicators, such as Lingulodinium polyedrum, Baccharis sarothroides Gray, E. imbricata, Crassostrea corteziensis, Cyathea costaricensis, Rhinella arenarum, Cyprinus carpio, Pimelodella laticeps, Daphnia sp., and others, were used to evaluate the toxic effects caused by the pollutants. It is important to continue studies on the ecotoxicological impact and, in selected cases, to extend existing studies, in order to generate a complete panorama of a problem that, although this chapter focuses on Latin America, is global.",
"corpus_id": 214102397,
"title": "Historical Findings on Presence of Pollutants in Water Bodies in Latin America and Their Ecotoxicological Impact"
} | {
"abstract": "Common bread wheat (Triticum aestivum L.) is an allohexaploid species (2n = 6x = 42, having AABBDD genome). Globally, wheat is a staple food for the human population due to its bread making quality. The bread making quality of wheat is controlled by seed storage proteins. In the present study water soluble seed storage proteins were extracted from ditelosomic and deletion lines of group 5 homoeologous chromosomes. Sodium Dodecyl Sulphate Polyacrylamide Gel Electrophoresis (SDS-PAGE) was carried out using 12.5% resolving gel (3.0M Tris pH 9, 0.4% SDS) and 4.5% stacking gel (0.4M Tris pH 7.0, 0.4% SDS). Each individual protein band was considered as a locus / allele. Alleles were scored as present (1) or absent (0) and a bivariate (1-0) data matrix was generated. In total, 69 alleles were scored in thirteen genotypes, giving an average of 5.3 alleles per genotype. A high amount of genetic diversity ranging from 0-100% was estimated in the available wheat genotypes. One comparison (Del5AL12-Del 5AS-3) showed complete homozygosity (GD=0%). Two comparisons (Del 5AS10--Del 5DL-1 and Dit 5Bl--Del 5BL6) showed 100% genetic distance. Bivariate data was also used to construct a dendrogram using the computer program Popgene ver. 32.
The genetic stocks of hexaploid wheat of different lines were clustered in 4 groups (A, B, C and D) comprising 4, 3, 3, and 3 genotypes, respectively. Genetic stocks Del5AL12 and Del 5DL-1 were most distantly related and hence it is recommended that these two lines should be crossed to create a breeding population with maximum genetic diversity which will be useful for the development of new improved varieties of wheat.",
"corpus_id": 208176573,
"score": 1,
"title": "Estimation of Genetic Diversity in Genetic Stocks of Common Wheat (Triticum aestivum L.) Using SDS-PAGE"
} |
{
"abstract": "Virtual Home Environment (VHE) lets users use the same set of selected services and a personal customized user interface when roaming in other networks or using different terminals. Authentication of the user and service providers is important as access to user profiles has to be tightly controlled. There are different methods for user authentication; in mobile systems a token/password combination is traditionally used. The paper presents a proposed VHE architecture built with agent technology. Other current work and ongoing VHE projects are also presented. Analysis is performed on the example architecture and other implementation proposals. Conclusions are then presented stating that security and authentication are generally not well thought of in prototype implementations and candidate architectures.",
"corpus_id": 681796,
"title": "User Authentication in Virtual Home Environment"
} | {
"abstract": "In services offered over information networks, like electronic banking, validating the identity of a user and the authorities he has is a fundamental issue. There are many ways to perform this operation, some of which provide a higher degree of certainty and are easier to use than others. Public Key Infrastructure (PKI) based solutions are generally considered to be the most secure and reliable. In a mobile environment, where the same services can be used through different channels, like the web and the WAP, the issue of authentication and authorization is often more complex. For instance in a PKI managing the private key in such a manner that it can be used, without being compromised, on different device platforms is a challenge. The standardization of technologies to be used to solve the problems is advancing at a fast and steady pace. The actual implementations are lagging a step behind, especially when it comes to developing overall solutions for authentication and authorization.",
"corpus_id": 17908569,
"title": "Authentication and Authorization in Mobile Environment"
} | {
"abstract": "The certification methods are respectively classified as directory, referral and collaborative based. For two parties in a dialogue, the three methods are further classified as extrinsic because they depend on references which are outside the scope of the dialogue. A series of conceptual, legal and implementation flaws – including lack of suitability of purpose – is catalogued for each case, emphasizing X.509 and CAs. This analysis can be applied as safety guidelines for those who need to rely on digital certificates. Governmental initiatives introducing Internet regulations on certification, such as by TTP, are also discussed with their pros and cons regarding security and privacy. Throughout, the paper stresses the basic paradox of security versus privacy when dealing with extrinsic certification systems – which is very important in voting systems.",
"corpus_id": 7991991,
"score": 2,
"title": "Overview of Certification Systems: X.509, Pkix, Ca, Pgp & Skip"
} |
{
"abstract": "09:20 09:30 Opening 09:30 10:10 Daniele Mundici Deductive interpolation in Łukasiewicz logic and amalgamation of MV-algebras 10:10 10:50 George Metcalfe Craig Interpolation for Semilinear Varieties 10:50 11:20 Break 11:20 12:00 Leonardo Cabrer, José Gil-Férez Leibniz Interpolation Properties 12:00 12:40 小野寛晰 (Hiroakira Ono) Regular completions of residuated lattices 12:40 14:10 Lunch break 14:10 14:50 林哲 (Zhe Lin) Finite Embeddability Property of S4 modal residuated groupoids 14:50 15:30 関隆宏 (Takahiro Seki) An Algebraic Proof of the -admissibility of Relevant Modal Logics 15:30 16:10 William Young,小野寛晰 (Hiroakira Ono) Modal substructural logics 16:10 16:40 Break 16:40 17:20 Alberto Carraro Resource combinatory algebras 17:20 18:00 Sándor Jenei,小野寛晰 (Hiroakira Ono) On involutive FLe algebras 18:00 19:30 Reception",
"corpus_id": 1531383,
"title": "Algebra and Substructural Logics – Take Schedule and Abstracts"
} | {
"abstract": "The class of equivalential logics comprises all implicative logics in the sense of Rasiowa [9], Suszko's logic SCI and many others. Roughly speaking, a logic is equivalential iff the greatest strict congruences in its matrices (models) are determined by polynomials. The present paper is the first part of the survey in which systematic investigations into this class of logics are undertaken. Using results given in [3] and general theorems from the theory of quasi-varieties of models [5] we give a characterization of all simple C-matrices for any equivalential logic C (Theorem I.14). In corollaries we give necessary and sufficient conditions for the class of all simple models for a given equivalential logic to be closed under free products (Theorem I.18).",
"corpus_id": 189784378,
"title": "Equivalential logics (I)"
} | {
"abstract": "Supercritical carbon dioxide (SC-CO(2)) is effective in extracting nonpolar and slightly polar chemicals from soils. However, pure SC-CO(2) is unsatisfactory for recovering polar chemicals in soils. A simple supercritical fluid extraction (SFE) procedure was developed to quantitatively recover polar and nonpolar chemicals from soils. The polar chemicals tested were aromatic acids and phenols. The nonpolar and slightly polar chemicals used as model compounds were common pesticides and environmental pollutants such as polycyclic aromatic hydrocarbons. The procedure required pretreatment of the samples with 15% water (g/g), 5% (ethylenedinitrilo)tetraacetic acid tetrasodium salt (Na(4)EDTA) (g/g), and 50% methanol (mL/g) prior to extractions using SC-CO(2) at 60 °C and 34.5 MPa. Recoveries ranged from 90 to 106% for the aromatic acids using the Na(4)EDTA-assisted SFE compared with only 7-63% recoveries of the corresponding chemicals when no Na(4)EDTA was used. The method quantitatively extracted 2,4-D and its close analogues aged in the soil for 2-30 days. The Na(4)EDTA-assisted SFE was also adequate for extracting phenolic analytes including picric acid and pentachlorophenol with recoveries from 85 to 104%. Na(4)EDTA is a good enhancer for extraction of the 29 analytes representing a wide range of polarity from the soil using SC-CO(2). The method is valuable for the analysis of parent pollutants and transformed products, particularly oxygen-borne metabolites in the environment.",
"corpus_id": 33717622,
"score": 0,
"title": "Na(4)EDTA-Assisted Sub-/Supercritical Fluid Extraction Procedure for Quantitative Recovery of Polar Analytes in Soil."
} |
{
"abstract": "Polyarteritis Nodosa (PAN) is a rare systemic necrotising vasculitis of medium and small-sized arteries. Patients typically present with systemic symptoms. Obstructive intestinal symptoms are described but usually resolve with treatment of the underlying vascular disease. We report a case of a one-year-old boy with multiple ischemic small bowel strictures secondary to infantile PAN, who was treated with resection of the affected segments by single-port laparoscopy.",
"corpus_id": 1130597,
"title": "Intestinal obstruction secondary to infantile polyarteritis nodosa"
} | {
"abstract": "This study aimed to investigate the molecular mechanism of systemic vasculitis via bioinformatics analysis. Gene express profile of E‐GEOD‐16945 (13 Takayasu arteritis samples and 13 control samples) was downloaded from European Bioinformatics Institute (EBI) database. Differentially expressed genes (DEGs) were screened between Takayasu arteritis and normal controls (|log FC| > 1). Basic local alignment search tool (BLASTX) was used for the Clusters of Orthologous Groups (COG) classification of DEGs. Gene ontology analysis was performed for the DEGs (P < 0.05). A gene expression network was built with DEGs. Mcode in Cytoscape software was used to extract modules from the network (degree ≥ 2, K‐core ≥ 2 and adjusted P‐value < 0.05) followed by pathway analysis using GenMAPP (false discovery rate < 0.05). A total of 747 DEGs were identified. There were 16 significant GO function terms enriched with DEGs, of which immune and defence response was the most significant GO term. Totally, three modules were extracted from gene expression network, including one module constituted with upregulated genes and two modules constituted with downregulated genes. Furthermore, human leucocyte antigen (HLA)‐DRB1, HLA‐DPA1, HLA‐DPB1, HLA‐DOA and HLA‐DRA in the downregulated modules were significantly linked to immune‐related pathways (intestinal immune network for IgA production and systemic lupus erythematosus pathways), while ribosomal protein L 31 (RPL31), RPS3A and RPL9 in the upregulated module were enriched in ribosome pathway. The immune‐related pathways, ribosome pathway, immune‐related genes including (HLA‐DRB1, HLA‐DPA1, HLA‐DPB1, HLA‐DOA and HLA‐DRA) and ribosome‐related genes (RPL31, RPS3A and RPL9) might be involved in systemic vasculitis.",
"corpus_id": 206302219,
"title": "Immune‐ and Ribosome‐Related Genes were Associated with Systemic Vasculitis"
} | {
"abstract": "The authors report the case of a patient with acute alithiasic cholecystitis associated with viral hepatitis B revealing periarteritis nodosa. Histopathological results showed signs of focal arteritis in the gallbladder and liver. Because of the negativity of the viral DNA in serum and the lack of histopathological necrosis in the hepatic specimen, the patient was treated with steroid therapy only, with a rapid regression of signs of vasculitis and the disappearance of the hepatitis markers.",
"corpus_id": 9738841,
"score": 2,
"title": "[Alithiasic cholecystitis and viral hepatitis B disclosing periarteritis nodosa]."
} |
{
"abstract": "Rigid organic nanotubes were prepared from six-membered phenylene-butadiynylene macrocycles through topochemical polymerization in the xerogel state. All six butadiyne units underwent polymerization, thus creating rigid nanotubes with six polydiacetylene chains lying parallel to one another.",
"corpus_id": 19271956,
"title": "Rigid organic nanotubes obtained from phenylene-butadiynylene macrocycles."
} | {
"abstract": "Molecular assemblies with well-defined structures capable of photo-induced electron transfer and charge transport or photochemical reactions are reviewed. Hierarchical supramolecular architectures, which assemble the modular units into specific spatial arrangements and facilitate them to work cooperatively, are vital for the achievement of photo-functions in these systems. The chemical design of molecular building blocks and noncovalent interactions exploited to realize supramolecular organizations are particularly discussed. Reviewing and recapitulating the chemical evolution traces of these accomplished systems will hopefully delineate certain fundamental design principles and guidelines useful for developing more advanced functions in the future.",
"corpus_id": 5581071,
"title": "Chemical designs of functional photoactive molecular assemblies."
} | {
"abstract": "ω-Acetylenic acids are valuable intermediates in the synthesis of long-chain acids. Their dimagnesium salts may be alkylated with 1-bromo-2-alkynes or 2-alkenes to synthesize many important naturally occurring fatty acids [1–3]. Also, internally-monounsaturated acids are prepared by alkylation of ω-acetylenic acid dilithium salts [4,5] or the sodium salts of their N,N-dimethylamides [6].",
"corpus_id": 92642397,
"score": 2,
"title": "A Convenient Synthesis of ω-Acetylenic Acids"
} |
{
"abstract": "This report discusses the principles of good governance as outlined by Carver (2002) and Pointer and Orlikoff (1999, 2002a,b) in their writings on nonprofit organizational governance. These principles highlight the primary role of the public (or 'owners') in governing a public enterprise and the importance of separating the board from management. To reinforce this separateness, the governing body must address two key questions: For whom are we governing? For what are we governing? The answers to these questions naturally lead the governing body to consider policies that can enable management to attain the organization's mission.",
"corpus_id": 152594558,
"title": "Governance for Whom and for What: Principles to Guide Health Policy in Miami-Dade County"
} | {
"abstract": "An evaluation of Community Voices Miami (CVM), a project aimed at enhancing health care access and quality for the underserved in Miami-Dade County, Florida. The report concludes that CVM affected intermediate outcomes – raising awareness of issues, getting safety-net providers to collaborate, nurturing neighborhood-based solutions, and advocating for establishment of an independent health care planning body; but measurement of ultimate outcomes – access to health care – remains for future study.",
"corpus_id": 10608938,
"title": "Evaluation of Community Voices Miami: Affecting Health Policy for the Uninsured"
} | {
"abstract": "In the summer of 1995, the chief administrative officer of the County of Los Angeles proposed closure of Los Angeles County+University of Southern California (LAC + USC) Medical Center, the nation's biggest and busiest hospital, in order to address a budgetary shortfall of $655 million in the Los Angeles County Health Department. The magnitude of the fiscal crisis facing the Los Angeles County public health system is perhaps best demonstrated by the fact that the County Board of Supervisors gave serious consideration to the proposal, despite widespread predictions that such a move would trigger collapse of the nation's second-largest public health system. 1 (The Los Angeles County emergency agency has predicted a \"dominolike\" effect of closure of LAC + USC, where other hospitals' emergency departments would be overwhelmed by the numbers and complexities of patients for whom care would be unreimbursed.) Considering that the hospital (1) is the country's largest teaching hospital,",
"corpus_id": 33664831,
"score": 2,
"title": "Health care crisis from a trauma center perspective: The LA story."
} |
{
"abstract": "This paper considers a new nonlinear filter which combines the good properties of the Kalman filter and the particle filter. Compared with other particle filters like the Rao-Blackwellised particle filter (RBPF), it adds a local linearization in a kernel representation of the conditional density, which yields a Kalman-type correction complementing the usual particle correction. Therefore, it can operate with a much smaller number of particles. It reduces the Monte-Carlo fluctuations and the risk of divergence. The new filter is applied to the highly nonlinear and multimodal terrain navigation problem. Simulations show that it outperforms the RBPF.",
"corpus_id": 8590046,
"title": "Application of the Kalman-particle kernel filter to the updated inertial navigation system"
} | {
"abstract": "An automatic tracker is developed, which is capable of tracking intra-cellular features in living cells from 3-D confocal image sequences corrupted by noise. The proposed approach takes a Poisson MAP-MRF classification as an initial stage to detect objects. These are then used to update the multiple target locations generated by 3D Poisson Kalman Particle filters (PKPF). A probabilistic nearest neighbour search strategy for object association is developed to produce improved prediction of target locations. Our approach is tested in real 3D confocal image sequences with challenging illumination conditions. Results show that our Poisson Kalman particle filter approach obtains very promising results and outperforms three other tracking approaches.",
"corpus_id": 19035125,
"title": "Poisson Kalman Particle Filtering for Tracking Centrosomes in Low-Light 3-D Confocal Image Sequences"
} | {
"abstract": "A rapid review of the literature on closed areas that recognize key ecosystem-based management (EBM) principles of fisheries and biodiversity conservation and had fisher involvement was employed to review closed areas worldwide from a fisheries perspective and to develop a scorecard that can assess their efficacy. The review provided 523 abstracts, of which 19 areas from various regions worldwide had peer-reviewed studies that met strict selection criteria. Criteria included fisher involvement, biodiversity conservation and fisheries management objectives. A repeat search without “fisher” and synonyms found 62,622 papers, indicating that most closed area studies had no mention of any fisher involvement. The general success of the areas selected suggests that fisher involvement benefits both biological conservation and fisheries management. Fisheries and biodiversity conservation outcomes were not exclusive to any one type of management closure (e.g. MPA, Fishery Closure). Twenty-four indicators were selected, designed to provide measurable targets. High scoring indicators included management, planning and socio-economic indicators such as local support (100%), habitat protection (100%), conservation and fisheries objectives (100%), monitoring (91.7%) and fishers concerns (91.7%). Bio-ecological-based indicators scored lower in most cases for all types of areas. Fisheries closures rated as highly as the MPAs with respect to both fisheries and bio-ecological indicators. The scorecard provided a reasonable means to evaluate management success in the light of often qualitative or missing data. Addressing the interests and utilizing the knowledge of those affected by closures and familiar with the area, most often local fishers, is key to achieving management objectives.",
"corpus_id": 89723382,
"score": 0,
"title": "How fisher-influenced marine closed areas contribute to ecosystem-based management: A review and performance indicator scorecard"
} |
{
"abstract": "Although distal arm impairment after brain injury is an extremely disabling consequence of neurological damage, most studies on robotic therapy are mainly focused on recovery of proximal upper limb motor functions, routing the major efforts in rehabilitation to shoulder and elbow joints. In the present study we developed a novel therapeutic protocol aimed at restoring wrist functionality in chronic stroke patients. A haptic three-DoF (degrees of freedom) robot has been used to quantify motor impairment and assist wrist and forearm articular movements: flexion/extension (FE), abduction/adduction (AA), pronation/supination (PS). This preliminary study involved nine stroke patients with mild to severe levels of impairment. Therapy consisted of ten 1-hour sessions over a period of five weeks. The novelty of the approach was the adaptive control scheme which trained wrist movements with slow oscillatory patterns of small amplitude and progressively increasing bias, in order to maximize the recovery of the active range of motion. The primary outcome was a change in the active RoM (range of motion) for each DoF and a change in motor function, as measured by the Fugl-Meyer assessment of arm physical performance after stroke (FMA). The secondary outcome was the score on the Wolf Motor Function Test (WOLF). The FMA score reported a significant improvement (average of 9.33±1.89 points), revealing a reduction of the upper extremity motor impairment over the sessions; moreover, a detailed component analysis of the score hinted at some degree of motor recovery transfer from the distal, trained parts of the arm to the proximal untrained parts. WOLF showed an improvement of 8.31±2.77 points, highlighting an increase in functional capability for the whole arm. The active RoM displayed a remarkable improvement. Moreover, a three-month follow-up assessment reported long lasting benefits in both distal and proximal arm functionalities.
The experimental results of this preliminary clinical study provide enough empirical evidence for introducing the novel progressive, adaptive, gentle robotic assistance of wrist movements into clinical practice, consolidating the evaluation of its efficacy by means of a controlled clinical trial.",
"corpus_id": 7081381,
"title": "Wrist Rehabilitation in Chronic Stroke Patients by Means of Adaptive, Progressive Robot-Aided Therapy"
} | {
"abstract": "There is increasing interest in using robotic devices to assist in movement training following neurologic injuries such as stroke and spinal cord injury. This paper reviews control strategies for robotic therapy devices. Several categories of strategies have been proposed, including, assistive, challenge-based, haptic simulation, and coaching. The greatest amount of work has been done on developing assistive strategies, and thus the majority of this review summarizes techniques for implementing assistive strategies, including impedance-, counterbalance-, and EMG- based controllers, as well as adaptive controllers that modify control parameters based on ongoing participant performance. Clinical evidence regarding the relative effectiveness of different types of robotic therapy controllers is limited, but there is initial evidence that some control strategies are more effective than others. It is also now apparent there may be mechanisms by which some robotic control approaches might actually decrease the recovery possible with comparable, non-robotic forms of training. In future research, there is a need for head-to-head comparison of control algorithms in randomized, controlled clinical trials, and for improved models of human motor recovery to provide a more rational framework for designing robotic therapy control strategies.",
"corpus_id": 8991741,
"title": "Review of control strategies for robotic movement training after neurologic injury"
} | {
"abstract": "LDPC-MIMO systems provide great capacity increments in wireless communications. Based on the powerful analysis tool of the extrinsic information transfer chart, we are able to design optimal LDPC codes via curve fitting. The degree profile of the code can be optimized using linear programming and the mutual information transfer function. Via simulation, our profile provides a better curve match and better performance in MIMO fading channels. This fact demonstrates the significance of designing good LDPC codes to fit different channels.",
"corpus_id": 26219425,
"score": -1,
"title": "Design of low-density parity-check codes using linear programming for modulation and detection"
} |
{
"abstract": "Many highway toll collection systems have already been developed and are widely used in India. Some of these include manual toll collection, RF tags, barcodes, and number plate recognition. All these systems have disadvantages that lead to some errors in the corresponding system. This paper presents a brief review of toll collection systems present in India, their advantages and disadvantages, and also aims to design and develop a new efficient toll collection system which will be a good low cost alternative among all other systems. The system is based on computer vision vehicle detection using the OpenCV library on an embedded Linux platform. The system is designed using an embedded Linux development kit (Raspberry Pi). In this system, a camera captures images of vehicles passing through the toll booth, so a vehicle is detected through the camera. Depending on the area occupied by the vehicle, vehicles are classified as light or heavy. This information is then passed to the Raspberry Pi, which has a web server set up on it. When the Raspberry Pi identifies the vehicle, it accesses the web server information and, according to the type of vehicle, the appropriate toll is charged. This system can also be made to count moving vehicles from pre-recorded or stored videos by using the same algorithm and procedure that we follow in this paper.",
"corpus_id": 17783704,
"title": "Computer vision based vehicle detection for toll collection system using embedded Linux"
} | {
"abstract": "Background subtraction methods are widely used to detect moving objects from static cameras. They have many applications such as traffic monitoring, human motion capture and recognition, and video surveillance. It is hard to propose a background model which works well under all different situations. Actually, there is no need to propose a pervasive model; a model is good as long as it works well under a specific situation. In this paper, a new method combining Gaussian Average and Frame Difference is proposed. Shadow suppression is not specifically dealt with, because the shadow is considered to be part of the background and can be subtracted by using an appropriate threshold. At last, a new method is proposed to fill small gaps that the detected foreground or the moving objects may contain.",
"corpus_id": 13529498,
"title": "Background subtraction using running Gaussian average and frame difference"
} | {
"abstract": "The Cahn-Hilliard equation is a nonlinear fourth order diffusion equation originating in material science for modeling phase separation and phase coarsening in binary alloys. The inpainting of binary images using the Cahn-Hilliard equation is a new approach in image processing. In this paper we discuss the stationary state of the proposed model and introduce a generalization for grayvalue images of bounded variation. This is realized by using subgradients of the total variation functional within the flow, which leads to structure inpainting with smooth curvature of level sets.",
"corpus_id": 10718927,
"score": -1,
"title": "Cahn–Hilliard Inpainting and a Generalization for Grayvalue Images"
} |
{
"abstract": "Rule-based systems generate many redundant rules; such rules are expensive, especially in online systems. Currently, there are many available rule minimization techniques; however, they still suffer from high complexity and lack of efficiency. In this paper, we introduce a novel method (QMR) based on the Quine-McCluskey (Q-M) algorithm. The novelty of our algorithm is in the adaptation of Q-M, which is used in reducing Boolean expressions, to rule minimization. Our minimization method is very simple and supports many items (variables). In addition, we propose an encoding method that reduces the size of any given data set. This encoding utilizes binary numbers, which fits the Q-M simplification method. The encoding is very simple to automate, as is the Q-M algorithm. This research shows proof-of-concept examples as well as rule minimization test cases. We compare our algorithm to ID3, which is one of the most used algorithms in rule-based systems, especially in networks. However, ID3 does not support more than two output states, in contrast to our QMR algorithm, which supports any number of output states. Therefore, our comparison with ID3 will be limited to the two-state output test cases; another test case will be conducted to show the applicability of our approach to more than two output states.",
"corpus_id": 1277067,
"title": "QMR : QUINE-MCCLUSKEY FOR RULE MINIMIZATION IN RULE-BASED SYSTEMS"
} | {
"abstract": "Decision trees that are limited to testing a single variable at a node are potentially much larger than trees that allow testing multiple variables at a node. This limitation reduces the ability to express concepts succinctly, which renders many classes of concepts difficult or impossible to express. This paper presents the PT2 algorithm, which searches for a multivariate split at each node. Because a univariate test is a special case of a multivariate test, the expressive power of such decision trees is strictly increased. The algorithm is incremental, handles ordered and unordered variables, and estimates missing values.",
"corpus_id": 1882858,
"title": "An Incremental Method for Finding Multivariate Splits for Decision Trees"
} | {
"abstract": "Scalable high efficiency video coding (SHVC) standard is expected to play a more important role in the heterogeneous landscape of broadcasting, multimedia, networks, and various services applications as it is specified as a layered coding technique in the advanced television systems committee 3.0. However, its block-based structure of temporal and spatial prediction makes it sensitive to information loss and error propagation due to transmission errors. In this context, we propose an improved SHVC with a joint layer prediction (JLP) solution which adaptively combines the decoded information from the base and the enhancement layers to create an additional reference for the SHVC enhancement encoder. To optimize the quality of the joint prediction, the minimum mean square error estimation is executed in computing a combination factor which gives weights to each contribution of the decoded information from the layers. In addition, the proposed JLP is integrated into the SHVC decoder to work as an error concealment solution to mitigate the error propagation happening inevitably in practical video transmission. Experiments have shown that the proposed SHVC framework significantly outperforms its relevant benchmarks, notably by up to 14.8% in bitrate reduction with respect to the standard SHVC codec. The proposed SHVC error concealment strategy also greatly improves the concealed picture quality as well as reducing the problem of error propagation when compared to conventional error concealment approaches.",
"corpus_id": 70312738,
"score": 1,
"title": "Joint Layer Prediction for Improving SHVC Compression Performance and Error Concealment"
} |
{
"abstract": "Multiple emitting components in a fluorophoric system often produce complicated emission spectra. Extracting the individual spectral information from the composite spectra is important in order to comprehend the photophysical processes occurring in the multifluorophoric systems. Although the combination of Principal Component Analysis and Multivariate Curve Resolution-Alternate Least Square (PCA/MCR-ALS) technique is a well-known approach for curve deconvolution, its applicability in the spectral deconvolution of vibronically and electronically mixed up emitting systems as well as systems merged up with multiple electronic bands without a priori knowledge of the individual emitting species, is not properly studied. The present work highlights the strength of PCA/MCR-ALS in retrieving pure spectral information from the set of complex spectra arising out of the regular variation of causative factors that result in the variation of spectral composition. The retrieval of the emission bands utilizing the PCA/MCR-ALS technique has been made without having a priori information of the emitting species present in the multifluorophoric systems and the resolved spectra correspond well with the fluorescence spectra of the individual chemical species. The common curve fitting methods such as Gaussian and Lorentzian techniques have been found to be unsuccessful in providing meaningful photophysical information through the retrieved spectra. A comparative study of the curve fitting techniques MCR-ALS, Gaussian and Lorentzian in a set of complicated emission spectra of (i) pyrene and its excimer, (ii) pyrene and its excimer in presence of benzo[a]pyrene, and (iii) fisetin in bile salt medium is presented herein in details.",
"corpus_id": 4581474,
"title": "Application of Multivariate Curve Resolution–Alternate Least Square Technique on Extracting Pure Spectral Components from Multiple Emitting Systems: a Case Study"
} | {
"abstract": "Fisetin, a bioflavonoid, has important biological relevance. It exhibits intramolecular excited state proton transfer (ESIPT), analogous to the structurally similar flavonoids. The presence of multiple prototropic forms of fisetin was observed at various concentrations of different bile salt molecules. The presence of ground state fisetin anion (FA)(GS) (λ(ex) 418 nm; λ(em) 490 nm) in alcohols and bile salt micellar media is a novel observation. The interaction of fisetin with sodium cholate (NaC) and some other bile salts has been studied in detail, using the intrinsic fluorescence of different prototropic forms of fisetin: neutral form (FN, λ(ex) 369 nm, λ(em) ~ 400 nm), ground state anion form ((FA)(GS), λ(ex) 418 nm, λ(em) 490 nm) and phototautomer (FT, λ(ex) 369 nm, λ(em) 540 nm). The hypsochromic shift of (FA*)(ES) emission and bathochromic shift of FT emission with increasing bile salt concentration suggests the progressive reduction of polarity of the bile salt media, which could be resulting from the neutralization of bile salt molecules as their concentration increases.",
"corpus_id": 24312586,
"title": "Multiple prototropism of fisetin in sodium cholate and related bile salt media."
} | {
"abstract": "Top predator losses aff ect a wide array of ecological processes, and there is growing evidence that top predators are disproportionately vulnerable to environmental changes. Despite increasing recognition of the fundamental role that top predators play in structuring communities and ecosystems, it remains challenging to predict the consequences of predator extinctions in highly variable environments. Both biotic and abiotic drivers determine community structure, and manipulative experiments are necessary to disentangle the eff ects of predator loss from other co-occurring environmental changes. To explore the consistency of top predator eff ects in ecological communities that experience high local environmental variability, we experimentally removed top predators from arid-land stream pool mesocosms in southeastern Arizona, USA, and measured natural background environmental conditions. We inoculated mesocosms with aquatic invertebrates from local streams, removed the top predator Abedus herberti (Hemiptera: Belostomatidae) from half of the mesocosms as a treatment, and measured community divergence at the end of the summer dry season. We repeated the experiment in two consecutive years, which represented two very diff erent biotic and abiotic environments. We found that some of the eff ects of top predator removal were consistent despite signifi cant diff erences in environmental conditions, community composition, and colonist sources between years. As in other studies, top predator removal did not aff ect overall species richness or abundance in either year, and we observed inconsistent eff ects on community and trophic structure. However, top predator removal consistently aff ected large-bodied species (those in the top 1% of the community body size distribution) in both years, increasing the abundance of mesopredators and decreasing the abundance of detritivores, even though the identity of these species varied between years. 
Our fi ndings highlight the vulnerability of large taxa to top predator extirpations and suggest that the consistency of observed ecological patterns may be as important as their magnitude.",
"corpus_id": 86129983,
"score": 1,
"title": "Top predator removals have consistent effects on large species despite high environmental variability"
} |
{
"abstract": "Remote robotic explorations for collapsed buildings in a severe disaster are demanded. However, rescue robots cannot approach the rubble due to safety risks. This study proposes a remote vertical exploration system for collapsed buildings with a robotic inspection system hoisted by a crane. An Active Scope Camera (ASC) has many advantages for the vertical exploration such as a light and flexible continuum body to produce distributed driving forces. The purpose of this paper is to confirm the feasibility of the vertical exploration system with the ASC. The vertical explorations have proper problems related to contact and hanging conditions of the scope cable. We developed a new ASC that has a two-step bending mechanism to produce larger head movement in multi-DOF. We also evaluated the performances of the prototype when the contact areas were small. Finally, we conducted a remote vertical exploration experiments at the simulated collapsed building in 6 m height. The robot could explore in six different pathways by changing head directions and running the rubbles within seven trials. The experimental results showed that the proposed system has high potential to get inserted in the deep area in the rubble.",
"corpus_id": 13405083,
"title": "Remote vertical exploration by Active Scope Camera into collapsed buildings"
} | {
"abstract": "In this paper, we develop an articulated mobile robot that can climb stairs, and also move in narrow spaces and on 3-D terrain. This paper presents two control methods for this robot. The first is a 3-D steering method that is used to adapt the robot to the surrounding terrain. In this method, the robot relaxes its joints, allowing it to adapt to the terrain using its own weight, and then, resumes its motion employing the follow-the-leader method. The second control method is the semi-autonomous stair climbing method. In this method, the robot connects with the treads of the stairs using a body called a connecting part, and then shifts the connecting part from its head to its tail. The robot then uses the sensor information to shift the connecting part with appropriate timing. The robot can climb stairs using this method even if the stairs are steep, and the sizes of the riser and the tread of the stairs are unknown. Experiments are performed to demonstrate the effectiveness of the proposed methods and the developed robot.",
"corpus_id": 4870162,
"title": "Development and Control of Articulated Mobile Robot for Climbing Steep Stairs"
} | {
"abstract": "There is not always a linear relationship between the blood pressure and the pulse duration obtained from photoplethysmography (PPG) signal. In order to estimate the blood pressure from the PPG signal, A Support Vector Machine (SVM) method for continuous blood pressure estimation from a PPG Signal is applied in this paper. Training data were extracted from The University of Queensland Vital Signs Dataset for better representation of possible pulse and pressure variation. In total there were more than 7000 heartbeats and 9 parameters to be extracted from each other for analysis, then these features were defined as the input vector for training. The comparison between estimated and reference values shows better accuracy than the linear regression method and also shows better accuracy than the ANN method in diastolic blood pressure, which brings great significance in the field of mobile wearable.",
"corpus_id": 5205901,
"score": -1,
"title": "A SVM Method for Continuous Blood Pressure Estimation from a PPG Signal"
} |
{
"abstract": "The worldwide depletion mid-point for conventional oil is expected to be reached in the next 15 years. Around that time the inevitable decline in production will commence. As that point is approached and thereafter, the reserve/production (RIP) ratio (RIP) as an indicator of the availability of conventional oil will becomes more and more misleading: oil will be available much longer, but in decreasing quantities. It is hard to imagine that the anticipated continued rise in demand for crude can be met by additional production from non-conventional deposits. Therefore, trouble lies ahead. The deficit in production capacity will, most likely lead to a considerable rise in the price of oil early next century.",
"corpus_id": 157173376,
"title": "Future World Oil Supplies – Possibilities and Constraints"
} | {
"abstract": "In 1995, the world produced 22.4 billion barrels (Gb) of oil I. It means that there is that much less to produce in the future, oil being a finite resource, formed on a few occasions in the Earth's long geological history, and then only in a few places where the very exceptional conditions for oil generation and entrapment were met. No one knows precisely how much oil there is, but advances in technology and the now colossal database from worldwide drilling begin to allow plausible estimates to be made. It is furthermore possible to model the likely depletion pattern that will have a profound effect upon future production. This is not an exact science but nor is it just gazing into the crystal ball. A number of such assessments have been made over the years-. Each has to take into account the impact of new discovery and production to obtain a new calculation of how much remains. Furthermore, it has to reflect the growing knowledge of the resource-base and the evolving judgment of the validity of the reported reserves. The new assessment given here evaluates the status as of the end of 1995. Estimates of future supply and demand are legion, but most rely almost exclusively on economic and political factors, assuming that the resource itself is near infinite. It most certainly is not. Much of the reserve data in the public domain is unreliable, and there are very few realistic estimates of the undiscovered potential, apart from that by the US Geological Survey (Masters) which can be used profitably, once its definitions have been decoded\". The main causes for confusion relate to unclear distinctions between what are termed conventional and non-conventional (or unconventional) oil, and whether or not natural gas liquids (NGL) are included in the statistics. For a comprehensive coverage of the subject, it is necessary to tum to a Petroconsultants report that has the benefit of its authoritative database'.",
"corpus_id": 132823752,
"title": "The Status of World Oil Depletion at the end of 1995"
} | {
"abstract": "Abstract 1-D global inversions of observatory and satellite geomagnetic data reveal radial conductivity profiles in the Earth’s mantle by means of electromagnetic induction. Traditionally, these have been interpreted as average values at given depth. However, the predominant dipolar geometry of the magnetospheric ring current represents strong bias in 1-D interpretation of the responses of a fully 3-D heterogeneous Earth. We present a series of synthetic checks, applying 1-D time-domain inversion technique to 3-D simulated data for conductivity models with lateral heterogeneities in the lowermost mantle, ranging from simple geometrical configurations to complicated structures derived from phase composition based on geodynamical modelling. We show that it is the presence or lack of lateral interconnection of the highly conductive phase in the direction of prevailing external currents that determines the results of 1-D inversion. In particular, this effect can explain the recently shown invisibility of highly conductive postperovskite in the D″ layer to induction studies excited by strong transient signals—the geomagnetic storms.",
"corpus_id": 55562212,
"score": 1,
"title": "On the detectability of 3-D postperovskite distribution in D″ by electromagnetic induction"
} |
{
"abstract": "A gas-liquid chromatographic method is described for the quantitative estimation of cyclopropene fatty acids as their methyl mercaptan derivatives. This method estimates individual cyclopropene acids as well as normal and cyclopropane acids. Nine seed oils were analyzed for their cyclopropene fatty acid content.Evidence was obtained for the presence of a cyclopropene fatty acid of shorter chain length than malvalic inAlthaea rosea cav and one with a higher chain length than sterculic inBombacopsis glabra seed oil. This method is less accurate for cottonseed oil than for the other oils tested because of the appearance of some unsymmetrical peaks of unknown origin.The mercaptan derivatives of the cyclopropene acids may be isolated by silver ion thin-layer chromatography.Small amounts of cyclopropane fatty acids were found in a number of the oils analyzed for cyclopropene fatty acids.",
"corpus_id": 3985043,
"title": "Gas-liquid chromatographic analysis of cyclopropene fatty acids"
} | {
"abstract": "IT has been known for many years that the ingestion by hens of malvaceous plants or crude fats derived from such plants gives rise to pink ‘whites’ in stored eggs1. Schaible and Bandemer2 showed that the pink discoloration was caused by iron diffusing from the yolk and chelating with the conalbumin of the white. The disorder is accompanied by a putty-like condition of the yolks when the eggs are cold, and affected yolks have a higher water content than normal. The pH values of the yolk and white, normally 6.5 and 9.0 respectively, tend to converge. Schaible and Bandemer2 suggested that the effects of a diet containing malvaceous products could be explained by an increased permeability of the vitelline membrane surrounding the yolk.",
"corpus_id": 4242963,
"title": "A Biologically Active Fatty-acid in Malvaceae"
} | {
"abstract": "A simple, rapid and sensitive method for the determination of nicotine, cotinine, nornicotine, anabasine, and anatabine in human urine and saliva was developed. These compounds were analyzed by on-line in-tube solid-phase microextraction (SPME) coupled with liquid chromatography-mass spectrometry (LC-MS). Nicotine, cotinine and related alkaloids were separated within 7 min by high performance liquid chromatography (HPLC) using a Synergi 4u POLAR-RP 80A column and 5 mM ammonium formate/methanol (55/45, v/v) as a mobile phase at a flow-rate of 0.8 mL/min. Electrospray ionization conditions in the positive ion mode were optimized for MS detection of these compounds. The optimum in-tube SPME conditions were 25 draw/eject cycles with a sample size of 40 microL using a CP-Pora PLOT amine capillary column as the extraction device. The extracted compounds could be desorbed easily from the capillary by passage of the mobile phase, and no carryover was observed. Using the in-tube SPME LC-MS method, the calibration curves were linear in the concentration range of 0.5-20 ng/mL of nicotine, cotinine and related compounds in urine and saliva, and the detection limits (S/N=3) were 15-40 pg/mL. The method described here showed 20-46-fold higher sensitivity than the direct injection method (5 microL injection). The within-run and between-day precision (relative standard deviations) were below 4.7% and 11.3% (n=5), respectively. This method was applied successfully to analysis of urine and saliva samples without interference peaks. The recoveries of nicotine, cotinine and related compounds spiked into urine and saliva samples were above 83%, and the relative standard deviations were below 7.1%. This method was used to analyze urinary and salivary levels of these compounds in nicotine intake and smoking.",
"corpus_id": 35309796,
"score": 1,
"title": "Determination of nicotine, cotinine, and related alkaloids in human urine and saliva by automated in-tube solid-phase microextraction coupled with liquid chromatography-mass spectrometry."
} |
{
"abstract": "y . Dissolved boron in seawater occurs mainly in the form of boric acid B OH and borate B OH . While the 34 equilibrium properties of the dissociation of boric acid have been studied in detail, very little work has focused on the kinetics of the boric acid-borate equilibrium in seawater. Here, we present a theoretical study of the relaxation of the w",
"corpus_id": 611215,
"title": "A Theoretical Study of the Kinetics of the Boric Acid-borate Equilibrium in Seawater"
} | {
"abstract": "Abstract. In order to fully constrain paleo-carbonate systems, proxies for two out of seven parameters, plus temperature and salinity, are required. The boron isotopic composition (δ11B) of planktonic foraminifera shells is a powerful tool for reconstructing changes in past surface ocean pH. As B(OH)4− is substituted into the biogenic calcite lattice in place of CO32−, and both borate and carbonate ions are more abundant at higher pH, it was suggested early on that B ∕ Ca ratios in biogenic calcite may serve as a proxy for [CO32−]. Although several recent studies have shown that a direct connection of B ∕ Ca to carbonate system parameters may be masked by other environmental factors in the field, there is ample evidence for a mechanistic relationship between B ∕ Ca and carbonate system parameters. Here, we focus on investigating the primary relationship to develop a mechanistic understanding of boron uptake. Differentiating between the effects of pH and [CO32−] is problematic, as they co-vary closely in natural systems, so the major control on boron incorporation remains unclear. To deconvolve the effects of pH and [CO32−] and to investigate their impact on the B ∕ Ca ratio and δ11B, we conducted culture experiments with the planktonic foraminifer Orbulina universa in manipulated culture media: constant pH (8.05), but changing [CO32−] (238, 286 and 534 µmol kg−1 CO32−) and at constant [CO32−] (276 ± 19.5 µmol kg−1) and varying pH (7.7, 7.9 and 8.05). Measurements of the isotopic composition of boron and the B ∕ Ca ratio were performed simultaneously using a femtosecond laser ablation system coupled to a MC-ICP-MS (multiple-collector inductively coupled plasma mass spectrometer). Our results show that, as expected, δ11B is controlled by pH but it is also modulated by [CO32−]. On the other hand, the B ∕ Ca ratio is driven by [HCO3−], independently of pH. 
This suggests that B ∕ Ca ratios in foraminiferal calcite can possibly be used as a second, independent, proxy for complete paleo-carbonate system reconstructions. This is discussed in light of recent literature demonstrating that the primary relationship between B ∕ Ca and [HCO3−] can be obscured by other environmental parameters.",
"corpus_id": 13788554,
"title": "Decoupled carbonate chemistry controls on the incorporation of boron into Orbulina universa"
} | {
"abstract": "Abstract A new technique for osmium isotope ratio determinations by negative thermal ionization mass spectrometry using the formation of OsO-3 ions is presented. Different filament materials and chemicals to reduce the electron work function are investigated. With Ba(OH)2 and platinum as filament material an ionization efficiency of more than 1% is obtained for nanogram sample amounts. The isotopic abundances of a laboratory standard are measured with relative standard deviations of less than 0.01% for the most abundant isotopes. This significant improvement in the precision of the osmium isotopic measurements, compared with previous investigations, provides a suitable tool to develop a ReOs age determination method on this basis. The new isotopic data could also be used recalculate the atomic weight of osmium to be 190.2251 ± 0.0001, which is much better in uncertainty than the current IUPAC data.",
"corpus_id": 222011164,
"score": 2,
"title": "Osmium isotope ratio determinations by negative thermal ionization mass spectrometry"
} |
{
"abstract": "Total lipid extracts from washed trypsinized human platelets were fractionated into neutral lipids, glycosphingolipids, and phospholipids by silicic acid chromatography. The concentrations and chemical structures of the neutral and acidic glycosphingolipids were then studied in detail. On the basis of sugar molar ratios, studies of permethylation products, and the action of stereospecific glycosidases on the lipids, identifications were made of four neutral glycosphingolipids. Lactosylceramide was the most abundant type and accounted for 64% of the total neutral glycolipid mixture. The major fatty acids of the lactosylceramide were 20:0, 22:0, 24:0, and 24:1; the major long-chain base was 4-sphingenine. The platelets were surprisingly rich in a ceramide fraction, which represented 1.3% of the total platelet lipids. It had a different fatty acid composition than the neutral glycosphingolipid and ganglioside fractions. Hematoside was also isolated from the total lipid fraction of platelets; the neuraminic acid component was N-acetylneuraminic acid. Treatment of platelets with trypsin, chymotrypsin, or thrombin increased the yield of hematoside as compared with a control, while the level of ceramides was not changed. It was concluded that the platelets are similar to leukocytes, liver, and spleen in that lactosylceramide and hematoside are the principal neutral and acidic glycosphingolipids. The presence of a relatively high proportion of ceramide in platelets may be a unique characteristic of this cellular fraction of blood.",
"corpus_id": 1787580,
"title": "Sphingolipid composition of human platelets."
} | {
"abstract": "Thymic lipids (representing 2.6% of tissue wet weight) from two strains of normal, adult, white rats have been subjected to a variety of chromatographic techniques and chemical analyses leading to the complete separation and quantification of lipid components. Neutral lipids and phospholipids were separated on silicic acid by a batch procedure; neutral lipids represented from 66 to 80% of total lipids. Individual neutral lipids were separated isolated using column chromatography on Florisil; neutral lipids were mainly triglycerides (82–85%), with cholesterol (6–10%) and cholesterol esters (3–4%) representing significant contributions to the total; smaller amounts of lower glycerides and free fatty acids were also present. Individual phospholipids were separated isolated by two-dimensional thin-layer chromatography on Silica Gel G; 8 phospholipids were identified with phosphatidyl choline (49–50% of total), phosphatidyl ethanolamine (19%), phosphatidyl inositol (12%), sphingomyelin (10–11%), phosphatidyl serine (5–7%) representing the main components; small amounts of lysolecithin, phosphatidic acid and cardiolipin were also present. \n \nThe fatty acid composition of the isolated lipid fractions was determined by quantitative gas-liquid chromatography. Among the neutral lipids, unsaturated fatty acids predominated (48–80% of total acids); among unsaturated fatty acids, oleic linoleic represented the largest proportion, although in certain of the lower glycerides hexadecatrienoic acid represented 35 to 52% of total fatty acids; of the saturated acids, palmitic, stearic and arachidic acids constituted the majority. \n \nUnsaturated fatty acids predominated in most, but not all, of the phospholipids; this relationship depended on the strain of rat. Although oleic, linoleic and linolenic acids comprised a large portion of the unsaturated acids, large amounts of C20 to C24 polyunsaturated acids were also found in all phospholipids. 
Palmitic, stearic, arachidic and behenic acids represented the bulk of the saturated acids in phospholipids; however, large amounts of 26: o were also found in a number of phospholipids.",
"corpus_id": 22205827,
"title": "THE LIPID COMPOSITION OF NORMAL RAT THYMUS."
} | {
"abstract": "A highly sensitive procedure for GC/MS determination of etorphine in horse urine is described. This assay provides both specificity and reliability and is particularly well suited for the confirmation of radioimmunoassay screening procedures usually used for etorphine. After solvent extraction and purifications, the etorphine is characterized as a pentafluoroacetic derivative (PFAA) by using mass fragmentography. The detection limit is 0.1 ng/mL in urine; the coefficient of variation of the estimations is 10.9%. The procedure has been validated after on-field administration of 5 to 90 micrograms of etorphine to five thoroughbred horses (10 to 180 ng/kg).",
"corpus_id": 35327231,
"score": 1,
"title": "GC/MS confirmatory method for etorphine in horse urine."
} |
{
"abstract": "The production of recombinant therapeutic proteins is one of the fastest growing sectors of the pharmaceutical industry, particularly monoclonal antibodies and Fc-fusion proteins. Currently, mammalian cells are the dominant production system for these proteins because they can perform complex post-translational modifications that are often required for efficient secretion, drug efficacy, and stability. These protein modifications include misfolding and aggregation, oxidation of methionine, deamidation of asparagine and glutamine, variable glycosylation, and proteolysis. Such modifications not only pose challenges for accurate and consistent bioprocessing, but also may have consequences for the patient in that incorrect modifications and aggregation can lead to an immune response to the therapeutic protein. This mini-review describes examples analytical and preventative advances in the fields of protein oxidation, deamidation, misfolding and aggregation (glycosylation is covered in other articles in this issue). The feasibility of partially replacing traditional analytical methods such as peptide mapping with high-throughput screens and their use in clone and media selection are evaluated. This review also discusses how further technical advances could improve the manufacturability, potency, and safety of biotherapeutics.",
"corpus_id": 66962,
"title": "Post-translational Modifications of Recombinant Proteins: Significance for Biopharmaceuticals"
} | {
"abstract": "The selection of the appropriate excipient and the amount of excipient required to achieve a 2-year shelf-life is often done by using iso-osmotic concentrations of excipients such as sugars (e.g., 275 mM sucrose or trehalose) and salts. Excipients used for freeze-dried protein formulations are selected for their ability to prevent protein denaturation during the freeze-drying process as well as during storage. Using a model recombinant humanized monoclonal antibody (rhuMAb HER2), we assessed the impact of lyoprotectants, sucrose, and trehalose, alone or in combination with mannitol, on the storage stability at 40 degrees C. Molar ratios of sugar to protein were used, and the stability of the resulting lyophilized formulations was determined by measuring aggregation, deamidation, and oxidation of the reconstituted protein and by infrared (IR) spectroscopy (secondary structure) of the dried protein. A 360:1 molar ratio of lyoprotectant to protein was required for storage stability of the protein, and the sugar concentration was 3-4-fold below the iso-osmotic concentration typically used in formulations. Formulations with combinations of sucrose (20 mM) or trehalose (20 mM) and mannitol (40 mM) had comparable stability to those with sucrose or trehalose alone at 60 mM concentration. A formulation with 60 mM mannitol alone provided slightly less protection during storage than 60 mM sucrose or trehalose. The disaccharide/mannitol formulations also inhibited deamidation during storage to a greater extent than the lyoprotectant formulations alone. The reduction in aggregation and deamidation during storage correlated directly with inhibition of unfolding during lyophilization, as assessed by IR spectroscopy. Thus, it appears that the protein must be retained in its native-like state during freeze-drying to assure storage stability in the dried solid. 
Long-term studies (23-54 months) performed at 40 degrees C revealed that the appropriate molar ratio of sugar to protein stabilized against aggregation and deamidation for up to 33 months. Therefore, long-term storage at room temperature or above may be achieved by proper selection of the molar ratio and sugar mixture. Overall, a specific sugar/protein molar ratio was sufficient to provide storage stability of rhuMAb HER2.",
"corpus_id": 22377478,
"title": "A specific molar ratio of stabilizer to protein is required for storage stability of a lyophilized monoclonal antibody."
} | {
"abstract": "Cross-species reciprocal chromosome painting was used to determine homologous chromosomal regions between the laboratory mouse and Chinese hamster. When mouse chromosome-specific paints were hybridized to Chinese hamster chromosomes, paints specific for mouse chromosomes 3, 4, 9, 14, 18, 19 and X each painted a single chromosomal region, whilst other mouse paints delineated multiple discrete chromosomal regions. The mouse Y paint produced non-specific signals on Chinese hamster chromosomes. Nineteen mouse autosome paints identified a total of 47 homologous chromosome regions in the genome of the Chinese hamster. Hybridization of Chinese hamster paints to mouse chromosomes not only confirmed the above results, but also identified which of the chromosomal regions of these two species were homologous. In total, 10 Chinese hamster autosomal paints detected 38 homologous autosomal segments in the mouse genome. A comparative chromosome map was established based on these reciprocal chromosome painting patterns. This map forms the basis for exchanging gene mapping information between the species and for studying genome evolution.",
"corpus_id": 4693926,
"score": 2,
"title": "Comparative Chromosome Map of the Laboratory Mouse and Chinese Hamster Defined by Reciprocal Chromosome Painting"
} |
{
"abstract": "We study the communication complexity of secure function evaluation (SFE). Consider a setting where Alice has a short input χA, Bob has an input χB and we want Bob to learn some function y = f(χA, χB) with large output size. For example, Alice has a small secret decryption key, Bob has a large encrypted database and we want Bob to learn the decrypted data without learning anything else about Alice's key. In a trivial insecure protocol, Alice can just send her short input χA to Bob. However, all known SFE protocols have communication complexity that scales with size of the output y, which can potentially be much larger. Is such 'output-size dependence' inherent in SFE' Surprisingly, we show that output-size dependence can be avoided in the honest-but-curious setting. In particular, using indistinguishability obfuscation (iO) and fully homomorphic encryption (FHE), we construct the first honest-but-curious SFE protocol whose communication complexity only scales with that of the best insecure protocol for evaluating the desired function, independent of the output size. Our construction relies on a novel way of using iO via a new tool that we call a 'somewhere statistically binding (SSB) hash', and which may be of independent interest. On the negative side, we show that output-size dependence is inherent in the fully malicious setting, or even already in an honest-but-deterministic setting, where the corrupted party follows the protocol as specified but fixes its random tape to some deterministic value. Moreover, we show that even in an offline/online protocol, the communication of the online phase must have output-size dependence. This negative result uses an incompressibility argument and it generalizes several recent lower bounds for functional encryption and (reusable) garbled circuits, which follow as simple corollaries of our general theorem.",
"corpus_id": 16244045,
"title": "On the Communication Complexity of Secure Function Evaluation with Long Output"
} | {
"abstract": null,
"corpus_id": 78179,
"title": "Hiding the Input-Size in Secure Two-Party Computation"
} | {
"abstract": "In this paper, we study of the notion of differing-input obfuscation, introduced by Barak et al. (CRYPTO 2001, JACM 2012). For any two circuits C0 and C1, a differing-input obfuscator diO guarantees that the non-existence of an adversary that can find an input on which C0 and C1 differ implies that diO(C0) and diO(C1) are computationally indistinguishable. We show many applications of this notion: We define the notion of a differing-input obfuscator for Turing machines and give a construction for the same (without converting it to a circuit) with input-specific running times. More specifically, for each input, our obfuscated Turning machine takes time proportional to the running time of the Turing machine on that specific input rather than the machine’s worst-case running time. We give a functional encryption scheme that allows for secret-keys to be associated with Turing machines, and thereby achieves input-specific running times. Further, we can equip our functional encryption scheme with delegation properties. We construct a multi-party non-interactive key exchange protocol with no trusted setup where all parties post only logarithmic-size messages. It is the first such scheme with such short messages. We similarly obtain a broadcast encryption system where the ciphertext overhead and secret-key size is constant (i.e. independent of the number of users), and the public key is logarithmic in the number of users. All our constructions make inherent use of the power provided by differing-input obfuscation. It is not currently known how to construct systems with these properties from the weaker notion of indistinguishability obfuscation.",
"corpus_id": 8302612,
"score": -1,
"title": "Differing-Inputs Obfuscation and Applications"
} |
{
"abstract": "This paper describes a case study that took place at the Public Research Centre Henri Tudor, Luxembourg in November 2012. A tangible user interface (TUI) was used in the context of collaborative problem solving. The task of participants was to explore the relation of external parameters on the production of electricity of a windmill presented on a tangible tabletop; these parameters were represented through physical objects. The goal of the study was to observe, analyze, and understand the interactions of multiple participants with the table while collaboratively solving a task. In this paper we focus on the gestures that the users performed during the experiment and the reaction of the other users to those gestures. Gestures were categorized into deictic/pointing, iconic, emblems, adaptors, and TUI-related. TUI-related/manipulative gestures, such as tracing and rotating, represented the biggest part, followed by the pointing gestures. In addition, we evaluated how active was the participation of the participants and whether gesture was accompanied by speech during the user study. Our case study can be described as a collaborative, problem solving, and cognitive activity, which showed that gesturing facilitates group focus, enhances collaboration among the participants, and encourages the use of epistemic actions.",
"corpus_id": 330353,
"title": "Gesture analysis in a case study with a tangible user interface for collaborative problem solving"
} | {
"abstract": "David McNeill, a pioneer in the ongoing study of the relationship between gesture and language, here argues that gestures are active participants in both speaking and thinking. He posits that gestures are key ingredients in an \"imagery-language dialectic\" that fuels speech and thought. The smallest unit of this dialectic is the growth point, a snapshot of an utterance at its beginning psychological stage. In \"Gesture and Thought\", the central growth point comes from a Tweety Bird cartoon. Over the course of twenty-five years, the McNeill Lab showed this cartoon to numerous subjects who spoke a variety of languages, and a fascinating pattern emerged. The shape and timing of gestures depends not only on what speakers see but on what they take to be distinctive; this, in turn, depends on the context. Those who remembered the same context saw the same distinctions and used similar gestures; those who forgot the context understood something different and changed gestures or used none at all. Thus, the gesture becomes part of the growth point - the building block of language and thought. \"Gesture and Thought\" is an ambitious project in the ongoing study of how we communicate and how language is connected to thought.",
"corpus_id": 6824486,
"title": "Gesture and Thought"
} | {
"abstract": "Tabletop and tangible interfaces are often described in terms of their support for shared access to digital resources. However, it is not always the case that collaborators want to share and help one another. In this paper we detail a video-analysis of a series of prototyping sessions with children who used both cardboard objects and an interactive tabletop surface. We show how the material qualities of the digital interface and physical objects affect the kinds of bodily strategies adopted by children to stop others from accessing them. We discuss how children fight for and maintain control of physical versus digital objects in terms of embodied interaction and what this means when designing collaborative applications for shareable interfaces.",
"corpus_id": 616858,
"score": 2,
"title": "Fighting for control: children's embodied interactions when using physical and digital representations"
} |
{
"abstract": "Objective:We investigated whether alexithymia is at the root of the decision-making deficit classically reported in pathological gamblers. Background:Alexithymia has been shown to be a recurrent personality trait of pathological gamblers and to impair the decision-making abilities of nonpathological gamblers, but no previous studies have investigated whether alexithymia significantly affects pathological gamblers’ decision making. Although investigations of pathological gamblers typically have studied those seeking treatment, most pathological gamblers do not seek treatment. Thus, to study people representative of the general population of pathological gamblers, we conducted our study in “sportsbook” casinos with a small sample of gamblers who were not seeking treatment. Methods:We recruited gamblers in sportsbooks and classified them based on their scores on the South Oaks Gambling Screen and the Toronto Alexithymia Scale: 3 groups of pathological gamblers (6 alexithymic, 8 possibly alexithymic, and 6 nonalexithymic) and 8 healthy controls. All of the participants completed an adaptation of the Iowa Gambling Task. Results:The alexithymic group chose less advantageously on the task than the other groups. The severity of the deficit in decision-making abilities was related to the severity of alexithymia, even when we controlled for the effects of anxiety and depression. Conclusions:Our findings provide preliminary evidence that alexithymia might be a critical personality trait underlying pathological gamblers’ decision-making deficits.",
"corpus_id": 1974147,
"title": "The Impact of Alexithymia on Pathological Gamblers’ Decision Making: A Preliminary Study of Gamblers Recruited in “Sportsbook” Casinos"
} | {
"abstract": "Background: Pathological gambling is more prevalent among postsecondary students than among the general adult population. While the prevalence of pathological gambling in this group has risen over the past decade, factors underlying the development of problem gambling among university students remain largely unexplored. One early study found alexithymia to be associated with pathological gambling. The aim of the present study was to further examine the relationship between alexithymia and gambling among postsecondary students. Methods: The relationship between alexithymia and pathological gambling was examined in 562 postsecondary students who completed the South Oaks Gambling Screen (SOGS) and the 20-item Toronto Alexithymia Scale (TAS-20). Results: Approximately 12% of the sample was classified as alexithymic according to the TAS-20. These individuals were found to have significantly more gambling problems, as measured by the SOGS, than nonalexithymic individuals. Approximately 9% of the sample was classified as pathological gamblers according to the SOGS. These individuals were found to have significantly higher levels of alexithymia, as measured by the TAS-20, than nonproblem gamblers. Conclusions: Alexithymia is associated with pathological gambling and may be a risk factor among postsecondary students for developing severe gambling problems.",
"corpus_id": 22671945,
"title": "Alexithymia in Young Adulthood: A Risk Factor for Pathological Gambling"
} | {
"abstract": "The purpose of this review is to gain more insight in the neuropathology of pathological gambling (PG) and problem gambling, and to discuss challenges in this research area. Results from the reviewed PG studies show that PG is more than just an impulse control disorder. PG seems to fit very well with recent theoretical models of addiction, which stress the involvement of the ventral tegmental-orbito frontal cortex. Differentiating types of PG on game preferences (slot machines vs. casino games) seems to be useful because different PG groups show divergent results, suggesting different neurobiological pathways to PG. A framework for future studies is suggested, indicating the need for hypothesis driven pharmacological and functional imaging studies in PG and integration of knowledge from different research areas to further elucidate the neurobiological underpinnings of this disorder.",
"corpus_id": 22821005,
"score": 2,
"title": "Why gamblers fail to win: A review of cognitive and neuroimaging findings in pathological gambling"
} |
{
"abstract": "In this paper we give a brief outline of our bulk specimen technique developed to measure intracellular water concentration in frozen-hydrated biological specimens by means of energy dispersive X-ray microanalysis. Fractured surface of the deep-frozen tissue samples is analyzed in an electron microscope (a specimen area of 15 x 11.5 micron is scanned) using 20 kV accelerating voltage and 1-5 pA effective beam current (measured in the specimen). Strong electric charging, which is the main problem associated with the low temperature X-ray microanalysis of frozen-hydrated specimens, is reduced by choosing optimum temperature range for the measurements (170-185 K) and by etching a thin surface layer on specimen surface. The main advantage of the method over other X-ray microanalytical techniques using sections and bulk specimens for water and dry-mass content determinations in cells (which are shortly reviewed) is the simple specimen preparation, the easy sample handling and the good stability of specimen during measurements. The main disadvantage is the poor spatial resolution as compared to the analysis of sections. Measurements with our method provided meaningful results of the change in intracellular water contents in various postmitotic cells of rats dependent on age. The observed decline of the intracellular water contents results in increased ionic strength and slower diffusion in old cells than in young ones. These effects may be implicated in senescent deterioration of cell metabolism.",
"corpus_id": 1581776,
"title": "Age dependent dehydration of postmitotic cells as measured by X-ray microanalysis of bulk specimens."
} | {
"abstract": "The theoretical background and the experimental data described in this paper justify the application of the Hall's continuum method of quantitation and the use of bulk crystals of known composition as standards, without ZAF correction, for the biological bulk specimen Xray microanalysis, provided that proper criteria are respected during the realization of such measurements. The most important points are as follows : (i) Only crystals can be selected where the electrostatic charging is negligible or absent. This depends in part on the own characteristics of the crystals, and can also be facilitated by using low accelerating voltage, e.g. 10 kV, well-conducting specimen holders, and fast scanning rates; (ii) Apart from the element of interest (Na, K, Cl, etc.) all other accompanying components must be of low atomic number (11 or lower), in order to assure the similarity to the composition of the biological matrix where C, 0, N and H are the most abundant elements. Comparison of the results in brain and liver cell nuclei and cytoplasm revealed that the elemental concentrations of Na and K are identical within the statistical scatter, if the continuum radiation used for the calculation of the peak-to-background ratios is selected under the respective elemental peak, or farther, in a peak-free region of the spectrum.",
"corpus_id": 103749459,
"title": "A review on the extension of Hall's method of quantification to bulk specimen X-ray microanalysis"
} | {
"abstract": "BackgroundThere is increasing concern that pollution from pharmaceuticals used in human medicine and agriculture can be a threat to the environment. Little is known, however, if people are aware that pharmaceuticals may have a detrimental influence on the environment. The present study examines people’s risk perception and choices in regard to environmental risks of pharmaceuticals used in human medicine and for agricultural purposes.MethodsA representative sample of the U.S. population (N = 640) was surveyed. Respondents completed a hypothetical choice task that involved tradeoffs between human and environmental health. In addition, it was examined how much people would support an environment policy related to drug regulation.ResultsFor agricultural pharmaceuticals, respondents reported a high level of satisfaction for a policy requiring farms to limit their use of antibiotics. In the domain of pharmaceuticals used in human medicine, we found that people were willing to consider environmental consequences when choosing a drug, but only when choices were made about treatment options for a rather harmless disease. In contrast, when decisions were made about treatment options for a severe disease, the drug’s effectiveness was the most important criterion.ConclusionsIt can be concluded that the environmental impact of a drug will be hardly considered in decisions about pharmaceuticals for severe diseases like cancer, and this may be due to the fact that these decisions are predominantly affective in nature. However, for less severe health risks, people are willing to balance health and environmental considerations.",
"corpus_id": 18146213,
"score": 0,
"title": "Consumer-perceived risks and choices about pharmaceuticals in the environment: a cross-sectional study"
} |
{
"abstract": "A total of 43 Campylobacter isolates from poultry, cattle and pigs were investigated for their ability to form biofilm. The studied strains were also screened for motility, adhesion and invasion of Caco-2 cells as well as extracellular DNase activity. The relation between biofilm formation and selected phenotypes was examined. Biofilm formation by the tested strains was found as irrespective from their motility and not associated with colonization abilities of human Caco-2 cells. Results of our study show that Campylobacter isolates from various animal sources are able to form biofilm and invade human Caco-2 cells in vitro.",
"corpus_id": 3799189,
"title": "Evaluation of selected phenotypic features among Campylobacter sp. strains of animal origin."
} | {
"abstract": "ABSTRACT The species Campylobacter jejuni is considered naturally competent for DNA uptake and displays strong genetic diversity. Nevertheless, nonnaturally transformable strains and several relatively stable clonal lineages exist. In the present study, the molecular mechanism responsible for the nonnatural transformability of a subset of C. jejuni strains was investigated. Comparative genome hybridization indicated that C. jejuni Mu-like prophage integrated element 1 (CJIE1) was more abundant in nonnaturally transformable C. jejuni strains than in naturally transformable strains. Analysis of CJIE1 indicated the presence of dns (CJE0256), which is annotated as a gene encoding an extracellular DNase. DNase assays using a defined dns mutant and a dns-negative strain expressing Dns from a plasmid indicated that Dns is an endogenous DNase. The DNA-hydrolyzing activity directly correlated with the natural transformability of the knockout mutant and the dns-negative strain expressing Dns from a plasmid. Analysis of a broader set of strains indicated that the majority of nonnaturally transformable strains expressed DNase activity, while all naturally competent strains lacked this activity. The inhibition of natural transformation in C. jejuni via endogenous DNase activity may contribute to the formation of stable lineages in the C. jejuni population.",
"corpus_id": 8128302,
"title": "A DNase Encoded by Integrated Element CJIE1 Inhibits Natural Transformation of Campylobacter jejuni"
} | {
"abstract": "Food allergies are a global food challenge. For correct food labelling, the detection and quantification of allergens are necessary. However, novel product formulations and industrial processes produce new scenarios, which require much more technological developments. For this purpose, OMICS technologies, especially proteomics, seemed to be relevant in this context. This review summarises the current knowledge and studies that used proteomics to study food allergens. In the case of the allergenic proteins, a wide variety of isoforms, post-translational modifications and other structural changes during food processing can increase or decrease the allergenicity. Most of the plant-based food allergens are proteins with biological functions involved in storage, structure, and plant defence. The allergenicity of these proteins could be increased by the presence of heavy metals, air pollution, and pesticides. Targeted proteomics like selected/multiple reaction monitoring (SRM/MRM) have been very useful, especially in the case of gluten from wheat, rye and barley, and allergens from lentil, soy, and fruit. Conventional 1D and 2-DE immunoblotting have been further widely used. For animal-based food allergens, the widely used technologies are 1D and 2-DE immunoblotting followed by MALDI-TOF/TOF, and more recently LC-MS/MS, which is becoming useful to assess egg, fish, or milk allergens. The detection and quantification of allergenic proteins using mass spectrometry-based proteomics are promising and would contribute to greater accuracy, therefore improving consumer information.",
"corpus_id": 221359562,
"score": 1,
"title": "Current Trends in Proteomic Advances for Food Allergen Analysis"
} |
{
"abstract": "Even though many programmers rely on 3-way merge tools to integrate changes from different branches, such tools can introduce subtle bugs in the integration process. This paper aims to mitigate this problem by defining a semantic notion of confict-freedom, which ensures that the merged program does not introduce new unwanted behaviors. We also show how to verify this property using a novel, compositional algorithm that combines lightweight dependence analysis for shared program fragments and precise relational reasoning for the modifications. We evaluate our tool called SafeMerge on 52 real-world merge scenarios obtained from Github and compare the results against a textual merge tool. The experimental results demonstrate the benefits of our approach over syntactic confict-freedom and indicate that SafeMerge is both precise and practical.",
"corpus_id": 3335637,
"title": "Verifying Semantic Conflict-Freedom in Three-Way Program Merges"
} | {
"abstract": "Relational verification aims to prove properties that relate a pair of programs or two different runs of the same program. While relational properties (e.g., equivalence, non-interference) can be verified by reducing them to standard safety, there are typically many possible reduction strategies, only some of which result in successful automated verification. Motivated by this problem, we propose a novel relational verification algorithm that learns useful reduction strategies using reinforcement learning. Specifically, we show how to formulate relational verification as a Markov Decision Process (MDP) and use reinforcement learning to synthesize an optimal policy for the underlying MDP. The learned policy is then used to guide the search for a successful verification strategy. We have implemented this approach in a tool called Coeus and evaluate it on two benchmark suites. Our evaluation shows that Coeus solves significantly more problems within a given time limit compared to multiple baselines, including two state-of-the-art relational verification tools.",
"corpus_id": 204732679,
"title": "Relational verification using reinforcement learning"
} | {
"abstract": "Fuzzy models occupy one of the dominant positions on the research agenda of fuzzy sets exhibiting a wealth of conceptual developments and algorithmic pursuits as well as a plethora of applications. Granular fuzzy modeling dwelling on the principles of fuzzy modeling opens new horizons of investigations and augments the existing design methodology exploited in fuzzy modeling. In a nutshell, granular fuzzy models are constructs built upon fuzzy models or a family of fuzzy models. We elaborate on a number of compelling reasons behind the emergence of granular fuzzy modelling, and granular modeling, in general. Information granularity present in such models plays an important role. Given a fuzzy model M, the associated granular model incorporates granular information to quantify a performance of the original model, facilitate collaborative pursuits of knowledge management and knowledge transfer. We discuss several main categories of granular fuzzy models where such categories depend upon the formalism of information granularity giving rise to interval-valued fuzzy models, fuzzy fuzzy model (fuzzy2 models, for short), and rough -fuzzy models. The design of granular fuzzy models builds upon two fundamental concepts of Granular Computing: the principle of justifiable granularity and an optimal allocation (distribution) of information granularity. The first one supports a construction of information granules of a granular fuzzy model. The second one emphasizes the role of information granularity being treated as an important design asset. The underlying performance indexes guiding the design of granular fuzzy models are discussed and a multiobjective nature of the construction of these models is stressed.",
"corpus_id": 13793678,
"score": 1,
"title": "From Fuzzy Models to Granular Fuzzy Models"
} |
{
"abstract": "This paper describes an image segmentation and normalization technique using 3D point distribution model and its counterpart in 2D space. This segmentation is efficient to work for holistic image recognition algorithm. The results have been tested with face recognition application using Cohn Kanade Facial Expressions Database (CKFED). The approach follows by fitting a model to face image and registering it to a standard template. The models consist of distribution of points in 2D and 3D. We extract a set of feature vectors from normalized images using principal components analysis and using them for a binary decision tree for classification. A promising recognition rate of up to 98.75% has been achieved using 3D model and 92.93% using 2D model emphasizing the goodness of our normalization. The experiments have been performed on more than 3500 face images of the database. This algorithm is capable to work in real time in the presence of facial expressions.",
"corpus_id": 1805434,
"title": "Image normalization for face recognition using 3D model"
} | {
"abstract": "Although many approaches for face recognition have been proposed in the last years, none of them can overcome the main problem of this kind of biometrics: the huge variability of many environmental parameters (lighting, pose, scale). Hence, face recognition systems can achieve good results at the expense of robustness. In this work we describe a methodology for improving the robustness of a face recognition system based on the “fusion” of two well-known statistical representations of a face: PCA and LDA. Experimental results that confirm the benefits of fusing PCA and LDA are reported.",
"corpus_id": 14047064,
"title": "Fusion of LDA and PCA for Face Recognition"
} | {
"abstract": "The present study investigated properties of various mixtures of organic acids (malic and malonic) and calcium phosphate compounds (beta-tricalcium phosphate, ashed bovine bone, and synthetic hydroxyapatite) with the objective of determining the optimum combination of organic acid and calcium phosphate compound for components of a chitosan-bonded bone-filling paste. beta-tricalcium phosphate was decomposed by malic acid and malonic acid, but these two acids did not decompose synthetic hydroxyapatite and ashed bovine bone. Assessment of ion release from a set paste containing either synthetic hydroxyapatite or ashed bovine bone indicated that only calcium ions were appreciably released after storing and stirring the set paste in physiologic saline for 7 days.",
"corpus_id": 27320060,
"score": 0,
"title": "In vitro properties of a chitosan-bonded bone-filling paste: studies on solubility of calcium phosphate compounds."
} |
{
"abstract": "Personal communication service (PCS) networks offer mobile users multimedia applications with different quality-of-service (QoS) and bandwidth requirements. This paper proposes a distributed call admission algorithm to provide QoS guarantees for multimedia traffic carried in a heterogeneous PCS network, where the parameters (e.g., number of channels, new and hand-off call arrival rates) of cells can be varied. The algorithm is based on the moveable boundary scheme that dynamically adjusts the number of channels for different types of traffic. With the moveable boundary scheme, the bandwidth can be utilized efficiently while satisfying the QoS requirements for different types of traffic. In addition, by taking the effect of hand-off traffic into account, we develop an analytical model for the network using the proposed algorithm. Using our model, the impact of hot-spot traffic caused by hand-offs can be effectively analyzed. The analytical model is validated by simulation.",
"corpus_id": 2759789,
"title": "Distributed Call Admission Control for a Heterogeneous PCS Network"
} | {
"abstract": "To guarantee proportional fairness (PF) is an important way to realize predictable and controllable service differentiation. In this paper, a proportional differentiation call admission control (CAC) algorithm, named PBS-PF, is proposed to guarantee the network services' PF in terms of call blocking probability (CBP). PBS-PF is based on the partial bandwidth sharing CAC scheme due to this scheme's high bandwidth utilization rate. PBS-PF dynamically calculates the bandwidth admission threshold of each service class to control the blocking call number for guaranteeing the CBP ratio of high and low priority services to accord with a given value. The simulations show that the CBP of each service class changes with the network load, but their ratio approximately keeps unchanged, which proves the effectiveness of PBS-PF. At last, we analyze the time complexity of PBS-PF.",
"corpus_id": 537009,
"title": "Proportional fairness of call blocking probability"
} | {
"abstract": "It is argued that Europe is well ahead of the US in spectrum identification, standards development, and equipment deployment for personal communication networks (PCNs). However, the very speed with which wireless activities vaulted the Europeans into prominence has caused the process to outrun the marketplace, the realities of which have slowed the process. In the US, the pace of PCN deployment will speed up now that the Federal Communications Commission (FCC) has selected the spectrum location and has promised a fast track for the rulemaking. The progress made over the last three years in the US and Europe in wireless communications and, specifically PCNs are discussed.<<ETX>>",
"corpus_id": 37808575,
"score": 2,
"title": "PCS in the US and Europe"
} |
{
"abstract": "Didem Tuzemen and Thealexa Becker study the Massachusetts Health Care Reform Act and find the reform may have supported self-employment in the state.",
"corpus_id": 152698463,
"title": "Does health care reform support self-employment?"
} | {
"abstract": "Independent Retail Business Owners’ Perceptions of the Patient Protection and Affordable Care Act by Bradley A. Hall MBA, University of Tampa, 1999 BBA, Baylor University, 1992 Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy Management",
"corpus_id": 166767470,
"title": "Independent Retail Business Owners' Perceptions of the Patient Protection and Affordable Care Act."
} | {
"abstract": "We have undertaken a prospective study of the frequency and prognosis associated with N4 node metastases in gastric cancer in 136 patients referred for surgical treatment between 1976 and 1983. N4 node metastases (pre‐aortic or hepatic hilar nodes) were present in 20 of 31 patients who had a laparotomy without resection (64 per cent), in 2 of 8 patients who had a “palliative” resection in the presence of distant metastases (25 per cent) and, in 19 of 85 patients who had a “curative” resection (22 per cent). The median survival in patients having a “curative” resection with N4 nodes was 4.5 months which was only marginally longer than in patients having a “palliative” resection (median survival 3 months). In view of these findings and since immediate imprint cytology can be used to detect nodal metastases at operation, involvement of N4 nodes might be a contra‐indication to extensive gastric resection in non‐obstructing gastric cancer.",
"corpus_id": 44816745,
"score": 1,
"title": "Incidence and prognosis of N4 node involvement in gastric cancer"
} |
{
"abstract": "Cervical cancer remains a significant cause of mortality in low-income countries. As in many other diseases, the existence of several screening/diagnosis methods and subjective physician preferences creates a complex ecosystem for automated methods. In order to diminish the amount of labeled data from each modality/expert we propose a regularization-based transfer learning strategy that encourages source and target models to share the same coefficient signs. We instantiated the proposed framework to predict cross-modality individual risk and cross-expert subjective quality assessment of colposcopic images for different modalities. Thus, we are able to transfer knowledge gained from one expert/modality to another.",
"corpus_id": 31603047,
"title": "Transfer Learning with Partial Observability Applied to Cervical Cancer Screening"
} | {
"abstract": "In this paper, we tackle the problem of automatic classification of pulmonary peri-fissural nodules (PFNs). The classification problem is formulated as a machine learning approach, where detected nodule candidates are classified as PFNs or non-PFNs. Supervised learning is used, where a classifier is trained to label the detected nodule. The classification of the nodule in 3D is formulated as an ensemble of classifiers trained to recognize PFNs based on 2D views of the nodule. In order to describe nodule morphology in 2D views, we use the output of a pre-trained convolutional neural network known as OverFeat. We compare our approach with a recently presented descriptor of pulmonary nodule morphology, namely Bag of Frequencies, and illustrate the advantages offered by the two strategies, achieving performance of AUC = 0.868, which is close to the one of human experts.",
"corpus_id": 10735894,
"title": "Automatic classification of pulmonary peri-fissural nodules in computed tomography using an ensemble of 2D views and a convolutional neural network out-of-the-box"
} | {
"abstract": "The majority of security vulnerabilities published in the literature is due to software bugs. Many researchers have developed program transformation and analysis techniques to automatically detect or eliminate such vulnerabilities. So far, most of them cannot be applied to commercially distributed applications on the Windows/x86 platform, because it is almost impossible to disassemble a binary file with 100% accuracy and coverage on that platform. This paper presents the design, implementation, and evaluation of a binary analysis and instrumentation infrastructure for the Windows/x86 platform called BIRD (binary interpretation using runtime disassembly), which provides two services to developers of security-enhancing program transformation tools: converting binary code into assembly language instructions for further analysis, and inserting instrumentation code at specific places of a given binary without affecting its execution semantics. Instead of requiring a high-fidelity instruction set architectural emulator, BIRD combines static disassembly with an on-demand dynamic disassembly approach to guarantee that each instruction in a binary file is analyzed or transformed before it is executed. It takes 12 student months to develop the first BIRD prototype, which can successfully work for all applications in Microsoft office suite as well as Internet explorer and IIS Web server, including all DLLs that they use. Moreover, the additional throughput penalty of the BIRD prototype on production server applications such as Apache, IIS, and BIND is uniformly below 4%.",
"corpus_id": 4484632,
"score": -1,
"title": "BIRD: binary interpretation using runtime disassembly"
} |
{
"abstract": "IBM Research undertook a challenge to build a computer system that could compete at the human champion level in real time on the American TV Quiz show, Jeopardy! The extent of the challenge includes fielding a real-time automatic contestant on the show, not merely a laboratory exercise. The Jeopardy! Challenge helped us address requirements that led to the design of the DeepQA architecture and the implementation of Watson. After 3 years of intense research and development by a core team of about 20 researches, Watson is performing at human expert-levels in terms of precision, confidence and speed at the Jeopardy! Quiz show. Our results strongly suggest that DeepQA is an effective and extensible architecture that may be used as a foundation for combining, deploying, evaluating and advancing a wide range of algorithmic techniques to rapidly advance the field of QA.",
"corpus_id": 1831060,
"title": "Articles Building Watson: An Overview of the"
} | {
"abstract": "Recent TREC results have demonstrated the need for deeper text understanding methods. This paper introduces the idea of automated reasoning applied to question answering and shows the feasibility of integrating a logic prover into a Question Answering system. The approach is to transform questions and answer passages into logic representations. World knowledge axioms as well as linguistic axioms are supplied to the prover which renders a deep understanding of the relationship between question text and answer text. Moreover, the trace of the proofs provide answer justifications. The results show that the prover boosts the performance of the QA system on TREC questions by 30%.",
"corpus_id": 34491971,
"title": "COGEX: A Logic Prover For Question Answering"
} | {
"abstract": "The task of payment card fraud detection using account information is considered. We apply to two approaches for organization of neural networks interaction: neural network committee and clustering approach. Finally, these two methods are compared.",
"corpus_id": 7045750,
"score": -1,
"title": "Payment card fraud detection using neural network committee and clustering"
} |
{
"abstract": "Web browsers routinely handle private information. Owing to a lax security model, browsers and JavaScript in particular, are easy targets for leaking sensitive data. Prior work has extensively studied information flow control (IFC) as a mechanism for securing browsers. However, two central aspects of web browsers - the Document Object Model (DOM) and the event handling mechanism - have so far evaded thorough scrutiny in the context of IFC. This paper advances the state-of-the-art in this regard. Based on standard specifications and the code of an actual browser engine, we build formal models of both the DOM (up to Level 3) and the event handling loop of a typical browser, enhance the models with fine-grained taints and checks for IFC, prove our enhancements sound and test our ideas through an instrumentation of WebKit, an in-production browser engine. In doing so, we observe several channels for information leak that arise due to subtleties of the event loop and its interaction with the DOM.",
"corpus_id": 1228173,
"title": "Information Flow Control for Event Handling and the DOM in Web Browsers"
} | {
"abstract": "With the use of external cloud services such as Google Docs or Evernote in an enterprise setting, the loss of control over sensitive data becomes a major concern for organisations. It is typical for regular users to violate data disclosure policies accidentally, e.g. when sharing text between documents in browser tabs. Our goal is to help such users comply with data disclosure policies: we want to alert them about potentially unauthorised data disclosure from trusted to untrusted cloud services. This is particularly challenging when users can modify data in arbitrary ways, they employ multiple cloud services, and cloud services cannot be changed. To track the propagation of text data robustly across cloud services, we introduce imprecise data flow tracking, which identifies data flows implicitly by detecting and quantifying the similarity between text fragments. To reason about violations of data disclosure policies, we describe a new text disclosure model that, based on similarity, associates text fragments in web browsers with security tags and identifies unauthorised data flows to untrusted services. We demonstrate the applicability of imprecise data tracking through BrowserFlow, a browser-based middleware that alerts users when they expose potentially sensitive text to an untrusted cloud service. Our experiments show that BrowserFlow can robustly track data flows and manage security tags for documents with no noticeable performance impact.",
"corpus_id": 9384600,
"title": "BrowserFlow: Imprecise Data Flow Tracking to Prevent Accidental Data Disclosure"
} | {
"abstract": "Timeout mechanisms are a useful feature for web applications. However, these mechanisms need to be used with care because, if used as-is, they are vulnerable to timing attacks. This paper focuses on internal timing attacks, a particularly dangerous class of timing attacks, where the attacker needs no access to a clock. In the context of client-side web application security, we present JavaScript-based exploits against the timeout mechanism of the DOM (document object model), supported by the modern browsers. Our experimental findings reveal rather liberal choices for the timeout semantics by different browsers and motivate the need for a general security solution. We propose a foundation for such a solution in the form of a runtime monitor. We illustrate for a simple language that, while being more permissive than a typical static analysis, the monitor enforces termination-insensitive noninterference.",
"corpus_id": 15396578,
"score": 2,
"title": "Securing Timeout Instructions in Web Applications"
} |
{
"abstract": "THE allowed conformations of nucleic acids depend on the flexibility of the structure of the nucleotides from which they are constructed. Many X-ray crystallographic studies of the degree of flexibility of these structures have shown that nucleosides exhibit a larger number of preferred conformations than nucleotides, and this led Sundaralingam1 to suggest that a nucleotide is more ‘rigid’ than a nucleoside. Berthod and Pullman2, however, have drawn attention to the fact that considerations of conformational energy do not imply such a rigid structure.",
"corpus_id": 4299510,
"title": "Molecular conformation of deoxyguanosine 5′-phosphate"
} | {
"abstract": "The nucleosides Ia and IIa exist in syn and anti conformations, respectively, both in solid state and solution. Compound Ia undergoes significant conformational change, accompanied by increased population of the anti conformer, upon conversion to the corresponding 5'-mono- and- diphosphate derivatives, whereas conformation of IIa remains reasonably constant between nucleoside and nucleotides. While Ia possessed the C2'-endo-C3'-exo geometry, IIa had the opposite C2'-exo-C3'-endo conformation. The C5' of the two nucleosides bore axial and equatorial conformations, respectively.",
"corpus_id": 19598792,
"title": "Conformational studies of two isomeric ring-expanded purine nucleosides and their 5'-mono- and -diphosphate derivatives."
} | {
"abstract": "A new mechanism for the rearrangement of vinyl allene oxide geometric isomers to stereodefined cyclopentenones is proposed based on DFT computations. This mechanism comprises two steps, first the ring opening of the oxirane to give a vinylcyclopropanone, and then a [1,3]-C sigmatropic rearrangement. Depending primarily on the allene oxide double bond geometry the stepwise pathway is either competitive (for E allene oxides) or favored (for Z allene oxides) relative to the already described SN2-like concerted pathway. All bond-forming reactions take place through helically chiral transition states, which allows the stereochemical information of the substrates to be transferred to that of the products, in particular in the case of (enantiopure) Z allene oxides. In addition to revealing one more of the fascinating mechanisms with memory of chirality, the results deepen our understanding of the important jasmonate and clavulone biosynthetic pathways that occur in plants and corals.",
"corpus_id": 21519594,
"score": 1,
"title": "A unifying mechanism for the rearrangement of vinyl allene oxide geometric isomers to cyclopentenones."
} |
{
"abstract": "This paper gives a first look at slides2wiki, a new scheme for easily providing collaborative lecture notes. Unlike previous web-based courseware schemes, the slides2wiki approach integrates with existing techniques that computer-science course instructors already use to prepare their classes. This tool is used to create a site where students may collaborate to create their own notes, using the lecture slides as a starting point. Adopting a surprisingly low-tech approach that uses familiar tools and paradigms, slides2wiki avoids many of the stumbling blocks of previous approaches to web-based course support.",
"corpus_id": 2092301,
"title": "Automated use of a Wiki for collaborative lecture notes"
} | {
"abstract": "The prosper class permits producing high quality slides; it is also easily extendable. This documentation is meant to be a user manual as well as a technical note describing how to create your own styles. 1 Using the class LTEX files using the prosper class may be eventually translated into two different formats: • the Adobe PostScript format for printing transparencies; • the Adobe Portable Document Format (PDF) for displaying slides on computers with Acrobat Reader in full-screen mode. When translated into PDF files, prosper slides benefit from additional possibilities such as transition effects between slides and incremental display of a slide with several animation effects. The currently supported transitions are: • Split: two lines sweep across the screen revealing the new slide; • Blinds: multiple lines, evenly distributed across the screen, appear and synchronously sweep in the same direction to reveal the new slide; • Box: a box sweeps from the center, revealing the new slide; • Wipe: a single line sweeps across the screen from one edge to the other, revealing the new slide; • Dissolve: the old page image dissolves to reveal the new slide; • Glitter: similar to Dissolve, except the effect sweeps across the image in a wide band moving from one side of the screen to the other; • Replace: the effect is simply to replace the old page with the new page. Figure 1 presents a bird’s-eye view of the structure of a LTEX file using the prosper class.",
"corpus_id": 208911406,
"title": "Making slides in LTEX with Prosper"
} | {
"abstract": "In this paper, we describe efforts to develop and evaluate a large-scale experiment in ubiquitous computing applied to education. Specifically, we are concerned with the general problem of capturing a rich, multimedia experience, and providing useful access into the record of the experience by automatically integrating the various streams of captured information. We describe the Classroom 2000 project and two years of experience developing and using automated tools for the capture, integration and access to support university lecture courses. We will report on observed use of the system by both teachers and learners and how those observations have influenced and will influence the development of a capture, integration and access system for everyday use.",
"corpus_id": 14897368,
"score": 2,
"title": "Investigating the capture, integration and access problem of ubiquitous computing in an educational setting"
} |
{
"abstract": "In 2009 the International Stem Cell Banking Initiative (ISCBI)\ncontributors and the Ethics Working Party of the International\nStem Cell Forum published a consensus on principles of best\npractice for the procurement, cell banking, testing and\ndistribution of human embryonic stem cell (hESC) lines for\nresearch purposes [1], which was broadly also applicable to\nhuman induced pluripotent stem cell (hiPSC) lines. Here, we\nrevisit this guidance to consider what the requirements would\nbe for delivery of the early seed stocks of stem cell lines\nintended for clinical applications. The term ‘seed stock’ is\nused here to describe those cryopreserved stocks of cells\nestablished early in the passage history of a pluripotent stem\ncell line in the lab that derived the line or a stem cell bank,\nhereafter called the ‘repository’.",
"corpus_id": 1958093,
"title": "Points to consider in the development of seed stocks of pluripotent stem cells for clinical applications: International Stem Cell Banking Initiative (ISCBI)."
} | {
"abstract": "Stem cell banking has been a topic of discussion and debate for more than a decade since the first public services to supply human embryonic stem cells (hESCs) were established in the USA and the UK. This topic has received a recent revival with numerous ambitious programmes announced to deliver large collections of human induced pluripotency cell (hiPSC) lines. This chapter will provide a brief overview charting the development of stem cell banks, their value, and their likely role in the future.",
"corpus_id": 3672400,
"title": "Stem Cell Banking"
} | {
"abstract": "The aim of this study was to analyze the suitability of pluripotent stem cells derived from the amnion (hAECs) as a potential cell source for revitalization in vitro. hAECs were isolated from human placentas, and dental pulp stem cells (hDPSCs) and dentin matrix proteins (eDMPs) were obtained from human teeth. Both hAECs and hDPSCs were cultured with 10% FBS, eDMPs and an osteogenic differentiation medium (StemPro). Viability was assessed by MTT and cell adherence to dentin was evaluated by scanning electron microscopy. Furthermore, the expression of mineralization-, odontogenic differentiation- and epithelial–mesenchymal transition-associated genes was analyzed by quantitative real-time PCR, and mineralization was evaluated through Alizarin Red staining. The viability of hAECs was significantly lower compared with hDPSCs in all groups and at all time points. Both hAECs and hDPSCs adhered to dentin and were homogeneously distributed. The regulation of odontoblast differentiation- and mineralization-associated genes showed the lack of transition of hAECs into an odontoblastic phenotype; however, genes associated with epithelial–mesenchymal transition were significantly upregulated in hAECs. hAECs showed small amounts of calcium deposition after osteogenic differentiation with StemPro. Pluripotent hAECs adhere on dentin and possess the capacity to mineralize. However, they presented an unfavorable proliferation behavior and failed to undergo odontoblastic transition.",
"corpus_id": 247302439,
"score": 1,
"title": "Human Amnion Epithelial Cells: A Potential Cell Source for Pulp Regeneration?"
} |
{
"abstract": "After nearly a century's study, what do psychologists now know about intergroup bias and conflict? Most people reveal unconscious, subtle biases, which are relatively automatic, cool, indirect, ambiguous, and ambivalent. Subtle biases underlie ordinary discrimination: comfort with one's own in–group, plus exclusion and avoidance of out–groups. Such biases result from internal conflict between cultural ideals and cultural biases. A small minority of people, extremists, do harbor blatant biases that are more conscious, hot, direct, and unambiguous. Blatant biases underlie aggression, including hate crimes. Such biases result from perceived intergroup conflict over economics and values, in a world perceived to be hierarchical and dangerous. Reduction of both subtle and blatant bias results from education, economic opportunity, and constructive intergroup contact.",
"corpus_id": 1877175,
"title": "What We Know Now About Bias and Intergroup Conflict, the Problem of the Century"
} | {
"abstract": "People often cling to beliefs even in the face of disconfirming evidence and interpret ambiguous information in a manner that bolsters strongly held attitudes. The authors tested a motivational account suggesting that these defensive reactions would be ameliorated by an affirmation of an alternative source of self-worth. Consistent with this interpretation, participants were more persuaded by evidence impugning their views toward capital punishment when they were self-affirmed than when they were not (Studies 1 and 2). Affirmed participants also proved more critical of an advocate whose arguments confirmed their views on abortion and less confident in their own attitudes regarding that issue than did unaffirmed participants (Study 3). Results suggest that assimilation bias and resistance to persuasion are mediated, in part, by identity-maintenance motivations.",
"corpus_id": 144741153,
"title": "When Beliefs Yield to Evidence: Reducing Biased Evaluation by Affirming the Self"
} | {
"abstract": "Recent academic debate into women’s experiences of tourism employment has emphasised the extremely heterogeneous nature of such work and the need for sensitivity to local political, economic, social and cultural contexts. This article focuses on one such context which has received little attention – state socialism – and we explore women’s experiences of tourism work in socialist Romania. Such work had characteristics in common with non-socialist contexts, but in other ways took a form which was distinctive to the socialist state. It was characterised by extensive training, good pay and opportunities for promotion (at least to middle management level). The socialist state also devised unique solutions to the problem of the seasonality of tourism work. However, women also faced extensive surveillance by the state’s security services and faced harsh penalties for under-performance.",
"corpus_id": 147867747,
"score": 0,
"title": "Exploring women’s employment in tourism under state socialism: Experiences of tourism work in socialist Romania"
} |
{
"abstract": "AIM\nTo review the literature on attitudes of health care professionals to termination of pregnancy and draw out underlying themes.\n\n\nBACKGROUND\nThe controversy surrounding therapeutic abortion is unremitting with public opinion often polemic and unyielding. Nurses and midwives are at the centre of this turmoil, and as more termination of pregnancies are being performed using pharmacological agents, they are becoming ever more involved in direct care and treatment. Attitudes towards termination of pregnancy have been found to vary depending on the nationality of those asked, the professionals involved, experience in abortion care, as well as personal attributes of those asked such as their obstetric history and religious beliefs. The reasons for women undergoing abortion were also found to influence attitudes to a greater or lesser extent.\n\n\nCONCLUSION\nThis paper explores research studies undertaken into attitudes of health care professionals towards termination of pregnancy, to appreciate the complexity of the debate. It is possible that the increased involvement of nurses in termination of pregnancy, that current methods demand, may lead to change in attitudes. Consideration is given to a number of remedies to create an optimum environment for women undergoing termination of pregnancy.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThis paper establishes via a literature review that attitudes in those working in this area of care depend upon a variety of influences. Suggestions are made for measures to be put into place to foster appropriate attitudes in those working in termination of pregnancy services.",
"corpus_id": 2528617,
"title": "A review of termination of pregnancy: prevalent health care professional attitudes and ways of influencing them."
} | {
"abstract": "Background. The purpose of the present study was to study the attitudes among Danish health care professionals likely to encounter ethical controversies of ART and related subjects.",
"corpus_id": 41225513,
"title": "Attitudes among health care professionals on the ethics of assisted reproductive technologies and legal abortion"
} | {
"abstract": "This quantitative research study evaluates the health care infrastructure necessary to provide medical care in US hospitals during a flu pandemic. These hospitals are identified within the US health care system because they operate airborne infectious isolation rooms. Data were obtained from the 2006 American Hospital Association annual survey. This data file provides essential information on individual US hospitals and identifies the health care capabilities in US communities. Descriptive statistics were evaluated to examine hospitals with the appropriate infrastructure to treat a flu pandemic. In addition, geographic information system software was used to identify geographic areas where essential infrastructure is lacking. The study found 3,341 US hospitals operate airborne infectious isolation rooms, representing 69% of reporting hospitals. The results also indicate that those hospitals with airborne infectious isolation rooms are larger and are located in metropolitan areas. The study has managerial implications associated with local medical disaster response and policy implications on the allocation of disaster resources.",
"corpus_id": 12205304,
"score": 1,
"title": "Medical Response Planning for Pandemic Flu"
} |
{
"abstract": "The aim of this in vitro study is to explore the correlations between quantitative ultrasound (QUS) parameters and micro-structure parameters of cancellous bone. Cube of swine cancellous bone were resort to progressively decalcification and subjected to QUS and micro computed tomography. Ethylene Diamine Tetraacetic Acid (EDTA) decalcification was used to remove calcium. SOS, an important QUS parameter, has a downtrend as calcium run off. At the same time, both BV/TV and Tb.Th have a downtrend; meanwhile, BS/BV and Tb.Pf have an uptrend during the process of decalcification. SOS shows a great correlation with Micro-Structure parameters which indicate that quantitative ultrasound has the potential to reflect bone microstructure.",
"corpus_id": 15082674,
"title": "Correlations between speed of sound and microstructure in swine cancellous bone during decalcification"
} | {
"abstract": "In an in vitro study, we found significant associations between QUS variables and properties and geometrical parameters of the compact bone of human finger phalanges. QUS variables were not only related to BMD but also to other skeletal properties, which explained 70% of the variability of speed of sound.",
"corpus_id": 27060614,
"title": "Assessing Bone Status Beyond BMD: Evaluation of Bone Geometry and Porosity by Quantitative Ultrasound of Human Finger Phalanges"
} | {
"abstract": "This experiment study aimed to investigate the physical, chemical and mechanical properties of fly ash cement concrete for road construction. Research has shown that 30% of fly ash and 70% of cement has a superior performance. Characteristics compared to the standard requirements conformable code. Moreover, the use of fly ash would result in reduction of the cost of materials in construction and the reduction of greenhouse gas emission. High strength of concrete can be made and the incorporation of admixture or substitute to improve the properties of concrete. Test result of specimens indicates the workability, and bonding strength of properties, and different reaction when the water ratio a change its content. Slump test having an appropriate workable mixing the slump of a concrete, gave sufficient compressive strength. Test results of 14 days specimens having different results but it only treated when it downs to minimum level of required conformable code.",
"corpus_id": 15082992,
"score": 1,
"title": "Analysis of Fly Ash Cement Concrete for Road Construction"
} |
{
"abstract": "Stephen Duggan and Lyndall Boyle Footscray Institute of Technology This paper is concerned witha study of comparative curriculum practicevvithin the tertiary sector. Within Australia, curriculum practice, innovation, plaruring and evaluation has occurred mainly within the primary and secondary school system. However, since the mid-1980s, educational strategies for national interest have seen the evolution of informed curriculum research and development within the tertiary sector, as universities and colleges endeavour to meet nationally determined educational goals and objectives. This study relates the research process involved when tertiary educators (and researchers) are faced with the task of reconciling local, regional and national objectives. In particular, it considers the dynamics of planning for nationally determined priorities as a basis for implementing sustainable and informed curriculum innovati0n and evaluation.",
"corpus_id": 153721565,
"title": "Curriculum Decision Making for National Interest in the Tertiary Sector: An Evaluation of a Curriculum Project"
} | {
"abstract": "Jim Cummins presents a theoretical framework for analyzing minority students' school failure and the relative lack of success of previous attempts at educational reform, such as compensatory education and bilingual education. The author suggests that these attempts have been unsuccessful because they have not altered significantly the relationships between educators and minority students and between schools and minority communities. He offers ways in which educators can change these relationships, thereby promoting the empowerment of students which can lead them to succeed in school.",
"corpus_id": 145391540,
"title": "Empowering minority students: A framework for intervention."
} | {
"abstract": "Statewide studies conducted throughout the United States during the past 2 decades focusing on community recreation programming for people with disabilities have found these services to be lacking. Confusion regarding programmatic responsibility, and a paucity of available inclusive recreation curricula, were pervasive among recreation agencies in the states studied. In this study, 484 community leisure service agencies were surveyed to determine if recommended professional practices for inclusive recreation programming were being implemented and by whom. This sample included parks and recreation departments, community education departments, YMCAs, YMCA camps, and Jewish Community Centers throughout Minnesota. The purpose of this study was to identify the barriers these agencies encountered and inclusive practices they employed. Analysis revealed no statistically significant differences in the manner with which agencies of different types, city size, or survey form (i.e., mail or telephone) responded to the survey questions. Concerning barriers to successful community recreation inclusion, agencies reported financial constraints (e.g., insufficient funds for hiring disability specialists, securing additional equipment) and staffing constraints (e.g., perceived staff skill deficiencies and participant-to-staff ratio inadequacies) as the two prevalent obstacles preventing the provision of inclusive programming. The most often cited \"organizational\" practices used to successfully include people with disabilities included collaborative program planning (e.g., agency staff work closely with family members in designing programs) and the use of marketing strategies to reach participants of varying abilities. \"Programmatic\" practices, which were cited more frequently than organizational practices, most often included the use of adaptations and the conducting of formative evaluations. These findings are then compared to previous statewide studies. 
Recommendations for future research studies complete the article.",
"corpus_id": 153388308,
"score": 1,
"title": "Inclusive Community Leisure Services: Recommended Professional Practices and Barriers Encountered"
} |
{
"abstract": "This paper provides a high level overview of a game called ORIENT which aims at cultural and emotional learning by engaging adolescents in a role-play with virtual characters in a virtual world. The paper focuses on defining and explaining the learning objectives, and gives an overview of how we intend to achieve these objectives using narrative concepts, affective characters, and innovative technologies for interaction.",
"corpus_id": 26715,
"title": "1 ORIENT : An Inter-Cultural Role-Play Game"
} | {
"abstract": "It has been widely acknowledged in the areas of human memory and cognition that behaviour and emotion are essentially grounded by autobiographic knowledge. In this paper we propose an overall framework of human autobiographic memory for modelling believable virtual characters in narrative story-telling systems and role-playing computer games. We first lay out the background research of autobiographic memory in Psychology, Cognitive Science and Artificial Intelligence. Our autobiographic agent framework is then detailed with features supporting other cognitive processes which have been extensively modelled in the design of believable virtual characters (e.g. goal structure, emotion, attention, memory schema and reactive behaviour-based control at a lower level). Finally we list directions for future research at the end of the paper.",
"corpus_id": 14417475,
"title": "Autobiographic Knowledge for Believable Virtual Characters"
} | {
"abstract": "Interactive dynamic influence diagram (I-DID) is one of the graphical frameworks for sequential decision making in partially observable environment. Subject agent in I-DID maintains beliefs over not only physical states of the environment, but also over models of the other agents. Consequently, solving I-DIDs suffers from the exponential growth of models ascribed to the other agents over time. Previous methods to solve I-DIDs aim at clustering equivalent models by comparing the entire or partial policy trees of the candidate models, which is time-consuming. In this paper, we present a new method for further reducing the model space by identifying the true model of the other agent and pruning the other irrelevant models. Toward this, we use an information-theoretic methodmutual information to measure the relevance between the candidate models and the true model in terms of predicted and observed actions of the other agent. We construct a dynamic Bayesian network to learn the value of parameters needed in the computation of mutual information. This approach bounds the model space by containing only the true model of the other agent. We evaluate our approach on multiple problem domains and empirically demonstrate the efficiency in solving I-DIDs.",
"corpus_id": 9392366,
"score": 1,
"title": "Efficient solutions of interactive dynamic influence diagrams using model identification"
} |
{
"abstract": "BackgroundWe evaluated our previously derived admission criteria for agreement with physician decisions and outpatient failure among patients presenting to emergency departments (EDs) with pneumonia.MethodsAmong patients presenting to seven Intermountain EDs in the urban region of Utah with pneumonia December 1 2009-December 1 2010, we measured hospital admission rates and outpatient failure, defined as either 7-day secondary hospitalization or death in 30 days for patients initially discharged home from the ED. We measured our admission criteria’s ability to predict hospital admission and its hypothetical rates of admission and outpatient failure with strict adherence to the criteria. We compared our admission criteria to other electronically calculable criteria, CURB-65 and A-DROP.ResultsIn 2,308 patients, admission rate was 57%, 30-day mortality 6.1%, 7-day secondary hospitalization 5.8%, and outpatient failure rate 6.4%. Our admission criteria predicted hospital admission with an AUC of 0.77, compared to 0.73 for CURB-65 ≥ 2 and 0.78 for A-DROP≥ 2. Hypothetical 100% concordance with our admission criteria decreased the hospitalization rate to 52% and reduced the outpatient failure rate to 3.9%, slightly better than A-DROP ≥ 2 (54% and 4.3%) and CURB-65 ≥ 2 (49% and 5.1%).ConclusionsOur admission criteria agreed acceptably with overall observed admission decisions for patients presenting to EDs with pneumonia, but may safely reduce hospital admission rates and increase recognition of patients at risk for outpatient failure compared to CURB-65 ≥ 2 or A-DROP ≥ 2.",
"corpus_id": 1730112,
"title": "Validating hospital admission criteria for decision support in pneumonia"
} | {
"abstract": "The pneumocococcal urine antigen test increases specific microbiological diagnosis over conventional culture methods in pneumonia patients. Data are limited regarding its yield and effect on antibiotic prescribing among patients with community-onset pneumonia in clinical practice. We performed a secondary analysis of 2837 emergency department patients admitted to seven Utah hospitals over 2 years with international diagnostic codes version 9 codes and radiographic evidence of pneumonia. Mean age was 64.2 years, 47.2% were male and all-cause 30-day mortality was 9.6%. Urinary antigen testing was performed in 1110 (39%) patients yielding 134 (12%) positives. Intensive care unit patients were more likely to undergo testing, and have a positive result (15% versus 8.8% for ward patients; p<0.01). Patients with risk factors for healthcare-associated pneumonia had fewer urinary antigen tests performed, but 8.4% were positive. Physicians changed to targeted antibiotic therapy in 20 (15%) patients, de-escalated antibiotic therapy in 76 patients (57%). In 38 (28%) patients, antibiotics were not changed. Only one patient changed to targeted therapy suffered clinical relapse. Length of stay and mortality were lower in patients receiving targeted therapy. Pneumococcal urinary antigen testing is an inexpensive, noninvasive test that favourably influenced antibiotic prescribing in a “real world”, multi-hospital observational study. Pneumococcal urinary antigen test in pneumonia http://ow.ly/sm8R303lOe0",
"corpus_id": 10301750,
"title": "Pneumococcal urinary antigen test use in diagnosis and treatment of pneumonia in seven Utah hospitals"
} | {
"abstract": "The well-sampled Late Cretaceous fossil record of North America remains the only high-resolution dataset for evaluating patterns of dinosaur diversity leading up to the terminal Cretaceous extinction event. Hadrosaurine hadrosaurids (Dinosauria: Ornithopoda) closely related to Edmontosaurus are among the most common megaherbivores in latest Campanian and Maastrichtian deposits of western North America. However, interpretations of edmontosaur species richness and biostratigraphy have been in constant flux for almost three decades, although the clade is generally thought to have undergone a radiation in the late Maastrichtian. We address the issue of edmontosaur diversity for the first time using rigorous morphometric analyses of virtually all known complete edmontosaur skulls. Results suggest only two valid species, Edmontosaurus regalis from the late Campanian, and E. annectens from the late Maastrichtian, with previously named taxa, including the controversial Anatotitan copei, erected on hypothesized transitional morphologies associated with ontogenetic size increase and allometric growth. A revision of North American hadrosaurid taxa suggests a decrease in both hadrosaurid diversity and disparity from the early to late Maastrichtian, a pattern likely also present in ceratopsid dinosaurs. A decline in the disparity of dominant megaherbivores in the latest Maastrichtian interval supports the hypothesis that dinosaur diversity decreased immediately preceding the end Cretaceous extinction event.",
"corpus_id": 3199415,
"score": 0,
"title": "Cranial Growth and Variation in Edmontosaurs (Dinosauria: Hadrosauridae): Implications for Latest Cretaceous Megaherbivore Diversity in North America"
} |
{
"abstract": "In (KM02)a model has been developed to support trust in eCommerce. The model is composed of four main modules where each module is a set of factors the consumer is looking for to trust a virtual merchant. These four modules represent the merchant existence, affiliation, policy and performance. In this paper, we present a model to implement the existence module by developing an information extraction system which aims at localising the required information on the merchant's website. The system is based on rules that reflect the different ways the information is repre- sented on the websites, their structures and the layout of the websites. The extracted information is then stored in a database to be used for a future evaluation of the trust associated with the merchant website.",
"corpus_id": 971542,
"title": "Extracting Unstructured Information from the WWW to Support Merchant Existence in eCommerce"
} | {
"abstract": "Lack of trust has been identified as one of the reasons why many visits to eCommerce websites do not turn into proper transactions. To support trust, an information framework model based on research on eCommerce trust has been developed. The model identifies the kind of information a consumer expects to find on an eCommerce website and that is shown to increase his trust toward the online merchant. An information extraction system has been developed to help gather the required information from the websites. In this paper, we first validate the information model through a questionnaire using a consumer sample. This is then followed by an evaluation of the current implementation of eCommerce websites with regards to the developed trust information model. 1. I TRODUCTIO A D MOTIVATIO One of the areas that has been revolutionised the most by the development of the internet is business and commerce. Business transactions are no longer bound by geographical boundaries, time differences or distance barriers. This new business setting is known as eCommerce. ECommerce is the success story of the last few years. However, there are many hindrance factors which cause it to fail to reach its full potential. Han and Noh [8] found that several critical failure factors of eCommerce need to be addressed seriously by the eCommerce industry to ensure its usage will continue to grow. Their findings are mainly on the dissatisfaction of customer on the unstable eCommerce system, a low level of personal data security, inconvenience system and disappointing purchases. Other problems with eCommerce include delays in deliveries, quality of the goods and fraud. Indeed, consumers’ loss to Internet fraud has increase from US$3.2 millions in 1999 [2] to more than US$ 14.5 millions in 2002 [16] and this is increasing every year. This has affected consumers’ trust towards online business. 
The question that many customers are asking is “who to trust in the cyber space?” and most importantly, how to quantify trust? Many variables should be considered when attempting to quantify or just trying to understand the trust relationship between the vendor and the customer. In this paper, we present a model for eCommerce trust, its implementation and an evaluation of current eCommerce websites based on our model. The remainder of the paper is organised as follows. In section 2 we review some of the models used to evaluate trust in eCommerce and in section 3 we present our proposed trust model. In section 4 we summarise the extraction system developed to support the gathering of the various variables required by the trust model and in section 5 we present three methods for the evaluation of the trust model. We conclude and evaluate our system in section 6.",
"corpus_id": 15826053,
"title": "STRATEGISI G CO SUMER LOGISTIC REQUIREME TS I ECOMMERCE TRA SACTIO S: EVALUATIO OF CURRE T IMPLEME TATIO S"
} | {
"abstract": "This paper introduces a sequence of λ-expressions modeling the binary expansion of integers. We derive expressions computing the test for zero, the successor function, and the predecessor function, thereby showing the sequence to be an adequate numeral system, i.e. one in which all recursive functions are lambda-definable. These functions can be computed efficiently; To this end, we introduce a notion of complexity that is independent of the order of evaluation.",
"corpus_id": 57946010,
"score": 1,
"title": "An adequate and efficient left-associated binary numeral system in the λ-calculus"
} |
{
"abstract": "Drosophila egg production depends upon the nutritional available to females. When food is in short supply, oogenesis is arrested and apoptosis of the nurse cells is induced at mid-oogenesis via a mechanism that is probably controlled by ecdysteroid hormone. We have shown that expression of some ecdysone-response genes is correlated with apoptosis of egg chambers. Moreover, ecdysteroid injection and application of juvenile hormone induces and suppresses the apoptosis, respectively. In this study, we investigated which tissues show increases in the concentration of ecdysteroids under nutritional shortage to begin to link together nutrient intake, hormone regulation and the choice between egg development or apoptosis made within egg chambers. We measured ecdysteroid levels in the whole body, ovaries and haemolymph samples by RIA and found that the concentration of ecdysteroid increased in all samples. This contributes to the idea that nutritional shortage leads to a rapid high ecdysteroid concentration within the fly and that the high concentration induces apoptosis. Low concentrations of ecdysteroid are essential for normal oogenesis. We suggest there is threshold concentration in the egg chambers and that apoptosis at mid-oogenesis is induced when the ecdysteroid levels exceed the threshold. Starvation causes the ovary to retain the ecdysteroid it produces, thus enabling individual egg chambers to undergo apoptosis and thus control the number of eggs produced in relation to food intake.",
"corpus_id": 1111021,
"title": "Nutritional status affects 20-hydroxyecdysone concentration and progression of oogenesis in Drosophila melanogaster."
} | {
"abstract": "Dietary cholesterol levels control follicle stem cell proliferation in the Drosophila ovary via regulation of Hedgehog protein localization.",
"corpus_id": 32058,
"title": "Diet controls Drosophila follicle stem cell proliferation via Hedgehog sequestration and release"
} | {
"abstract": "The structural basis of the outer membrane permeability for the bacterium Escherichia coli is studied by atomic force microscopy (AFM) in conjunction with biochemical treatment and analysis. The surface of the bacterium is visualized with unprecedented detail at 50 and 5 A lateral and vertical resolutions, respectively. The AFM images reveal that the outer membrane of native E. coli exhibits protrusions that correspond to patches of lipopolysaccharide (LPS) containing hundreds to thousands of LPS molecules. The packing of the nearest neighbor patches is tight, and as such the LPS layer provides an effective permeability barrier for the Gram-negative bacteria. Treatment with 50 mM EDTA results in the release of LPS molecules from the boundaries of some patches. Further metal depletion produces many irregularly shaped pits at the outer membrane, which is the consequence of progressive release of LPS molecules and membrane proteins. The EDTA-treated cells were analyzed for metal content and for their reactiv...",
"corpus_id": 38814845,
"score": 1,
"title": "High-Resolution Atomic Force Microscopy Studies of the Escherichia coli Outer Membrane: Structural Basis for Permeability"
} |
{
"abstract": "In this paper we investigate variations in the adoption of LEED-certified commercial buildings across 174 core-based statistical areas in the United States. Drawing upon a unique database and using a robust analytical framework, the determinants of the proportion LEED-certified space are modeled. We find that, despite high growth rates, LEED-certified stock accounts for a relatively small proportion of the total commercial stock. The average proportion is less than 1%. A further contribution of the paper is that our concentration measure avoids the biases associated with simple percentage measures that were used in previous studies of this topic. Strongest predictors of the proportion of LEED-certified commercial space in a local market are market size, educational attainment and economic growth. In terms of policy effectiveness, it is found that only a mandatory requirement to obtain LEED certification for new buildings has a significant positive effect on market penetration.",
"corpus_id": 153509787,
"title": "Determinants of Green Building Adoption"
} | {
"abstract": "Considering the cooperative relationship between owners and contractors in sustainable construction projects, as well as the synergistic effects created by cooperative behaviors, a cooperative incentive model was developed using game theory. The model was formulated and analyzed under both non-moral hazard and moral hazard situations. Then, a numerical simulation and example were proposed to verify the conclusions derived from the model. The results showed that the synergistic effect increases the input intensity of one party’s resource transfer into the increase of marginal utility of the other party, thus the owner and contractor are willing to enhance their levels of effort. One party’s optimal benefit allocation coefficient is positively affected by its own output efficiency, and negatively affected by the other party’s output efficiency. The effort level and expected benefits of the owner and contractor can be improved by enhancing the cooperative relationship between the two parties, as well as enhancing the net benefits of a sustainable construction project. The synergistic effect cannot lower the negative effect of moral hazard behaviors during the implementation of sustainable construction projects. Conversely, the higher levels of the cooperative relationship, the wider the gaps amongst the optimal values under both non-moral hazard and moral hazard situations for the levels of effort, expected benefits and net project benefits. Since few studies to date have emphasized the effects of cooperative relationship on sustainable construction projects, this study constructed a game-based incentive model to bridge the gaps. This study contributes significant theoretical and practical insights into the management of cooperation amongst stakeholders, and into the enhancement of the overall benefits of sustainable construction projects.",
"corpus_id": 42219526,
"title": "Incentive Model Based on Cooperative Relationship in Sustainable Construction Projects"
} | {
"abstract": "Recent contributions to the literature have resulted in a standard modelling of office markets. The models provide considerable insight into the working of office markets. • Nonetheless, a major difficulty is the use of data for a single city or aggregate data for the U.S. The latter implicitly assumes that model structure is invariant across cities. In this article we test for structural differences in office markets by size class. Rental data from REIS Reports for twenty-one metropolitan areas for the time period 1981 to 1990 are used to model office market behavior. Results suggest market outcomes vary by city size, larger markets are better modelled using standard procedures, and Manhattan behaves quite differently from the other markets. Copyright American Real Estate and Urban Economics Association.",
"corpus_id": 154367694,
"score": 2,
"title": "Did Office Market Size Matter in the 1980s? A Time‐Series Cross‐Sectional Analysis of Metropolitan Area Office Markets"
} |
{
"abstract": "Introduction In our earlier papers we suggested that the biological entities (ranging in size from 10–300 micron) which we have isolated from the stratosphere originate from space, rather than Earth (Wainwright et al., 2013a, b). We base this conclusion on the current paradigm which states that particles greater than 5 microns radius cannot cross the tropopause to arrive at the heights in the stratosphere from where we have sampled them (Rosen, 1969, Zolensky & Mackinnon, 1985). Also, the sampling stubs on which we isolated these biomorphs were found to be remarkably free of the contaminating material (such as fungal spores, pollen grains and volcanic dust) which we would expect to find had these biological entities been elevated from Earth to the stratosphere. Furthermore, marked impact events caused by inorganic micrometeorites, together with a wide variety of cosmic dust particles, also occur on the same sampling stubs as do some of the biomorphs; while it could be argued that the biomorphs reached the stubs post-impact, we nevertheless suggest that this association remains highly noteworthy. We ask the question if a mechanism exists which can elevate the biological entities we find, from Earth to the stratosphere, how is it able to “sieve out” only the biomorphs (which are of varying sizes and masses) from the general debris that would be carried to the stratosphere were these particles elevated from Earth to the stratosphere? It could be argued that the biomorphs in question originate from a terestrial aquatic environment, but if so, we would again expect them to be associated with other marine, or freshwater organisms and debris; we certainly would not expect a marine water spout to be elevated to a height of 22–27km, and carry with it biomorphs. 
We contend, however, that the biomorphs isolated here formerly existed in a watery environment, namely a comet, the icy debris of which would have been largely lost during its transport to the stratosphere. While we are convinced that our earlier findings (Wainwright et al., 2013a, b) strongly suggest that our stratospheric biomorphs originate from space, we continue to seek further evidence to strengthen our argument. Here we provide findings based on the use of nanomanipulation of a spherical particle which we have isolated on one of our sampling stubs. We provide evidence to strongly suggest that this entity is biological in nature and, by using nanomanipulation, show that this biomorph, having produced a marked impact crater in the sampling stub, must have been travelling at speed (from space) when it impacted the stub. The results of this study, we assert, prove that a biological entity, originating from space, has been captured in the stratosphere en route to Earth. The implication of this finding is, we suggest, obvious and profound, namely that this, and other biological entities, are continuously raining down to Earth from space.",
"corpus_id": 4979507,
"title": "Associated With A Titanium Sphere Isolated From The Stratosphere"
} | {
"abstract": "Astronomically, there are viable mechanisms for distributing organic material throughout the Milky Way. Biologically, the destructive effects of ultraviolet light and cosmic rays means that the majority of organisms arrive broken and dead on a new world. The likelihood of conventional forms of panspermia must therefore be considered low. However, the information content of damaged biological molecules might serve to seed new life (necropanspermia).",
"corpus_id": 119236576,
"title": "Panspermia, Past and Present: Astrophysical and Biophysical Conditions for the Dissemination of Life in Space"
} | {
"abstract": "The discovery of single-celled bacteria that live in the extreme temperatures of hydrothermal vents deep in the ocean or in hot springs on land led to the discovery of a third branch of life--the Archaea. In his commentary, DeLong summarizes new evidence showing that Archaea are not confined to extreme living conditions but can be found in abundance as planktonic organisms in cold ocean waters below 100 m. Their occupation of this enormous habitat qualifies these organisms as one of the most abundant prokaryotes on Earth.",
"corpus_id": 12019045,
"score": 1,
"title": "Archaeal Means and Extremes"
} |
{
"abstract": "A recognized drawback of the currently available chemical cross-linking reagents used to fix bioprostheses is the potential toxic effects a recipient may be exposed to from the fixed tissues and/or the residues. It is, therefore, desirable to provide a cross-linking reagent which is of low cytotoxicity and may form stable and biocompatible cross-linked products. To achieve this goal, a naturally occurring cross-linking reagent -- genipin -- which has been used in herbal medicine and in the fabrication of food dyes, was used by our group to fix biological tissues. The study was to assess the cytotoxicity of genipin in vitro using 3T3 fibroblasts (BALB/3T3 C1A31-1-1). Glutaraldehyde, the most commonly used cross-linking reagent for tissue fixation, was used as a control. The cytotoxicity of the glutaraldehyde- and genipin-fixed tissues and their residues was also evaluated and compared. The observation in the light microscopic examination revealed that the cytotoxicity of genipin was significantly lower than that of glutaraldehyde. Additionally, the results obtained in the MTT assay implied that genipin was about 10000 times less cytotoxic than glutaraldehyde. Moreover, the colony forming assay suggested that the proliferative capacity of cells after exposure to genipin was approximately 5000 times greater than that after exposure to glutaraldehyde. It was noted that the cells seeded on the surface of the glutaraldehyde-fixed tissue were not able to survive. In contrast, the surface of the genipin-fixed tissue was found to be filled with 3T3 fibroblasts. Additionally, neocollagen fibrils made by these fibroblasts were observed on the genipin-fixed tissue. This fact suggested that the cellular compatibility of the genipin-fixed tissue was superior to its glutaraldehyde-fixed counterpart. 
Also, the residues from the glutaraldehyde-fixed tissue markedly reduced the population of the cultured cells, while those released from the genipin-fixed tissue had no toxic effect on the seeded cells. In conclusion, as far as cytotoxicity is concerned, genipin is a promising cross-linking reagent for biological tissue fixation.",
"corpus_id": 536401,
"title": "In vitro evaluation of cytotoxicity of a naturally occurring cross-linking reagent for biological tissue fixation."
} | {
"abstract": "The purpose of this study was to prepare and evaluate in vitro the feasibility and cytocompatibility of a novel composite (GGT) as a large defect bone substitute. The composite is tricalcium phosphate ceramic particles combined with genipin crosslinked gelatin. After soaking the GGT composites in Ringer solutions at 37 degrees C for 7, 14, 28, 42, 56, and 84 days, the in vitro biologic degradation rate and biocompatibility were determined. Substances released from soaked GGT composites were analyzed with an ultraviolet visible light spectrophotometer. In addition, the solution soaking the GGT was co-cultured with osteoblasts to determine whether or not the released substances from GGT could facilitate the growth of bone cells. After they had been cultured for 2 days, the osteoblasts were tested for differentiation and proliferation by alkaline phosphatase (ALP) activity and a MTT assay. Results indicate that the concentration of the genipin solution is a critical factor in deciding the crosslinking degree of the GGT composite. Complete crosslinking reaction in the GGT composite occurred when 0.5 wt % of genipin had been added. Cytotoxic testing revealed that 80 ppm of the genipin in the culture medium served as the level over which cytotoxicity to osteoblasts could be produced. In addition, we found that gelatin and calcium continuously were released from the GGT composite in the soaking solution, which promoted differentiation and proliferation of the osteoblasts.",
"corpus_id": 276172,
"title": "In vitro evaluation of degradation and cytotoxicity of a novel composite as a bone substitute."
} | {
"abstract": "Thirty-one glutaraldehyde-treated bovine aortic valves (BAVs) and 105 glycerol-treated human dura mater valves (HDVs) were used in 51 various artificial hearts up to 316 days in calves. Multiple valves were implanted in the same animal under different hemodynamic conditions. A comparative study of these valves was performed in terms of blood compatibility and durability with relation to the different hemodynamic environments. Both BAVs and HDVs showed good blood compatibility. The degradation of collagen bundles of the valves began as early as 7 days in BAVs and 13 days in HDVs, and was seen in the hinged portions of the cusps. The fiber separation and resultant void formation were followed with insudation of blood elements and subsequent calcification. Calcification was dystrophic in nature and was encountered in 70.9% of BAVs and 7.6% of HDVs. All 17 BAVs used more than 30 days were calcified; in HDVs the earliest calcified lesion was seen in a 78 day specimen. The pathological changes were more severe in the left side than the right of the total artificial hearts. These results clearly indicated that the HDV is more durable than the glutaraldehyde-treated BAV. It was suggested that degradation of these tissue valves is greatly affected by the degree of hemodynamic stress on the valve cusp. Although glutaraldehyde treatment has increased the durability of tissue valves in general, the structure of the valve tissue also plays an important role in long-term durability.",
"corpus_id": 2430514,
"score": 2,
"title": "Bovine aortic and human dura mater valves: a comparative study in artificial hearts in calves."
} |
{
"abstract": "Background: HIV and AIDS are major public health problems in the world and Africa. In Cameroon, the HIV prevalence is 5.1%. Cellphones have been found to be useful in the provision of modern health care services using short message services (SMS). This study assessed the effectiveness of SMS in improving the adherence of people living with HIV and AIDS to their treatment and care in Cameroon. Methods: This intervention study used a randomized controlled trial design. Ninety participants seeking treatment at the Nkwen Baptist Health Center were recruited between August and September 2011 using a purposive sampling method. They were randomly allocated into the intervention and control groups, each containing 45 participants. In the intervention group, each participant received four SMSs per week at equal intervals for four weeks. The patients were investigated for adherence to ARVs by evaluating the number of times treatment and medication refill appointments were missed. Data were collected using an interviewer-administered questionnaire before and after intervention and analysed on STATA. Results: The baseline survey indicated that there were 55(61.1%) females and 35(38.9%) males aged 23 - 62 years; the mean age was 38.77 ± 1.08. Most participants were teachers [12 (13.3%)], farmers [11 (12.2%)], and businessmen [24 (26.7%)]. Adherence to ARVs was 64.4% in the intervention group and 44.2% in the control group (p = 0.05). 2(4.4%) patients in the control group failed to respect their drug refill appointments while all the 45(100%) participants in the intervention group respected their drug refill appointments. 54.17% of married people and 42.9% of the participants with primary and secondary levels of education missed their treatment. Key reasons for missing treatment were late home coming (54%), forgetfulness (22.5%), and travelling out of station without medication (17.5%). 
Other factors responsible for non-adherence included involvement in outdoor business (60.87%), ARV stock out (37.8%), and not belonging to a support group (10.23%). Twenty-eight (62.22%) subjects in the intervention group were able to take their treatment regularly and on time. Conclusion: SMS improved adherence to ARVs. Key constraints which affect adherence to ARV medication can be addressed using SMS.",
"corpus_id": 2446474,
"title": "A Randomized Controlled Trial on the Usefulness of Mobile Text Phone Messages to Improve the Quality of Care of HIV and AIDS Patients in Cameroon"
} | {
"abstract": "Methods: Four hundred and thirty-one adult patients who had initiated ART within 3 months were enrolled and randomly assigned to a control group or one of the four intervention groups. Participants in the intervention groups received SMS reminders that were either short or long and sent at a daily or weekly frequency. Adherence was measured using the medication event monitoring system. The primary outcome was whether adherence exceeded 90% during each 12-week period of analysis and the 48-week study period. The secondary outcome was whether there were treatment interruptions lasting at least 48 h.",
"corpus_id": 33248730,
"title": "Mobile phone technologies improve adherence to antiretroviral treatment in a resource-limited setting: a randomized controlled trial of text message reminders."
} | {
"abstract": "BACKGROUND--The ventilatory cost of carbon dioxide (CO2) elimination on exercise (VE/VCO2) is increased in chronic heart failure (CHF). This reflects increased physiological dead space ventilation secondary to mismatching between perfusion and ventilation during exercise. The objectives of this study were to investigate the relation of this increased VE/VCO2 slope to the syndrome of CHF or to limitation of the exercise related increase of pulmonary blood flow, or both. PATIENTS AND METHODS--Maximal treadmill exercise tests with respiratory gas analysis were performed in 45 patients with CHF (defined as resting left ventricular ejection fraction < 40% on radionuclide scan); 15 normal controls; 23 patients with coronary artery disease and normal resting left ventricular function; and 13 pacemaker dependent patients (six with and seven without CHF) directly comparing exercise responses in rate responsive and fixed rate mode. RESULTS--Patients with CHF had a steeper VE/VCO2 slope than normal controls: this was related inversely to peak VO2 below 20 ml/min/kg. In patients with coronary artery disease in whom peak VO2 (at respiratory exchange ratio > 1) was as limited as in the patients with CHF but resting left ventricular function was normal, the VE/VCO2 slope was normal. In pacemaker dependent patients fixed rate pacing resulted in lower exercise capacity and peak VO2 than rate responsive pacing; the VE/VCO2 slope was normal in patients without CHF but steeper than normal in patients with CHF; the VE/VCO2 slope was steeper during fixed rate than during rate responsive pacing in these patients with CHF. CONCLUSIONS--These findings suggest that the perfusion/ventilation mismatch during exercise in CHF is related to the chronic consequences of the syndrome and not directly to limitation of exercise related pulmonary flow.
Only when the syndrome of CHF is present can matching between perfusion and ventilation be acutely influenced by changes in pulmonary flow.",
"corpus_id": 20199670,
"score": 1,
"title": "Perfusion/ventilation mismatch during exercise in chronic heart failure: an investigation of circulatory determinants."
} |
{
"abstract": "Postoperative radiotherapy (PRT) is widely advocated for patients with squamous cell carcinomas of the head and neck that are considered to be at high risk of recurrence after surgical resection. The aims of this study were to evaluate the treatment outcomes of PRT for patients with laryngeal carcinoma and to identify the value of several prognostic factors. We reviewed the records of 256 patients treated for laryngeal squamous cell carcinoma between January 1993 and December 2005. Disease-free survival (DFS) and overall survival (OS) were estimated using the Kaplan-Meier method. Log-rank test was employed to identify significant prognostic factors for DFS and OS. The Cox proportional hazards model was applied to identify covariates significantly associated with the aforementioned endpoints. Our results showed the 3-, 5-, and 10-year DFS for all patients were 69.9%, 59.5%, and 34.9%, respectively. The 3-, 5-, and 10-year OS rates were 80.8%, 68.6%, and 38.8%, respectively. Significant prognostic factors for both DFS and OS on univariate analysis were grade, primary site, T stage, N stage, overall stage, lymph node metastasis, overall treatment times of radiation, the interval between surgery and radiotherapy, and radiotherapy equipment. Favorable prognostic factors for both DFS and OS on multivariate analysis were lower overall stage, no cervical lymph node metastasis, and using 60Co as radiotherapy equipment. In conclusion, our data suggest that lower overall stage, no cervical lymph node metastasis, and using 60Co as radiotherapy equipment are favorable prognostic factors for DFS and OS and that reducing the overall treatment times of radiation to 6 weeks or less and the interval between surgery and radiotherapy to less than 3 weeks are simple measures to remarkably improve treatment outcome.",
"corpus_id": 2291987,
"title": "Treatment results and prognostic factors of patients undergoing postoperative radiotherapy for laryngeal squamous cell carcinoma"
} | {
"abstract": "A personal series of 765 previously untreated patients with laryngeal carcinoma seen between 1962 and 1988 was analysed for the importance of prognostic factors. There were numerous significant correlations between tumour prognostic factors, particularly with neck node status. Palpable cervical nodes increased in frequency with increasing T status, and palpable lymph nodes were commoner in less well differentiated tumours, and in supra and sub-glottic tumours. These correlations were very highly significant. Increasing T stage was associated with increasing N stage. T stage was also associated with site, glottic tumours being far more likely to be T1 than supra or sub-glottic tumours. T stage was not related to histological grade. Histological grade correlated with site, glottic tumours being well differentiated much more often. When survival was analysed by univariate methods there were highly significant differences with increasing T stage and N stage, between the various histological grades and the various sites. However, when survival was analysed by multifactorial methods taking interactions into account, only N status was a significant prognostic factor. When patients with palpable nodes submitted to surgery were analysed, it transpired that clinical staging and node level were relatively unimportant compared with pathological findings: both the number of nodes invaded and the presence of tumour outside lymph nodes (extracapsular rupture) were highly significant.",
"corpus_id": 22126783,
"title": "Prognosis in laryngeal carcinoma: tumour factors."
} | {
"abstract": "THE EARLIEST studies of laryngeal pathology were postmortem examinations, so the first classifications, by necessity, were anatomical. In 1790 1 Morgagni referred to two of Valsalva's cases; these were actually cases of laryngopharyngeal carcinoma. Within 25 years of the development of the laryngoscopic mirror by Garcia, clinical classifications that better served the laryngologist were developed. In 1866, Krishaber designated as laryngeal cancer only the endolaryngeal lesions. Isambert in 1879 expressed disagreement and classified tumors developing in the opening and in the pharyngeal wall of the larynx as extrinsic. In 1879, Krishaber published a second work on cancer of the larynx accepting Isambert's criticism. Therefore, Krishaber is generally accepted in the English and American literature as originating the classification of laryngeal cancer into extrinsic and intrinsic. 2 It was in 1925 that the well-known Broders' 3 system of grading was introduced. His system was basically histologic and used the maturation",
"corpus_id": 31326454,
"score": 2,
"title": "LARYNGEAL CARCINOMA CLASSIFIED BY CLINICAL STAGING."
} |
{
"abstract": "Purpose To identify the underlying cause of disease in a large family with North Carolina macular dystrophy (NCMD). Methods A large four-generation family (RFS355) with an autosomal dominant form of NCMD was ascertained. Family members underwent comprehensive visual function evaluations. Blood or saliva from six affected family members and three unaffected spouses was collected and DNA tested for linkage to the MCDR1 locus on chromosome 6q12. Three affected family members and two unaffected spouses underwent whole exome sequencing (WES) and subsequently, custom capture of the linkage region followed by next-generation sequencing (NGS). Standard PCR and dideoxy sequencing were used to further characterize the mutation. Results Of the 12 eyes examined in six affected individuals, all but two had Gass grade 3 macular degeneration features. Large central excavation of the retinal and choroid layers, referred to as a macular caldera, was seen in an age-independent manner in the grade 3 eyes. The calderas are unique to affected individuals with MCDR1. Genome-wide linkage mapping and haplotype analysis of markers from the chromosome 6q region were consistent with linkage to the MCDR1 locus. Whole exome sequencing and custom-capture NGS failed to reveal any rare coding variants segregating with the phenotype. Analysis of the custom-capture NGS sequencing data for copy number variants uncovered a tandem duplication of approximately 60 kb on chromosome 6q. This region contains two genes, CCNC and PRDM13. The duplication creates a partial copy of CCNC and a complete copy of PRDM13. The duplication was found in all affected members of the family and is not present in any unaffected members. The duplication was not seen in 200 ethnically matched normal chromosomes. 
Conclusions The cause of disease in the original family with MCDR1 and several others has been recently reported to be dysregulation of the PRDM13 gene, caused by either single base substitutions in a DNase 1 hypersensitive site upstream of the CCNC and PRDM13 genes or a tandem duplication of the PRDM13 gene. The duplication found in the RFS355 family is distinct from the previously reported duplication and provides additional support that dysregulation of PRDM13, not CCNC, is the cause of NCMD mapped to the MCDR1 locus.",
"corpus_id": 2980309,
"title": "North Carolina macular dystrophy (MCDR1) caused by a novel tandem duplication of the PRDM13 gene"
} | {
"abstract": "Amacrine interneurons, which are highly diversified in morphological, neurochemical, and physiological features, play crucial roles in visual information processing in the retina. However, the specification mechanisms and functions in vision for each amacrine subtype are not well understood. We found that the Prdm13 transcriptional regulator is specifically expressed in developing and mature amacrine cells in the mouse retina. Most Prdm13-positive amacrine cells are Calbindin- and Calretinin-positive GABAergic or glycinergic neurons. Absence of Prdm13 significantly reduces GABAergic and glycinergic amacrines, resulting in a specific defect of the S2/S3 border neurite bundle in the inner plexiform layer. Forced expression of Prdm13 distinctively induces GABAergic and glycinergic amacrine cells but not cholinergic amacrine cells, whereas Ptf1a, an upstream transcriptional regulator of Prdm13, induces all of these subtypes. Moreover, Prdm13-deficient mice showed abnormally elevated spatial, temporal, and contrast sensitivities in vision. Together, these results show that Prdm13 regulates development of a subset of amacrine cells, which newly defines an amacrine subtype to negatively modulate visual sensitivities. Our current study provides new insights into mechanisms of the diversification of amacrine cells and their function in vision.",
"corpus_id": 25147839,
"title": "Prdm13 Regulates Subtype Specification of Retinal Amacrine Interneurons and Modulates Visual Sensitivity"
} | {
"abstract": "Cervical cancer is the fourth most prevalent cancer type among all malignancies, so it is of great significance to find its actual pathogenesis mechanisms. In the present study, 90 women were enrolled, and high-throughput sequencing technology was first used to analyze the vaginal microbiota of healthy women (C group), cervical intraepithelial neoplasia patients (CIN group) and cervical cancer patients (CER group). Our results indicate that compared with the C group, a higher HPV infection rate as well as an increased Neutrophil ratio and tumor marker squamous cell carcinoma antigen (SCCA) were obtained, and decreases in Lymphocyte ratio and Hemoglobin were also present. In addition, cervical cancer showed a strong association with reduced probiotic Lactobacillus and increased pathogens Prevotella spp., Sneathia spp. and Pseudomonas spp. These results prove that the immunological changes generated by cervical cancer and the vaginal microbiota can interact with each other. However, further study investigating the key bacteria for cervical cancer is still needed, which can be a clue for the diagnosis or treatment of cervical cancer.",
"corpus_id": 227313041,
"score": 1,
"title": "Revealing the Disturbed Vaginal Micobiota Caused by Cervical Cancer Using High-Throughput Sequencing Technology"
} |
{
"abstract": "Abstract Objective: This study evaluates the benefits of and indications for the orbito-cranial approach (OCA) in pediatric patients. Methods and results: The authors report their recent experience of using the OCA in 9 pediatric patients, 6 boys and 3 girls. The patients' ages ranged from 3 to 17 years (mean 9.6±5.16 years). Follow-up periods varied between 6 and 21 months (mean 12.6±5.9 months). Five patients were operated on for craniopharyngiomas, 2 for chiasmatic-hypothalamic astrocytomas, 1 for a recurrent hypothalamic gangliocytoma, and 1 for a hypothalamic hamartoma. In 7 cases a neuronavigation system (BrainLab) was utilized. The lesions were removed totally in 5 patients, near-totally in 1, subtotally in 2, and partially in 1 patient. An average increase of 30% in the area of vertical exposure significantly decreased the need for brain retraction. There was no mortality in this series. The only complications connected with the surgical approach were transient subgaleal cerebro-spinal fluid collections in 7 of 9 children and a subgaleal-peritoneal shunt placement in another patient. Conclusions: Our experience with this series of patients suggests that the OCA is as safe and beneficial in pediatric patients as it is in adults. It facilitates tumor removal by providing shorter access to and better exposure of the suprasellar area, thereby minimizing brain retraction.",
"corpus_id": 385498,
"title": "Application of the orbito-cranial approach in pediatric neurosurgery"
} | {
"abstract": "A surgical approach to the skull base is described. It allows excellent exposure of the cranial base with minimal brain retraction. Deep lesions can be handled via subfrontal, transsylvian, or subtemporal routes during the same operation. This approach is most suitable for large lesions in the suprasellar, parasellar, and retrosellar areas and for those that extend into the cavernous sinus, along the tentorial notch, or into the orbit. After the single bone flap is replaced, there is little or no functional, anatomical, or cosmetic deficit. Our experience in 16 cases and suggestion for the use of this approach are presented.",
"corpus_id": 35742246,
"title": "Supraorbital-pterional approach to skull base lesions."
} | {
"abstract": "Background: Chronic heart failure (CHF) is a chronic debilitating condition with economic consequences, mostly because of frequent hospitalisations. Physical activity and adequate self-management capacity are important risk reduction strategies in the management of CHF. The Home-Heart-Walk is a self-monitoring intervention. This model of intervention has adapted the 6-minute walk test as a home-based activity that is self-administered and can be used for monitoring physical functional capacity in people with CHF. The aim of the Home-Heart-Walk program is to promote adherence to physical activity recommendations and improve self-management in people with CHF. Methods/Design: A randomised controlled trial is being conducted in English-speaking people with CHF in four hospitals in Sydney, Australia. Individuals diagnosed with CHF, in New York Heart Association Functional Class II or III, with a previous admission to hospital for CHF are eligible to participate. Based on a previous CHF study and a loss to follow-up of 10%, 166 participants are required to be able to detect a 12-point difference in the study primary endpoint (SF-36 physical function domain). All enrolled participants receive an information session with a cardiovascular nurse. This information session covers key self-management components of CHF: daily weight; diet (salt reduction); medication adherence; and physical activity. Participants are randomised to either the intervention or control group through the study randomisation centre after baseline questionnaires and assessment are completed. For people in the intervention group, the research nurse also explains the weekly Home-Heart-Walk protocol. All participants receive monthly phone calls from a research coordinator for six months, and outcome measures are conducted at one, three and six months.
The primary outcome of the trial is the physical functioning domain of quality of life, measured by the physical functioning subscale of the Medical Outcomes Study Short Form-36. Secondary outcomes include physical functional capacity measured by the standard six-minute walk test, self-management capacity, health-related quality of life measured by the Medical Outcomes Study Short Form-36 and the Minnesota Living With Heart Failure Questionnaire, self-efficacy and self-care behaviour. Discussion: A self-monitoring intervention that can improve an individual's exercise self-efficacy and self-management capacity could have potential significance in improving the management of people with chronic heart failure in community settings. Trial Registration: Australian New Zealand Clinical Trial Registry 12609000437268",
"corpus_id": 1817644,
"score": 1,
"title": "An intervention to promote physical activity and self-management in people with stable chronic heart failure The Home-Heart-Walk study: study protocol for a randomized controlled trial"
} |
{
"abstract": "Anopheles sinensis is a major malaria vector. Insect odorant‐binding proteins (OBPs) may function in the reception of odorants in the olfactory system. The classification and characterization of the An. sinensis OBP genes have not been systematically studied. In this study, 64 putative OBP genes were identified at the whole‐genome level of An. sinensis based on the comparison between OBP conserved motifs, PBP_GOBP, and phylogenetic analysis with An. gambiae OBPs. The characterization of An. sinensis OBPs, including the motif's conservation, gene structure, genomic organization and classification, were investigated. A new gene, AsOBP73, belonging to the Plus‐C subfamily, was identified with the support of transcript and conservative motifs. These An. sinensis OBP genes were classified into three subfamilies with 37, 15 and 12 genes in the subfamily Classic, Atypical and Plus‐C, respectively. The genomic organization of An. sinensis OBPs suggests a clustered distribution across nine different scaffolds. Eight genes (OBP23–28, OBP63–64) might originate from a single gene through a series of historic duplication events at least before divergence of Anopheles, Culex and Aedes. The microsynteny analyses indicate a very high synteny between An. sinensis and An. gambiae OBPs. OBP70 and OBP71 earlier classified under Plus‐C in An. gambiae are recognized as belonging to the group Obp59a of the Classic subfamily, and OBP69 earlier classified under Plus‐C has been moved to the Atypical subfamily in this study. The study established a basic information frame for further study of the OBP genes in insects as well as in An. sinensis.",
"corpus_id": 3117397,
"title": "Genome‐wide identification and characterization of odorant‐binding protein (OBP) genes in the malaria vector Anopheles sinensis (Diptera: Culicidae)"
} | {
"abstract": "Bradysia odoriphaga is an agricultural pest insect affecting the production of Chinese chive and other liliaceous vegetables in China, and it is significantly attracted by sex pheromones and the volatiles derived from host plants. Despite verification of this chemosensory behavior, however, it is still unknown how B. odoriphaga recognizes these volatile compounds on the molecular level. Many of odorant binding proteins (OBPs) and chemosensory proteins (CSPs) play crucial roles in olfactory perception. Here, we identified 49 OBP and 5 CSP genes from the antennae and body transcriptomes of female and male adults of B. odoriphaga, respectively. Sequence alignment and phylogenetic analysis among Dipteran OBPs and CSPs were analyzed. The sex- and tissue-specific expression profiles of 54 putative chemosensory genes among different tissues were investigated by quantitative real-time PCR (qRT-PCR). qRT-PCR analysis results suggested that 22 OBP and 3 CSP genes were enriched in the antennae, indicating they might be essential for detection of general odorants and pheromones. Among these antennae-enriched genes, nine OBPs (BodoOBP2/4/6/8/12/13/20/28/33) were enriched in the male antennae and may play crucial roles in the detection of sex pheromones. Moreover, some OBP and CSP genes were enriched in non-antennae tissues, such as in the legs (BodoOBP3/9/19/21/34/35/38/39/45 and BodoCSP1), wings (BodoOBP17/30/32/37/44), abdomens and thoraxes (BodoOBP29/36), and heads (BodoOBP14/23/31 and BodoCSP2), suggesting that these genes might be involved in olfactory, gustatory, or other physiological processes. Our findings provide a starting point to facilitate functional research of these chemosensory genes in B. odoriphaga at the molecular level.",
"corpus_id": 4570674,
"title": "Sex- and Tissue-Specific Expression Profiles of Odorant Binding Protein and Chemosensory Protein Genes in Bradysia odoriphaga (Diptera: Sciaridae)"
} | {
"abstract": "OBJECTIVE\nThe aim of this study was to examine whether bioenergetic exercises (BE) significantly influence the inpatient psychotherapeutic treatment results for Turkish immigrants with chronic somatoform disorders.\n\n\nMETHOD\nIn a 6-week randomized, prospective, controlled trial, we treated a sample of 128 Turkish patients: 64 were randomly assigned to BE and 64 participated in gymnastic exercises in lieu of BE. The Symptom Checklist (SCL-90-R) and State-Trait Anger Expression Inventory (STAXI) were employed.\n\n\nRESULTS\nAccording to the intent-to-treat principle, the bioenergetic analysis group achieved significantly better treatment results on most of the SCL-90-R and STAXI scales.\n\n\nCONCLUSIONS\nBE appears to improve symptoms of somatization, social insecurity, depressiveness, anxiety, and hostility in the inpatient therapy of subjects with chronic somatoform disorders. Reduction of the anger level and reduction in directing anger inwards, with a simultaneous increase of spontaneous outward emotional expression, could be expected.",
"corpus_id": 10268390,
"score": 0,
"title": "Bioenergetic exercises in inpatient treatment of Turkish immigrants with chronic somatoform disorders: a randomized, controlled study."
} |
{
"abstract": "Simultaneous switching output buffer (SSO) noise and the impedance of the power distribution network (PDN) for a 3D system-in-package (SiP) with a 4k-IO widebus structure have been investigated. The 3D SiP consisted of 3 stacked chips and an organic interposer. These three chips were a memory chip on the top, a silicon interposer in the middle, and a logic chip on the bottom. Each chip was the same size, 9.93 mm by 9.93 mm. More than 4096 through-silicon vias (TSVs) were formed in the silicon interposer. Next, these 3 stacked chips were assembled on the organic interposer, whose size was 26 mm by 26 mm. SSO noise is one of the critical issues for the 3D SiP with a 4k-IO widebus structure, so the SSO noise was measured in the miniIO power supply system on an evaluation board. Furthermore, the PDN impedance for each chip was measured by a direct-contact method. Then, the total PDN impedance was synthesized to confirm its anti-resonance peak.",
"corpus_id": 8249846,
"title": "Measurement of SSO noise and PDN impedance of 3D SiP with 4k-IO widebus structure"
} | {
"abstract": "Leakage current has become a significant source of power consumptions of CMOS circuit, as the technology node continues to shrink. Our study shows that the equivalent on-die leakage resistance monotonically decreases as the supply voltage increases and exceeds MOSFET threshold voltage. We propose a system-level power distribution network (PDN) design optimization with voltage-dependent leakage resistance considered in a standard RLC tank model. Our results show that the voltage-dependent leakage resistance can impact on the PDN noise and affect the optimal value of the circuit parameters to minimize the noise. An equivalent constant leakage resistor is proposed to replace the voltage-dependent model for quick noise prediction.",
"corpus_id": 46101577,
"title": "Power distribution network design optimization with on-die voltage-dependent leakage path"
} | {
"abstract": "Network coding has been known as a spectrally efficient technique in wireless networks. However, when it is applied to a two-way relay channel (TWRC), it suffers from performance degradation caused by the asymmetric position of the relay. In this paper, we suggest remedying this problem by using hierarchical modulation at the source node. We investigate how hierarchical modulation can be incorporated and optimized with network coding. Our results are encouraging in that hierarchically modulated network coding (HMNC) significantly improves end-to-end bit-error probability and spectral efficiency in asymmetric relay channels, as compared with direct transmission (DT), bidirectional network coding (BNC), and coded bidirectional relay (CBR).",
"corpus_id": 30193610,
"score": 0,
"title": "Hierarchically Modulated Network Coding for Asymmetric Two-Way Relay Systems"
} |
{
"abstract": "As the Internet grows at a very rapid pace, so does the incidence of attack events and documented unlawful intrusions. The network intrusion detection systems (NIDSes) are designed to identify attacks against networks or a host that are invisible to firewalls, thus providing an additional layer of security. NIDSes detect and filter the malicious packets by inspecting packet payloads to find worm signatures. The payload inspection operation dominates the throughput of an NIDS since every byte of packet payload needs to be examined. At network speeds of 1 Gbps or above, it can be difficult to keep up with intrusion detection in software, and hardware systems or software with hardware assist are normally required. This paper presents FTSE, a ternary content addressable memory (TCAM) based pattern matching engine. In this paper we show how FTSE can be used effectively to perform string matching for thousands of strings at multiple-Gigabit speed. We also describe how FTSE can be implemented feasibly with an FPGA/ASIC, a 2.25 Mb TCAM, and a small SSRAM. Our analysis shows that this approach for string matching is very effective and the throughput of our design can achieve up to 8 Gbps for 2,085 snort rules.",
"corpus_id": 140904,
"title": "FTSE: the FNIP-like TCAM searching engine"
} | {
"abstract": "In today's Internet, worms and viruses cause service disruptions with enormous economic impact. Current attack prevention mechanisms rely on end-user cooperation to install new system patches or upgrade security software, yielding slow reaction times. However, malicious attacks spread much faster than users can respond, making effective attack prevention difficult. Network-based mechanisms, by avoiding end-user coordination, can respond rapidly to new attacks. Such mechanisms require the network to inspect the packet payload at line rates to detect and filter those packets containing worm signatures. These signature sets are large (e.g., thousands) and complex. Software-only implementations are unlikely to meet the performance goals. Therefore, making a network-based scheme practical requires efficient algorithms suitable for hardware implementations. This work develops a ternary content addressable memory (TCAM)-based multiple-pattern matching scheme. The scheme can handle complex patterns, such as arbitrarily long patterns, correlated patterns, and patterns with negation. For the ClamAV virus database with 1768 patterns whose sizes vary from 6 bytes to 2189 bytes, the proposed scheme can operate at a 2 Gbps rate with a 240 KB TCAM.",
"corpus_id": 9389762,
"title": "Gigabit rate packet pattern-matching using TCAM"
} | {
"abstract": "There are a few studies on the relation between control contestability and performance, but the results are mixed. Using data on 1378 listed companies in China during 2000-2007, this paper analyses the impacts of control contestability on performance using 5 ownership-restricting indexes. It shows that the greater the degree of ownership restriction, the better the net profit and the lower Tobin's Q. Moreover, the restricting of the second-largest shareholder against the largest shareholder has both incentive and tunneling effects, and the impact of ownership restriction on net asset per share also shows these two effects. These results may be helpful in further recognizing the influence of ownership restriction, and they also supply references for designing an optimal ownership structure.",
"corpus_id": 30269739,
"score": 0,
"title": "Notice of Retraction: Corporate Ownership, Control Contestability and Performance of Listed Company in China"
} |
{
"abstract": "Background— Alterations at the level of the coronary circulation with aging may play an important role in the evolution of age-associated changes in left ventricular (LV) fibrosis and function. However, these age-associated changes in the coronary vasculature remain poorly defined, primarily due to the lack of high-resolution imaging technologies. The current study was designed to utilize cardiac micro–computed tomography (micro-CT) technology as a novel imaging strategy to define the 3-dimensional coronary circulation in the young and aged heart and its relationship to LV fibrosis and function. Methods and Results— Young (2 months old; n=10) and aged (20 months old; n=10) Fischer rats underwent cardiac micro-CT imaging as well as echocardiography, blood pressure, and fibrosis analysis. Importantly, when indexed to LV mass, which increased with age, the total and intramyocardial vessel volumes were lower, whereas the epicardial vessel volume, with and without indexing to LV mass, was significantly higher in the aged hearts compared with the young hearts. Moreover, the aged hearts had a significantly lower percentage of intramyocardial vessel volume and a significantly higher percentage of epicardial vessel volume, when normalized to the total vessel volume, compared with the young hearts. Further, the aged hearts had significant LV fibrosis and mild LV dysfunction compared with the young hearts. Conclusions— This micro-CT imaging study reports the reduction in normalized intramyocardial vessel volume within the aged heart, in association with increased epicardial vessel volume, in the setting of increased LV fibrosis, and mild LV dysfunction.",
"corpus_id": 2355155,
"title": "Cardiac Micro–Computed Tomography Imaging of the Aging Coronary Vasculature"
} | {
"abstract": "Myocardial aging is characterized by left ventricular (LV) fibrosis leading to diastolic and systolic dysfunction. Studies have established the potent antifibrotic and antiproliferative properties of C-type natriuretic peptide (CNP); however, the relationship between circulating CNP, LV fibrosis, and associated changes in LV function with natural aging are undefined. Accordingly, we characterized the relationship of plasma CNP with LV fibrosis and function in 2-, 11-, and 20-month–old male Fischer rats. Further in vitro, we established the antiproliferative actions of CNP and the participation of the clearance receptor using adult human cardiac fibroblasts. Here we establish for the first time that a progressive decline in circulating CNP characterizes natural aging and is strongly associated with a reciprocal increase in LV fibrosis that precedes impairment of diastolic and systolic function. Additionally, we demonstrate in cultured adult human cardiac fibroblasts that the direct antiproliferative actions of high-dose CNP may involve a non-cGMP pathway via the clearance receptor. Together, these studies provide new insights into myocardial aging and the relationship to the antifibrotic and antiproliferative peptide CNP.",
"corpus_id": 7894347,
"title": "The Aging Heart, Myocardial Fibrosis, and its Relationship to Circulating C-Type Natriuretic Peptide"
} | {
"abstract": "The mode of renin release from renal juxtaglomerular cells into circulation is still unsolved in several aspects. Here we studied the intracellular organization of renin-storage vesicles and their changes during controlled stimulation of renin release. This was accomplished using isolated perfused mouse kidneys with 3-dimensional electron microscopic analyses of renin-producing cells. Renin was found to be stored in a network of single granules and cavern-like structures, and dependent on the synthesis of glycosylated prorenin. Acute stimulation of renin release led to increased exocytosis in combination with intracellular fusion of vesicles to larger caverns and their subsequent emptying. Renin release from the kidneys of SCID-beige mice, which contain few but gigantic renin-storage vesicles, was no different from that of kidneys from wild-type mice. Thus, our findings suggest that renin is released by mechanisms similar to compound exocytosis.",
"corpus_id": 5484900,
"score": 1,
"title": "Structural analysis suggests that renin is released by compound exocytosis."
} |
{
"abstract": "More than half of the students in the Latin American and the Caribbean region are below Pisa level 1 which means that the majority of the students in our region cannot identify information and carry out routine procedures according to direct instructions in explicit situations. There have been some good experiences in each country to reverse the depicted situation but it is not enough and this is not happening in all countries. I will talk about these experiences. In all of them professional mathematicians need to help teachers to have the necessary knowledge, and become more effective instructors that can raise the standard of every student.",
"corpus_id": 153511226,
"title": "Relations between the Discipline and the School Mathematics in Latin American and Caribbean Countries"
} | {
"abstract": "Widespread agreement exists that U.S. teachers need improved mathematics knowl- edge for teaching. Over the past decade, policymakers have funded a range of profes- sional development efforts designed to address this need. However, there has been little success in determining whether and when teachers develop mathematical knowl- edge from professional development, and if so, what features of professional devel- opment contribute to such teacher learning. This was due, in part, to a lack of measures of teachers' content knowledge for teaching mathematics. This article attempts to fill these gaps. In it we describe an effort to evaluate California's Mathematics Professional Development Institutes (MPDIs) using novel measures of knowledge for teaching mathematics. Our analyses showed that teachers participating in the MPDIs improved their performance on these measures during the extended summer workshop portion of their experience. This analysis also suggests that program length as measured in days in the summer workshop and workshop focus on mathematical analysis, reasoning, and communication predicted teachers' learning.",
"corpus_id": 86857291,
"title": "Learning Mathematics for Teaching: Results from California's Mathematics Professional Development Institutes"
} | {
"abstract": "Background: Out of hospital cardiac arrest is associated with a high rate of mortality, and poor neurological outcomes. Favourable neuro-protective effects are associated with induced hypothermia and international recommendations exist for therapeutic hypothermia. . Objective: This study reviews current practice for therapeutic hypothermia for out of hospital cardiac arrest patients within one ICU. It aims to identify and improve adherence to the practice guidelines. Setting: This project was conducted in an adult ICU which admits 2,000 patients yearly. Methods: A retrospective chart audit was used to document current practice for a 12 month period. Results: Of the sample of 33 patients, four patients (12%) were at the goal temperature of 32.5 33.5 o C, in the target time of two hours. Nearly half (n=17) were not cooled at all. The length of time the patient was in the ICU prior to active cooling commencing varied from <1 hour (n=15, 45%) to > 3 hours (n=5, 15%). Twenty-four percent (n=9) were cooled for the recommended length of time. There were medical orders stating a target temperature in nearly half of the cases (n=18), however, only 27% (n=9) were consistent with the guidelines. Interventions: A number of strategies have been initiated. They aim to improve communication and ready access to the required materials. Conclusions: The audit indicated that less than a third of the patients experienced therapeutic induced hypothermia and only 12% were at goal temperature within the required two hours. Strategies to improve guideline implementation have been initiated.",
"corpus_id": 13812116,
"score": 0,
"title": "THERAPEUTIC HYPOTHERMIA GUIDELINES FOR OUT-OF-HOSPITAL CARDIAC ARREST AND CLINICAL PRACTICE Keywords : hypothermia , cardiac arrest , intensive care"
} |
{
"abstract": "Objective: To determine whether a rule-based system for fetal heart rate interpretation can result in reduced metabolic acidemia without increasing obstetrical intervention. Methods: Rates of vacuum-assisted delivery and Cesarean sections, and umbilical artery pH and base excess values were determined over a 5-year period in a single hospital with 3907 deliveries in Japan. Results were compared for 2 years before and 2 years after a 6-month training period in rule-based fetal heart rate interpretation. Results: The pre- and post-training rates of unscheduled Cesarean deliveries (4.8% vs. 6.0%) and vacuum deliveries (21.2% vs. 18.1%) did not differ significantly. The rates of umbilical arterial pH <7.15 (1.51% vs. 0.18%, p < 0.05) and base excess <–12 mEq/L (1.76% vs. 0.25%, p < 0.05) were significantly lower after training. Conclusion: A standardized fetal heart rate pattern management system was associated with a 7-fold reduction of newborn metabolic acidemia with no change in operative intervention.",
"corpus_id": 472072,
"title": "Immediate newborn outcome and mode of delivery: Use of standardized fetal heart rate pattern management"
} | {
"abstract": "Objective. The improvement of the accuracy of fetal heart rate (FHR) pattern interpretation to improve perinatal outcomes remains an elusive challenge. We examined the impact of an FHR centralization system on the incidence of neonatal acidemia and cesarean births. Methods. We performed a regional, population-based, before-and-after study of 9,139 deliveries over a 3-year period. The chi-squared test was used for the statistical analysis. Results. The before-and-after study showed no difference in the rates of acidemia, cesarean births, or perinatal death in the whole population. A subgroup analysis using the 4 hospitals in which an FHR centralization system was continuously connected (compliant group) and 3 hospitals in which the FHR centralization system was connected on demand (noncompliant group) showed that the incidence acidemia was significantly decreased (from 0.47% to 0.11%) without a corresponding increase in the cesarean birth rate due to nonreassuring FHR patterns in the compliant group. Although there was no difference in the incidence of nonreassuring FHR patterns in the noncompliant group, the total cesarean birth rate was significantly higher than that in the compliant group. Conclusion. The continuous FHR centralization system, in which specialists help to interpret results and decide clinical actions, was beneficial in reducing the incidence of neonatal acidemia (pH < 7.1) without increasing the cesarean birth rate due to nonreassuring FHR patterns.",
"corpus_id": 14526975,
"title": "The Regional Centralization of Electronic Fetal Heart Rate Monitoring and Its Impact on Neonatal Acidemia and the Cesarean Birth Rate"
} | {
"abstract": "Recovery of function following ischaemic stroke is a fascinating clinical observation. It comprises several modes, e.g. spectacular recovery in a matter of hours or days and gradual recovery over months or even years. That a non-functioning neural system can regain its function, even partially so, is challenging because of the obvious therapeutic implications. Until the mid-70s, however, dogmas largely prevailed which underpinned the then nihilistic approach to stroke patients. Proving these dogmas wrong has been a major achievement of modern stroke research. Thanks particularly to physiological imaging, key observations from the basic neurosciences have translated into the clinical realm in ways immediately understandable to the clinician, allowing the emergence of pathophysiology-based management.",
"corpus_id": 516182,
"score": 0,
"title": "Stroke Research in the Modern Era: Images versus Dogmas"
} |
{
"abstract": "The acute toxic effects of aristolochic acid (AA) were tested in rats and mice of both sexes. Oral or intravenous administration in high doses was followed by death from acute renal failure within 15 days. Histologically, the predominant features were severe necrosis affecting the renal tubules, atrophy of the lymphatic organs and large areas of superficial ulceration in the forestomach, followed by hyperplasia and hyperkeratosis of the squamous epithelium. The LD50 ranged from 56 to 203 mg/kg orally or 38 to 83 mg/kg intravenously, depending on species and sex.",
"corpus_id": 958787,
"title": "Acute toxicity of aristolochic acid in rodents"
} | {
"abstract": "Aristolochic acids (AAs) are a group of naturally occurring compounds present in many plant species of the Aristolochiaceae family. Exposure to AA is a significant risk factor for severe nephropathy, and urological and hepatobiliary cancers (among others) that are often recurrent and characterized by the prominent mutational fingerprint of AA. However, herbal medicinal products that contain AA continue to be manufactured and marketed worldwide with inadequate regulation, and possible environmental exposure routes receive little attention. As the trade of food and dietary supplements becomes increasingly globalized, we propose that further inaction on curtailing AA exposure will have far-reaching negative effects on the disease trends of AA-associated cancers. Our Review aims to systematically present the historical and current evidence for the mutagenicity and carcinogenicity of AA, and the effect of removing sources of AA exposure on cancer incidence trends. We discuss the persisting challenges of assessing the scale of AA-related carcinogenicity, and the obstacles that must be overcome in curbing AA exposure and preventing associated cancers. Overall, this Review aims to strengthen the case for the implementation of prevention measures against AA’s multifaceted, detrimental and potentially fully preventable effects on human cancer development. Environmental exposure to aristolochic acid-containing plant material and its use in traditional medicines have been linked to a wide range of cancers. In this Review, Das et al. describe the evidence for aristolochic acid as a potent carcinogen and explore the impact of public health measures on preventing aristolochic acid-linked cancers and nephropathy, with a call to action for the implementation of further preventative measures.",
"corpus_id": 250697936,
"title": "Aristolochic acid-associated cancers: a public health risk in need of global action"
} | {
"abstract": "There has been limited focus on small town tourism as a research focus in South Africa until the mid-2000s. However, since then, there has been a major multidisciplinary scholarly interest into this field of tourism and urban studies. Previous literature reviews mostly covered work done by geographers. This chapter reviews the expanded literature grouped into selected overarching themes that include the following: second homes; LED and developmental issues of small town tourism; economic impacts of tourism; nature-based tourism and rural dynamics; and niche tourism.",
"corpus_id": 134297936,
"score": 0,
"title": "A Decade of Small Town Tourism Research in South Africa"
} |
{
"abstract": "We give a few sufficient conditions for the existence of periodic solutions of the equation \\(\\dot{z}=\\sum_{j=0}^n a_j(t)z^j-\\sum_{k=1}^r c_k(t)\\overline{z}^k\\) where \\(n \\gt r\\) and \\(a_j\\)'s, \\(c_k\\)'s are complex valued. We prove the existence of one up to two periodic solutions.",
"corpus_id": 4690166,
"title": "Planar nonautonomous polynomial equations IV. Nonholomorphic case"
} | {
"abstract": "Abstract We consider the planar equation z = ∑ ak, l(t) zk z l, where ak, l is a T-periodic complex-valued continuous function, equal to 0 for almost all k, l ∈ N . We present sufficient conditions imposed on ak, l which guarantee the existence of its T-periodic solutions and, in the case a0, 0 = 0, the conditions for the existence of nonzero ones. We use a method which computes the fixed point index of the Poincare-Andronov operator in isolated sets of fixed points generated by so-called periodic blocks. The method is based on the Lefschetz fixed point theorem and the topological principle Of Wazewski.",
"corpus_id": 119807959,
"title": "On Periodic Solutions of Planar Polynomial Differential Equations with Periodic Coefficients"
} | {
"abstract": "Le présent article donne la démonstration des résultats que j'ai annoncés dans quatre Notes aux Comptes-Rendus [28]1). Il est divisé en quatre chapitres. Le premier chapitre élabore une technique d'approximation des applications différentiables; les théorèmes démontrés sont en quelque sorte une formulation différentiable du théorème d'approximation simpliciale de la Topologie; grâce à eux, toute la théorie pourra être établie sans faire appel au théorème de triangulation des variétés différentiables. Le chapitre II est consacré au problème de la réalisation des classes d'homologie d'une variété par des sous-variétés; on y obtient les résultats essentiels: En homologie mod 2, toutes le8 classes dont la dimension est inférieure à la moitié de la dimension de la variété sont réalisables par des sous-variétés. En homologie entière, pour toute classe d'homologie z de la variété orientable V, il exi8te un entier non nul N tel que la classe multiple N· z 80it réalisable par une sousvariété. Le chapitre III applique les résultats précédents au problème de Steenrod: Toute classe d'homologie d'un polyèdre fini est-elle l'image de la classe fondamentale d'une variété 1 On y montre que, Ri le problème admet une réponse affirmative en homologie mod 2, il existe au contraire, pour toute dimension ~ 7, des classes d'homologie entière qui ne sont l'image d'aucune variété différentiable compacte. Le chapitre IV, enfin, est consacré à l'étude des conditions pour qu'une variété soit une variété-bord, et à la classification des variétés cobordantes. Ici encore, on obtient des résultats assez complets pour les classes «mod 2», sans condition d'orientabilité. Par contre, je n'ai pu donner que des résultats fragmentaires pour les groupes Qk qui s'introduisent dans la classification des variétés orientées, à cause de difficultés algébriques liées en particulier au comportement des puissances de Steenrod dans la suite spec-",
"corpus_id": 120243638,
"score": 2,
"title": "Quelques propriétés globales des variétés différentiables"
} |
{
"abstract": "The seminal 2003 paper by Cosley, Lab, Albert, Konstan, and Reidl, demonstrated the susceptibility of recommender systems to rating biases. To facilitate browsing and selection, almost all recommender systems display average ratings before accepting ratings from users which has been shown to bias ratings. This effect is called Social Inuence Bias (SIB); the tendency to conform to the perceived \\norm\" in a community. We propose a methodology to 1) learn, 2) analyze, and 3) mitigate the effect of SIB in recommender systems. In the Learning phase, we build a baseline dataset by allowing users to rate twice: before and after seeing the average rating. In the Analysis phase, we apply a new non-parametric significance test based on the Wilcoxon statistic to test whether the data is consistent with SIB. If significant, we propose a Mitigation phase using polynomial regression and the Bayesian Information Criterion (BIC) to predict unbiased ratings. We evaluate our approach on a dataset of 9390 ratings from the California Report Card (CRC), a rating-based system designed to encourage political engagement. We found statistically significant evidence of SIB. Mitigating models were able to predict changed ratings with a normalized RMSE of 12.8% and reduce bias by 76.3%. The CRC, our data, and experimental code are available at: http://californiareportcard.org/data/",
"corpus_id": 5925866,
"title": "A methodology for learning, analyzing, and mitigating social influence bias in recommender systems"
} | {
"abstract": "This paper provides a comprehensive review of explanations in recommender systems. We highlight seven possible advantages of an explanation facility, and describe how existing measures can be used to evaluate the quality of explanations. Since explanations are not independent of the recommendation process, we consider how the ways recommendations are presented may affect explanations. Next, we look at different ways of interacting with explanations. The paper is illustrated with examples of explanations throughout, where possible from existing applications.",
"corpus_id": 1674804,
"title": "A Survey of Explanations in Recommender Systems"
} | {
"abstract": "In this work, we present a lens-assisted quasi-optical THz transmitter using log-periodic toothed antenna (LPTA) integrated photomixer for beam forming and beam switching. The directivity of the proposed quasi-optical THz transmitter featuring one LPTA and highly-resistive silicon quasi-optics exceeds 26 dBi within the frequency range of 300–400 GHz. A steerable beam direction in the range of ±56° is achieved by a linear shift of the LPTA position on the extended hemispherical lens assembly. Further, a beam switching approach is realized with a 1×2 LPTA array and shows tilted main beam angles of ±33°. Finally, we study the influence of mutual coupling on the input antenna impedance of the linear antenna array.",
"corpus_id": 12345337,
"score": -1,
"title": "THz beam forming and beam switching using lens-assisted quasi-optical THz transmitter"
} |
{
"abstract": "The colourless crystals of (PPh4 )3 [PW12 O40 ]⋅3 C3 H7 NO (1) are converted to the dark blue crystals of {(PPh4 )3 [PW12 O40 ]⋅3C3 H7 NO}0.85 {(PPh4 )3 (C3 H7 NO)+. [PWV WVI11 O40 ]- ⋅2C3 H7 NO)}0.15 (2) upon irradiation with visible light in an interesting single crystal to single crystal transformation. This photochromic conversion is accompanied by the reduction of concerned Keggin anion from {PWVI12 } to {PWV WVI11 }. This redox conversion is characterized by various spectroscopic techniques including single crystal X-ray diffraction studies. The photochromic properties of compound 1 can be controlled reversibly through the dimethylformamide (DMF) molecule as a function of temperature and proton exposure in a gas-solid reaction. The present work can be described as a new concept of programmable photochromism with the formation of photochromic pockets in crystalline 1 host (solid state), wherein a solvent can be plugged at a time to show light induced coloration.",
"corpus_id": 13991441,
"title": "A Reversible Redox Reaction in a Keggin Polyoxometalate Crystal Driven by Visible Light: A Programmable Solid-State Photochromic Switch."
} | {
"abstract": "Two recent systematic determinations of bond-valence parameters addressed the problem of the correlation between R 0 and b in different ways raising the question of which is to be preferred.",
"corpus_id": 26640206,
"title": "What is the best way to determine bond-valence parameters?"
} | {
"abstract": "Phosphate-based inorganic-organic hybrid nanoparticles (IOH-NPs) with the general composition [M](2+)[Rfunction(O)PO3](2-) (M = ZrO, Mg2O; R = functional organic group) show multipurpose and multifunctional properties. If [Rfunction(O)PO3](2-) is a fluorescent dye anion ([RdyeOPO3](2-)), the IOH-NPs show blue, green, red, and near-infrared fluorescence. This is shown for [ZrO](2+)[PUP](2-), [ZrO](2+)[MFP](2-), [ZrO](2+)[RRP](2-), and [ZrO](2+)[DUT](2-) (PUP = phenylumbelliferon phosphate, MFP = methylfluorescein phosphate, RRP = resorufin phosphate, DUT = Dyomics-647 uridine triphosphate). With pharmaceutical agents as functional anions ([RdrugOPO3](2-)), drug transport and release of anti-inflammatory ([ZrO](2+)[BMP](2-)) and antitumor agents ([ZrO](2+)[FdUMP](2-)) with an up to 80% load of active drug is possible (BMP = betamethason phosphate, FdUMP = 5'-fluoro-2'-deoxyuridine 5'-monophosphate). A combination of fluorescent dye and drug anions is possible as well and shown for [ZrO](2+)[BMP](2-)0.996[DUT](2-)0.004. Merging of functional anions, in general, results in [ZrO](2+)([RdrugOPO3]1-x[RdyeOPO3]x)(2-) nanoparticles and is highly relevant for theranostics. Amine-based functional anions in [MgO](2+)[RaminePO3](2-) IOH-NPs, finally, show CO2 sorption (up to 180 mg g(-1)) and can be used for CO2/N2 separation (selectivity up to α = 23). This includes aminomethyl phosphonate [AMP](2-), 1-aminoethyl phosphonate [1AEP](2-), 2-aminoethyl phosphonate [2AEP](2-), aminopropyl phosphonate [APP](2-), and aminobutyl phosphonate [ABP](2-). All [M](2+)[Rfunction(O)PO3](2-) IOH-NPs are prepared via noncomplex synthesis in water, which facilitates practical handling and which is optimal for biomedical application. In sum, all IOH-NPs have very similar chemical compositions but can address a variety of different functions, including fluorescence, drug delivery, and CO2 sorption.",
"corpus_id": 33477666,
"score": 2,
"title": "Multifunctional phosphate-based inorganic-organic hybrid nanoparticles."
} |
{
"abstract": "A monolithic gas sensor system is presented, which includes on a single chip an array of three metal-oxide covered microhotplates, each of which is individually addressable and can be operated at a defined temperature. The monolithically co-integrated circuitry consists of three independent temperature control loops for the microhotplates and an on-chip temperature sensor. An innovative design of a circular-shape microhotplate has been realized. Temperature homogeneity and thermal efficiency of the membrane were optimized for use with thick films of nanocrystalline metal oxides as sensitive layers. A SnO/sub 2/-based sensing material with noble metal doping was operated at different temperatures as well as in temperature pulsing mode. Chemical sensor measurements have been conducted with, e.g., CO to show the system performance.",
"corpus_id": 8352878,
"title": "Smart single-chip CMOS microhotplate array for metal-oxide-based gas sensors"
} | {
"abstract": "—modeling and simulation of a micromachined microhotplate (MHP) designed to achieve low power dissipation and uniform temperature distribution on the sensing area at operating temperatures of up to 700°C is presented in this paper. At the operating temperature of 700°C, it is demonstrated that as the silicon nitride (Si 3 N 4) and silicon carbide (SiC) membrane and heat distributor layer, respectively, is increased from 0.3 µm to 3 µm, the power dissipation of the MHP increases while the mechanical displacement of the MHP membrane decreases. On the other hand, the temperature gradient on the MHP decreases as the thickness of the SiC temperature distributor layer is increased and is a minimum with a value of 0.005°C/μm for SiC thickness of 2 µm and above. However for an increase in the tin dioxide (SnO 2) thickness from 0.3 µm to 3 µm, the power dissipation on the MHP is not affected while the mechanical displacement decreases. A comparison between simulation and mathematically modeled results for power dissipation and current density of the MHP showed close agreement. An optimized simulated device exhibited low power dissipation of 9.25 mW and minimum mechanical deflection of 1.2 µm at the elevated temperature of 700°C.",
"corpus_id": 16032914,
"title": "Design , Simulation and Modeling of a Micromachined High Temperature Microhotplate for Application in Trace Gas Detection"
} | {
"abstract": "This paper presents a feedback steering control strategy for a vehicle in an automatic driving context. Two main contributions in terms of control are highlighted. On the one hand, the automatic reference trajectories generation from geometric path constraints (obstacles). Thanks to the flatness property of the considered model, the longitudinal velocity will be controlled around a quasi-constant value while lateral and yaw dynamics targets will allow to avoid obstacles. On the other hand, a sensitivity-based methodology will be presented to choose the best possible gains parameterization in a state Riccati dependent equation (SDRE) feedback controller. Both direct and adjoint sensitivity methods are used, together with a dynamic inversion of the system, in order to optimize the performances of the controller. Obstacle avoiding simulation results will be validated and compared with other nonlinear optimal feedback controllers, from a realistic industrial simulator environment for vehicle dynamics",
"corpus_id": 22575547,
"score": 1,
"title": "Flatness-Based Vehicle Steering Control Strategy With SDRE Feedback Gains Tuned Via a Sensitivity Approach"
} |
{
"abstract": "The residue behavior and dietary intake risk of three fungicides (pyrimethanil, iprodione, kresoxim-methyl) in tomatoes (Lycopersicon esculentum Mill.) grown in greenhouse were investigated. A simple, rapid analytical method for the quantification of fungicide residues in tomatoes was developed using gas chromatography coupled with mass spectrum detection (GC-MSD). The fortified recoveries were ranged from 87% to 103% with relative standard deviations (RSDs) varied from 4.7% to 12.1%. The results indicated that the dissipation rate of the studied fungicides in tomatoes followed first order kinetics with half lives in the range of 8.6-11.5 days. The final residues of all the fungicides in tomatoes were varied from 0.241 to 0.944 mg/kg. The results of dietary intake assessment indicated that the dietary intake of the three fungicides from tomatoes consumption for Chinese consumers were acceptable. This study would provide more understanding of residue behavior and dietary intake risk by these fungicides used under greenhouse conditions.",
"corpus_id": 8434960,
"title": "Residue behavior and dietary intake risk assessment of three fungicides in tomatoes (Lycopersicon esculentum Mill.) under greenhouse conditions."
} | {
"abstract": "ABSTRACT The dissipation dynamics and residue amounts of lambda‐cyhalothrin, thiamethoxam and clothianidin in apple were investigated by using rapid resolution liquid chromatography triple quadrupole mass spectrometer (RRLC‐MS/MS) and gas chromatography mass spectrometry (GC‐MS). The developed method performed satisfactory recoveries of 88%–105% and the limit of quantitation (LOQ) was 0.01 mg kg−1. The suspension concentrate (SC) formulation of lambda‐cyhalothrin and thiamethoxam was applied on apple field in accordance with good agricultural practice (GAP). The half‐lives of two pesticides ranged from 7.01 d to 17.3 d and the terminal residues were <0.01–0.21 mg kg−1. Based on the Chinese dietary pattern, the dietary risk of lambda‐cyhalothrin and total thiamethoxam were predicted by comparing intake amounts with the toxicological data, namely acceptable daily intake (ADI) and acute reference dose (ARfD). The chronic and acute risk quotients were 0.1080–0.4463 and 0.0008–0.2005, respectively, which showed negligible risk for general consumers. The pre‐harvest interval (PHI) of 21 d was suggested for the formulation in compliance with maximum residue limit (MRL) and dietary risk assessment, meanwhile, the MRL of 0.1 mg kg−1 was recommended for thiamethoxam in apple. These results were vital for guiding reasonable usage of two insecticides and for approval of formulation use. HighlightsThe GC‐MS was employed to determine two isomers of lambda‐cyhalothrin.The RRLC‐MS/MS was used to simultaneous detection of thiamethoxam and clothianidin.GAP field trails were conducted to investigated dissipation behavior and residue of three compounds.Chronic and acute dietary risk of lambda‐cyhalothrin and total thiamethoxam were assessed based on Chinese dietary pattern.The PHI of 21 d for formulation and MRL of 0.1 mg kg−1 for thiamethoxam in apple were recommended.",
"corpus_id": 53568866,
"title": "Dissipation behavior and dietary risk assessment of lambda‐cyhalothrin, thiamethoxam and its metabolite clothianidin in apple after open field application"
} | {
"abstract": "Field trials were conducted at PAU, Regional Station, Abohar to estimate harvest residue of Matco 8–64(metalaxyl 8% + mancozeb 64%) in Kinnow mandarin fruits and soil during February, 2006 to January, 2007. Three doses of the fungicide, 2.5g/l, 5.0g/l and 10g/l and untreated control were evaluated by applying the treatment in soil as soil drench and spray using ten litre of water per tree. The treatments were given in February and again in August, 2006. The fruit and soil samples were collected in January, 2007 for residue analysis. The residue of metalaxyl was detected by GLC and mancozeb by spectrophotometrically from B.C.K.V., Mohanpur, West Bengal. The results revealed that metalaxyl and mancozeb residue of Matco 8–64 were at below the detectable limits at harvest in Kinnow fruits and soil. Therefore, the application of Matco 8–64 in soil and spray at the recommended rates(25g/tree in 10 litres of water as soil drench and spray2.5g/l of water) is advocated for the management of citrus foot rot/gummosis caused by Phytophthora parasitica.",
"corpus_id": 82356282,
"score": 2,
"title": "Determination of harvest residues of Matco 8–64 (metalaxyl 8%+mancozeb 64%) in kinnow mandarin fruits and soil"
} |
{
"abstract": "The standard interpretation of Harrod' s economics dynamics heavily dependent on the so-called \"Harrod-Domar\" model of economics growth is misleading for an accurate assessment of Harrod' s model. In arder to do that this articles explores the connection between Harrod 's economics dynamics and Keynes' work by showing how sorne key analytical tools and the \"vision\", in a schumpeterian sense, for the formulation of an economics dynamics in Harrod are based on, or developed from, Keynes's General Theory and his criticism to the \"classical economics\". Keynes' influence is observed in the general structure of the model and on the mechanisms under which is analyzed the long-term economic growth. It is intended to show those key elements that make evident that and the degree of their importance in the mathematical structure of the model and in the extent of Harrod's conclusions. With that aims is examined comparatively sorne chief points as the rol of the savinginvestment process, the rol of supply factors, the implicit notion of time, the significance of fu// employment equilibrium and the notion of equilibrium employed in the analysis of growth.",
"corpus_id": 152767067,
"title": "La dinámica económica de Harrod y el paradigma Keynesiano"
} | {
"abstract": "Introduction to the Routledge Classics edition by Tadeusz Kowalik Translator's Note A Note on Rosa Luxemburg Introduction Part 1: The Problem of Reproduction Part 2: Historical Exposition of the Problem Part 3: The Historical Conditions of Accumulation Index",
"corpus_id": 159059633,
"title": "Accumulation of Capital"
} | {
"abstract": "IMPORTANCE Attention-deficit/hyperactivity disorder (ADHD) is conceptualized as a neurodevelopmental disorder that is strongly heritable. However, to our knowledge, no study to date has examined the genetic and environmental influences explaining interindividual differences in the developmental course of ADHD symptoms from childhood to adolescence (ie, systematic decreases or increases with age). The reason ADHD symptoms persist in some children but decline in others is an important concern, with implications for prognosis and interventions.",
"corpus_id": 42529214,
"score": 0,
"title": "Genetic and Environmental Influences on the Developmental Course of Attention-Deficit / Hyperactivity Disorder Symptoms FromChildhood to Adolescence"
} |
{
"abstract": "The past 40 years have seen growing inequality of access and attainment of college degrees by socioeconomic status and race/ethnicity. As higher education continues to lose status in the policy priorities of most states, a larger share of college costs are being shifted to students and their families. Lowand middle-income students are being priced out of higher education due to a combination of increasing tuition prices, declining purchasing power of student aid programs, and a growing focus by colleges and universities on attracting wealthy, high-achieving students via the strategic use of institutional aid. For example, in California between 1990-91 and 2013-14, tuition and fees have increased more than threefold at the CSUs and more than fourfold at the UCs, whereas state spending per FTE in both sectors is close to its lowest point since 1980-81.",
"corpus_id": 155701248,
"title": "The Politics of Higher Education Finance: The Role of States and Institutions"
} | {
"abstract": "received his PhD in Economics from Florida State University. He may be reached by email at agillen@centerforcollegeaffordability.org. The Center for College Affordability and Productivity (CCAP) is a non-partisan, nonprofit research center based in Washington, DC that is dedicated to researching public policy and economic issues relating to postsecondary education. CCAP aims to facilitate a broader dialogue that challenges conventional thinking about costs, efficiency and innovation in postsecondary education in the United States.",
"corpus_id": 154948387,
"title": "Introducing Bennett Hypothesis 2.0"
} | {
"abstract": "The paper investigates the intervening influence of interactional justice between procedural justice and job performance (task, contextual and adaptive performance) of the faculty members of Karachi (Pakistan) and Dhaka (Bangladesh) based government colleges by using Structural Equation Modelling (SEM). Data, for this study, has been collected through pre-designed close-ended questionnaire. The intervening variable fully mediated the relationship between procedural justice and job performance. The result of this study indicates that the performance of government college faculty members can be improved by ensuring fair procedures and dignified treatment of faculty members in the working environment. It can be concluded that teachers can accommodate harsh procedures, subject to courteously and fairly communicated. Significance of this study is that it has investigated the least researched areas in Pakistan and Bangladesh. Its findings can be helpful to the government and college administration while making and implementing policies for college education development in both countries.",
"corpus_id": 62835252,
"score": 1,
"title": "Exploring Intervening Influence of Interactional Justice between Procedural Justice and Job Performance: Evidence from South Asian Countries"
} |
{
"abstract": "Evoked and induced event-related neural oscillations have recently been proposed as a key mechanism supporting higher-order cognition. Cognitive decay and abnormal electromagnetic sensory gating reliably distinguish schizophrenia (SZ) patients and healthy individuals, demonstrated in chronic (CHR) and first-admission (FA) patients. Not yet determined is whether altered event-related modulation of oscillatory activity is manifested at early stages of SZ, thus reflects and perhaps embodies the development of psychopathology, and provides a mechanism for the gating deficit. The present study compared behavioral and functional brain measures in CHR and FA samples. Cognitive test performance (MATRICS Consortium Cognitive Battery, MCCB), neuromagnetic event-related fields (M50 gating ratio), and oscillatory dynamics (evoked and induced modulation of 8-12Hz alpha) during a paired-click task were assessed in 35 CHR and 31 FA patients meeting the criteria for ICD-10 diagnoses of schizophrenia as well as 28 healthy comparison subjects (HC). Both patient groups displayed poorer cognitive performance, higher M50 ratio (poorer sensory gating), and less induced modulation of alpha activity than did HC. Induced alpha power decrease in bilateral posterior regions varied with M50 ratio in HC but not SZ, whereas orbitofrontal alpha power decrease was related to M50 ratio in SZ but not HC. Results suggest disruption of oscillatory dynamics at early stages of illness, which may contribute to deficient information sampling, memory updating, and higher cognitive functioning.",
"corpus_id": 1374903,
"title": "Functional cognitive and cortical abnormalities in chronic and first-admission schizophrenia"
} | {
"abstract": "Deficit in P50 sensory gating has repeatedly been shown in schizophrenia. In order to determine the contribution of trait and/or state features to P50 gating deficit in schizophrenia we evaluated the P50 gating in patients with first-episode schizophrenia (FES) at acute and post-acute phases. Subject groups comprised 16 patients with FES and 24 healthy controls. Patients were tested at the acute phase of the illness and retested at the post-acute phase when their positive symptoms improved. During the testing at the acute phase five patients were neuroleptic-naive and the others were taking atypical antipsychotics which were started recently in order to control the acute excitation. Patients were receiving risperidone, olanzapine or quetiapine treatment at the post-acute phase. P50 gating was impaired in patients at the acute phase compared to controls. However, at the post-acute phase P50 gating was increased compared to the acute phase, reaching to the gating values of controls. P50 gating improvement might be emerged from atypical antipsychotic medication, although this can only be definitively determined by randomized studies including different antipsychotics.",
"corpus_id": 2469154,
"title": "P50 gating at acute and post-acute phases of first-episode schizophrenia"
} | {
"abstract": "Chronic migraine is a costly and highly disabling condition that impacts millions of people in the United States. While chronic migraine is hypothesized to result from more infrequent forms of migraine, the precise mechanism by which this develops is still being researched. This study sought to better characterize the treatment patterns, disorder characteristics, and medical and disability profile of the chronic migraine population using the largest dataset of chronic migraineurs ever collected. The survey was started by 8,359 individuals and 4,787 met the inclusion criteria for diagnosed chronic migraine The number of stressful life events participants experienced due to their migraines related to number of therapies tried (p<0.00, eta2=0.215), depression (p<0.00, eta2=0.178), number of comorbidities (p<0.00, eta2=0.172), anxiety (p<0.00, eta2=0.162), number of physician visits in the past year (p<0.00, eta2=0.103), and chronic pain levels (p<0.00, eta2=0.077).. The results of this survey suggest that chronic migraineurs may misattribute aspects of psychiatric or medical comorbidities to their chronic migraines. Further, the sample underutilized mental health services and were unsatisfied with their migraine treatments. Providers to chronic migraineurs should ensure that patients are receiving appropriate mental health care in order to alleviate psychological distress as well as to potentially lessen negative life events previously associated with migraine symptoms.",
"corpus_id": 8428779,
"score": 1,
"title": "The Chronic Migraineur and Health Services: National Survey Results"
} |
{
"abstract": null,
"corpus_id": 18991009,
"title": "Mapping Stacked Decision Forests to Deep and Sparse Convolutional Neural Networks for Semantic Segmentation"
} | {
"abstract": "This report describes my research activities in the Hasso Plattner Institute and summarizes my Ph.D. plan and several novels, end-to-end trainable approaches for analyzing medical images using deep learning algorithm. In this report, as an example, we explore different novel methods based on deep learning for brain abnormality detection, recognition, and segmentation. This report prepared for the doctoral consortium in the AIME-2017 conference.",
"corpus_id": 7961631,
"title": "Deep Learning for Medical Image Analysis"
} | {
"abstract": "Grid search and manual search are the most widely used strategies for hyper-parameter optimization. This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid. Empirical evidence comes from a comparison with a large previous study that used grid search and manual search to configure neural networks and deep belief networks. Compared with neural networks configured by a pure grid search, we find that random search over the same domain is able to find models that are as good or better within a small fraction of the computation time. Granting random search the same computational budget, random search finds better models by effectively searching a larger, less promising configuration space. Compared with deep belief networks configured by a thoughtful combination of manual search and grid search, purely random search over the same 32-dimensional configuration space found statistically equal performance on four of seven data sets, and superior performance on one of seven. A Gaussian process analysis of the function from hyper-parameters to validation set performance reveals that for most data sets only a few of the hyper-parameters really matter, but that different hyper-parameters are important on different data sets. This phenomenon makes grid search a poor choice for configuring algorithms for new data sets. Our analysis casts some light on why recent \"High Throughput\" methods achieve surprising success--they appear to search through a large number of hyper-parameters because most hyper-parameters do not matter much. We anticipate that growing interest in large hierarchical models will place an increasing burden on techniques for hyper-parameter optimization; this work shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper-parameter optimization algorithms.",
"corpus_id": 15700257,
"score": -1,
"title": "Random search for hyper-parameter optimization"
} |
{
"abstract": "The effects of a sonicated Porphyromonas gingivalis ATCC 33277 protein extract on the mitogenic and chemotactic responses of human periodontal ligament (PDL) cells to the recombinant human platelet-derived growth factor-BB homodimer (PDGF-BB) were examined in vitro. Proliferation of PDL cells was inhibited by P. gingivalis extract at concentrations higher than 10 micrograms/mL protein. At 100 micrograms/mL of P. gingivalis extract, cells did not proliferate. DNA synthesis in PDL cells, as revealed by [3H]-thymidine incorporation, was also inhibited by approximately 50% in the presence of 50 micrograms/mL P. gingivalis extract for 24 hours. In contrast, PDGF-BB at 1 ng/mL enhanced DNA synthesis in PDL cells, followed by maximum enhancement at concentrations higher than 10 ng/mL PDGF-BB. However, this mitogenic response to PDGF-BB was markedly reduced in the presence of 20 micrograms/mL of P. gingivalis extract and did not reach the maximum level even if PDGF-BB concentrations were increased to 250 ng/mL. PDL cells exhibited a chemotactic response to PDGF-BB at 1 ng/mL, which was also inhibited by pretreatment of the cells with P. gingivalis extract at 10 to 50 micrograms/mL. Scatchard analysis of a [125I]-PDGF binding assay demonstrated that PDL cells have both high and low PDGF binding affinity sites. Treatment of the cells with P. gingivalis extract decreased the number of PDGF-binding sites to approximately 35% of the control level, while it caused only a slight change in the affinities of both types of binding site. These results indicated that the P. gingivalis extract reduced mitogenic and chemotactic responses of human PDL cells, possibly through mechanisms involving a decrease in PDGF-binding capacity of these cells. Due to this inhibitory effect of P. gingivalis, the normal levels of PDGF in periodontal lesions may not be sufficient to promote periodontal regeneration through activation of PDL cell proliferation and migration. 
Therefore, the therapeutic use of PDGF-BB, as a supplement to pre-existing PDGF and as an adjunct, while also eliminating P. gingivalis from periodontal lesions, would help periodontal tissue regeneration.",
"corpus_id": 699444,
"title": "Porphyromonas gingivalis reduces mitogenic and chemotactic responses of human periodontal ligament cells to platelet-derived growth factor in vitro."
} | {
"abstract": "Strains of B. gingivalis were shown to produce collagenolytic activity capable of dissolving reconstituted collagen (type I) fibrils and of cleaving the helical domain of types I. II and III collagens at 22° C. The catalytic activity was dependent on free thiol groups and on metal ions, as indicated by inhibition by thiol blocking reagents and metal chelators. The activity was associated with the bacterial cells and was not secreted to the medium. Under optimal conditions. 100 Units of collagenase per gram cell pellet (wet weight) were released by detergents such as Triton X-100 and SDS. Zymography of detergent extracts revealed that collagen-degrading strains, but not an inactive control strain (W), contained a discrete Mr 90 000 gelatin cleaving protease which may be identical to the collagenolytic enzyme. The initial attack on the helical domain of type I collagen occurred near the COOH-terminus. The a1 and a2 chains were cleaved at the same site, generating a major helical fragment consisting of three shortened (Mr 82 000) a-chains. Subsequent cleavages of this shortened collagen molecule resulted in generation of multiple fragments from the component a-chains in the Mr 60 000 to 6000 range. This cleavage pattern was clearly distinct from the characteristic 3/4–1/4 pattern produced by vertebrate collagenases. Type II and III collagens were also cleaved first near the COOH-terminus, generating fragments of similar size to those produced from type I collagen. In view of its ability to dissolve reconstituted collagen fibrils at 35°C and its ability to attack the helical domain of interstitial collagens in solution at 22°C, we suggest that this enzyme tentatively be classified as a true collagenase.",
"corpus_id": 13490979,
"title": "Characterization of collagenolytic activity from strains of Bacteroides gingivalis."
} | {
"abstract": "Whether left ventricular noncompaction (LVNC) is a distinct cardiomyopathy or a morphologic trait shared by different cardiomyopathies remains controversial. Current guidelines from professional organizations recommend different strategies for diagnosing and treating patients with LVNC. This state-of-the-art review discusses new insights into the basic mechanisms leading to LVNC, its clinical manifestations, treatment modalities, anatomy and pathology, embryology, genetics, epidemiology, and imaging. Three markers currently define LVNC: prominent left ventricular trabeculae, deep intertrabecular recesses, and a thin compacted layer. Although new genetic data from mice and humans supports LVNC as a distinct cardiomyopathy, evidence for LVNC as a shared morphological trait is not ruled out. Criteria supporting LVNC as a shared morphological trait may depend on consensus guidelines from the multiple professional organizations. Enhanced imaging and increased use of genetics are both predicted to significantly impact our overall understanding of the basic mechanisms causing LVNC and its optimal management.",
"corpus_id": 25814104,
"score": 1,
"title": "Left ventricular noncompaction: a distinct cardiomyopathy or a trait shared by different cardiac diseases?"
} |
{
"abstract": "The neuropsychologic evaluation of patients under consideration for movement disorder surgery is recognized as being an essential component of the preoperative process. Patients with early‐stage concomitant dementia must be identified and the relative risk of postoperative cognitive decline evaluated. Knowledge of the patterns of an individual's strengths and weaknesses might also be a factor in deciding on a neurosurgical procedure. Although the advent of pallidal deep brain stimulation (DBS) has possibly resulted in reduced risk of induced cognitive impairment, even this procedure has been associated with negative sequelae. DBS within the subthalamic nucleus is becoming the method of choice and this may lead to cognitive and behavioral compromise, especially in the elderly patient. The team considering the establishment of neurosurgical treatment is often at a loss to decide how much neuropsychologic testing is required to determine relative risks of cognitive or behavioral morbidity as a consequence of the procedure. A brief summary of expected outcome and of pertinent family process and psychodynamic issues are addressed. This article is intended to serve as a guide to permit clinicians to choose the appropriate length and depth of neuropsychologic assessment, but also to highlight the confounding factors often present in these patients.",
"corpus_id": 1107345,
"title": "Neuropsychologic assessment of patients for movement disorder surgery"
} | {
"abstract": "Background: The clinical condition of advanced Parkinson’s disease (PD) patients is often complicated by motor fluctuations and dyskinesias which are difficult to control with available oral medications. Objective: To compare clinical and neuropsychological 12 month outcome following subcutaneous apomorphine infusion (APO) and chronic deep brain stimulation of the subthalamic nucleus (STN-DBS) in advanced PD patients. Methods: Patients with advanced PD and medically untreatable fluctuations underwent either APO (13 patients) or STN-DBS (12 patients). All patients were clinically (UPDRS-III, AIMS, 12 h on-off daily) and neuropsychologically (MMSE, Hamilton-17 depression, NPI) evaluated at baseline and at 12 months. APO was discontinued at night. Results: At 12 months APO treatment (74.78±24.42 mg/day) resulted in significant reduction in off time (−51%) and no change in AIMS. Levodopa equivalent medication doses were reduced from 665.98±215 mg/day at baseline to 470±229 mg/day. MMSE, NPI, and Hamilton depression scores were unchanged. At 12 months STN-DBS resulted in significant clinical improvement in terms of reduction in daily off time (−76%) and AIMS (−81%) as well as levodopa equivalent medication doses (980±835 to 374±284 mg/day). Four out of 12 patients had stopped oral medications. MMSE was unchanged (from 28.6±0.3 to 28.4±0.6). Hamilton depression was also unchanged, but NPI showed significant worsening (from 6.58±9.8 to 18.16±10.2; p<0.02). Category fluency also declined. Conclusions: Both APO and STN-DBS resulted in significant clinical improvement in complicated PD. STN-DBS resulted in greater reduction in dopaminergic medications and provided 24 h motor benefit. However, STN-DBS, unlike APO, appears to be associated with significant worsening on NPI resulting from long term behavioral problems in some patients.",
"corpus_id": 5607427,
"title": "Clinical and neuropsychological follow up at 12 months in patients with complicated Parkinson’s disease treated with subcutaneous apomorphine infusion or deep brain stimulation of the subthalamic nucleus"
} | {
"abstract": "It has been hypothesized that insulin‐like growth factors (IGFs) and components of the growth‐hormone (GH)‐IGF axis may underlie reported associations of poor fetal and childhood growth with schizophrenia. We have investigated the association of schizophrenia with 16 SNPs spanning the IGF1 gene with an inter‐marker distance of approximately 2–3 kb. We also examined associations with four common functional polymorphisms of genes involved in aspects of the GH‐IGF system—the IGF1 receptor (IGF1R), insulin receptor substrate (IRS1), growth hormone (GH1), and IGF binding protein‐3 (IGFBP3). The study was based on an analysis of pooled DNA samples from 648 UK and Irish cases of schizophrenia and 712 blood donor controls and of 297 Bulgarian parent offspring trios. In replicated pool analyses, none of the 16 SNPs in IGF1 nor the 4 key SNPs in the other growth pathway genes were associated with schizophrenia. SNP coverage of IGF1 was extensive, so our findings do not support a major role for IGF‐I in the aetiology of schizophrenia. © 2006 Wiley‐Liss, Inc.",
"corpus_id": 10926232,
"score": 1,
"title": "IGF1, growth pathway polymorphisms and schizophrenia: A pooling study"
} |
{
"abstract": "Abstract Iodine has always been connected to thyroid gland, and the fact that thyroid tissue traps, organificates and stores iodine more than other tissues is well known, hence the use of radioiodine as a diagnostic and therapeutic tool for thyroid disorders. However, false-positive cases do occur. We present a case of a 34-year-old patient who underwent total thyroidectomy for papillary carcinoma. Results of follow up TSH, thyroglobulin and thyroglobulin antibody tests after surgery lead to two rounds of radioactive iodine. After that, a radioiodine whole-body scan showed high uptake in the pelvis above bladder. Computed tomography scan showed a pelvic heterogeneous mass with some calcifications. Surgical removal and subsequent pathology confirmed the absence of metastasis. The final pathological diagnosis was serous cystadenoma, endometriosis cyst and leiomyoma. As the real cause behind false-positive iodine uptake by these tissues has yet to be determined, careful assessment should be considered in any suspicious case.",
"corpus_id": 3758598,
"title": "False-positive radioiodine accumulation in a huge pelvic mass after thyroidectomy for papillary carcinoma, a case report from Syria"
} | {
"abstract": "Thyroglossal duct cyst (TDC) is a cystic expansion of a remnant of the thyroglossal duct tract. Carcinomas in the TDC are extremely rare and are usually an incidental finding after the Sistrunk procedure. In this report, an unusual case of a 36-year-old woman with concurrent papillary thyroid carcinoma arising in the TDC and on the thyroid gland is presented, followed by a discussion of the controversies surrounding the possible origins of a papillary carcinoma in the TDC, as well as the current management options.",
"corpus_id": 10351082,
"title": "Simultaneous Papillary Carcinoma in Thyroglossal Duct Cyst and Thyroid"
} | {
"abstract": "Homogeneous Spiking Neural P Systems (HSN P systems, for short) are a class of neural-like computing models in membrane computing, which are inspired by neurons that they are “designed” by nature to have the same “set of rules”, “working” in a uniform way to transform input into output. HSN P systems can be converted to weighted homogeneous SNP systems. In this work, based on the above two known systems, we consider a restricted variant of SN P systems called local homogeneous weighted SN P systems (LHWSN P systems, for short), where neurons in same module have the same set of rules. As a result, we prove that such systems can achieve Turing completeness. Specifically, it is proved that using only standard spiking rules is sufficient to compute and accept the family of sets of Turing computable natural numbers, moreover local homogeneity reduces the time required for the execution of the system.",
"corpus_id": 26210730,
"score": 0,
"title": "Local Homogeneous Weighted Spiking Neural P Systems"
} |
{
"abstract": "Single-layer transition metal dichalcogenides (TMDCs) can adopt two distinct structures corresponding to different coordination of the metal atoms. TMDCs adopting the $T$-type structure exhibit a rich and diverse set of phenomena, including charge density waves (CDWs) in a $\\sqrt{13}\\ifmmode\\times\\else\\texttimes\\fi{}\\sqrt{13}$ supercell pattern in ${\\mathrm{TaS}}_{2}$ and ${\\mathrm{TaSe}}_{2}$, and a possible excitonic insulating phase in ${\\mathrm{TiSe}}_{2}$. These properties make the $T$-TMDCs desirable components of layered heterostructure devices. In order to predict the emergent properties of combinations of different layered materials, one needs simple and accurate models for the constituent layers which can take into account potential effects of lattice mismatch, relaxation, strain, and structural distortion. Previous studies have developed ab initio tight-binding Hamiltonians for $H$-type TMDCs [S. Fang et al., Phys. Rev. B 98, 075106 (2018)]. Here we extend this work to include $T$-type TMDCs. We demonstrate the capabilities and limitations of our model using three example systems: a one-dimensional sinusoidal ripple, which represents a longitudinal acoustic phonon; the $2\\ifmmode\\times\\else\\texttimes\\fi{}2$ CDW in ${\\mathrm{TiSe}}_{2}$; and the $\\sqrt{13}\\ifmmode\\times\\else\\texttimes\\fi{}\\sqrt{13}$ CDW in ${\\mathrm{TaS}}_{2}$. Using the technique of band unfolding we compare the electronic structure of the distorted crystals to the pristine band structure and find our tight-binding model reproduces many features revealed by direct density functional theory calculations, provided the magnitude of the distortions remains in the linear regime. 
This model of the strain response of single layers is a necessary ingredient for the construction of models of van der Waals heterostructures with multiple layers, because the deformation and strain from mechanical relaxations in a twisted bilayer have important effects on the electronic structure.",
"corpus_id": 214622988,
"title": "Effects of structural distortions on the electronic structure of \nT\n-type transition metal dichalcogenides"
} | {
"abstract": "We derive electronic tight-binding Hamiltonians for strained graphene, hexagonal boron nitride and transition metal dichalcogenides based on Wannier transformation of {\\it ab initio} density functional theory calculations. Our microscopic models include strain effects to leading order that respect the crystal symmetry and local crystal configuration, and are beyond the central force approximation which assumes only pair-wise distance dependence. Based on these models, we also derive and analyze the effective low-energy k $\\cdot$ p Hamiltonians. Our {\\it ab initio} approaches complement the symmetry group representation construction for such effective low-energy Hamiltonians and provide the values of the coefficients for each symmetry-allowed term. These models are relevant for the design of electronic device applications, since they provide the framework for describing the coupling of electrons to other degrees of freedom including phonons, spin and the electromagnetic field. The models can also serve as the basis for exploring the physics of many-body systems of interesting quantum phases.",
"corpus_id": 4802452,
"title": "Electronic structure theory of strained two-dimensional materials with hexagonal symmetry"
} | {
"abstract": "Transition metal dichalcogenides (TMDs) represent a family of materials with versatile electronic, optical, and chemical properties. Most TMD bulk crystals are van der Waals solids with strong bonding within the plane but weak interlayer bonding. The individual layers can be readily isolated. Single layer TMDs possess intriguing properties that are ideal for both fundamental and technologically relevant research studies. We review the structure and phases of single and few layered TMDs. We also describe recent progress in phase engineering in TMDs. The ability to tune the chemistry by choosing a unique combination of transition metals and chalcogen atoms along with controlling their properties by phase engineering allows new functionalities to be realized with TMDs.",
"corpus_id": 7424172,
"score": -1,
"title": "Phase engineering of transition metal dichalcogenides."
} |
{
"abstract": "Wish you could nab a colleague for a quick consult on a clinical problem? Curbside Consults brings the expert to you to offer general advice on situations that crop up in everyday practice. Send us your question, and you may see it answered in an upcoming issue. (Sorry, we cannot return or answer questions that are not used in Curbside Consults.) Contact us: 11• By e-mail: pgmcurbcon@ mcgraw-hill.com",
"corpus_id": 2550710,
"title": "Is it multiple sclerosis?"
} | {
"abstract": "Over a hundred years ago, Charcot set down what he considered to be some of the clinical characteristics of multiple sclerosis (MS). His triad was not specific but it was the first attempt to separate this disease from the many others affecting the nervous system. The history of clinical diagnostic criteria demonstrates the evolution from rather tentative classifications of restricted value to the more elaborate 1983 scheme which incorporates some laboratory procedures under the rubric paraclinical tests, considered to be extensions of the neurological examination, as well as a new category based on the presence of specific abnormalities of the cerebrospinal fluid (CSF). It is curious that until then the term definite MS had been avoided except for autopsy-proven cases, perhaps a wise move, since exact diagnosis may require long term observation. All the proposed schemes have been based on the twin principles of dissemination in both time and space. The diagnosis of MS must remain a clinical one, supported but not supplanted by the increasingly popular magnetic resonance imaging, which is non-specific and is frequently overinterpreted by radiologists lacking appropriate clinical information. Reliance on the MRI as the principal if not exclusive basis for the diagnosis leads to error in as many as one third of cases. This assumes a great deal of importance considering that such non-MS patients may be counted in epidemiological surveys and included in therapeutic trials for disease-modifying drugs, or eventually treated with these very expensive drugs with still controversial long term efficacy. Not surprisingly, attempts to develop reliable criteria for the MRI diagnosis of MS have been unsuccessful in view of the lack of specificity of that procedure. Great care should be taken to exclude the presence of extrinsic cervical spine lesions which might impinge on the cord, leading to the formation of plaques, or mimic the course of MS. 
An MRI of the cervical spine is recommended in all patients suspected of having MS who have symptoms suggestive of spinal cord involvement. The diagnosis of MS is, and will remain, based on clinical criteria which codify the characteristic dissemination in time and space of MS.",
"corpus_id": 9784078,
"title": "Diagnostic criteria for multiple sclerosis"
} | {
"abstract": "Paroxysmal symptoms are known to occur in multiple sclerosis and have a wide clinical range. We report two patients whose paroxysmal symptoms resolved with bromocriptine. A 35 year old woman with a three year history of multiple sclerosis complained of paroxysmal upper and lower limb paresthesiae. She described these as \"tingling sensations\" beginning in her feet and ascending to her waist, and from her hands up to her shoulders, bilaterally. These sensations occurred in the upper and lower extremities simultaneously as well as independently. The symptoms lasted a few hours every day and remitted spontaneously. Occasionally, she complained of mild slurring of speech during these paroxysmal attacks. Neurological examination showed weakness of the hamstrings bilaterally and symmetric diffuse hyperreflexia. Previous attempts to control her symptoms with carbamazepine, barbiturates and amitriptyline gave little relief. Bromocriptine at an initial dose of 2-5 mg twice a day was started, and led to appreciable reduction in the patient's symptoms. The dose was increased to 5 mg twice a day a week later, and the symptoms completely resolved. Discontinuation of bromocriptine six months later led to an immediate recurrence of her paroxysmal symptoms as described previously. Resumption of bromocriptine treatment was again successful in resolving her symptoms. Two further attempts to discontinue bromocriptine were unsuccessful, as the patient's symptoms recurred on each occasion. The patient tolerated bromocriptine well except for mild",
"corpus_id": 29736578,
"score": 2,
"title": "Onset symptoms of multiple sclerosis."
} |
{
"abstract": "The effect of galanin on growth hormone (GH) secretion was investigated in monolayer cultures of rat anterior pituitary cells. Galanin caused a gradual increase in GH concentrations into the culture medium that was maximal at 90 minutes and sustained after 180 minutes. The ED50 for galanin-stimulated GH secretion was approximately 200 nM compared to an ED50 for rat GH-releasing factor (rGRF)-stimulated GH secretion of 10pM. Galanin and rGRF were additive in increasing GH release into the incubation medium. These data indicate that porcine-derived galanin has a direct effect on pituitary GH secretion in vitro.",
"corpus_id": 3034092,
"title": "Galanin stimulates rat pituitary growth hormone secretion in vitro."
} | {
"abstract": "The present study was undertaken to investigate the direct actions of rat galanin (R-GAL) on growth hormone (GH) release from the rat anterior pituitary in vitro. R-GAL modestly but significantly stimulated GH release without an increase in intra- and extracellular cyclic AMP levels in monolayer cultures of rat anterior pituitary cells. This stimulatory effect of R-GAL was dose-dependent but not additive with that of GH-releasing factor (GRF). R-GAL-stimulated GH release was less sensitive to the inhibitory effect of somatostatin than was GRF-stimulated GH release. In perfusions of rat anterior pituitary fragments, R-GAL induced a gradual and sustained increase of GH release. Incremental GH release derived in part from preformed stored GH. These data confirm that R-GAL acts at the pituitary level to stimulate GH release by a mechanism distinct from that of GRF.",
"corpus_id": 662415,
"title": "Characterization of the stimulatory effect of galanin on growth hormone release from the rat anterior pituitary."
} | {
"abstract": "Introduction 1. Foundations of orthodoxy 2. The hegemony of systems 3. From functionalism to fragmentation 4. Closed paradigms and analytical openings 5. Multiple paradigm research 6. Postmodernism and organisation Notes Bibliography Author index Subject index.",
"corpus_id": 142854950,
"score": 0,
"title": "Sociology and Organization Theory: Positivism, Paradigms and Postmodernity"
} |
{
"abstract": "SummaryA series of Bromus rubens and B. mollis populations were sampled in the coastal range and northern part of the Central Valley of California in order to study their population ecology in demographic terms. Quantitative estimates were obtained on plants collected directly in nature, and their progenies in controlled environments with randomized block design in the greenhouse.Two parameters of population growth — the intrinsic rate of increase, r, and the carrying capacity, K-were estimated by using the logistic model (r=ln R and K=equilibrium population size). It was found that B. mollis is a relatively K-type species, while B. rubens is a relatively r-type species.The effects of density on competition between individuals in pure and mixed populations of B. mollis and B. rubens were studied. In both species, increasing density induced greater mortality and a striking plastic reduction in the size and reproductive potential of the individuals. Further, B. rubens showed a relatively greater mortality and less plastic response to densities than B. mollis in both pure and mixed stands. Two different types of plasticity were considered: one in response to changing density (d-plasticity); and the other in response to changing enironmental conditions (e-plasticity). High plasticity in one of them need not imply that the other one is high too. B. rubens showed higher e-plasticity, but lower d-plasticity than B. mollis.The relationships between r, K and competitive ability were discussed. Two types of K-strategy were distinguished: one involving greater nonreproductive effort with longer life span, or lowered mortality (Type-I) and the other with density-induced adjustments in body size along with survival in higher numbers (Type-II). Different populations of these two Bromus species showed different values of r and K (Type-II) and different competitive abilities. 
It was found that higher r was usually accompanied by lower K (Type-II), while higher K (Type-II) was accompanied by lower competitive ability, which in turn is correlated with higher d-plasticity. In general, coexistence was predicted on the basis of estimates derived from the interspecific competition experiments.",
"corpus_id": 1068592,
"title": "Population regulation in Bromus rubens and B. mollis: Life cycle components and competition"
} | {
"abstract": "Traditional conservation biology regards environmental fluctuations as detrimental to persistence, reducing long‐term average growth rates and increasing the probability of extinction. By contrast, coexistence models from community ecology suggest that for species with dormancy, environmental fluctuations may be essential for persistence in competitive communities. We used models based on California grasslands to examine the influence of interannual fluctuations in the environment on the persistence of rare forbs competing with exotic grasses. Despite grasses and forbs independently possessing high fecundity in the same types of years, interspecific differences in germination biology and dormancy caused the rare forb to benefit from variation in the environment. Owing to the buildup of grass competitors, consecutive favorable years proved highly detrimental to forb persistence. Consequently, negative temporal autocorrelation, a low probability of a favorable year, and high variation in year quality all benefited the forb. In addition, the litter produced by grasses in a previously favorable year benefited forb persistence by inhibiting its germination into highly competitive grass environments. We conclude that contrary to conventional predictions of conservation and population biology, yearly fluctuations in climate may be essential for the persistence of rare species in invaded habitats.",
"corpus_id": 8194116,
"title": "Effects of Temporal Variability on Rare Plant Persistence in Annual Systems"
} | {
"abstract": "The behaviour of individual plants of certain species has been studied on permanent quadrats laid out during the period 1942-1944 in forest and haymeadow in central Sweden. The results observed during the first four to six years have been published in a preliminary paper (TAMM 1948). Of several interesting conclusions the most important concerned the rate of replacement of the species, both from seedlings and by branching. The rate of replacement, particularly from seedlings, has hitherto not received due attention. Earlier investigations, and particularly those conducted in Finland, have shown that in many plant communities only a very small percentage of the seedlings ever attain the floral stage (LINKOLA 1935; PERTTULA 1941). The true rate of reproduction is unknown, however, even in the meadow community which Linkola studied so closely. In Sweden, MALMSTR6M (1949) has pointed out the great difference in the rate and type of reproduction in forest communities for the \"mobile\" and \"equilibrium\" stages, but no figures are available for the rate of replacement in these various stages. In the preliminary paper figures were presented for the death rate and the rate of reproduction by vegetative propagation of certain species. It appears as if the conditions for the seedlings were even more adverse on the plots studies here than in the case of the types of vegetation studied by the Finnish investigators. The rate of renewal from seedlings was found to be almost nil in many cases, and the individual plants of the species studied must have been rather old. The figures presented were preliminary on account of the short period of observation and the difficulty of deciding the extent to which the communities under study were stable or developing towards a climax. Another observation of ecological interest concerned the rate of flowering of certain species in different years. 
Annual differences in the number of flowering individuals on the same plot were found not only for orchids (Orchis mascula and sambucina) but also for Primula veris and Sanicula europaea. It was concluded that \"flowering years\" occur for perennial herbs as well as for orchard",
"corpus_id": 87196306,
"score": 2,
"title": "Further Observations on the Survival and Flowering of Some Perennial Herbs, I"
} |
{
"abstract": "Background Anaemia reduces cognitive potential in school children, retards their growth and predisposes them to other diseases. As there is a paucity of data on the current burden of P. falciparum, S. mansoni and soil transmitted helminths (STH) infections and their correlation with schoolchildren’s anemia in the Democratic Republic of Congo (DRC), we collect these data. Methods This study reports baseline data collected from a randomized controlled trial investigating the impact of IPT with SP and SP-PQ on anemia and malaria morbidity in Congolese schoolchildren (Trial registration: NCT01722539; PACTR201211000449323). S. mansoni and STH infections were assessed using kato-katz technique. Malaria infection and hemoglobin concentration were assessed using Blood smear and Hemocontrol device, respectively. Results A total of 616 primary schoolchildren from 4 to 13 years old were enrolled in the study. The prevalence of Plasmodium spp. infection was 18.5% (95%CI:15.6–21.9). Amongst those infected, 24 (21%), 40 (35.1%), 40 (35.1%), 10 (8.8%), had light, moderate, heavy, very high malaria parasite density, respectively. Above 9 years of age (p = 0.02), male and history of fever (p = 0.04) were both associated with malaria infection. The overall prevalence of S. mansoni infection was 6.4% (95%CI:4.4–9.1). Girls were associated with S. mansoni infection (p = 0.04). T. trichiura was the most prevalent STH infection (26.3%), followed by A. lumbricoides (20.1%). Co-infection with malaria-S. mansoni and malaria-STH was, respectively, 1.5% (CI95%:0.7–3.3) and 6.4% (CI95% 4.4–9.1). The prevalence of anemia was found to be 41.6% (95%CI:37.7–45.6) and anemia was strongly related with Plasmodium ssp infection (aOR:4.1; CI95%:2.6–6.5;p<0.001) and S. mansoni infection (aOR:3.3;CI95%:1.4–7.8;p<0.01). Conclusion Malaria and S. mansoni infection were strongly associated with high prevalence of anemia in schoolchildren. 
Therefore, specific school-based interventions, such as intermittent preventive treatment or prophylaxis, LLITN distribution, anthelminthic mass treatment and micronutrient supplementation are needed to improve school children’s health.",
"corpus_id": 2585383,
"title": "Malaria, Schistosomiasis and Soil Transmitted Helminth Burden and Their Correlation with Anemia in Children Attending Primary Schools in Kinshasa, Democratic Republic of Congo"
} | {
"abstract": "In endemic areas, malaria and its adverse effects in schoolchildren may be prevented by intermittent preventive treatment (IPTsc). However, the most appropriate drug regimen for IPTsc remains to be identified. A randomised controlled trial was conducted in Kinshasa, DRC. Enrolled schoolchildren were assigned to a passive control arm (n = 212), sulfadoxine/pyrimethamine (SP) (n = 202) or SP plus piperaquine (SP/PQ) (n = 202). The primary endpoint was haemoglobin (Hb) change. Secondary endpoints were anaemia, parasitaemia prevalence and clinical malaria incidence. Data were analysed by modified intention-to-treat (mITT) and per-protocol. A linear mixed mode was used due to repeated measurements. Of 616 enrolled children, 410 (66.6%) were eligible for mITT analysis. The control arm was used as reference. After 12 months, the Hb level increased by 0.20 g/dL (95% CI -0.61 to 0.47; P = 0.168) and 0.39 g/dL (0.12-0.66; P <0.01) in the SP and SP/PQ arms, respectively. SP treatment reduced anaemia, malaria parasitaemia and clinical malaria by 10% (0-20%; P = 0.06), 19% (2-33%; P = 0.042) and 25% (-32 to 57%; P = 0.37), respectively. The corresponding values for SP/PQ were 28% (19-37%; P <0.001), 40% (26-52%; P <0.001) and 58% (17-79%; P <0.01). No deaths or severe adverse events (SAEs) were observed. SP/PQ offered substantial protection against anaemia, malaria parasitaemia and clinical malaria and showed no SAEs. SP/PQ, a combination of two long-acting non-artemisinin-based antimalarials, may be a valuable option for IPTsc in Africa.",
"corpus_id": 7875619,
"title": "Efficacy and safety of intermittent preventive treatment in schoolchildren with sulfadoxine/pyrimethamine (SP) and SP plus piperaquine in Democratic Republic of the Congo: a randomised controlled trial."
} | {
"abstract": "From January 1972 to June 1983, 12 cases of confirmed constrictive pericarditis were found at the Cleveland Clinic, occurring as a long-term complication of cardiac surgery. These patients had had valve replacement, coronary artery bypass surgery, or other surgical procedures. Average time interval from initial cardiac surgery to definitive diagnosis was 12.6 months (range, 5 weeks to 34 months). Two patients were treated medically and 10 were treated surgically. Pathogenesis of pericardial constriction following cardiac surgery is unknown. Possible factors are mesothelial injury, bleeding, postpericardiotomy syndrome, and povidone-iodine irrigation. Constrictive pericarditis should be considered in the differential diagnosis of patients with right-sided heart failure after cardiac surgery.",
"corpus_id": 44565755,
"score": 1,
"title": "Constrictive pericarditis following cardiac surgery--Cleveland Clinic experience: report of 12 cases and review."
} |
{
"abstract": "Thin films of La1-xSrxFeO3 have been prepared by the ALD (atomic layer deposition) technique using La(thd)3 (Hthd = 2,2,6,6-tetramethylheptane-3,5-dione), Sr(thd)2, Fe(thd)3, and ozone as precursors. A so-called ALD window is found in the temperature range 200 to 360 degrees C for LaFeO3. The effect of the pulsing procedure for the precursors on the composition of the films is examined. The results are discussed in relation to a model which ascribes differences between pulsed and obtained stoichiometries to individually different surface-area demands of the precursors. The La1-xSrxFeO3 films turned out to contain only small amounts of carbonate impurities despite the fact that films prepared from Sr(thd)2 and ozone under the same conditions contains virtually pure SrCO3. Films of La1-xSrxFeO3 have been deposited on substrates of (amorphous) soda-lime glass and single crystals of Si(100), SrTiO3(100), and LaAlO3(012). Annealed films on soda-lime glass and Si(100) substrates turned out to be polycrystalline with virtually random orientation of the crystallites. Those on MgO(100) and SrTiO3(100) substrates showed some degree of crystal orientation, whereas the annealed films on LaAlO3(012) proved to contain distinctly oriented crystallites with mosaic features.",
"corpus_id": 19535693,
"title": "Growth of La1-xSrxFeO3 thin films by atomic layer deposition."
} | {
"abstract": "Abstract Hexagonal orthoferrite h -ErFeO 3 thin films are synthesized by Atomic Layer Deposition on SiO 2 (100 nm)/Si substrate, followed by rapid thermal annealing at 650–700 °C. Structural, chemical and morphological characterizations of as-deposited and annealed layers are performed by X-ray Reflectivity/Diffraction and Time-of-Flight Secondary Ion-Mass Spectrometry. The formation of the hexagonal phase, which is metastable compared to the more stable orthorhombic ErFeO 3 , is explained within a simple model considering the different activation energies for the nucleation of hexagonal and orthorhombic phases. The possibility to grow h -ErFeO 3 in contact with SiO 2 /Si by chemical methods opens perspective for the inclusion of new multiferroics in silicon-based devices.",
"corpus_id": 100449139,
"title": "Atomic Layer Deposition of hexagonal ErFeO3 thin films on SiO2/Si"
} | {
"abstract": "When TiO2 films are deposited on a Si substrate, the magnetic screening of the substrate is proposed to reduce the deep-level trap by Ti atom bombardment. The substrate is placed in the field of a permanent magnet. It is found that the films deposited with the magnet do not form any deep level in the substrate, and exhibit better optical property and crystalline structure than those deposited without the magnet.",
"corpus_id": 93235190,
"score": 2,
"title": "Deposition of TiO2 Films by Reactive Sputtering in Magnetic Field"
} |
{
"abstract": "A method is described for joint precise point positioning and attitude determination with tight coupling of two single-frequency low-cost global navigation satellite system receivers and an inertial sensor. The sensor fusion is performed with an extended Kalman filter. The carrier phase ambiguities are determined in a constrained tree search using soft a priori information on the antenna distance. A code multipath parameter is determined for each satellite to improve the accuracy. Ionospheric corrections are estimated also by single-frequency receivers.",
"corpus_id": 255953,
"title": "Tightly coupled precise point positioning and attitude determination"
} | {
"abstract": "The GPS double difference carrier phase measurements are ambiguous by an unknown integer number of cycles. High precision relative GPS positioning based on short observational timespan data, is possible, when reliable estimates of the integer double difference ambiguities can be determined in an efficient manner. In this contribution a new method is introduced that enables very fast integer least-squares estimation of the ambiguities. The method makes use of an ambiguity transformation that allows one to reformulate the original ambiguity estimation problem as a new problem that is much easier to solve. The transformation aims at decorrelating the least-squares ambiguities and is based on an integer approximation of the conditional least-squares transformation. This least-squares ambiguity decorrelation approach, flattens the typical discontinuity in the GPS-spectrum of ambiguity conditional variances and returns new ambiguities that show a dramatic improvement in correlation and precision. As a result, the search for the transformed integer least-squares ambiguities can be performed in a highly efficient manner.",
"corpus_id": 3477413,
"title": "The least-squares ambiguity decorrelation adjustment: a method for fast GPS integer ambiguity estimation"
} | {
"abstract": "This thesis describes the research results in the improvement of a new GPS processing approach: precise point positioning (PPP). Currently, PPP is implemented with the so-called Traditional Model based on the un-differenced dual frequency code and carrier phase observations aided by the precise satellite orbit and clock products. Decimetre to centimetre accuracy is achievable while an average of half an hour of convergence time is required. In order for the PPP system to be used in real-time positioning and navigation applications, accelerating ambiguity convergence therefore is essential for a fast positioning convergence solution. With the newly developed code-phase ionosphere-free combination in this research, an alternative PPP processing method – P1-P2-CP Model – was proposed, which has a lower measurement noise and a smaller residual error. But the biggest gain of the P1-P2-CP Model is the feasibility of the fixed ambiguity resolution which brings fewer unknowns, therefore accelerating solution convergence. In the model's implementation, a variance adjustment procedure was applied to obtain more precise stochastic information of both observations and parameters, and a partial ambiguity searching and fixing approach based on a pseudo-fixing concept was preliminarily developed. Included in this thesis are the numerical results and analyses of float solutions in both static and kinematics processing. Fixed solution results in static processing mode are also presented. Further considerations for the improvements of the convergence performance are also addressed. iv ACKNOWLEDGEMENTS",
"corpus_id": 107052119,
"score": 2,
"title": "Improving Ambiguity Convergence in Carrier Phase-Based Precise Point Positioning"
} |
{
"abstract": "The effectiveness of the recently developed Fixed-Node Quantum Monte Carlo method for lattice fermions, developed by van Leeuwen and co-workers, is tested by applying it to the 1d Kondo lattice, an example of a one-dimensional model with a sign problem. The principles of this method and its implementation for the Kondo lattice model are discussed in detail. We compare the fixed-node upper bound for the ground-state energy at half filling with exact-diagonalization results from the literature, and determine several spin correlation functions. Our ‘best estimates’ for the ground-state correlation functions do not depend sensitively on the input trial wave function of the fixed-node projection, and are reasonably close to the exact values. We also calculate the spin gap of the model with the Fixed-Node Monte Carlo method. For this it is necessary to use a many-Slater-determinant trial state. The lowest-energy spin excitation is a running spin soliton with wave number π, in agreement with earlier calculations.",
"corpus_id": 6402927,
"title": "Fixed-Node Monte Carlo Calculations for the 1d Kondo Lattice Model"
} | {
"abstract": "By analyzing the Gutzwiller-projected, self-consistent mean-field solutions we demonstrate that for all coupling strengths of the half-filled, one-dimensional Kondo lattice (1) the spin excitations are local triplets, (2) the charge gap is greater than the spin gap, and (3) doping by Kondo holes induces residual spin-1/2 local moments. The implications of these results on a number of experiments and their relevance to the «Kondo insulators» will be discussed",
"corpus_id": 38371465,
"title": "Spin-triplet solitons in the one-dimensional symmetric Kondo lattice."
} | {
"abstract": "Measurements of susceptibility {chi}, resistivity {rho}, and thermoelectric power {ital S} have been performed on single-crystal CeNiSn. Only along the {ital a} axis of the orthorhombic structure do {chi}({ital T}) and {rho}({ital T}) exhibit pronounced peaks at 12 K, whereas no anomaly was found in the specific heat. The gap energies estimated from {rho}({ital T}) are 2.4, 5.5, and 5.0 K along the {ital a}, {ital b}, and {ital c} axes, respectively. Near 3 K, {ital S}{sub {ital a}}({ital T}) and {ital S}{sub {ital c}}({ital T}) exhibit extremely sharp peaks, which indicate the presence of a density of states within the gap. The magnetic contribution to the specific heat divided by temperature, {ital C}{sub {ital m}}/{ital T} versus {ital T} reveals a maximum of 0.19 J/K{sup 2} mol near 6.7 K. These results suggest that an antiferromagnetic correlation develops near 12 K, which induces the formation of the pseudogap in the narrow band of heavy quasiparticles.",
"corpus_id": 41716733,
"score": 2,
"title": "Formation of an anisotropic energy gap in the valence-fluctuating system of CeNiSn."
} |
{
"abstract": "The influence factors on anaerobic ammonium oxidation (ANAMMOX), such as HRT, temperature, pH and NH<sub>4</sub><sup>+</sup> -N/NO<sub>2</sub><sup>-</sup>-N ratio have been studied. Experiment results indicate that the optimum condition of HRT, temperature, pH and NH<sub>4</sub><sup>+</sup>-N/NO<sub>2</sub><sup>-</sup>-N ratio were 12h, 30~35°C, 7.02 - 8.36 and 0.95~1.2. At the optimum condition, the average removal rate of NH<sub>4</sub><sup>+</sup>-N, NO<sub>2</sub><sup>-</sup> -N and TN were separately 96.8%, 97.8% and 92.4% , when influent average concentrations of TN was 365.5~ 432.4mg/L.",
"corpus_id": 6817548,
"title": "Notice of RetractionExperiment Studies on Influence Factors of Anaerobic Ammonium Oxidation (ANAMMOX)"
} | {
"abstract": "The aim of this work was to examine the applicability of the anaerobic ammonium oxidation (anammox) process to three kinds of low BOD/N ratio wastewaters from animal waste treatment processes in batch mode. A rapid decrease of NO(2)(-) and NH(4)(+) was observed during incubation with wastewaters from AS and UASB/trickling filter and their corresponding control artificial wastewaters. This nitrogen removal resulted from the anammox reaction, because the ratio of removed NO(2)(-) and NH(4)(+) was close to the theoretical ratio of the anammox reaction. Comparison of the inorganic nitrogen removal rate of the actual wastewater and that of control artificial wastewater showed that these two kinds of wastewater were very suitable for anammox treatment. Incubation with wastewater from RW did not show a clear anammox reaction; however, diluting it by half enabled the reaction, suggesting the presence of an inhibitory factor. This study showed that the three kinds of wastewater from animal waste treatment processes were suitable for anammox treatment.",
"corpus_id": 42599226,
"title": "Nitrogen removal from animal waste treatment water by anammox enrichment."
} | {
"abstract": "Lubricants, usually Newtonian fluids, are assumed to experience laminar flow. The basic equations used to describe the flow are the Navier-Stokes equation of motion. The study of hydrodynamic lubrication is, from a mathematical standpoint, the application of a reduced form of these Navier-Stokes equations in association with the continuity equation. The Reynolds equation can also be derived from first principles, provided of course that the same basic assumptions are adopted in each case. Both methods are used in deriving the Reynolds equation, and the assumptions inherent in reducing the Navier-Stokes equations are specified. Because the Reynolds equation contains viscosity and density terms and these properties depend on temperature and pressure, it is often necessary to couple the Reynolds with energy equation. The lubricant properties and the energy equation are presented. Film thickness, a parameter of the Reynolds equation, is a function of the elastic behavior of the bearing surface. The governing elasticity equation is therefore presented.",
"corpus_id": 117887219,
"score": 1,
"title": "Basic lubrication equations"
} |
{
"abstract": "Patients with Graves´ Disease (GD) have a higher risk of developing more severe and prolonged hypocalcaemia after total thyroidectomy (TT) than patients who undergo surgery for benign atoxic goitre ...",
"corpus_id": 865670,
"title": "Calcium Homeostasis in Patients with Graves' Disease"
} | {
"abstract": "CONTEXT AND OBJECTIVE\nmagnesium ion concentration is directly related and phosphorus ion concentration is inversely related to calcemia. The aim of this study was to evaluate the evolution of magnesium and phosphorus ion levels in patients undergoing thyroidectomy and correlate these with changes to calcium concentration.\n\n\nDESIGN AND SETTING\nprospective study at the Alpha Institute of Gastroenterology, Hospital das Clínicas, Universidade Federal de Minas Gerais.\n\n\nMETHODS\nthe study included 333 patients, of both genders and mean age 45 ± 15 years, who underwent thyroidectomy between 2000 and 2005. Total calcium, phosphorus and magnesium were measured in the blood preoperatively and 24 and 48 hours postoperatively. Ionic changes were evaluated according to the presence or absence of postoperative hypocalcemia.\n\n\nRESULTS\nthere were statistically significant drops in blood phosphorus levels 24 and 48 hours after thyroidectomy, compared with preoperative values, in the patients without hypocalcemia. In the patients who developed hypocalcemia, there was a significant drop in plasma phosphorus on the first postoperative day and an increase (also statistically significant) on the second day, in relation to preoperative phosphorus levels. A significant drop in postoperative magnesium was also observed on the first and second days after thyroidectomy in the patients with hypocalcemia, in relation to preoperative levels. In the patients without hypocalcemia, the drop in magnesium was significant on the first day, but there was no difference on the second day.\n\n\nCONCLUSION\ndespite the postoperative changes, neither magnesium nor phosphorus ion levels had any role in post-thyroidectomy calcemia.",
"corpus_id": 1454710,
"title": "Evolution of blood magnesium and phosphorus ion levels following thyroidectomy and correlation with total calcium values."
} | {
"abstract": "BACKGROUND\nRecent recommendations suggest that total thyroidectomy (TT) is the preferred treatment for benign thyroid disease. This approach remains controversial because of the increased risk of morbidity compared with a partial thyroidectomy (PT). The aim of this study was to determine the use of thyroidectomy for benign disease over a 15-year period.\n\n\nMETHODS\nOne hundred nineteen thousand eight hundred eighty-five patients from the Nationwide Inpatient Sample database (1993-2007) underwent surgery for benign thyroid disease. Logistic regression was used to assess the relation between extent of thyroidectomy and the year of admission, hospital volume, and surgical outcomes.\n\n\nRESULTS\nThe use of TT increased from 17.6% (1993-1997) to 39.6% (2003-2007) compared with 82.4% and 60.4% for PT over the same periods (P < .0001). A greater proportion of TTs was performed in high-volume centers in which the rates of postoperative complications were lower than low-volume centers.\n\n\nCONCLUSIONS\nThe use of TT for benign thyroid disease has increased over the last 15 years in the United States. This pattern of practice is in keeping with the trends reported in recent literature.",
"corpus_id": 29491288,
"score": 2,
"title": "Utilization of thyroidectomy for benign disease in the United States: a 15-year population-based study."
} |
{
"abstract": "Most human-drone interfaces, such as joysticks and remote controllers, require attention and developed skills during teleoperation. Wearable interfaces could enable a more natural and intuitive control of drones, which would make this technology accessible to a larger population of users. In this letter, we describe a soft exoskeleton, so called FlyJacket, designed for naive users that want to control a drone with upper body gestures in an intuitive manner. The exoskeleton includes a motion-tracking device to monitor body movements, an arm support system to prevent fatigue, and is coupled to goggles for first-person-view from the drone perspective. Tests were performed with participants flying a simulated fixed-wing drone moving at a constant speed; participants' performance was more consistent when using the FlyJacket with the arm support than when performing the same task with a remote controller. Furthermore, participants felt more immersed, had more sensation of flying, and reported less fatigue when the arm support was enabled. The FlyJacket has been demonstrated for the teleoperation of a real drone.",
"corpus_id": 4626131,
"title": "FlyJacket: An Upper Body Soft Exoskeleton for Immersive Drone Control"
} | {
"abstract": "This paper presents a novel flexible sliding thigh frame for a gait enhancing mechatronic system. With its two-layered unique structure, the frame is flexible in certain locations and directions, and stiff at certain other locations, so that it can fit well to the wearer's thigh and transmit the assisting torque without joint loading. The paper describes the basic mechanics of this 3D flexible frame and its stiffness characteristics. We implemented the 3D flexible frame on a gait enhancing mechatronic system and conducted experiments. The performance of the proposed mechanism is verified by simulation and experiments.",
"corpus_id": 12884960,
"title": "Flexible sliding frame for gait enhancing mechatronic system (GEMS)"
} | {
"abstract": "Immersive technology for human-centric cyberphysical systems includes broad concepts that enable users in the physical world to connect with the cyberworld with a sense of immersion. Complex systems such as virtual reality, augmented reality, brain-computer interfaces, and brain-machine interfaces are emerging as immersive technologies that have the potential for improving manufacturing systems. Industry 4.0 includes all technologies, standards, and frameworks for the fourth industrial revolution to facilitate intelligent manufacturing. Industrial immersive technologies will be used for smart manufacturing innovation in the context of Industry 4.0’s human machine interfaces. This research provides a thorough review of the literature, construction of a domain ontology, presentation of patent metatrend statistical analysis, and data mining analysis using a technology function matrix and highlights technical and functional development trends using latent Dirichlet allocation (LDA) models. A total of 179 references from the IEEE and IET databases and 2,672 patents are systematically analyzed to identify current trends. The paper establishes an essential foundation for the development of advanced human-centric cyberphysical systems in complex manufacturing processes.",
"corpus_id": 51605426,
"score": -1,
"title": "Immersive Technology for Human-Centric Cyberphysical Systems in Complex Manufacturing Processes: A Comprehensive Overview of the Global Patent Profile Using Collective Intelligence"
} |
{
"abstract": "A complete RF coil system, as has been previously defined, is capable of generating any steady-state RF field, at the MR frequency, that is compatible with Maxwell's equations. A coil system is complete if it is capable of generating all basis vector fields in the multipole expansion of the electromagnetic fields. A complete coil system has the potential to reach the ultimate intrinsic signal-to-noise as an MRI receiver coil. It also offers maximum flexibility in tailoring the spatial RF field distribution as an excitation coil. Here, computer simulations have been performed on array coils employing composite coil elements, assuming the current loops are small and can be approximated by magnetic dipoles. We demonstrate that a coil array can be configured to approximate a truncated complete array coil and to generate the basis magnetic vector fields up to certain orders in the multipole expansion of the electromagnetic fields.",
"corpus_id": 7318364,
"title": "Towards a complete coil array."
} | {
"abstract": "Composite MRI arrays consist of triplets where two orthogonal upright loops are placed over the same imaging area as a standard surface coil. The optimal height of the upright coils is approximately half the width for the 7 cm coils used in this work. Resistive and magnetic coupling is shown to be negligible within each coil triplet. Experimental evaluation of imaging performance was carried out on a Philips 3 T Achieva scanner using an eight‐coil composite array consisting of three surface coils and five upright loops, as well as an array of eight surface coils for comparison. The composite array offers lower overall coupling than the traditional array. The sensitivities of upright coils are complementary to those of the surface coils and therefore provide SNR gains in regions where surface coil sensitivity is low, and additional spatial information for improved parallel imaging performance. Near the surface of the phantom the eight‐channel surface coil array provides higher overall SNR than the composite array, but this advantage disappears beyond a depth of approximately one coil diameter, where it is typically more challenging to improve SNR. Furthermore, parallel imaging performance is better with the composite array compared with the surface coil array, especially at high accelerations and in locations deep in the phantom. Composite arrays offer an attractive means of improving imaging performance and channel density without reducing the size, and therefore the loading regime, of surface coil elements. Additional advantages of composite arrays include minimal SNR loss using root‐sum‐of‐squares combination compared with optimal, and the ability to switch from high to low channel density by merely selecting only the surface elements, unlike surface coil arrays, which require additional hardware. Copyright © 2014 John Wiley & Sons, Ltd.",
"corpus_id": 19984562,
"title": "Experimental verification of SNR and parallel imaging improvements using composite arrays"
} | {
"abstract": "Spontaneous symmetry breaking is well understood under equilibrium conditions as a consequence of the singularity of the thermodynamic limit. How a single global orientation of the order parameter dynamically emerges from an initially symmetric state during a phase transition, however, is not captured by this paradigm. Here, we present a series of symmetry arguments suggesting that singling out a global choice for the ordered state is in fact forbidden under unitary time evolution, even in the presence of an environment and infinitesimal symmetry breaking perturbations. We thus argue that the observation of phase transitions in our everyday world presents a manifestation of the unitarity of quantum dynamics itself being spontaneously broken. We argue that this agrees with the observation that Schrödinger’s time dependent equation is rendered unstable for macroscopic objects owing to the same singular thermodynamic limit that affects equilibrium configurations.",
"corpus_id": 252564871,
"score": 1,
"title": "Phase transitions as a manifestation of spontaneous unitarity violation"
} |
{
"abstract": "During object-based sensorimotor tasks, humans look at target locations for subsequent hand actions. These anticipatory eye movements or guiding fixations seem to be necessary for a successful performance. By practicing such a sensorimotor task, humans become faster and perform fewer guiding fixations (Foerster and Schneider, In Prep; Foerster et al. in J Vis 11(7):9:1–16, 2011). We aimed at clarifying whether this decrease in guiding fixations is the cause or effect of faster task completion time. Participants may learn to use less visual input (fewer fixations) allowing shorter completion times. Alternatively, participants may speed up their hand movements (e.g., more efficient motor control) leaving less time for visual intake. The latter would imply that the number of fixations is directly connected to task speed. We investigated the relationship between the number of fixations and task speed in a computerized version of the number connection task (Foerster and Schneider in Ann N Y Acad Sci 2015. doi:10.1111/nyas.12729). Eye movements were recorded while participants clicked in ascending order on nine numbered circles. In 90 learning trials, they clicked the sequence with a constant spatial configuration as fast as possible. In the subsequent experimental phase, they should perform 30 trials again under high-speed instruction and 30 trials under slow-speed instruction. During slow-speed instruction, fixation rates were lower with longer fixation durations and more fixations were performed than during high-speed instruction. The results suggest that the number of fixations depends on both the need for visual intake and task completion time. It seems that the decrease in anticipatory eye movements through sensorimotor learning is at the same time a result and a cause of faster task performance.",
"corpus_id": 3768715,
"title": "Anticipatory eye movements in sensorimotor actions: on the role of guiding fixations during learning"
} | {
"abstract": "When performing sequential manual actions (e.g., cooking), visual information is prioritized according to the task determining where and when to attend, look, and act. In well-practiced sequential actions, long-term memory (LTM)-based expectations specify which action targets might be found where and when. We have previously demonstrated (Foerster and Schneider, 2015b) that violations of such expectations that are task-relevant (e.g., target location change) cause a regression from a memory-based mode of attentional selection to visual search. How might task-irrelevant expectation violations in such well-practiced sequential manual actions modify attentional selection? This question was investigated by a computerized version of the number-connection test. Participants clicked on nine spatially distributed numbered target circles in ascending order while eye movements were recorded as proxy for covert attention. Target’s visual features and locations stayed constant for 65 prechange-trials, allowing practicing the manual action sequence. Consecutively, a task-irrelevant expectation violation occurred and stayed for 20 change-trials. Specifically, action target number 4 appeared in a different font. In 15 reversion-trials, number 4 returned to the original font. During the first task-irrelevant change trial, manual clicking was slower and eye scanpaths were larger and contained more fixations. The additional fixations were mainly checking fixations on the changed target while acting on later targets. Whereas the eyes repeatedly revisited the task-irrelevant change, cursor-paths remained completely unaffected. Effects lasted for 2–3 change trials and did not reappear during reversion. In conclusion, an unexpected task-irrelevant change on a task-defining feature of a well-practiced manual sequence leads to eye-hand decoupling and a “check-after-surprise” mode of attentional selection.",
"corpus_id": 9483012,
"title": "Task-Irrelevant Expectation Violations in Sequential Manual Actions: Evidence for a “Check-after-Surprise” Mode of Visual Attention and Eye-Hand Decoupling"
} | {
"abstract": "We report on a patient with sensorincural deafness, oncyho- and osteodystrophy and mental retardation (DOOR syndrome) and review the literature. It appears that abnormal dermatoglyphics are a frequent feature of the DOOR syndrome, as all patients with DOOR syndrome in whom dermatoglyphic investigations were done, had multiple arches on their fingertips.",
"corpus_id": 13257341,
"score": 1,
"title": "DOOR syndrome: additional case and literature review"
} |
{
"abstract": "SummaryDevelopment of the barrel field representation of the forelimb in the primary somatosensory cortex (SI) was studied in normal and deafferented neonatal rat pups by means of the peroxidase conjugated lectin peanut agglutinin (PNA), which most likely binds to radial glial cells within barrel boundaries. 1. Alterations in lectin binding were seen in animals sacrificed on postnatal day 8 (PND-8) if deafferentation took place on PND-1 (day of birth) through PND-6. 2. Deafferentation on PND-5 or on PND-6 had the least effect on lectin binding. In these animals, lectin binding was reduced, although the prospective representation was intact. 3. Deafferentation on PND-2, 3, and 4 had the greatest effect on lectin binding. In these animals, lectin binding was reduced and the prospective cortical representation was disrupted. 4. Deafferentation on PND-1 resulted in reduced lectin binding, however the prospective cortical representation was only slightly impaired compared to that in animals deafferented on PND-2, 3, and 4. 5. These results suggest that SI barrel field boundaries are important to plasticity and that a sensitive period for predevelopment of the forelimb barrels consists of postnatal days 1 through 6. Furthermore, the formation of normal SI barrel field boundaries requires an ongoing interaction between incoming afferents and radial glial cells.",
"corpus_id": 3224606,
"title": "Early development of SI cortical barrel subfield representation of forelimb in normal and deafferented neonatal rat as delineated by peroxidase conjugated lectin, peanut agglutinin (PNA)"
} | {
"abstract": "In-utero alcohol exposure produces sensorimotor developmental abnormalities that often persist into adulthood. The rodent cortical barrel field associated with the representation of the body surface was used as our model system to examine the effect of prenatal alcohol exposure (PAE) on early somatosensory cortical development. In this study, pregnant female rats were intragastrically gavaged daily with high doses of alcohol (6 gm/kg body weight) throughout the first 20 days of pregnancy. Blood alcohol levels were measured in the pregnant dams on gestational days 13 (G13) and G20. The ethanol treated group (EtOH) was compared to the normal control chowfed (CF) group, nutritionally matched pairfed (PF) group, and cross-foster (XF) group. Cortical barrel development was examined in pups across all treatment groups from G25, corresponding to postnatal day 2 (P2), to G32 corresponding to P9. The EtOH and control group pups were weighed, anesthetized, and perfused. Brains were removed and weighed with, and without cerebellum and olfactory bulbs, and neocortex was removed and weighed. Cortices were then flattened, sectioned tangentially, and stained with a metabolic marker, cytochrome oxidase (CO) to reveal the barrel field. Progression of barrel development was distinguished into three categories: (a) absent, (b) cloudy barrel-like pattern, and (c) well-formed barrels with intervening septae. 
The major findings are: (1) PAE delayed barrel field development by one or more days, (2) the barrel field first appeared as a cloudy pattern that gave way on subsequent days to an adult-like pattern with clearly demarcated intervening septal regions, (3) the barrel field developed differentially in a lateral-to-medial gradient in both alcohol and control groups, (4) PAE delayed birth by one or more days in 53% of the pups, (5) regardless of whether pups were born on G23 (normal expected birth date for non-alcohol controls) or as in the case for the alcohol-delayed pups born as late as G27, the barrel field was never present at birth suggesting the importance of postnatal experience on barrel field development, and (6) PAE did not disrupt the normal barrel field pattern, although both total body and brain weights were compromised. These findings suggest that PAE delays the development of the somatosensory cortex (SI); such delays may interfere with timing and formation of cortical circuits. It is unknown whether other nuclei along the somatosensory pathway undergo similar delays in development or if PAE selectively disrupts cortical circuitry.",
"corpus_id": 10942843,
"title": "Prenatal alcohol exposure delays the development of the cortical barrel field in neonatal rats"
} | {
"abstract": "We have established a population average surface-based atlas of human cerebral cortex at term gestation and used it to compare infant and adult cortical shape characteristics. Accurate cortical surface reconstructions for each hemisphere of 12 healthy term gestation infants were generated from structural magnetic resonance imaging data using a novel segmentation algorithm. Each surface was inflated, flattened, mapped to a standard spherical configuration, and registered to a target atlas sphere that reflected shape characteristics of all 24 contributing hemispheres using landmark constrained surface registration. Population average maps of sulcal depth, depth variability, three-dimensional positional variability, and hemispheric depth asymmetry were generated and compared with previously established maps of adult cortex. We found that cortical structure in term infants is similar to the adult in many respects, including the pattern of individual variability and the presence of statistically significant structural asymmetries in lateral temporal cortex, including the planum temporale and superior temporal sulcus. These results indicate that several features of cortical shape are minimally influenced by the postnatal environment.",
"corpus_id": 6851982,
"score": 1,
"title": "A Surface-Based Analysis of Hemispheric Asymmetries and Folding of Cerebral Cortex in Term-Born Human Infants"
} |
{
"abstract": "The common image denoising methods only consider how to restore well image information from noise images, but neglect the effects of residual information between restored images and given images. To enhance denoised image’s quality, a new image denoising method considering residual information in different frequency bands is discussed in this paper. In this method, an original image is divided into high and low frequency sub-band images by the contourlet transform algorithm. And each sub-band image is first denoised by the K-singular value decomposition (K-SVD) denoising model, thus each residual sub-band image is correspondingly obtained. Further, each residual image is again denoised by K-SVD denoising model. Finally, for each sub-band image denoised and its residual image, the inverse transform of contourlet transform algorithm is used to restore the original image. Compared our method proposed here with common denoising methods of wavelet, contourlet, K-SVD, experimental results show that our method fusing residual information in different frequency bands behaves better denoising effect.",
"corpus_id": 7635348,
"title": "K-SVD Based Image Denoising Method Using Image Residual Information in Different Frequency Bands"
} | {
"abstract": "In this paper, we propose a novel generic image prior-gradient profile prior, which implies the prior knowledge of natural image gradients. In this prior, the image gradients are represented by gradient profiles, which are 1-D profiles of gradient magnitudes perpendicular to image structures. We model the gradient profiles by a parametric gradient profile model. Using this model, the prior knowledge of the gradient profiles are learned from a large collection of natural images, which are called gradient profile prior. Based on this prior, we propose a gradient field transformation to constrain the gradient fields of the high resolution image and the enhanced image when performing single image super-resolution and sharpness enhancement. With this simple but very effective approach, we are able to produce state-of-the-art results. The reconstructed high resolution images or the enhanced images are sharp while have rare ringing or jaggy artifacts.",
"corpus_id": 10605041,
"title": "Gradient Profile Prior and Its Applications in Image Super-Resolution and Enhancement"
} | {
"abstract": "The primary objective of this paper is to compare the large‐sample as well as the small‐sample properties of different methods for estimating the parameters of a three‐parameter generalized Gaussian distribution. Three estimators, namely, the moment method (MM), the maximum‐likelihood (ML), and the moment/Newton‐step (MNS) estimators, are considered. The applicability of general asymptotic optimality results of the efficient ML and MNS estimation techniques is studied in the generalized Gaussian context. The asymptotic normal distributions of the estimators are obtained. The asymptotic relative superiority of the ML estimator or its variant, the MNS estimator, over the moment method is studied in terms of asymptotic relative efficiency. Based on this study, it is concluded that deviations from normality in the underlying distribution of the data necessitate the use of the efficient ML or MNS methods. In the small‐sample case, a detailed comparative study of the estimators is made possible by extensive Monte Carlo simulations. From this study, it is concluded that the maximum‐likelihood method is found to be significantly superior for heavy‐tailed distributions. In a region of the parameter space corresponding to the vicinity of the Gaussian distribution, the moment method compares well with the other methods. Further, the MNS estimator is shown to perform best for light‐tailed distributions. The simulation results are shown to lend support to analytically derived asymptotic results for each of the methods.",
"corpus_id": 122172758,
"score": 2,
"title": "Parametric generalized Gaussian density estimation"
} |
{
"abstract": "Handwriting recognition consists in obtaining the transcription of a text image. Recent word spotting methods based on attribute embedding have shown good performance when recognizing words. However, they are holistic methods in the sense that they recognize the word as a whole (i.e. they find the closest word in the lexicon to the word image). Consequently, these kinds of approaches are not able to deal with out of vocabulary words, which are common in historical manuscripts. Also, they cannot be extended to recognize text lines. In order to address these issues, in this paper we propose a handwriting recognition method that adapts the attribute embedding to sequence learning. Concretely, the method learns the attribute embedding of patches of word images with a convolutional neural network. Then, these embeddings are presented as a sequence to a recurrent neural network that produces the transcription. We obtain promising results even without the use of any kind of dictionary or language model.",
"corpus_id": 4766464,
"title": "Handwriting Recognition by Attribute Embedding and Recurrent Neural Networks"
} | {
"abstract": "Many tasks are related to determining if a particular text string exists in an image. In this work, we propose a new framework that learns this task in an end-to-end way. The framework takes an image and a text string as input and then outputs the probability of the text string being present in the image. This is the first end-to-end framework that learns such relationships between text and images in scene text area. The framework does not require explicit scene text detection or recognition and thus no bounding box annotations are needed for it. It is also the first work in scene text area that tackles suh a weakly labeled problem. Based on this framework, we developed a model called Guided Attention. Our designed model achieves much better results than several state-of-the-art scene text reading based solutions for a challenging Street View Business Matching task. The task tries to find correct business names for storefront images and the dataset we collected for it is substantially larger, and more challenging than existing scene text dataset. This new real-world task provides a new perspective for studying scene text related problems. We also demonstrate the uniqueness of our task via a comparison between our problem and a typical Visual Question Answering problem.",
"corpus_id": 195346653,
"title": "Guided Attention for Large Scale Scene Text Verification"
} | {
"abstract": null,
"corpus_id": 45157843,
"score": -1,
"title": "Deep Big Simple Neural Nets Excel on Handwritten Digit Recognition"
} |
{
"abstract": "Novel modification methods for lipase biocatalysts effective in hydrolysis of fish oil for enrichment of polyunsaturated fatty acids (PUFAs) were described. Based on conventional immobilization in single aqueous medium, immobilization of lipase in two phase medium composed of buffer and octane was employed. Furthermore, immobilization (in single aqueous or in two phase medium) coupled to fish oil treatment was integrated. Among these, lipase immobilized in two phase medium coupled to fish oil treatment (IMLAOF) had advantages over other modified lipases in initial reaction rate and hydrolysis degree. The hydrolysis degree increased from 12% with the free lipase to 40% with IMLAOF. Strong polar and hydrophobic solvents had negative impact on immobilization-fish oil treatment lipases, while low polar solvents were helpful to maintain the modification effect of immobilization-fish oil treatment. After five cycles of usage, the immobilization-fish oil treatment lipases still maintained more than 80% of relative hydrolysis degree.",
"corpus_id": 3395021,
"title": "Enzymatic enrichment of polyunsaturated fatty acids using novel lipase preparations modified by combination of immobilization and fish oil treatment."
} | {
"abstract": "Geotrichum sp. lipase modified with a combined method composed of crosslinking and bioimprinting was employed to selectively hydrolyze waste fish oil for enrichment of eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) in glycerides. Crosslinked polymerization by monomer (polyethylene glycol 400 dimethyl acrylate), crosslinker (trimethylolpropane trimethylacrylate), and photoinitiator (benzoin methyl ether) coupled to bioimprinting using palmitic acid as imprint molecule, resulted in much more effective enzyme preparation used in aqueous hydrolysis reaction. Since the crosslinked polymerization modification maintained bioimprinted property and gave good dispersion of enzyme in reaction mixture, the crosslinked bioimprinted enzyme exhibited higher hydrolysis temperature, enhanced specific activity, shorter hydrolysis time, and better operational stability compared to free lipase. Crude fish oil was treated at 45 °C with this crosslinked bioimprinted lipase for 8 h, and 46% hydrolysis degree resulted in the production of glycerides containing 41% of EPA and DHA (EPA+DHA), achieving 85.7% recovery of initial EPA and DHA. The results suggested that bioimprinted enzymes did not lose their induced property in aqueous environment when prepared according to the described crosslinking–bioimprinting method. It could also be seen that the crosslinked bioimprinted lipase was effective in producing glycerides that contained a higher concentration of polyunsaturated fatty acid with better yield.",
"corpus_id": 207353610,
"title": "Preparation of a Crosslinked Bioimprinted Lipase for Enrichment of Polyunsaturated Fatty Acids from Fish Processing Waste"
} | {
"abstract": "Lignin is one of the most abundant aromatic biopolymers and a major component of plant cell walls. It occurs via oxidative coupling of monolignols, which are synthesized from the phenylpropanoid pathway. Lignin is the primary material responsible for biomass recalcitrance, has almost no industrial utility, and cannot be simply removed from growing plants without causing serious developmental defects. Fortunately, recent studies report that lignin composition and distribution can be manipulated to a certain extent by using tissue-specific promoters to reduce its recalcitrance, change its biophysical properties, and increase its commercial value. Moreover, the emergence of novel synthetic biology tools to achieve biological control using genome bioediting technologies and tight regulation of transgene expression opens new doors for engineering. This review focuses on lignin bioengineering strategies and describes emerging technologies that could be used to generate tomorrow's bioenergy and biochemical crops.",
"corpus_id": 4660459,
"score": 1,
"title": "Lignin bioengineering."
} |
{
"abstract": "Five groups of suspensions composed of polystyrene particles, having similar size but different effective surface charge, were adopted to investigate the effects of surface charge and volume fraction on the homogeneity of colloidal crystals through checking the difference between D(exp) and D(uni) by reflection spectroscopy method (D(exp), D(uni) are the experimental and the expected value of the average nearest neighbor interparticle distance by assuming a uniform structure, respectively). We found volume fractions (ranging from 0.006 to 0.02) and structure types basically have no influence on the values of D(exp)/D(uni). Moreover, for crystals formed by lowly charged particles, D(exp)/D(uni) is approximately equal to 1, implying the crystals are homogeneous. With the increase of effective surface charge, D(exp) gradually deviates from D(uni) and the formed crystals become inhomogeneous. Our experimental observations are in accordance with the previous simulation results. Additionally, we also found D(exp)/D(uni) initially drops quickly with increasing effective surface charge and then it tends to an asymptotic value (~0.85), it is supposedly due to the saturation of effective charge. Our relevant computer simulations confirmed that the study scheme that using D(exp)/D(uni) as an indicator to assess the homogeneity of crystal structure is tenable and the simulation results are consistent with experiments.",
"corpus_id": 7289471,
"title": "Influence of the surface charge on the homogeneity of colloidal crystals."
} | {
"abstract": "Inoculation is widely used to tune the microstructure of a polycrystalline solid from the melt and thus its material properties. We present here a systematic time resolved experimental study of inoculation in a charged colloidal model system investigating the changes of the crystallization scenario upon adding spherical seeds and show that the nucleation kinetics solely determines the resulting microstructure.",
"corpus_id": 95517653,
"title": "Experimental visualization of inoculation using a charged colloidal model system"
} | {
"abstract": "Synthetic biomaterials mimicking bone morphology have expanded at a tremendous rate. Among all, one stands out: bioactive glass. Bioactive glasses opened the door to a new genre of research into materials able to promote the regeneration of functioning bone tissue. However, despite their ability to promote cell attachment, proliferation and differentiation, these materials are mainly used as granules. However to promote loaded and sustained bone repair, a 3D structure, with open and highly interconnected pores, is desirable. 3D scaffolds are generally produced into green bodies via various techniques. The particles are then bound together via sintering. However, the highly disrupted silica network of the typical bioactive glasses composition leads to crystallization. Therefore, sintering of the most commonly used bioactive glass compositions (i.e. 45S5 and S53P4) leads to partly to fully crystallize bodies. The impact of crystallization on bioactivity still leads to large debate among the scientific community. Does crystallization reduce or suppress the materials bioactivity? Within this chapter, the processing routes for scaf- fold manufacture are presented, as well as an introduction to the thermal processing of glasses to form glass and glass-ceramics and the consequent effect on bioactivity is discussed.",
"corpus_id": 21869832,
"score": 1,
"title": "Glass and Glass-Ceramic Scaffolds: Manufacturing Methods and the Impact of Crystallization on In-Vitro Dissolution"
} |
{
"abstract": "Purpose The aim of this study was to investigate the topical efficacy of a new purified extract from Madagascar, Gotu Kola (Centella asiatica [L.] Urban), both on human explants and on human volunteers, in relation to skin wrinkling and skin protection against ultraviolet light exposure. The extract, with a peculiar content of biologically active molecules, was investigated as a novel anti-inflammaging and antiglycation agent. Its typical terpenes, known as collagen synthesis promoters, represent at least 45% of the extract. It also contains a polyphenolic fraction cooperating to the observed properties. Methods C. asiatica purified extract was assayed on human skin explants maintained alive, and several parameters were evaluated. Among the most relevant, the thymine dimerization was evaluated by immunostaining. Malondialdehyde formation was evaluated as free-radical scavenging marker by enzyme-linked immunosorbent assay. The expression of interleukin-1α was observed by enzyme-linked immunosorbent assay as well. The product was further evaluated as an antiglycation agent, being glycation quantified by the advanced glycation product carboxymethyl lysine. C. asiatica purified extract was also evaluated as an antiwrinkling agent in a single-blind, placebo-controlled study. Formulated in a simple oil-in-water emulsion, the extent of wrinkling was assessed by skin replicas, skin firmness, skin elasticity, and collagen density measurements. Results C. asiatica purified extract could protect DNA from ultraviolet light-induced damage, decreasing the thymine photodimerization by over 28% (P<0.05). A reduced (26%, P<0.01) expression of interleukin-1α was also observed, supporting its anti-inflammatory potential. C. asiatica purified extract showed in vitro a total inhibition of carboxymethyl lysine formation induced by the glycating agent methylglyoxal. A clear epidermal densification of collagen network in the papillary dermis was observed. 
These in vitro data have been confirmed by clinical results. Conclusion These results qualify C. asiatica purified extract as an antiaging ingredient, addressing skin damage caused by inflammaging and glycation by relying on the synergy of triterpens and polyphenolics.",
"corpus_id": 2179545,
"title": "Anti-inflammaging and antiglycation activity of a novel botanical ingredient from African biodiversity (Centevita™)"
} | {
"abstract": "Introduction Uncontrolled diabetes mellitus (DM) is related to skin disorders, particularly dry skin. Pathogenesis of dry skin in type 2 diabetes mellitus (T2DM) rises from the chronic hyperglycemia causing an increase in advanced glycation end-products (AGEs), proinflammatory cytokines, and oxidative stress. Combination of oral and topical Centella asiatica (CA) is expected to treat dry skin in T2DM patients more effectively through decreasing N(6)-carboxymethyl-lysine (CML) and interleukin-1α (IL-1α) and increasing superoxide dismutase (SOD) activity. Methods A three-arm prospective, double-blind, randomized, controlled study was performed to evaluate the efficacy of the oral and topical CA extract in 159 T2DM patients with dry skin. The subjects were divided into the CA oral (CAo) 2 × 1.100 mg + CA topical (CAt) 1% ointment group, oral placebo (Plo) + CAt group, and Plo and topical placebo (Plt) group. Dry skin assessment was performed on day 1, 15, and 29, while evaluation of CML, IL-1α, and SOD activity was on day 1 and 29. Result Effectivity of CAo + CAt combination was assessed based on HbA1c and random blood glucose (RBG). In well-controlled blood glucose, on day 29, the percentage of SRRC decrement was greater in the CAo + CAt group compared to the control group (p = 0.04). SCap value in the CAo + CAt group was greater than that in the control group (p = 0.01). In the partially controlled blood glucose, increment of SOD activity in the CAo + CAt group was greater than that in the control group (p = 0.01). There were medium-to-strong correlation between CML with SOD (r = 0.58, p < 0.05) and IL-1α with SOD (r = 0.70, p < 0.05) in well-controlled blood glucose. Systemic and topical adverse events were not significantly different between groups. Conclusion CAo and CAt combination can be used to significantly improve dry skin condition through increasing SOD activity in T2DM patients with controlled blood glucose.",
"corpus_id": 221559341,
"title": "Oral and Topical Centella asiatica in Type 2 Diabetes Mellitus Patients with Dry Skin: A Three-Arm Prospective Randomized Double-Blind Controlled Trial"
} | {
"abstract": "The isolated dark low-mass objects in our Galaxy, such as free-floating planets (FFP), can be detected by microlensing observations. From the light curve three parameters can be defined, but only the Einstein time, TE, involves the mass, the distance, and the transverse velocity of the lens. To break this degeneracy, the perturbations in the light curve due to the relative accelerations among the observer, the lens, and the source have to be detected. Recently, space-based microlensing observations toward the Galactic bulge have been planned with WFIRST, which can be located at the L2 point or in geosynchronous orbit (GSO). Using Monte Carlo simulations in C++ we find that the better position for detecting the parallax effect in microlensing events caused by FFPs is the L2 point rather than GSO.",
"corpus_id": 32512806,
"score": 0,
"title": "L2 Point vs. Geosynchronous Orbit for Parallax Effect by Simulations"
} |
{
"abstract": "Based on a new formulation of far field cross-correlation involving the currents on the radiating sources, we propose a general methodology employing a Genetic Algorithm (GA) to find optimum distributions of current amplitudes and phases on MIMO antennas such that the resulting system has good cross-correlation (high diversity gain). The obtained currents can help guide the design and fabrication process of final MIMO antennas by providing valuable information about which current distributions can achieve the best “complementarity” of the individual far fields such that the total diversity gain is maximized. Moreover, this approach will help to explicate what is meant by a 'MIMO antenna' from the electromagnetic viewpoint by defining a MIMO antenna as an antenna supporting the optimum currents such as those obtained by the proposed method itself. The method is quite general and can be applied to arbitrary antenna types and array topologies. Verifications for examples comprised of small arrays of half-wavelength dipoles are provided and the practical significance of the results is discussed.",
"corpus_id": 1205715,
"title": "A Generalized Methodology for Obtaining Antenna Array Surface Current Distributions With Optimum Cross-Correlation Performance for MIMO and Spatial Diversity Applications"
} | {
"abstract": "In this paper, we compare the procedures and practicality/complexity of the antenna current green function (ACGF) and theory of characteristic modes (TCM) when applied to antenna design problems. Some recent works on both methods are summarized. The ACGF is an analytical method that relies on the antenna current equation, which is why its application is limited. In TCM the main focus, thus far, has been on the chassis behavior and excitation of the modes on the chassis. In real scenarios, the chassis cannot be used as the main radiating element because it acts as a base for many electronic components, and this will obviously affect the natural modes of the antenna.",
"corpus_id": 42654849,
"title": "A comparison between the antenna current green function and theory of characteristic modes"
} | {
"abstract": "From the Publisher: \nAuthoritative coverage of a revolutionary technique for overcoming problems in electromagnetic design. Genetic algorithms are stochastic search procedures modeled on the Darwinian concepts of natural selection and evolution. The machinery of genetic algorithms utilizes an optimization methodology that allows a global search of the cost surface via statistical random processes dictated by the Darwinian evolutionary concept. These easily programmed and readily implemented procedures robustly locate extrema of highly multimodal functions and therefore are particularly well suited to finding solutions to a broad range of electromagnetic optimization problems. Electromagnetic Optimization by Genetic Algorithms is the first book devoted exclusively to the application of genetic algorithms to electromagnetic device design. Compiled by two highly competent and well-respected members of the electromagnetics community, this book describes numerous applications of genetic algorithms to the design and optimization of various low- and high-frequency electromagnetic components. Special features include: \n*Introduction by David E. Goldberg, \"A Meditation on the Application of Genetic Algorithms\" \n*Design of linear and planar arrays using genetic algorithms \n*Application of genetic algorithms to the design of broadband, wire, and integrated antennas \n*Genetic algorithm-driven design of dielectric gratings and frequency-selective surfaces \n*Synthesis of magnetostatic devices using genetic algorithms \n*Application of genetic algorithms to multiobjective electromagnetic backscattering optimization \n*A comprehensive list of up-to-date references applicable to electromagnetic design problems. \nSupplemented with more than 250 illustrations, Electromagnetic Optimization by Genetic Algorithms is a powerful resource for electrical engineers interested in modern electromagnetic designs and an indispensable reference for university researchers.",
"corpus_id": 54137923,
"score": 2,
"title": "Electromagnetic Optimization by Genetic Algorithms"
} |
{
"abstract": "Face representation is a crucial step of face recognition systems. An optimal face representation should be discriminative, robust, compact, and very easy-to-implement. While numerous hand-crafted and learning-based representations have been proposed, considerable room for improvement is still present. In this paper, we present a very easy-to-implement deep learning framework for face representation. Our method bases on a new structure of deep network (called Pyramid CNN). The proposed Pyramid CNN adopts a greedy-filter-and-down-sample operation, which enables the training procedure to be very fast and computation-efficient. In addition, the structure of Pyramid CNN can naturally incorporate feature sharing across multi-scale face representations, increasing the discriminative ability of resulting representation. Our basic network is capable of achieving high recognition accuracy ($85.8\\%$ on LFW benchmark) with only 8 dimension representation. When extended to feature-sharing Pyramid CNN, our system achieves the state-of-the-art performance ($97.3\\%$) on LFW benchmark. We also introduce a new benchmark of realistic face images on social network and validate our proposed representation has a good ability of generalization.",
"corpus_id": 15760480,
"title": "Learning Deep Face Representation"
} | {
"abstract": "Despite significant recent advances in the field of face recognition [10, 14, 15, 17], implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors.",
"corpus_id": 206592766,
"title": "FaceNet: A Unified Embedding for Face Recognition and Clustering"
} | {
"abstract": null,
"corpus_id": 13894358,
"score": -1,
"title": "A Comparative Study on Shape Retrieval Using Fourier Descriptors with Different Shape Signatures"
} |
{
"abstract": "Three major groups call on demographers to produce medium- and long-term population forecasts at the national, regional, or global levels - or produce them themselves. They are: other scientists, government and international agencies, and the general public, including private industry. What these consumers of forecasts demand of demographers, or what demographers think that they should demand, has been changing. The types of forecasts demanded are changing, the relevant dimension of forecasts is expanding, and users are increasingly requiring that forecasts include an indication of the degree of uncertainty of the forecast. Because the demands placed on demographers for population forecasts have been changing, it is an appropriate time to rethink some of their basic aspects. In this volume we address what we see as key issues in population forecasting: in what dimensions and at what levels of disaggregation should forecasts be provided? (And, in particular, are the traditional dimensions of age and sex sufficient?) Should population forecasts take note of limits to population or interactions between population and other variables? And how should uncertainty be treated? We believe that, at least in part, these issues are driven by changes in what users of forecasts want from population forecasters. \n\nThe reader will note that we have used the term \"forecasts\" rather than the more common \"projections.\" Demographers claim to produce population \"projections,\" which are correctly computed numerical outcomes of a specified algorithm whose form, initial values, and controlling parameters or transition values are specified by the analyst. By definition, a projection must be correct unless arithmetical or other errors are made. 
However, users of population projections require population \"forecasts.\" Forecasts are what Donald Pittenger (1980) called a \"population projection selected as a likely outcome.\" Thus although a demographer makes a \"projection,\" the user employs it as a \"forecast.\" Some demographers cling to the distinction and wash their hands of what users do with their \"projections\" or how they interpret them. But we think that this distinction between \"projections\" and \"forecasts\" is false because demographers present only one or a limited number of the many possible projections. On what basis do they choose the projection or set of projections? Surely, on the basis that they judge the projection (or central projection of a set of projections) to be the most likely to occur. This point was made almost 50 years ago by Harold Dorn: it is difficult to see why a demographer would present anything other than the most likely outcome as the preferred middle projection (Dorn 1950). Similarly, although users are told that high and low population forecast variants are not confidence intervals, they are often taken to be so by users. And why should they not? Why else would a high and low variant be reported unless the demographer thought that they indicated the highest numbers and lowest numbers that were possible, although not highly probable? For these reasons, we favor the term \"forecast\" over the term \"projection\" where there is any, even implicit, predictive intent.",
"corpus_id": 153340097,
"title": "Introduction: The need to rethink approaches to population forecasts"
} | {
"abstract": "\"This paper argues that it is premature to decide whether simple forecasting models in demography are more (or less) accurate than complex models and whether causal models are more (or less) accurate than noncausal models. It is also too early to say under what conditions one type of model can outperform another. The paper also questions the wisdom of searching for a single best model or approach. It suggests that combining forecasts may improve accuracy.\" (SUMMARY IN FRE)",
"corpus_id": 6435253,
"title": "Simple versus complex models: evaluation, accuracy, and combining."
} | {
"abstract": " Abstract—The objective of this study is to examine the determinants of Hong Kong's housing prices. Empirical results from this study suggest that Hong Kong's housing prices are (1) positively related to per-capita GDP; (2) positively related to Hong Kong's exports; (3) negatively related to departures of Hong Kong's residents; (4) negatively related to the total inward and outward movements of goods vehicles; (5) positively related to commercial property prices; and (6) negatively related to factory prices.",
"corpus_id": 32601671,
"score": 1,
"title": "Determinants of Hong Kong's Housing Prices"
} |
{
"abstract": "In these days, as the e-commerce industry is growing and becoming complex, everyone uses online websites for getting reviews and giving reviews on the website in the form of comments. These comments vary from worst level to best level. So in order to categorize these comments or to predict the best outcome among the posted comments, recommendation is needed. A Recommender System is an efficient tool that considers an individual's opinion to identify their content more appropriately and selectively. This system has been applied to various domains, but in the field of research, service-based recommendation systems play a major role. In recent years, the growth rate of online hotel searching has increased much faster and makes online hotel searching a very difficult task due to the abundant amount of online information. Reviews written by travelers replace word-of-mouth, but even then searching becomes a time-consuming task based on user preference. Reviews crawled from travelers' visiting sites are a common and valuable source of information for recommendation of a hotel, yet little attention has been paid to how to present the reviews of reviewers in an understandable format. The purpose of this paper is to recommend to travelers the names of hotels based on their preferences, by analyzing other travelers' reviews together with the rating value to improve the prediction accuracy. Users do not rate enough hotels to enable collaborative-filtering-based recommendation, which can lead to an issue called the cold start problem. To solve this issue the following contributions will be made in this paper: 1. The paper will combine Collaborative Filtering with Aspect Based Sentiment Analysis to overcome the cold start user problem, while ensuring high accuracy. 2. Context like traveler type, user's location and user's preference will be included as additional information for personalized recommendation. 3. 
Several experiments will be conducted on TripAdvisor datasets and the result is expected to show that the proposed hybrid framework is competitive against classical approaches. This paper will include various elements: a semi-supervised clustering algorithm is used to group the features of the same vocabulary into nine predefined categories known as hotel aspects. A lexicon-based approach will be used to find the sentiment orientation towards each hotel aspect based on the defined context. An item-based filtering technique will be used to predict the unrated items/aspects. Thus, the context-based approach will improve the recommendation results.",
"corpus_id": 20006016,
"title": "Hotel Recommendation System"
} | {
"abstract": "We propose a set of novel quantitative measures for controversy derived from statistical properties of textual social media. We empirically establish strong positive correlations between social media controversy and sales performance across multiple datasets. The power of the newly proposed measures is further illustrated in a linear regression model for predicting product sales.",
"corpus_id": 29139578,
"title": "Controversy is Marketing: Mining Sentiments in Social Media"
} | {
"abstract": "Deep learning is a potential paradigm changer for the design of wireless communications systems (WCS), from conventional handcrafted schemes based on sophisticated mathematical models with assumptions to autonomous schemes based on the end-to-end deep learning using a large number of data. In this article, we present a basic concept of the deep learning and its application to WCS by investigating the resource allocation (RA) scheme based on a deep neural network (DNN) where multiple goals with various constraints can be satisfied through the end-to-end deep learning. Especially, the optimality and feasibility of the DNN based RA are verified through simulation. Then, we discuss the technical challenges regarding the application of deep learning in WCS.",
"corpus_id": 51936656,
"score": -1,
"title": "Application of End-to-End Deep Learning in Wireless Communications Systems"
} |
{
"abstract": "Abstract The co-occurrence of conduct problems (CP) and depressive symptoms (DS) is an important topic in developmental psychopathology; however, research in this area is still in its early stages. Using data from a school-based longitudinal sample of 2,453 adolescents with five waves from Grade 6 to 9, we examined the prevalence, etiology, and consequences of the co-occurrence of CP and DS. A person-centered approach, general growth mixture modeling, was applied to obtain CP and DS trajectory groups. The risk factors and consequences of the co-occurrence problem were examined using the trajectory groups. As hypothesized in a nonclinical sample, a small proportion of boys (8.8%) and girls (3.7%) reported to be high in both CP and DS over time. Among the adolescents with the highest level of CP, only 6.3% of the boys and 6.0% of the girls experienced the highest level of DS. However, among those with the highest level of DS trajectories, 42.9% of the boys and 10.2% of the girls reported the highest level of CP, indicating a gender-specific risk of the co-occurrence problem for depressed boys. Psychosocial and family factors were identified as vulnerable precursors to co-occurring CP and DS, a finding in line with the multiple domain risk model for CP and the transactional model for DS. The study also found that adolescents with the co-occurrence problem were more similar to those with “pure DS” than those with “pure CP” in academic adjustment at the ninth grade.",
"corpus_id": 12854884,
"title": "Concurrent changes in conduct problems and depressive symptoms in early adolescents: A developmental person-centered approach"
} | {
"abstract": "This chapter gives an overview of recent advances in latent variable analysis. Emphasis is placed on the strength of modeling obtained by using a flexible combination of continuous and categorical latent variables. To focus the discussion and make it manageable in scope, analysis of longitudinal data using growth models will be considered. Continuous latent variables are common in growth modeling in the form of random effects that capture individual variation in development over time. The use of categorical latent variables in growth modeling is, in contrast, perhaps less familiar, and new techniques have recently emerged. The aim of this chapter is to show the usefulness of growth model extensions using categorical latent variables. The discussion also has implications for latent variable analysis of cross-sectional data. The chapter begins with two major parts corresponding to continuous outcomes versus categorical outcomes. Within each part, conventional modeling using continuous latent variables will be described",
"corpus_id": 4977095,
"title": "Latent Variable Analysis: Growth Mixture Modeling and Related Techniques for Longitudinal Data"
} | {
"abstract": "Despite significant advances over the last two decades in our understanding of the origins and development of antisocial behaviour (Patterson & Yourger, 1993; Offord, et al. 1996; Farrington, 1998; LeBlanc & Loeber, 1998), several questions have continued to plague the field. For example, what would a comprehensive longitudinal picture of sex differences in antisocial behaviour look like and what would it reveal about the aetiology, continuities and discontinuities of antisocial behaviour? Should sex differences in the amount of antisocial behaviour be taken as evidence that males and females experience different developmental patterns? How should research and clinical communities deal with the raised prevalence figures of conduct disorder among boys? Should the diagnostic criteria for conduct disorder in girls be relaxed? This book by four internationally respected authors describes an integrated and novel approach to the above questions. Essentially, the book addresses an anomaly that has arisen from two facts about antisocial behaviour which, when considered together, distinguish antisocial behaviour from depression, anxiety, ADHD, autism and other disorders of childhood, namely that there is a male preponderance in antisocial behaviour, and that there is a large increase in antisocial behaviour during adolescence. In addressing this anomaly in a systematic way by investigating age-related sex differences in antisocial behaviour, the fundamental causes of such behaviour are described in a way that has not previously been done. The book comes to a powerful conclusion that will no doubt influence future directions in the field, namely that the more severe, early-onset presentation of antisocial behaviour that is typical of only 5% of males is associated with neuro-cognitive features with probable strong genetic and biological influences. 
By contrast, females' antisocial involvement tends to fluctuate more according to circumstances and therefore is more influenced by social factors, notably the socialization influences of male peers. These influences are particularly relevant during middle adolescence. The book’s conclusion is based on the data of one long-term longitudinal study of a contemporary representative birth cohort of 1000 males and females born 1972–1973 – the Dunedin Multidisciplinary Health and Development Study. Subjects were followed up for nine assessments spanning a period from 3–21 years of age, thereby covering the peak ages for the emergence of antisocial behaviour. Data sources were wide and, in the tradition of longitudinal studies, included dimensional measures like parent, teacher and self-report. In addition, observer ratings, official police and court records, peer information reports and partner reports were used. These measures produced two methods of quantifying behaviour: a dimensional scale and a categorical diagnosis of conduct disorder. In demonstrating the comparison in findings between these two methods, important methodological lessons are taught to the young researcher and/or student who reads this book. Chapter 1 provides an excellent guide to the rest of the book in that it succinctly describes the main hypotheses and aims of the book. It then sets out to describe the content of each chapter in a clear and logical manner. In doing so, the argument on which the book is based unfolds clearly in the first chapter, and the reader is left with a good sense of the study's main conclusions by the time (s)he starts to fill in the details by reading the rest of the book. Chapter 2 addresses the study design and provides an informative overview of the New Zealand research setting. Chapters 3–5 bring together methodologies from developmental psychology, social psychology",
"corpus_id": 31651757,
"score": -1,
"title": "Sex Differences in Antisocial Behaviour: Conduct Disorder, Delinquency and Violence in the Dunedin Longitudinal Study. By T. Moffitt, A. Caspi, M. Rutter and P. Silva. (Pp. 278; £14.95/$21.95.) Cambridge University Press: Cambridge. 2001."
} |
{
"abstract": "Basketball is an inherently social sport, which implies that social dynamics within a team may influence the team's performance on the court. As NBA players use social media, it may be possible to study the social structure of a team by examining the relationships that form within social media networks. This paper investigates the relationship between publicly available online social networks and quantitative performance data. It is hypothesized that network centrality measures for an NBA team's network will correlate with measurable performance metrics such as win percentage, points differential and assists per play. The hypothesis is tested using exponential random graph models (ERGM) and investigating correlation between network and performance variables. The results show that there are league-wide trends correlating certain network measures with game performance, and also quantifies the effects of various player attributes on network formation.",
"corpus_id": 53080553,
"title": "Correlating NBA Team Network Centrality Measures with Game Performance"
} | {
"abstract": "To the trained-eye, experts can often identify a team based on their unique style of play due to their movement, passing and interactions. In this paper, we present a method which can accurately determine the identity of a team from spatiotemporal player tracking data. We do this by utilizing a formation descriptor which is found by minimizing the entropy of role-specific occupancy maps. We show how our approach is significantly better at identifying different teams compared to standard measures (i.e., Shots, passes etc.). We demonstrate the utility of our approach using an entire season of Prozone player tracking data from a top-tier professional soccer league.",
"corpus_id": 987111,
"title": "Identifying Team Style in Soccer Using Formations Learned from Spatiotemporal Tracking Data"
} | {
"abstract": "This paper presents a duplication-less storage system over engineering-oriented cloud computing platforms. Our deduplication storage system, which manages data and duplication over the cloud system, consists of two major components, a front-end deduplication application and a mass storage system as back-end. The Hadoop distributed file system HDFS is a common distributed file system on the cloud, which is used with the Hadoop database HBase. We use HDFS to build up a mass storage system and employ HBase to build up a fast indexing system. With a deduplication application, a scalable and parallel deduplicated cloud storage system can be effectively built up. We further use VMware to generate a simulated cloud environment. The simulation results demonstrate that our deduplication storage system is sufficiently accurate and efficient for distributed and cooperative data-intensive engineering applications.",
"corpus_id": 17395142,
"score": -1,
"title": "A novel approach to data deduplication over the engineering-oriented cloud systems"
} |
{
"abstract": "In order to automate the image evaluation task, an engineering model for predicting the visual differences of color images is developed. The present CVDP consists of a color appearance model, a set of contrast sensitivity functions, the modified cortex transform, and a multichannel interaction model for masking effects. Based on a pixel-by-pixel difference metric similar to the CIELAB color difference, the predictions of the simplified CVDP are found to correlate fairly with the psychophysical test results over 51 pairs of natural images with some detection failures. These failures can be eliminated by including additional image quality metrics: the clarity in the shadow and highlight areas and the graininess in the mid-tone areas. The modified model is found to be able to identify 55 percent of those visually indistinguishable image pairs. The preliminary results using the complete CVDP for selected image pairs indicate that the effects of masking introduce only little change to the results of the simplified CVDP.",
"corpus_id": 5212133,
"title": "Image evaluation using a color visual difference predictor (CVDP)"
} | {
"abstract": "In order to develop a human vision model to simulate both grating detection and brightness perception, we have chosen four visual functional components. They include a front-end low-pass filter, a cone-type dependent local compressive nonlinearity described by a modified Naka-Rushton equation, a cortical representation of the image in the Fourier domain, and a frequency dependent compressive nonlinearity. The model outputs were fitted to contrast sensitivity functions over 7 mean illuminance levels ranging from 0.0009 to 900 trolands simultaneously with a set of 6 free parameters. The fits account for 97.8% of the total variance in the reported experimental data. Furthermore, the same model was used to simulate contrast and brightness perception. Some visual patterns that can produce simultaneous contrast or crispening effect were used as input images to the model. The outputs are consistent with the perceived brightness, using the same set of parameter values that was used in the above-mentioned fits. The model also simulated the perceived contrast contours on seeing a frequency-modulated grating and the whiteness percepts at different adaptation levels. In conclusion, a model that is based on simple visual properties is promising for deriving a unified model of pattern detection and brightness perception.",
"corpus_id": 36071137,
"title": "Approaching a unified model of pattern detection and brightness perception"
} | {
"abstract": "In this letter, the authors report the fabrication of GaN-based light-emitting diodes (LEDs) with mesh indium-tin-oxide p-contact and nanopillars on patterned sapphire substrate. Using hydrothermal ZnO nanorods as the etching hard mask, the authors successfully formed vertical GaN nanopillars inside the mesh regions and on the mesa-etched regions. It was found that 20-mA forward voltage and reverse leakage currents observed from the proposed LED were only slightly larger than those observed from the conventional LEDs. It was also found that output power of the proposed LED was more than 80% larger than that observed from conventional LED prepared on flat sapphire substrate.",
"corpus_id": 7680298,
"score": 1,
"title": "GaN-Based LEDs With Mesh ITO p-Contact and Nanopillars"
} |
{
"abstract": null,
"corpus_id": 28464948,
"title": "To Plan or not to Plan? Discourse Planning in Slot-Value Informed Sequence to Sequence Models for Language Generation"
} | {
"abstract": "Teaching machines to accomplish tasks by conversing naturally with humans is challenging. Currently, developing task-oriented dialogue systems requires creating multiple components and typically this involves either a large amount of handcrafting, or acquiring costly labelled datasets to solve a statistical learning problem for each component. In this work we introduce a neural network-based text-in, text-out end-to-end trainable goal-oriented dialogue system along with a new way of collecting dialogue data based on a novel pipe-lined Wizard-of-Oz framework. This approach allows us to develop dialogue systems easily and without making too many assumptions about the task at hand. The results show that the model can converse with human subjects naturally whilst helping them to accomplish tasks in a restaurant search domain.",
"corpus_id": 10565222,
"title": "A Network-based End-to-End Trainable Task-oriented Dialogue System"
} | {
"abstract": "Along with the burst of open source projects, software theft (or plagiarism) has become a very serious threat to the healthiness of software industry. Software birthmark, which represents the unique characteristics of a program, can be used for software theft detection. We propose a system call dependence graph based software birthmark called SCDG birthmark, and examine how well it reflects unique behavioral characteristics of a program. To our knowledge, our detection system based on SCDG birthmark is the first one that is capable of detecting software component theft where only partial code is stolen. We demonstrate the strength of our birthmark against various evasion techniques, including those based on different compilers and different compiler optimization levels as well as two state-of-the-art obfuscation tools. Unlike the existing work that were evaluated through small or toy software, we also evaluate our birthmark on a set of large software. Our results show that SCDG birthmark is very practical and effective in detecting software theft that even adopts advanced evasion techniques.",
"corpus_id": 16958074,
"score": -1,
"title": "Behavior based software theft detection"
} |
{
"abstract": "By using a highly sensitive and specific radioimmunoassay and the slot-blot technique, transferrin was quantified in fresh samples of aqueous humor from patients with primary open-angle glaucoma (POAG, n = 36) or secondary glaucoma (SG, n = 18). The levels were compared with those in aqueous humor obtained from age-matched patients without glaucoma (n = 33) and in primary and secondary aqueous humor from normal dogs (n = 10) in which breakdown of the blood-aqueous barrier was induced experimentally. The concentration of transferrin in the aqueous humor of human control subjects ranged from 0.3-3.4 mg/dl (mean +/- standard deviation, 1.36 +/- 0.66 mg/dl); in POAG samples, from 0.34 to greater than 10 mg/dl (2.07 +/- 1.90 mg/dl); and in SG samples, from 0.29 to greater than 10 mg/dl (2.79 +/- 2.24 mg/dl). The level of transferrin in secondary aqueous humor samples obtained from dogs was as much as ninefold greater than that in primary aqueous humor. The protein concentration in the human aqueous humor samples was 11.87 +/- 4.47 mg/dl for control subjects, 62.11 +/- 56.74 mg/dl for patients with POAG, and 124.53 +/- 152.67 mg/dl for those with SG. In dogs, the protein levels were 7.97 +/- 3.12 mg/dl for primary aqueous humor and 191.9 +/- 149.8 mg/dl for secondary aqueous humor. A significant correlation (r = 0.744, P less than 0.01) was found between total protein and transferrin contents in the samples of aqueous humor from patients with glaucoma but not in the samples from age-matched control subjects.(ABSTRACT TRUNCATED AT 250 WORDS)",
"corpus_id": 2620285,
"title": "Quantitative and qualitative analyses of transferrin in aqueous humor from patients with primary and secondary glaucomas."
} | {
"abstract": "Treatment of human fibroblasts with epidermal growth factor (EGF) results in a rapid increase (less than 5 min) in the ability of the cells to bind 125I-labeled transferrin to surface receptors. Scatchard analyses of EGF-treated cells indicate that this increase was due to an increase in the number of transferrin receptors at the cell surface rather than to alterations in ligand-receptor affinity. The EGF-induced increase in transferrin receptors was transient, reaching a peak by 5 min and then declining back to near basal levels by 45 min. Increases in transferrin receptor number were observed when approximately equal to 1% of the EGF receptors were occupied and were maximal at 16% occupancy. EGF treatment accelerated the rate at which previously internalized 125I-labeled transferrin-receptor complexes were returned to the cell surface. The kinetics and magnitude of the loss of intracellular transferrin receptors was sufficient to account for the increase in surface transferrin receptors. We conclude from these studies that one of the earliest effects of EGF treatment is the induced translocation of an intracellular compartment to the cell surface. This intracellular compartment contains transferrin receptors and may be part of the pathway involved in the normal recycling of cell surface proteins.",
"corpus_id": 32229320,
"title": "Epidermal growth factor rapidly induces a redistribution of transferrin receptor pools in human fibroblasts."
} | {
"abstract": "OBJECTIVES\nWe sought to investigate whether positron emission tomography/computed tomography (CT) angiography using [11C]-PK11195, a selective ligand for peripheral benzodiazepine receptors expressed in activated macrophages, can be used to image vascular inflammation.\n\n\nBACKGROUND\nActivated macrophages and T lymphocytes are fundamental elements in the pathogenesis of large-vessel vasculitides.\n\n\nMETHODS\nFifteen patients (age 52+/-16 years) with systemic inflammatory disorders (6 consecutive symptomatic patients with clinical suspicion of active vasculitis and 9 asymptomatic control patients) underwent positron emission tomography with [11C]-PK11195 and CT angiography. [11C]-PK11195 uptake was measured by calculating target-to-background ratios of activity normalized to venous blood.\n\n\nRESULTS\nCoregistration of positron emission tomography with contrast-enhanced CT angiography facilitated localization of [11C]-PK11195 arterial wall uptake. Visual analysis revealed focal [11C]-PK11195 uptake in the arterial wall of all 6 symptomatic patients, but in none of the asymptomatic controls. Although serum inflammatory biomarkers (C-reactive protein, erythrocyte sedimentation rate, white cell count) did not differ significantly between the 2 groups, symptomatic patients had increased [11C]-PK11195 vascular uptake (target-to-background ratio 2.41+/-1.59 vs. 0.98+/-0.10; p=0.001).\n\n\nCONCLUSIONS\nBy binding to activated macrophages in the vessel wall, [11C]-PK11195 enables noninvasive imaging of vascular inflammation. Alternative longer-lived radioligands for probing peripheral benzodiazepine receptors are being tested for wider clinical applications.",
"corpus_id": 13169928,
"score": 1,
"title": "Imaging of vascular inflammation with [11C]-PK11195 and positron emission tomography/computed tomography angiography."
} |
{
"abstract": "Numerous studies have shown an epidemiological link between meat consumption and the incidence of cancer, and it has been suggested that this relationship may be motivated by the presence of carcinogenic contaminants on it. Among the most frequently detected contaminants in meat are several types of persistent organic pollutants (POPs), and it is well known that many of them are carcinogenic. On the other hand, an increasing number of consumers choose to feed on what are perceived as healthier foods. Thus, the number of consumers of organic food is growing. However, environmental contamination by POPs is ubiquitous, and it is therefore unlikely that the practices of organic food production are able to prevent this contamination. To test this hypothesis, we acquired 76 samples of meat (beef, chicken, and lamb) of two modes of production (organic and conventional) and quantified their levels of 33 carcinogenic POPs. On this basis, we determined the human meat-related daily dietary exposure to these carcinogens using as a model a population with a high consumption of meat, such as the Spanish population. The maximum allowable meat consumption for this population and the carcinogenic risk quotients associated with the current pattern of consumption were calculated. As expected, no sample was completely free of carcinogenic contaminants, and the differences between organically and conventionally produced meats were minimal. According to these results, the current pattern of meat consumption exceeded the maximum limits, which are set according to the levels of contaminations, and this is associated with a relevant carcinogenic risk. Strikingly, the consumption of organically produced meat does not diminish this carcinogenic risk, but on the contrary, it seems to be even higher, especially that associated with lamb consumption.",
"corpus_id": 256722,
"title": "Consumption of organic meat does not diminish the carcinogenic potential associated with the intake of persistent organic pollutants (POPs)"
} | {
"abstract": "For the first time in South America, a four-year survey (2011-2014) was conducted to assess the occurrence of polychlorinated dibenzo-p-dioxins and furans (PCDD/Fs) and dioxin-like polychlorinated biphenyls (dl-PCBs) in different raw meats (bovine, pork, ovine, chicken, and turkey) sampled from ten of the fifteen regions of Chile. When expressed as pg World Health Organization Toxic Equivalent (WHO-TEQ2005)g-1 fat, the highest PCDD/F values for each species were 0.54 (bovine-2012), 0.27 (pork-2013), 0.23 (ovine-2011), 0.61 (chickens-2013), and 0.34 (turkey-2012). The highest mean dl-PCBs levels were 0.18 (bovine-2011), 0.05 (pork-2014), 0.13 (ovine-2011), 0.1 (chicken-2014), and 0.21 (turkey-2013). Penta- and tetra-chlorinated congeners dominated PCDD/F WHO-TEQ2005 profiles during the survey, while PCB 126 dominated dl-PCBs profiles. Statistically significant interspecies differences were found. Dietary intake was also estimated, and the highest total PCDD/F and dl-PCBs values, found in poultry meat, were 0.09pgWHO-TEQ2005kg-1bwd-1 (2013) for adults and 0.36pgWHO-TEQ2005kg-1bwd-1 (2013) for children. The concentrations and dietary intakes for the studied compounds in raw meat were below international and national maximum permitted limits.",
"corpus_id": 4587163,
"title": "A four-year survey in the farming region of Chile, occurrence and human exposure to polychlorinated dibenzo-p-dioxins and dibenzofurans, and dioxin -like polychlorinated biphenyls in different raw meats."
} | {
"abstract": "INTRODUCTION AND OBJECTIVES\nThe ENRICA study aims to assess the frequency and distribution of the main components of the natural history of cardiovascular disease in Spain, including food consumption and other behavioral risk factors, biological risk factors, early damage of target organs, and diagnosed morbidity.\n\n\nMETHODS\nA cross-sectional survey of 11,991 individuals representative of the non-institutionalized population aged 18 years and older in Spain was conducted from June 2008 to October 2010. Data collection comprised 3 sequential stages: a) computer-assisted telephone interview to obtain information on lifestyle, knowledge and attitudes about cardiovascular disease risk factors, and the signs and symptoms of heart attack and stroke, subjective health, and morbidity; b) first home visit, to collect blood and urine samples for analysis by a central laboratory, and c) second home visit, to measure anthropometric variables and blood pressure and to administer a computer-assisted dietary history; data on functional limitations are also collected from participants aged 65 years and older.\n\n\nDISCUSSION\nThe ENRICA study has shown the feasibility of a large home-based health interview and examination survey in Spain. It will provide valuable information to support and evaluate national strategies against cardiovascular disease and other chronic diseases in Spain. Moreover, a 3-year prospective follow-up of the study participants, including a new physical exam, is planned to start in the second semester of 2011 and will update lifestyle information and biological variables. (ClinicalTrials.gov number, NCT01133093).",
"corpus_id": 783294,
"score": 2,
"title": "[Rationale and methods of the study on nutrition and cardiovascular risk in Spain (ENRICA)]."
} |
{
"abstract": "As part of the US Geological Survey’s Land Cover Trends project, land-use/land-cover change estimates between 1973 and 2000 are presented for the basin and range ecoregions, including Northern, Central, Mojave, and Sonoran. Landsat data were employed to estimate and characterize land-cover change from 1973, 1980, 1986, 1992, and 2000 using a post-classification comparison. Overall, spatial change was 2.5% (17,830 km2). Change increased steadily between 1973 and 1986 but decreased slightly between 1992 and 2000. The grassland/shrubland class, frequently used for livestock grazing, constituted the majority of the study area and had a net decrease from an estimated 83.8% (587,024 km2) in 1973 to 82.6% (578,242 km2) in 2000. The most common land-use/land-cover conversions across the basin and range ecoregions were indicative of the changes associated with natural, nonmechanical disturbances (i.e., fire), and grassland/shrubland loss to development, agriculture, and mining. This comprehensive look at contemporary land-use/land-cover change provides critical insight into how the deserts of the United States have changed and can be used to inform adaptive management practices of public lands.",
"corpus_id": 153869866,
"title": "Late twentieth century land-cover change in the basin and range ecoregions of the United States"
} | {
"abstract": "Wildfires burned 24,254 ha of critical habitat designated for the recovery of the threatened Mojave desert tortoise (Gopherus agassizii) in southern Nevada during 2005. The proliferation of non-native annual grasses has increased wildfire frequency and extent in recent decades and continues to accelerate the conversion of tortoise habitat across the Mojave Desert. Immediate changes to vegetation are expected to reduce quality of critical habitat, yet whether tortoises will use burned and recovering habitat differently from intact unburned habitat is unknown. We compared movement patterns, home-range size, behavior, microhabitat use, reproduction, and survival for adult desert tortoises located in, and adjacent to, burned habitat to understand how tortoises respond to recovering burned habitat. Approximately 45% of home ranges in the post-fire environment contained burned habitat, and numerous observations (n = 12,223) corroborated tortoise use of both habitat types (52% unburned, 48% burned). Tortoises moved progressively deeper into burned habitat during the first 5 years following the fire, frequently foraging in burned habitats that had abundant annual plants, and returning to adjacent unburned habitat for cover provided by intact perennial vegetation. However, by years 6 and 7, the live cover of the short-lived herbaceous perennial desert globemallow (Sphaeralcea ambigua) that typically re-colonizes burned areas declined, resulting in a contraction of tortoise movements from the burned areas. Health and egg production were similar between burned and unburned areas indicating that tortoises were able to acquire necessary resources using both areas. This study documents that adult Mojave desert tortoises continue to use habitat burned once by wildfire. Thus, continued management of this burned habitat may contribute toward the recovery of the species in the face of many sources of habitat loss. Published 2015. This article is a U.S. 
Government work and is in the public domain in the USA.",
"corpus_id": 82477206,
"title": "Desert tortoise use of burned habitat in the Eastern Mojave desert"
} | {
"abstract": "Measurement and calculation of deposition of china dust in various portions of respiratory tract by exhalation partitioning (measured by CO/sub 2/ content) are discussed. Data on retention of various sized dusts in upper respiratory or alveolar regions were converted to percent deposition curves. Upper respiratory deposition decreased from approx. 75% at 5 ..mu..m to approx. 10% at 1.5 ..mu..m. Alveolar deposition peaked (approx. 55%) at 1 ..mu..m.",
"corpus_id": 39298494,
"score": 1,
"title": "Influence of Particle Size upon the Retention of Particulate Matter in the Human Lung."
} |
{
"abstract": "We observed three children with a clinically similar presentation of erythematous nodules that expanded centrifugally leaving lipoatrophy. Areas of lipoatrophy coalesced, resulting in clinical pictures similar to partial or total lipodystrophy. Histologic study revealed a lobular panniculitis with a mixed infiltrate of lymphocytes and mononuclear phagocytes. Of these three children, one had insulin-dependent diabetes mellitus and Hashimoto's thyroiditis, one developed juvenile rheumatoid arthritis, and the third developed insulin-dependent diabetes mellitus, suggesting that the pathogenic mechanism may be an expression of autoimmunity.",
"corpus_id": 54131,
"title": "Lipoatrophic panniculitis: a possible autoimmune inflammatory disease of fat. Report of three cases."
} | {
"abstract": "case reports. McNicol and Smith (1964) reported on 15 patients with hypothermia, all of whom had raised blood urea levels; and in five out of eight cases described ty Fruehan (1960) there was a pronounced decrease in urine output 12 to 24 hours after admission. The beneficial effects of hypothermia on the preservation of renal function after transplantation are recorded (Schloerb, et al., 1959; Calne, et al., 1963) and hence the impairment of renal function that occurs after accidental exposure may be related to periods of relative ischaemia during rewarming. The histological changes reported here suggest that the renal failure was a direct result of ischaemic damage to the kidneys. Renal damage complicating accidental hypothermia may result not from the hypothermia itself but may occur as the body temperature is increased, suggesting that plasma expanders and mannitol should be used more vigorously during the rewarming phase.",
"corpus_id": 43474315,
"title": "Case of Lipodystrophy"
} | {
"abstract": "In a recent report in Arthritis Research & Therapy, Kang and colleagues [1] assessed a series of patients with rheumatoid arthritis (RA) to establish whether adipokines could be a link between inflammation, insulin resistance, and atherosclerosis in RA. \n \nWe have noticed that Kang and colleagues did not pay attention to our former studies on the same issue. In this regard, in the last decade, we conducted a series of studies on insulin resistance and adipokines in a cohort of Spanish patients with long-standing RA, undergoing anti-tumor necrosis factor-alpha (anti-TNF-α) infliximab therapy because of severe disease, refractory to conventional disease-modifying anti-rheumatic drugs [2-6]. \n \nKang and colleagues described that resistin was associated with erythrocyte sedimentation rate (ESR) (r = 0.322, P <0.001), C-reactive protein (CRP) (r = 0.209, P = 0.004), and increased disease duration (r = 0.176, P = 0.014) [1]. These data are not new. We previously reported a close association between laboratory markers of inflammation, particularly CRP and resistin levels [3]. In our series, we found a significant association between the mean ESR (r = 0.405, P = 0.03) and CRP (r = 0.571, P = 0.0005) from disease diagnosis and ESR (r = 0.486, P = 0.004), CRP (r = 0.599, P = 0.0005), and platelet count (r = 0.559, P = 0.0007) at the time of the study and resistin levels [3]. These findings, along with these new data described by Kang and colleagues, highlight the potential role of resistin in the inflammatory cascade in RA. \n \nKang and colleagues also found a positive correlation between adiponectin and ESR (r = 0.162, P = 0.025) [1]. Prior to these results, in our series of patients with severe and active disease despite anti-TNF-α therapy, we observed that high-grade inflammation was independently and negatively correlated with circulating adiponectin concentrations [4]. 
CRP levels correlated with circulating adiponectin concentrations (partial r (pr) = −0.370, P = 0.04), independently of age and gender [4]. In contrast, low adiponectin levels clustered with metabolic syndrome features that contribute to atherogenesis in RA [4]. Adiponectin concentrations correlated with triglycerides/high-density lipoprotein (HDL) cholesterol ratios (pr = −0.396, P = 0.03), total cholesterol/HDL cholesterol ratios (pr = −0.444, P = 0.01), and high fasting plasma glucose levels (pr = −0.366, P = 0.04), independently of CRP levels and the body mass index [4]. These results also suggest an implication of adiponectin in the development of cardiovascular disease in RA. \n \nIn the series by Kang and colleagues, leptin was associated with homeostasis model assessment-estimated insulin resistance (r = 0.369, P <0.001) [1]. In our series of RA patients with active disease despite anti-TNF-α therapy, there was a positive correlation between body mass index of RA patients and serum leptin levels (r = 0.665, P <0.001) [5]. Also, a significant correlation of leptin with biomarkers of endothelial activation (vascular cell adhesion molecule-1; r = 0.349, P = 0.04) was observed [5]. However, no significant correlations between leptin levels and disease duration, ESR and CRP levels, disease activity score using 28 joint counts, lipids, insulin sensitivity, resistin, adiponectin, or the cumulative prednisone dose at the time of the study were found [5]. Therefore, in Western patients with severe and active RA, leptin levels seem to be related to adiposity [5]. However, in our series, circulating visfatin levels were unrelated to disease activity, adiposity, or metabolic syndrome [6]. 
\n \nAlthough adipokines have been demonstrated to exert a key role in the interface between obesity, inflammation, insulin resistance, and atherosclerosis in the general population, we agree with Kang and colleagues that information on their potential contribution is still limited in RA. In this regard, in Western individuals with RA, adipokines have not been demonstrated to represent a significant risk factor for indirect measures of organic arterial wall atherosclerotic damage, as assessed by carotid intima-media thickness in our cohort of long-standing active RA patients undergoing infliximab treatment [7], or by coronary artery calcification evaluation, as shown in recent work by Rho and colleagues [8].",
"corpus_id": 18824200,
"score": 1,
"title": "Response to ‘Adipokines, inflammation, insulin resistance, and carotid atherosclerosis in patients with rheumatoid arthritis’"
} |
{
"abstract": "We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.",
"corpus_id": 10900899,
"title": "LCM: Lightweight Communications and Marshalling"
} | {
"abstract": null,
"corpus_id": 46266981,
"title": "Aerial Informatics and Robotics Platform"
} | {
"abstract": "In this work we propose Kinect Deform, an algorithm which targets enhanced 3D reconstruction of scenes containing non-rigidly deforming objects. It provides an innovation to the existing class of algorithms which either target scenes with rigid objects only or allow for very limited non-rigid deformations or use precomputed templates to track them. Kinect Deform combines a fast non-rigid scene tracking algorithm based on octree data representation and hierarchical voxel associations with a recursive data filtering mechanism. We analyze its performance on both real and simulated data and show improved results in terms of smoothness and feature preserving 3D reconstructions with reduced noise.",
"corpus_id": 8537795,
"score": -1,
"title": "Kinect Deform: Enhanced 3D Reconstruction of Non-rigidly Deforming Objects"
} |
{
"abstract": "Dysregulation of the mammalian target of rapamycin pathway is the underlying pathogenic mechanism in tuberous sclerosis complex (TSC). Other syndromes caused by genetic alterations in this pathway frequently manifest as vascular anomalies or asymmetric overgrowth. Rarely, these features have been documented in TSC.",
"corpus_id": 1937936,
"title": "Tuberous Sclerosis Complex Associated with Vascular Anomalies or Overgrowth"
} | {
"abstract": "Congenital lymphedema has been described as a possible rare association of tuberous sclerosis complex (TSC), with only six previous cases reported in the literature. TSC is an autosomal dominant, multisystem disorder connected to aberrant regulation of the mammalian target of rapamycin (mTOR) pathway. The aim of this study is to review cases of lymphedema in a large cohort of TSC patients. The medical records of 268 patients seen at The Herscot Center for Children and Adults with Tuberous Sclerosis Complex at the Massachusetts General Hospital from 2002 to 2012 were retrospectively reviewed for reports of lymphedema or edema of unknown etiology. Genotypic and phenotypic data were collected in accordance with institutional review board (IRB) approval. This cohort presents two new cases of congenital lymphedema in TSC patients and acquired lymphedema was found in eight additional cases. Thus, we report 10 new cases of lymphedema in TSC (4%). The two patients with congenital lymphedema were female, as were the previous six reported cases. The frequency of lymphedema reported here (4%) is higher than the estimated prevalence in the general population (0.133–0.144%), suggesting a higher frequency of lymphedema in TSC. This study shows that patients with TSC and lymphedema are more likely to be females with renal AMLs and suggests that congenital lymphedema is a gender‐specific (female) manifestation of TSC. Exploration of the potential role of mTOR antagonists may be important in treatment of lymphedema in TSC patients. © 2014 Wiley Periodicals, Inc.",
"corpus_id": 22755055,
"title": "Lymphedema in tuberous sclerosis complex"
} | {
"abstract": "Background Forty-nine million people or 83 per cent of the entire population of 59 million rely on the public healthcare system in South Africa. Coupled with a shortage of medical professionals, high migration, inequality and unemployment; healthcare provision is under extreme pressure. Due to negligence by the health professionals, provincial health departments had medical-legal claims estimated at R80 billion in 2017/18. In the same period, provincial health spending accounted for 33 per cent of total provincial expenditure of R570.3 billion or 6 per cent of South Africa’s Gross Domestic Product. Despite this, healthcare outcomes are poor and provinces are inefficient in the use of the allocated funds. This warrants a scientific investigation into the technical efficiency of the public health system. Methods The study uses data envelopment analysis (DEA) to assess the technical efficiency of the nine South African provinces in the provision of healthcare. This is achieved by determining, assessing and comparing ways that individual provinces can benchmark their performance against peers to improve efficiency scores. DEA compares firms operating in homogenous conditions in the usage of multiple inputs to produce multiple outputs. Therefore, DEA is ideal for measuring the technical efficiency of provinces in the provision of public healthcare. In DEA methodology, the firms with scores of 100 per cent are technically efficient and those with scores lower than 100 per cent are technically inefficient. This study considers six DEA models using the 2017/18 total health spending and health staff as inputs and the infant mortality rate as an output. The first three models assume the constant returns to scale (CRS) while the last three use the variable return to scale (VRS) both with an input-minimisation objective. Results The study found the mean technical efficiency scores ranging from 35.7 to 87.2 per cent between the health models 1 and 6. 
Therefore, inefficient provinces could improve the use of inputs within a range of 64.3 and 20.8 per cent. The Gauteng province defines the technical efficiency frontiers in all the six models. The second-best performing province is the North West province. Other provinces like KwaZulu-Natal, Limpopo and the Eastern Cape only perform well under the VRS. The other three provinces are inefficient. Conclusions Based on the VRS models 4 to 6, the study presents three policy options. Policy option 1 (model 4): the efficiency gains from addressing health expenditure wastage in four inefficient provinces amounts to R17 billion. Policy option 2 (model 5): the potential savings from the same provinces could be obtained from reducing 17,000 health personnel, advisably, in non-core areas. In terms of Policy option 3 (model 6), three inefficient provinces should reduce 6940 health workers while the same provinces, inclusive of KwaZulu-Natal could realise health expenditure savings of R61 million. The potential resource savings from improving the efficiency of the inefficient provinces could be used to refurbish and build more hospitals to alleviate pressure on the public health system. This could also reduce the per capita numbers per public hospital and perhaps their performance as overcrowding is reportedly negatively affecting their performance and health outcomes. The potential savings could also be used to appoint and train medical practitioners, specialists and researchers to reduce the alarming numbers of medical legal claims. Given the existing challenges, South Africa is not ready to implement the National Health Insurance (NHI) Scheme, as it requires additional financial and human resources. Instead, huge improvements in public healthcare provision could be achieved by re-allocating the resources ‘saved’ through efficiency measures by increasing the quality of public healthcare and extending healthcare to more recipients.",
"corpus_id": 210926344,
"score": 1,
"title": "Technical efficiency of provincial public healthcare in South Africa"
} |
{
"abstract": "Entanglement in fishing gear occurs in endangered manatees and may result in serious injury or death. Such incidents may happen more frequently at night when the animal’s visual sense is limited. In this study, we examined the differences in behavioral response of captive manatees to a net obstacle during light (day) and dark (night) periods. We used a plastic net as the obstacle, and video-recorded the manatees’ behavior. The experiments showed that captive manatees avoided the obstacle during the day more frequently than at night, which suggests that the manatees can perceive the obstacle more readily during light periods. However, there was no difference in the frequency of bumping or actively touching the obstacle between light and dark periods. The results suggest that the manatees can recognize the net obstacle even at night by purposely touching it, but they avoid it less frequently, and that entanglement during light periods may occur during accidental bumping, rather than from a failure to recognize it altogether.",
"corpus_id": 2174142,
"title": "The differences in behavioral responses to a net obstacle between day and night in captive manatees; does entanglement happen at night?"
} | {
"abstract": "Documenting the extent of fishery gear interactions is critical to wildlife conservation efforts, especially for reducing entanglements and ingestion. This study summarizes fishery gear interactions involving common bottlenose dolphins (Tursiops truncatus truncatus), Florida manatees (Trichechus manatus latirostris) and sea turtles: loggerhead (Caretta caretta), green turtle (Chelonia mydas), leatherback (Dermochelys coriacea), hawksbill (Eretmochelys imbricata), Kemp's ridley (Lepidochelys kempii), and olive ridley (Lepidochelys olivacea) stranding in Florida waters during 1997-2009. Fishery gear interactions for all species combined were 75.3% hook and line, 18.2% trap pot gear, 4.8% fishing nets, and 1.7% in multiple gears. Total reported fishery gear cases increased over time for dolphins (p<0.05), manatees (p<0.01), loggerheads (p<0.05) and green sea turtles (p<0.05). The proportion of net interaction strandings relative to total strandings for loggerhead sea turtles increased (p<0.05). Additionally, life stage and sex patterns were examined, fishery gear interaction hotspots were identified and generalized linear regression modeling was conducted.",
"corpus_id": 21386881,
"title": "Fishery gear interactions from stranded bottlenose dolphins, Florida manatees and sea turtles in Florida, U.S.A."
} | {
"abstract": "This study is a rare example of “the ecosystem approach to management” that has been carried out for the purpose of providing practical support to decision-makers in managing a Site of National Interest (SIN) where activities such as fishing, aquaculture and swimming are restricted. Benthic ecosystem functioning was assessed to verify whether it would be possible to exclude the less contaminated part from the SIN and its legislative constraints. At five macrosites subjected to diversified industrialization and anthropization, we evaluated the structural characteristics of the sediments, both heterotrophic and phototrophic communities, and the main processes of production, transformation and consumption of organic matter at seven stations, plus a reference site. Along the north-eastern boundary of the bay, the port, shipbuilding and iron foundry areas, characterised by high levels of contaminants, low macrozoobenthic diversity, major organic contents (up to 51.1 mgC g-1) and higher numbers of hydrocarbon degrading bacteria (up to 5,464 MPN gdry-1), differed significantly (RANOSIM = 0.463, p = 2.9%) from the other areas (stations). Oxygen consumption (-15.221.59 mgC m-2) prevailed over primary production and the trophic state was net heterotrophic. In contrast, on the other side of the harbour (residential area/centre bay), contamination levels were below the legal limits and both the microalgal and macrobenthic communities displayed higher biodiversity. Higher macrofaunal abundances (up to 753174.7 ind.m-2), primary production rates (up to 58.608.41 mgC m-2) and exoenzymatic activities were estimated. nMDS and SIMPROF analyses performed on benthic communities significantly separated the most contaminated stations from the other ones. Overall, by applying this holistic approach, a better environmental situation was highlighted along the southern boundary of the bay and according to these results this part of the bay could be excluded from the SIN. 
However, further sampling is required along a finer sampling grid in the less contaminated side of the port in order to confirm these first results. Our work is one of the first case studies where such an ecosystem approach has been applied to a port area, in order to provide practical support to decision-makers involved in the spatial planning of harbour zones.",
"corpus_id": 11619448,
"score": 1,
"title": "The Port of Trieste (Northern Adriatic Sea)—A Case Study of the “Ecosystem Approach to Management”"
} |