| query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "The author considers a flowshop of machines with no intermediate storage between two successive machines. The problem is to find the permutation of machines which will maximize the throughput. Based on the method of branching, it is shown that for the case of four machines the optimal order is to arrange the two slowest machines in the first and last positions and the two fastest machines in the middle positions and the fastest one next to the slowest one, provided the processing times on the machines are comparable in the sense of stochastic ordering.",
"corpus_id": 837277,
"title": "A note on optimal order of M machines in tandem with blocking"
} | {
"abstract": "A tandem stage queuing system has n stages A1, A2, …, An in series, where stage Ai consists of mi parallel channels each having the same constant service time si. In addition, one of the stages may be a single-channel stage Aj having variable service times that are always ≧ si/mi for i ≠ j. Customers arrive at the first stage and proceed through the stages, in order, on a first-come, first-served, unlimited queuing basis. For any sequence of customer arrival times, the time spent in the system by each customer is independent of the order of the stages. It follows that certain steady-state and time- and customer-dependent problems for the system and for its stages, involving waiting time, queue length, and busy period, can for any interarrival distribution be reduced to corresponding problems for a system of fewer stages, possibly a single stage.",
"corpus_id": 62640043,
"title": "Reduction Methods for Tandem Queuing Systems"
} | {
"abstract": "In an era of increasing energy production from renewable sources, the demand for components for renewable energy systems has dramatically increased. Consequently, managers and investors are interested in knowing whether a company associated with the semiconductor and related device manufacturing sector, especially the photovoltaic (PV) systems manufacturers, is a money-making business. We apply a new approach that extends prior research by applying decision trees (DTs) to identify ratios (i.e., indicators), which discriminate between companies within the sector that do (designated as “green”) and do not (“red”) produce elements of PV systems. Our results indicate that on the basis of selected ratios, green companies can be distinguished from the red companies without an in-depth analysis of the product portfolio. We also find that green companies, especially those operating in China, are characterized by lower financial performance, thus providing a negative (and unexpected) answer to the question posed in the title.",
"corpus_id": 210876471,
"score": 1,
"title": "Is Investing in Companies Manufacturing Solar Components a Lucrative Business? A Decision Tree Based Analysis"
} |
{
"abstract": "The demand upon irrigated agriculture to modernize to improve productivity is increasing. This paper defines some of the key challenges the irrigation experts face in responding to this societal demand on irrigation, with a focus on water delivery. Histories of management and design in irrigation engineering still influence current perceptions of irrigation, although new ideas from research have influenced the ways irrigation management has developed in the last 30 years. Currently, modernization of irrigation is understood as transforming irrigation management to enable it to serve the demands of farmers. The position of irrigation experts changed from being the main person responsible for an entire production system to being a service‐oriented manager. Taking into account farmer demand and responses to water deliveries is still a challenge for many agencies. The paper argues that, even though farmer interventions are not always appropriate, farmer interventions are to be taken as standard in irrigation. Engineers should ensure that systems deliver what is asked for, through selecting physical components within a clearly defined operational strategy. Such a service‐oriented approach demands irrigation experts with high technical qualifications in hydraulics and hydrology. Copyright © 2009 John Wiley & Sons, Ltd.",
"corpus_id": 154984338,
"title": "From central control to service delivery? reflections on irrigation management and expertise"
} | {
"abstract": "‘Water control’ is central to the political economy of water distribution in large‐scale irrigation in India. The changes in water distribution, irrigation technology, and agrarian development ‐through the introduction of the ‘block system’, technical devices called ‘modules’ and volumetric water pricing ‐ in the Nira Left Bank Canal (Bombay Presidency) in the period 1900–40, are discussed to show the relationship of the three dimensions of water control: technical, managerial and socio‐political. This analysis points to the crucial, but contradictory role of the state in triggering processes of agricultural modernisation through intervention in water management. The debate on the ‘success’ of the block system continues to the present day, but little progress has been made in designing solutions for inequality in water distribution. The article suggests that liberalisation policies create political and institutional space for changing accountability relations, and agricultural price regimes relevant to wa...",
"corpus_id": 154250281,
"title": "Modules for modernisation: Colonial irrigation in India and the technological dimension of agrarian change"
} | {
"abstract": "In the first paper of this series, the Loma de Quinto Irrigation District (LQD) was characterised, and water use was assessed. In this work, the analysis of the LQD is completed with field irrigation evaluations, solid-set sprinkler irrigation simulations and irrigation scheduling for optimal crop yield. The results of the irrigation evaluations indicated that the average Christiansen coefficient of uniformity (CU) for solid-sets, centre-pivots and linear-moves was 68.0, 75.5, and 80.0%, respectively. In solid-sets CU was severely reduced by wind speed. However, in centre-pivots and linear-moves CU was higher in evaluations with wind speeds between 2 and 6 m s-1 than under calm conditions. The evaluation data set was used to validate a ballistic solid-set sprinkler irrigation simulation model. The performance variables used for model validation were CU and the potential application efficiency of the low quarter (PAElq). Both variables were adequately predicted in the range of the observed values. The model was used to extend the evaluation results to all the solid-set plots in the LQD. CU maps were produced for different wind speeds and operating pressures. These maps can be used to identify plots with low irrigation performance. The effect of irrigation scheduling on crop yield and net benefit was analysed using the CropWat simulation model. Simulations of the 1997 irrigation practices performed on a limited number of plots detected a 12% decrease in crop yield due to deficit irrigation and/or large irrigation intervals. The introduction of an optimal irrigation schedule (avoiding yield reductions) would imply increasing the alfalfa seasonal irrigation depth by 101 mm, and applying light, frequent irrigation events. Due to labour scarcity in the LQD, the implementation of the optimal schedule would require a high degree of irrigation automation, which is currently unavailable. 
Taking into consideration the value of the additional yield and the costs of the extra irrigation water depth and the automation devices, the resulting net benefit would be 50 ha-1. The purpose of this analysis of the LQD is to contribute to the diagnostic analysis phase of an incipient Management Improvement Program at the LQD. In order to complete this phase, an interdisciplinary committee will perform a study not just on irrigation but on a wide scope of irrigated agriculture in the LQD. \n\nAuthor Keywords: Sprinkler irrigation; Irrigation uniformity; Simulation model",
"corpus_id": 152955965,
"score": 2,
"title": "Analysis of an irrigation district in northeastern Spain: II. Irrigation evaluation, simulation and scheduling"
} |
{
"abstract": "OBJECTIVE\nThis article presents the results of laser therapy in crystal (hydroxyapatite, calcium pyrophosphate, and urates) deposition-induced arthritis in rats and the clinical applications in humans.\n\n\nBACKGROUND DATA\nMicrocrystalline arthropathies are prevalent among geriatric patients, who are more vulnerable to the side effects of drugs. The effectiveness of laser therapy for pain relief, free of side effects, has been reported in painful conditions.\n\n\nMETHODS\nTwo milligrams of each of the above-mentioned crystals was injected in both joints of the back limbs in three groups of rats; these groups were then treated with laser irradiation. Three other groups received no treatment after the injections. We determined the plasmatic levels of inflammatory markers (fibrinogen, prostaglandin E2, and TNF(alpha)), tissues (prostaglandin E(2)) and conducted anatomopathological studies. Twenty-five patients with acute gout arthritis were randomized into two groups and treated over 5 days: group A, diclofenac 75 mg orally, twice a day; and group B, laser irradiation once a day. Forty-nine patients with knee chronic pyrophosphate arthropathy were randomized into two groups and treated over 21 days; group A, diclofenac 50 mg orally, twice a day; and group B, laser irradiation once a day. Thirty patients with shoulder chronic hydroxyapatite arthropathy were randomized into two groups and treated over 21 days; group A, diclofenac 50 mg orally, twice a day; and group B, laser irradiation once a day.\n\n\nRESULTS\nFibrinogen, prostaglandin E(2), and TNF(alpha) concentrations in the rats injected with crystals and treated with laser decreased significantly as compared with the groups injected with crystals without treatment. Both laser therapy and diclofenac achieved rapid pain relief in patients with acute gouty arthritis without significant differences in efficacy. 
Laser therapy was more effective than diclofenac in patients with chronic pyrophosphate arthropathy and in patients with chronic apatite deposition disease.\n\n\nCONCLUSION\nLaser therapy represents an effective treatment in the therapeutic arsenal of microcrystalline arthropathies.",
"corpus_id": 1848778,
"title": "Photobiomodulation of pain and inflammation in microcrystalline arthropathies: experimental and clinical results."
} | {
"abstract": "The aim of this study was to analyze the effects of low-level laser therapy (LLLT) on the prevention of cartilage damage after anterior cruciate ligament transection (ACLT) in knees of rats. Thirty male rats (Wistar) were distributed into three groups (n = 10 each): injured control group (CG); injured laser-treated group at 10 J/cm2 (L10), and injured laser-treated group at 50 J/cm2 (L50). Laser treatment started immediately after the surgery and was performed for 15 sessions. An 808 nm laser, at 10 and 50 J/cm2, was used. To evaluate the effects of LLLT, qualitative and semi-quantitative histological, morphometric, and immunohistochemistry analyses were performed. Initial signs of tissue degradation were observed in CG. Interestingly, laser-treated animals presented a better tissue organization, especially at the fluence of 10 J/cm2. Furthermore, laser phototherapy was able to modulate some of the aspects related to the degenerative process, such as the prevention of proteoglycan loss and the increase in cartilage area. However, LLLT was not able to modulate chondrocyte proliferation and the immunoexpression of markers related to the inflammatory process (IL-1 and MMP-13). This study showed that the 808 nm laser, at both fluences, prevented features related to the articular degenerative process in the knees of rats after ACLT.",
"corpus_id": 11519461,
"title": "Low-level laser therapy prevents degenerative morphological changes in an experimental model of anterior cruciate ligament transection in rats"
} | {
"abstract": "We investigate the effects of nanoparticles on molecular solar thermal energy storage systems and how one can tune chemical reactivities of a molecular photo- and thermoswitch by changing the nanoparticles. We have selected the dihydroazulene/vinylheptafulvene system to illustrate the effects of the nanoparticles on the chemical reactivities of the molecular photo- and thermoswitch. We have utilized the following nanoparticles: a TiO2 nanoparticle along with nanoparticles of gold, silver and copper. We calculate the rate constants for the release of the thermal energy utilizing a QM/MM method coupled to a transition state method. The molecular systems are described by density functional theory whereas the nanoparticles are given by molecular mechanics including electrostatic and polarization dynamics. In order to investigate whether the significant stabilization of the transition state provided by the nanoparticles is general to the DHA/VHF system, we calculated the transition state rate constant of the parent- and 3-amino-substituted-DHA/VHF systems at 298.15 K in the four different orientations and at the three different separations. We observe that the transition state rate constant of the parent system is only increased as the cyano groups are oriented towards the nanoparticle while the presence of the nanoparticle actually impedes the reactions using the three other orientations. On the other hand, for the substituted system the nanoparticle generally leads to a significant increase in the rate of the reaction. We find that the nanoparticles can have a substantial effect on the calculated rate constants. We observe, depending on the nanoparticle and the molecular orientation, increases of the rate constants by a factor of 10^6. This illustrates the prospects of utilizing nanoparticles for controlling the release of the stored thermal energy.",
"corpus_id": 235299136,
"score": 0,
"title": "Promoting the thermal back reaction of vinylheptafulvene to dihydroazulene by physisorption on nanoparticles."
} |
{
"abstract": "P. BRAFFORT and D. FELDMAN, Some recursion-theoretic uses of the \"dequote\" operator. It is well known that our data-processing systems are (approximate) realizations of formal constructive systems; this explains the existing intense interaction between such topics as recursive hierarchies and programming language primitives. In high level languages, the use of \"DO FOR\" or \"DO WHILE\" loops has been advocated against the more standard \"GO TO\" primitives. The main argument was the possibility of a clear structuring of programmes, especially useful for correctness-proving attempts. On the other hand, the question has been raised of the exact power of a new primitive operator in IVERSON's programming language and notational system APL. This operator, \"⍎\" (read \"dequote\"), has the effect of removing the \"quotes\" which delimit a string of characters, and then asking for execution of the resulting expression. For example, ⍎'3 + 4' yields 7. We consider this operator as a very flexible tool for investigations of the logical properties of programmes. Our first results are the following: (a) If we denote by Ack(n) the nth function (which means Ack(1) = +, Ack(2) = ×, etc.), we show that, if Ack(n) is given, Ack(n + 1) can be defined in APL notation using neither the \"GO TO\" operator nor the recursion scheme. Ack(n + 2) can be defined in the same way plus the use of one dequote operator. (b) Ack(n) can be defined from addition only under the same constraints as in (a), but using two dequote operators. (c) Using (b) and Grzegorczyk's result, it is clear that all primitive recursive functions are obtained. We show that, under the same constraints, one also obtains all (general) recursive functions. JANE BRIDGE, Bachmann operations, normal functionals and Mahlo ordinals. 
Given any normal function f, a hierarchy of derivatives of f, {fβ}β∈B, can be constructed using application, iteration and diagonalization of the fixed point functional. Hierarchies of such functions with suitable indexing sets B are used to construct systems of ordinal notations (cf. Bachmann [Vierteljahrsschrift der Naturforschenden Gesellschaft in Zürich, vol. 95 (1950)], Isles [Intuitionism and proof theory, North-Holland, Amsterdam, 1970]). The method can be extended using stronger normal functionals in place of the fixed point functional. An adaptation of a method of Gaifman [Israel Journal of Mathematics, vol. 5 (1967)] leads to the following definition of two ω-sequences of functionals ⟨Hi⟩i<ω, ⟨Ki⟩i<ω, where each Ki is normal. Let H0 = K0 = the fixed point functional. Suppose Hi [Ki] has been defined and f : ON → ON is an increasing ordinal function (i.o.f.), not necessarily normal, and α is a limit ordinal. ℛg denotes the range of the function g. The family of functions Qi(α,f) [Pi(α,f)] is the least Y such that (i) f ∈ Y; (ii) g ∈ Y ⇒ Hi(g) ∈ Y [g ∈ Y ⇒ Ki(g) ∈ Y]; (iii) if ⟨gξ⟩ξ<β is a decreasing sequence of i.o.f.'s in Y of length β < α, then Iβg, the i.o.f. which enumerates ⋂ξ<β ℛgξ, is in Y; (iv) if ⟨gξ⟩ξ<α is a decreasing sequence of i.o.f.'s in Y such that, for limit ξ < α, gξ = Iξg, then Dg, the i.o.f. which enumerates {γ : γ ∈ ℛgξ} ∪ ⋂ξ<α ℛgξ, is in Y. Hi+1(f) is the i.o.f. enumerating {σ : σ ∈ ℛg for each g ∈ Qi(α,f)}. [Ki+1(f) is the normal function enumerating the closure in the order topology of {σ : σ ∈ ℛg for each g ∈ Pi(α,f)}.] The functionals ⟨Ai⟩, which are closely related to Isles's functionals ⟨θi⟩ [this JOURNAL, vol. 36 (1971)], although at first sight very similar to the functionals Hi, are for normal functions f no stronger than H1 iterated finitely many times. THEOREM 1 (GAIFMAN). If f is a normal function, (i) H1(f) enumerates the regular fixed points of f, (ii) H1(f) enumerates the fixed points of f that are pk-numbers. 
Using this theorem we can show THEOREM 2. If f is a normal function, (i) K1(f) enumerates the closure of the class of regular fixed points of f, (ii) Ki+2(f) enumerates the closure of the class of pi-fixed points of f. M. CHAMBERS and H. SAYEKI, Some axioms concerning the power of the continuum. Let A, B be ordered sets. For functions f, g of A into B, f < g iff there is α ∈ A such that f(β) < g(β) for all β > α; f ≪ g iff f(α) < g(α) for all α ∈ A. R denotes the set of all real numbers with the",
"corpus_id": 3048782,
"title": "Meeting of the Association for Symbolic Logic, Orléans, France, 1972"
} | {
"abstract": "Abstract This paper explains recent work in proof theory from a neglected point of view. Proofs and their representations by formal derivations are treated as principal objects of study, not as mere tools for analyzing the consequence relation. Though the paper is principally expository it also contains some material not developed in the literature. In particular, adequacy conditions on criteria for the identity of proofs (in § 1c), and a reformulation of Godeľs second theorem in terms of the notion of canonical representation (in § 1d); the use of normalization, instead of normal form, theorems for a direct proof of closure under Church's rule of the theory of species [in § 2a(ii)] and the useless-ness of bar recursive functionals for (functional) interpretations of systems containing Church's thesis [in §2b(iii)]; the use of ordinal structures in a quantifier-free formulation of transfinite induction (in § 3); the irrelevance of axioms of choice to the explicit realizability of existential theorems both for classical and for Heyting's logical rules (in § 4c) and some new uses of Heyting's rules for analyzing the indefinite cumulative hierarchy of sets (in § 4d); a semantics for equational calculi suitable when terms are interpreted as rules for computation [in Appl. Ia(iii)], and, above all, an analysis of formalist semantics and its relation to realizability interpretations (in App. Ic). A less technical account of the present point of view is in [21].",
"corpus_id": 115470917,
"title": "A Survey of Proof Theory II"
} | {
"abstract": "In this note we show how to construct some simply-configured N-state binary Turing machines that will start on a blank tape and eventually halt after printing a very large number of ones. The number of ones produced by these machines can be expressed analytically in terms of a functional difference equation. The latter expression furnishes the best lower bound presently known for Rado's noncomputable function, Σ(N), when N ≫ 5.",
"corpus_id": 206585418,
"score": 2,
"title": "A lower bound on Rado's sigma function for binary Turing machines"
} |
{
"abstract": "BACKGROUND\nSepsis is the leading cause of acute renal failure. Intermittent hemodialysis (IHD) is a common treatment for patients with acute renal failure. However, standard hemodialysis membranes achieve only little diffusive removal of circulating cytokines. Modified membranes may enable both successful IHD treatment and simultaneous diffusive cytokine removal.\n\n\nSTUDY DESIGN\nDouble-blind, crossover, randomized, controlled, phase 1 trial.\n\n\nSETTING & PARTICIPANTS\nTertiary intensive care unit. 10 septic patients with acute renal failure according to RIFLE class F.\n\n\nINTERVENTION\nEach patient was treated with 4 hours of high-cutoff (HCO)-IHD and 4 hours of high-flux (HF)-IHD.\n\n\nOUTCOMES & MEASUREMENTS\nWe chose relative change in plasma interleukin 6 (IL-6) concentrations from baseline to 4 hours as the primary outcome for effective cytokine removal. We measured plasma and effluent concentrations of cytokines (IL-6, IL-8, IL-10, and IL-18) and albumin.\n\n\nRESULTS\nMedian age was 53 years (25(th) to 75(th) percentiles, 43 to 71 years). Both treatments achieved equal control of uremia. Four hours of HCO-IHD accomplished a greater decrease in plasma IL-6 levels (-30.3%) than 4 hours of HF-IHD (1.1%; P = 0.05). HCO-IHD, but not HF-IHD, achieved substantial diffusive clearance of several cytokines (IL-6, 14.1 mL/min; IL-8, 75.2 mL/min; and IL-10, 25.5 mL/min). Such clearance also was associated with greater relative decreases in plasma IL-8 and IL-10 levels in favor of HCO-IHD (P = 0.02, P = 0.04). We found significantly greater relative changes from prefilter to postfilter plasma IL-6, IL-8, and IL-10 values in favor of HCO-IHD (P = 0.02, P = 0.01, P < 0.01). 
During HCO-IHD, cumulative albumin loss into the effluent was 7.7 g (25(th) to 75(th) percentiles, 4.8 to 19.6) versus less than 1.0 g for HF-IHD (P < 0.01).\n\n\nLIMITATIONS\nSmall phase 1 trial.\n\n\nCONCLUSION\nIn septic patients with acute renal failure, HCO-IHD achieved simultaneous uremic control and diffusive cytokine clearances and a greater relative decrease in plasma cytokine concentrations than standard HF-IHD.",
"corpus_id": 1094050,
"title": "Hemodialysis membrane with a high-molecular-weight cutoff and cytokine levels in sepsis complicated by acute renal failure: a phase 1 randomized trial."
} | {
"abstract": "Patients with septic shock by multidrug resistant microorganisms (MDR) are a specific sepsis population with a high mortality risk. The exposure to an initial inappropriate empiric antibiotic therapy has been considered responsible for the increased mortality, although other factors such as immune-paralysis seem to play a pivotal role. Therefore, beyond conventional early antibiotic therapy and fluid resuscitation, this population may benefit from the use of alternative strategies aimed at supporting the immune system. In this review we present an overview of the relationship between MDR infections and immune response and focus on the rationale and the clinical data available on the possible adjunctive immunotherapies, including blood purification techniques and different pharmacological approaches.",
"corpus_id": 296517,
"title": "The Role of Adjunctive Therapies in Septic Shock by Gram Negative MDR/XDR Infections"
} | {
"abstract": "This study assessed the risk of haematological, renal and hepatic toxicity associated with amphotericin B lipid complex (ABLC; Abelcet) in a multicentre, open-label, non-comparative study of 93 patients from 17 different hospitals who received ABLC because of proven or suspected systemic fungal infection or leishmaniasis. Most (66%) patients had onco-haematological diseases. Optimum treatment with ABLC comprised a slow (2-h) infusion dose of 5 mg/kg/day for a minimum period of 14 days. Biochemical and haematological parameters were measured pre-, during and post-treatment. In the overall patient group, the mean serum creatinine concentration was similar pre- and post-study (1.00 +/- 1.14 mg/dL vs. 1.20 +/- 1.19 mg/dL; p > 0.05). There were no significant changes pre- and post-treatment in concentrations of haemoglobin, potassium, transaminases and bilirubin. There was no significant correlation between the dose administered and the concentrations of serum creatinine (Spearmann 0.22). There was no greater nephrotoxicity in the patients with previous renal failure, or in those who had received amphotericin B previously. There were serious adverse events in five patients, but other alternative causes that could explain these events were present in three of these patients. Fevers or chills were experienced by 23% of the patients during the ABLC infusion, but only in one case did this necessitate the suspension of treatment. It was concluded that ABLC is a drug with low nephrotoxicity, even when administered to patients with pre-existing renal insufficiency. Adverse events were generally slight or moderate, and were managed easily with appropriate pre-medication.",
"corpus_id": 7853224,
"score": 1,
"title": "Assessment of nephrotoxicity in patients receiving amphotericin B lipid complex: a pharmacosurveillance study in Spain."
} |
{
"abstract": "Margolis and Schein in 2001 reported a 65-year-old Chinese-American lady with the clinical features of acute cholecystitis.1 During cholecystectomy and subsequent CBD exploration they found a mucosal web of the CHD. They also reviewed the cases of extra-hepatic bile duct webs and other similar intra-luminal causes of obstructive jaundice reported in literature till then. A search of the current literature did not reveal any pediatric patient to have been reported with an isolated mucosal web in the CHD leading to obstructive jaundice. Diagnosis may be missed on ultrasonography, but with MRCP, indirect evidence like proximal IHBRD with an abrupt cut off of the dilated duct at an unexpected location may suggest webs. CBD exploration would be the most effective way to pick the webs, as in our case. Intra-operative cholangiography must also be used when the findings are inconclusive. During exploration, care should be taken since these thin webs can be inadvertently punctured or torn even with minimal instrumentation or probing, which may lead to a missed diagnosis. Management of the web requires transecting the involved segment of extra-hepatic bile duct containing the web, followed by a bilio-enteric anastomosis. Simple excision and incision plus dilatation have also been described, but the long term results of these procedures are not known.1 We removed the segment of CHD containing the web along with the distal collapsed CBD and ligated the lower end. A hepatico-jejunostomy was done to restore bilio-enteric continuity and Ladd’s procedure was performed, given the associated malrotation of the midgut. A rare cause of obstructive jaundice in the pediatric age group is reported. The importance of reporting this case is to highlight the significance of suspecting these rare causes of obstructive jaundice which can be easily corrected surgically. When exploring the CBD, the surgeon must be cautious as the webs can easily be damaged and missed. 
Excision of the segment of bile duct and bilio-enteric anastomosis is generally curative. Prompt relief of the obstruction of bile flow also helps in avoiding morbidity. RAVI PATCHARU, ANJAN KUMAR DHUA, KANIKA SHARMA, ROHAN MALIK, MANISHA JANA, VEERESHWAR BHATNAGAR",
"corpus_id": 155364146,
"title": "Gastric Adenocarcinoma with Yolk Sac Differentiation, Fungal Super Infection and A Large Solitary Liver Metastasis - A Histologic Conundrum"
} | {
"abstract": "Alpha-fetoprotein (AFP)-producing hepatoid adenocarcinoma of the stomach is a rare and recently discovered entity. We report an unusual combination of hepatocellular carcinoma and hepatoid adenocarcinoma of the stomach with multiple liver metastases. The patient, a 62-year-old Japanese man, was clinically diagnosed as having hepatocellular carcinoma because of the presence of liver tumors, a markedly elevated serum AFP level, and a positive hepatitis C virus (HCV) antibody titer. Autopsy revealed multiple tumors in the liver; one was a primary hepatocellular carcinoma without metastasis, and the others were metastases from latent hepatoid adenocarcinoma of the stomach. In the hepatocellular carcinoma, bile production was observed although the tumor was immunohistochemically negative for AFP. On the other hand, both the primary gastric and metastatic liver hepatoid adenocarcinomas were positive for AFP. Therefore, hepatoid adenocarcinoma of the stomach was responsible for the excessive production of AFP and was the cause of death.",
"corpus_id": 22720800,
"title": "Primary hepatocellular carcinoma and hepatoid adenocarcinoma of the stomach with liver metastasis: an unusual association."
} | {
"abstract": "During a 10-yr-period, 24 cases of alpha-fetoprotein-producing gastric cancer were experienced in our department. The mean age was 62.5 yr, and the sex ratio of males to females was 3:1. Borrmann II and III types of gastric cancer were predominant (83.3%). The prognosis was dismal. Most of the patients, including three radically operated cases of early gastric cancer, died from liver metastasis within 2 yr. The 1-, 3-, and 6-yr survival rates were 37.5%, 8.3%, and 8.3%, respectively, for all cases and 75.0%, 25.0%, and 25.0% for radically operated cases. The incidences of synchronous and metachronous liver metastasis were 31.8% and 40.9%, significantly higher than the incidences of AFP-negative gastric cancer (p less than 0.91). Despite radical gastrectomy, metachronous liver metastasis occurred in 75.0% of the cases. Two radical hepatic resections, including extended right lobectomy, were performed in one patient with early gastric cancer who had repeated metachronous liver metastasis. However, the tumor recurred immediately. Apparently, radical gastrectomy or hepatic resection alone may not suffice for this particular type of cancer. The methods of treatment and follow-up considered should be different from that for other types of gastric cancer.",
"corpus_id": 23676781,
"score": 2,
"title": "Clinicopathologic features and long-term results of alpha-fetoprotein-producing gastric cancer."
} |
{
"abstract": "In the present study, the production of strontium metal from its oxide was studied under a pressure of 1-5 mbar by a metallothermic process. In the experiments, SrO of 99% purity was used. The effects of Al powder addition (100, 200, 300% of the stoichiometric ratio) and of time on the recovery of metallic strontium were investigated. The effect of BaO addition (100%, 200%, 300%) was also investigated. The final residues were examined for their chemical composition. XRD, AAS and flame photometer devices were used for chemical analysis. A strontium metal recovery of more than 90% was observed.",
"corpus_id": 164214902,
"title": "Strontium Production from Strontium Oxide Using Vacuum Aluminothermic Process"
} | {
"abstract": "Vacuum metallothermic reduction is the main method to produce magnesium (Mg) metal from Mg-containing ores. In the process, a mixture of ore and reductant material is charged in retorts, which provide a vacuum atmosphere of about 100 Pa (1 mbar) in the process. The accounts in the literature stated that the process is viable for the metallothermic reduction of strontium (Sr) as well. The main problem for the reduction of the Sr is the high affinity of Sr to oxygen. Moreover, the Sr is an important metal for Mg alloying. In the present study, production of Mg-Sr alloy from oxide raw materials was studied through the vacuum metallothermic process. Thus, it was aimed to develop a simple and commercially viable method to directly produce Mg-Sr alloys, and the Sr content would be in the form of Mg17Sr2 intermetallic alloy. In the experiments, calcined dolomite ore and SrO were used as raw materials. Investigated parameters were reductant type (FeSi and Al), process temperature, and process time on the recovery ratios of Mg and Sr in produced alloys. The highest recovery ratios, 97.1 pct for the Mg and 81.2 pct for the Sr, were obtained in the experiment conducted at 1250 °C for 480 minutes. The reductant material was the Al, and the Sr-Al addition ratio was 5 wt pct in the experiment.",
"corpus_id": 214715521,
"title": "Production of Magnesium-Strontium Alloys Through Vacuum Metallothermic Process"
} | {
"abstract": "We study the kinetic and thermodynamic properties of a bilayer patterning process induced by an electrohydrodynamic instability. We construct a parametric map, depending on the dielectric contrast and ratio of two film thicknesses, that describes the conditions under which hexagonally ordered pillars or holes can form when the viscosity of the upper layer is negligible. The distinct formation of arrays of pillars and holes results from the nonlinear interactions among different modes and, hence, is governed by the kinetics. The dynamic structures of pillars or holes continue to evolve to decrease the system’s free energy. During this evolution, individual pillars or holes coalesce in a coarsening process until a thermodynamically stable state is reached in the form of a localized pillar, hole, or a roll structure. The selection of the pillar or hole at the final steady state represents a thermodynamic preference that can be predicted qualitatively without solving the fully nonlinear partial differential equation.",
"corpus_id": 97277032,
"score": 1,
"title": "Electrohydrodynamic Instability of Dielectric Bilayers: Kinetics and Thermodynamics"
} |
{
"abstract": "Hydrogenated amorphous Si thin films were prepared by plasma-enhanced chemical vapor deposition technique. As-deposited samples were thermally annealed above 800°C to obtain nanocrystalline Si and the microstructures and carrier transport behaviors were evaluated. It was found that the crystallization of amorphous silicon can be improved by increasing the annealing temperature. Temperature-dependent Hall measurements were performed and the Hall mobility of the thermally annealed sample is 0.86 cm2/V·s, which is one order of magnitude larger than that of the as-deposited amorphous Si film. The nanocrystalline Si films exhibit a thermally activated electrical transport above room temperature.",
"corpus_id": 14484533,
"title": "Microstructures and carrier transport behaviors of nanocrystalline silicon thin films"
} | {
"abstract": "Hydrogenated nanocrystalline silicon (nc-Si:H) films were deposited by using 13.56MHz plasma-enhanced chemical vapor deposition at 260°C by means of a silane (SiH4) plasma heavily diluted with hydrogen (H2). The high-quality nc-Si:H film showed an oxygen concentration (CO) of ∼1.5×1017at.∕cm3 and a dark conductivity (σd) of ∼10−6S∕cm, while the Raman crystalline volume fraction (Xc) was over 80%. Top-gate nc-Si:H thin-film transistors employing an optimized ∼100nm nc-Si:H channel layer exhibited a field-effect mobility (μFE) of ∼150cm2∕Vs, a threshold voltage (VT) of ∼2V, a subthreshold slope (S) of ∼0.25V∕dec, and an ON∕OFF current ratio of ∼106.",
"corpus_id": 123172967,
"title": "High-mobility nanocrystalline silicon thin-film transistors fabricated by plasma-enhanced chemical vapor deposition"
} | {
"abstract": "A description is given of Quill, an extensible document creation system that is organized as a collection of cooperating editors, each with its own set of objects and commands. The objects implemented by the various editors can be nested without restriction, forming a hierarchical document that can be described by the Standard Generalized Markup Language (SGML). The user is presented with a 'what you see is what you get' (WYSIWYG) view of the document in which the various objects can be directly manipulated on the display screen. A system shell ensures consistency among the editors and coordinates their foreground and background activities to ensure keystroke responsiveness. Each Quill editor is a programming object that communicates with the shell and with other editors by means of a standard set of procedures. A rigorous specification of the shell/editor interface enables additional editors to be added to the Quill system without affecting the existing editors.<<ETX>>",
"corpus_id": 22393113,
"score": 0,
"title": "Quill: an extensible system for editing documents of mixed type"
} |
{
"abstract": "We present here the first structural report derived from breast cancer metastasis suppressor 1 (BRMS1), a member of the metastasis suppressor protein group, which, during recent years, have drawn much attention since they suppress metastasis without affecting the growth of the primary tumor. The relevance of the predicted N-terminal coiled coil on the molecular recognition of some of the BRMS1 partners, on its cellular localization and on the role of BRMS1 biological functions such as transcriptional repression prompted us to characterize its three-dimensional structure by X-ray crystallography. The structure of BRMS1 N-terminal region reveals that residues 51-98 form an antiparallel coiled-coil motif and, also, that it has the capability of homo-oligomerizing in a hexameric conformation by forming a trimer of coiled-coil dimers. We have also performed hydrodynamic experiments that strongly supported the prevalence in solution of this quaternary structure for BRMS1(51-98). This work explores the structural features of BRMS1 N-terminal region to help clarify the role of this area in the context of the full-length protein. Our crystallographic and biophysical results suggest that the biological function of BRMS1 may be affected by its ability to promote molecular clustering through its N-terminal coiled-coil region.",
"corpus_id": 7599122,
"title": "The structure of BRMS1 nuclear export signal and SNX6 interacting region reveals a hexamer formed by antiparallel coiled coils."
} | {
"abstract": "The BRMS1 metastasis suppressor was recently shown to negatively regulate NF-κB signaling and down regulate NF-κB-dependent uPA expression. Here we confirm that BRMS1 expression correlates with reduced NF-κB DNA binding activity in independently derived human melanoma C8161.9 cells stably expressing BRMS1. We show that knockdown of BRMS1 expression in these cells using small interfering RNA (siRNA) leads to the reactivation of NF-κB DNA binding activity and re-expression of uPA. Further, we confirm that BRMS1 expression does not alter IKKβ kinase activity suggesting that BRMS1-dependent uPA regulation does not occur through inhibition of the classical upstream activators of NF-κB. BRMS1 has been implicated as a corepressor of HDAC1 and consistent with this, we show that BRMS1 promotes HDAC1 recruitment to the NF-κB binding site of the uPA promoter and is associated with reduced H3 acetylation. We also confirm that BRMS1 expression stimulates disassociation of p65 from the NF-κB binding site of the uPA promoter consistent with its reduced DNA binding activity. These data suggest that BRMS1 recruits HDAC1 to the NF-κB binding site of the uPA promoter, modulates histone acetylation of p65 on the uPA promoter, leading to reduced NF-κB binding activity on its consensus sequence, and reduced transactivation of uPA expression.",
"corpus_id": 2199256,
"title": "BRMS1 contributes to the negative regulation of uPA gene expression through recruitment of HDAC1 to the NF-κB binding site of the uPA promoter"
} | {
"abstract": "In this preliminary study technical methodology for kinematic tracking and profiling of wrist carpal bones during unconstrained movements is explored. Heavily under-sampled and fat-saturated 3D Cartesian MRI acquisitions were used to capture temporal frames of the unconstrained moving wrist of 5 healthy subjects. A slab-to-volume point-cloud based registration was then utilized to register the moving volumes to a high-resolution image volume set collected at a neutral resting position. Comprehensive error analyses for different acquisition parameter settings were performed to evaluate the performance limits of several derived kinematic metrics. Computational results suggested that sufficient volume coverage for the dynamic acquisitions was reached when collecting 12 slice-encodes at 2.5mm resolution, which yielded a temporal resolution of 2.57 seconds per volumetric frame. These acquisition parameters resulted in total absolute errors of 1.9°±1.8° (3°±4.6°) in derived rotation angles and 0.3mm±0.47mm (0.72mm±0.8mm) in center-of-mass displacement kinematic profiles within ulnar-radial (flexion-extension) motion. The results of this study have established the feasibility of kinematic metric tracking of unconstrained wrist motion using 4D MRI. Temporal metric profiles derived from ulnar-radial deviation motion demonstrated better performance than those derived from flexion/extension movements. Future work will continue to explore the use of these methods in deriving more complex kinematic metrics and their application to subjects with symptomatic carpal dysfunction.",
"corpus_id": 243847572,
"score": 0,
"title": "Unconstrained Kinematic MRI Tracking of Wrist Carpal Bones"
} |
{
"abstract": "The relationship between luminance (i.e., the photometric intensity of light) and its perception (i.e., sensations of lightness or brightness) has long been a puzzle. In addition to the mystery of why these perceptual qualities do not scale with luminance in any simple way, \"illusions\" such as simultaneous brightness contrast, Mach bands, Craik-O'Brien-Cornsweet edge effects, and the Chubb-Sperling-Solomon illusion have all generated much interest but no generally accepted explanation. The authors review evidence that the full range of this perceptual phenomenology can be rationalized in terms of an empirical theory of vision. The implication of these observations is that perceptions of lightness and brightness are generated according to the probability distributions of the possible sources of luminance values in stimuli that are inevitably ambiguous.",
"corpus_id": 80584,
"title": "Perceiving the intensity of light."
} | {
"abstract": "How cone synapses encode light intensity determines the precision of information transmission at the first synapse on the visual pathway. Although it is known that cone photoreceptors hyperpolarize to light over 4-5 log units of intensity, the relationship between light intensity and transmitter release at the cone synapse has not been determined. Here, we use two-photon microscopy to visualize release of the synaptic vesicle dye FM1-43 from cone terminals in the intact lizard retina, in response to different stimulus light intensities. We then employ electron microscopy to translate these measurements into vesicle release rates. We find that from darkness to bright light, release decreases from 49 to approximately 2 vesicles per 200 ms; therefore, cones compress their 10,000-fold operating range for phototransduction into a 25-fold range for synaptic vesicle release. Tonic release encodes ten distinguishable intensity levels, skewed to most finely represent bright light, assuming release obeys Poisson statistics.",
"corpus_id": 1237463,
"title": "Encoding Light Intensity by the Cone Photoreceptor Synapse"
} | {
"abstract": "Usually, gold and the Dollar are negatively related; when the Dollar price of gold increases, the Dollar depreciates against other currencies. This is intuitively puzzling because it seems to suggest that gold prices are associated with appreciation in other currencies. Why should the Dollar be different? We show here that there is actually no puzzle. The price of gold can be associated with currency depreciation in every country. The Dollar price of gold can be related to Dollar depreciation and the Euro (Pound, Yen) price of gold can be related to Euro (Pound, Yen) depreciation. Indeed, this is usually the case empirically.",
"corpus_id": 93544628,
"score": 0,
"title": "Gold and the Dollar (and the Euro, Pound, and Yen)"
} |
{
"abstract": "Runners with exercise‐induced high blood pressure have recently been reported to exhibit higher levels of cardiac markers, vasoconstrictors, and inflammation. The authors attempted to identify correlations between exercise‐related personal characteristics and the levels of biochemical/cardiac markers in marathon runners in this study. Forty healthy runners were enrolled. Blood samples were taken both before and after finishing a full marathon. The change in each cardiac/biochemical marker over the course of the marathon was determined. All markers were significantly (P<.001) increased immediately after the marathon (creatine kinase‐MB [CK‐MB]: 7.9±2.7 ng/mL, cardiac troponin I (cTnI): 0.06±0.10ng/mL, N‐terminal pro–B‐type natriuretic peptide (NT‐proBNP): 95.7±76.4, endothelin‐1: 2.7±1.16, high‐sensitivity C‐reactive protein [hs‐CRP]: 0.1±0.09, creatine kinase [CK]: 315.7±94.0, lactate dehydrogenase [LDH]: 552.8±130.3) compared with their premarathon values (CK‐MB: 4.3±1.3, cTnI: 0.01±0.003, NT‐proBNP: 27.6±31.1, endothelin‐1: 1.11±0.5, hs‐CRP: 0.06±0.07, CK: 149.2±66.0, LDH: 399±75.1). In middle‐aged marathon runners, factors related to increased blood pressure were correlated with marathon‐induced increases in cTnI, NT‐proBNP, endothelin‐1, and hs‐CRP. These correlations were observed independent of running history, records of finishing, and peak oxygen uptake.",
"corpus_id": 263779,
"title": "Correlation of Cardiac Markers and Biomarkers With Blood Pressure of Middle‐Aged Marathon Runners"
} | {
"abstract": "In order to understand the effect of endurance running on inflammation, it is necessary to quantify the extent to which acute and chronic running affects inflammatory mediators. The aim of this study was to summarize the literature on the effects of endurance running on inflammation mediators. Electronic searches were conducted on PubMED and Science Direct with no limits of date and language of publication. Randomized controlled trials (RCTs) and non-randomized controlled trials (NRCTs) investigating the acute and chronic effects of running on inflammation markers in runners were reviewed by two researchers for eligibility. The modified Downs and Black checklist for the assessment of the methodological quality of studies was subsequently used. Fifty-one studies were finally included. There were no studies with elite athletes. Only two studies were chronic interventions. Results revealed that acute and chronic endurance running may affect anti- and pro-inflammatory markers but methodological differences between studies do not allow comparisons or generalization of the results. The information provided in this systematic review would help practitioners for better designing further studies while providing reference values for a better understanding of inflammatory responses after different running events. Further longitudinal studies are needed to identify the influence of training load parameters on inflammatory markers in runners of different levels and training background.",
"corpus_id": 32418331,
"title": "Acute and Chronic Effects of Endurance Running on Inflammatory Markers: A Systematic Review"
} | {
"abstract": "OBJECTIVES\nTo quantify the potential harm of beta blockers in patients with peripheral arterial disease.\n\n\nMATERIALS AND METHODS\nAll randomised controlled trials (RCTs) comparing beta blockers with placebo for the outcomes of claudication and maximal walking distance and time, calf blood flow, vascular resistance and skin temperature were searched using the Cochrane Controlled Trials Register, PubMed and CINAHL. Trials comparing different types of beta blockers were excluded.\n\n\nRESULTS\nSix RCTs fulfilling the above criteria, with a total of 119 patients, were included. The beta blockers studied were atenolol, propranolol, pindolol and metoprolol. None of the trials showed a statistically significant worsening effect of beta blockers on the outcomes measured. There were no reports of any adverse events with the beta blockers studied.\n\n\nCONCLUSIONS\nCurrently, there is no evidence to suggest that beta blockers adversely affect walking distance in people with intermittent claudication. Beta blockers should be used with caution if clinically indicated, especially in patients with critical ischaemia where acute lowering of blood pressure is contraindicated.",
"corpus_id": 19472622,
"score": 2,
"title": "Beta blockers for peripheral arterial disease."
} |
{
"abstract": null,
"corpus_id": 10626638,
"title": "Flexible pattern matching"
} | {
"abstract": "We introduce a new text-indexing data structure, the String B-Tree , that can be seen as a link between some traditional external-memory and string-matching data structures. In a short phrase, it is a combination of B-trees and Patricia tries for internal-node indices that is made more effective by adding extra pointers to speed up search and update operations. Consequently, the String B-Tree overcomes the theoretical limitations of inverted files, B-trees, prefix B-trees, suffix arrays, compacted tries and suffix trees. String B-trees have the same worst-case performance as B-trees but they manage unbounded-length strings and perform much more powerful search operations such as the ones supported by suffix trees. String B-trees are also effective in main memory (RAM model) because they improve the online suffix tree search on a dynamic set of strings. They also can be successfully applied to database indexing and software duplication.",
"corpus_id": 3352480,
"title": "The string B-tree: a new data structure for string search in external memory and its applications"
} | {
"abstract": null,
"corpus_id": 1930443,
"score": -1,
"title": "Putting Theories Together to Make Specifications"
} |
{
"abstract": "A significant tsunami can cause severe damage to coastlines and coastal structures due to inundation, erosion, as well as hydrodynamic and debris impact. However, although there exist many analytical, numerical, and experimental studies of tsunami wave propagation and inundation modeling, few studies considered the possibility of tsunami induced liquefaction failure of coastal sandy slopes. The objective of this work is to investigate the liquefaction potential of planar fine sand slopes during tsunami runup and drawdown. The transient pressure distribution acting on the slope due to wave runup and drawdown is computed by solving for the hybrid Boussinesq – nonlinear shallow water equations using a finite volume method. The subsurface pore water pressure distribution is solved using a finite element method. The numerical methods are validated by comparing the results with experimental measurements from a large-scale laboratory study of breaking solitary waves over a planar fine sand beach. Numerical predictions are shown for a 10m solitary wave over a 1:15 and 1:5 sloped fine sand beach. The results show that the soil near the bed surface along the seepage face created during the drawdown is subject to liquefaction failure.",
"corpus_id": 164212140,
"title": "Can Tsunami Drawdown Lead to Liquefaction Failure of Coastal Sandy Slopes ?"
} | {
"abstract": "Abstract A pore‐water pressure probe (piezometer) was implanted in Mississippi delta sediments at a preselected site (Block 28, South Pass area, 29°00´N, 89°15´W) 145 m from an offshore production platform (water depth approx. 19 m) in September 1975. Total pore‐water pressures (uw ) were monitored for extended periods of time at depths of approximately 15 and 8 m below the mudline concurrently with hydrostatic pressures (u8 ) measured at depths of 15 m and approximately 1 m below the mudline. Relatively high excess pore‐water pressures, ue = (uw ‐u8 ), were recorded at the time of probe insertion measuring 99 kPa (14.4 psi) at 15 m and 50 kPa (7.3 psi) at 8 m. Six hours after the probe was implanted, excess pore pressures were still high at 81 kPa (11.8 psi, 15 m) and 37 kPa (5.4 psi, 8 m). Pore pressures appeared to become relatively constant at the 8‐m depth after 7 h had elapsed, and at the 15 m depth after 10–12 h. Excess pore‐water pressures averaged 72 kPa (10.4 psi, 15 m) and 32 kPa (4.6 psi, 8 m)...",
"corpus_id": 129167023,
"title": "Pore‐water pressure measurements: Mississippi delta submarine sediments"
} | {
"abstract": "Abstract Simple kinematic modeling of particle motion along a curved fault similar to the ruptured Chelungpu fault, Taiwan, indicates a unique spatial slip pattern. Specifically, we find that the large convergent slip on the curve is a result of the minimum deformation scenario of the fault geometry and regional northwestern movement of the Philippine Sea Plate (PSP). The modeled deformation regime portrays an accumulation of deformation in the curved region, which coincides very well with a long-term observed NW-SE-trending seismogenic zone in the central Taiwan. This consistence suggests that the Chelungpu fault is a preexisting curved fault. This is further evidenced by geological and geophysical observations. Because the spatial slip pattern is locally and regionally tectonically controlled, it indicates that the rupture behavior of the Chi-Chi earthquake is repeatable. Better knowledge of the fault geometry and the regional plate motion may help us to predict the possible spatial slip distribution of large earthquakes. This discovery is important for avoiding large buildings and constructions near predicted large slip regions.",
"corpus_id": 53558286,
"score": 1,
"title": "Rupture behavior of the 1999 Chi-Chi, Taiwan, earthquake—slips on a curved fault in response to the regional plate convergence"
} |
{
"abstract": "The rehydration characteristics of dehydrated West African pepper leaves were investigated at hydration temperatures of 28, 60, 70, and 80°C. Four treatments were given to the leaves: blanched and sun dried, unblanched and sun dried, blanched and shade dried, and unblanched and shade dried. The hydration process of the dehydrated leaves was adequately described by the Peleg's equation. As the hydration temperature increased from 28 to 70°C, there was a significant decrease in the Peleg's constant K1, while for most of the leaves the Peleg's constant K2 varied with temperature. Rehydration ratio values ranged from 3.75 in blanched shade dried leaves to 4.26 in unblanched sun dried leaves with the unblanched leaves generally exhibiting higher ratios than the blanched leaves.",
"corpus_id": 2090162,
"title": "Rehydration characteristics of dehydrated West African pepper (Piper guineense) leaves"
} | {
"abstract": "The study aimed to investigate the mass transfer kinetics and nutritional quality during osmotic dehydration (OD) and air-drying of papaya. The papaya was osmotically pretreated by different concentrations of sugar solutions (40, 50 and 60 °Brix) and osmotic solution temperatures (35, 45 and 55 °C). The ratio of fruit to the solution was kept at 1:4 (w/v) and pretreated process duration varied from 0 to 240 min. The present study demonstrated that water loss and the solute gain rate increased with the increasing of osmotic solution temperature, concentration and time. Mass transfer kinetics of osmotically pretreated papaya cubes were investigated based on the Peleg’s and Penetration models. The Peleg model showed the best fit for water loss and solute gain whereas the Penetration model best described the water loss during osmotic dehydration of papaya. Effective diffusivity of water and solute gain was estimated using the analytical solution of Fick’s law of diffusion. Average effective diffusivity of water loss and solute gain was obtained in the range from 2.25 × 10−9 to 4.31 × 10−9 m2/s and 3.01 × 10−9 to 5.61 × 10−9 m2/s, respectively. Osmotically pretreated samples were dried with a convective method at a temperature of 70 °C. The moisture content, water activity and shrinkage of the dried papaya were decreased when the samples were pretreated with a higher concentration of the osmotic solution and greater process temperature. The results also indicated that the highest osmotic solution temperature of 55 °C with the lowest concentration of 40 °Brix resulted in a significant decrease in phenolic content, antioxidant activity, and vitamin C content while higher osmotic solution concentration of 60 °Brix and the lowest temperature of the process (35 °C) retained maximum bioactive compounds.",
"corpus_id": 181512199,
"title": "Influence of Osmotic Dehydration on Mass Transfer Kinetics and Quality Retention of Ripe Papaya (Carica papaya L) during Drying"
} | {
"abstract": "Aims: Winter air pollution in Christchurch is dominated by particulate matter from solid fuel domestic heating. The aim of the study was to explore the relationship between particulate air pollution and admissions to hospital with cardio‐respiratory illnesses.",
"corpus_id": 29780562,
"score": 0,
"title": "Particulate air pollution and hospital admissions in Christchurch, New Zealand"
} |
{
"abstract": "Environmental and sensor challenges pose difficulties for the development of computer-assisted algorithms to segment synthetic aperture radar (SAR) sea ice imagery. In this research, in support of operational activities at the Canadian Ice Service, images containing visually separable classes of either ice and water or multiple ice classes are segmented. This work uses image intensity to discriminate ice from water and uses texture features to identify distinct ice types. In order to seamlessly combine image spatial relationships with various image features, a novel Bayesian segmentation approach is developed and applied. This new approach uses a function-based parameter to weight the two components in a Markov random field (MRF) model. The devised model allows for automatic estimation of MRF model parameters to produce accurate unsupervised segmentation results. Experiments demonstrate that the proposed algorithm is able to successfully segment various SAR sea ice images and achieve improvement over existing published methods including the standard MRF-based method, finite Gamma mixture model, and K-means clustering.",
"corpus_id": 5738934,
"title": "Unsupervised segmentation of synthetic aperture Radar sea ice imagery using a novel Markov random field model"
} | {
"abstract": "In this paper, we provide a review of the different approaches used for target decomposition theory in radar polarimetry. We classify three main types of theorem; those based on the Mueller matrix and Stokes vector, those using an eigenvector analysis of the covariance or coherency matrix, and those employing coherent decomposition of the scattering matrix. We unify the formulation of these different approaches using transformation theory and an eigenvector analysis. We show how special forms of these decompositions apply for the important case of backscatter from terrain with generic symmetries.",
"corpus_id": 46355522,
"title": "A review of target decomposition theorems in radar polarimetry"
} | {
"abstract": "Bidirectional bridgeless PFC is an advantageous topology for high power application among many bridgeless PFCs due to small EMI filter size and no reverse recovery problem. However due to the large junction capacitance of secondary leg's diodes, the dead angle is shown in the zero-crossing area. Thus, the input current distortion is intensified, and the unnecessary switching loss decreases the efficiency. In this paper, a simple gate skipping technique is implemented to the zero-crossing area of the bidirectional bridgeless PFC. As a result, the unnecessary switching loss of boost switches and the input current distortion are reduced. The improvement of the proposed method is verified by experimental results with the high line input and 800W(400V/2A) output prototype.",
"corpus_id": 38673402,
"score": -1,
"title": "Bidirectional bridgeless PFC with reduced input current distortion and switching loss using gate skipping technique"
} |
{
"abstract": "We introduce a PDE approach to the distributed control of a multiagent system, so that the agents can track a target and also keep a desired formation. Treating the agents as a continuum, the agents' collective dynamics are modeled by a complex-valued partial differential equation (PDE). The state of the PDE is the position of the agents: the real part is the x-axis coordinate and the imaginary part is the y-axis coordinate. We prove via PDE analysis that the tracking errors between the desired trajectory and the actual one are bounded on the condition of bounded velocity of the reference orbit. By discretization of the PDE, a simple leader-follower distributed control law is obtained, in which only neighbor-range relative positions are needed for the follower agents to track and keep formation. One can apply low-cost followers with limited sensing range in practice, since the bounded H1 norm ensures that neighbors in the communication graph are also neighbors in physical distance. Various simulation studies demonstrate that the proposed approach effectively achieves the formation tracking objective with small error.",
"corpus_id": 795354,
"title": "A PDE approach to formation tracking control for multi-agent systems"
} | {
"abstract": "We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), ‘Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability’, IEEE Transactions on Automatic Control, 54, 2100–2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), ‘Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure’, IEEE Transactions on Automatic Control, 56, 923–929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.",
"corpus_id": 12790617,
"title": "Approximation error in PDE-based modelling of vehicular platoons"
} | {
"abstract": "Actions of outer derivations on nilpotent ideals of Lie algebras are considered. It is shown that for a nilpotent ideal I of a Lie algebra L over a field F the ideal I + D(I) is nilpotent, provided that char F = 0 or I is nilpotent of nilpotency class less than p − 1, where p = char F. In particular, the sum N(L) of all nilpotent ideals of a Lie algebra L is a characteristic ideal, if char F = 0 or N(L) is nilpotent of class less than p − 1, where p = char F.",
"corpus_id": 250502223,
"score": 1,
"title": "On action of outer derivations on nilpotent ideals of Lie algebras"
} |
{
"abstract": "The molecular basis for the cytogenetic appearance of chromosomal fragile sites is not yet understood. Late replication and further delay of replication at fragile sites expressing alleles has been observed for FRAXA, FRAXE and FRA3B fragile site loci. We analysed the timing of replication at the FRA10B and FRA16B loci to determine whether late replication is a feature which is shared by all fragile sites and, therefore, is a necessary condition for chromosomal fragile site expression. The FRA10B locus was located in a transitional region between early and late zones of replication. Fragile and non-fragile alleles exhibit a similar replication pattern proximal to the repeat, but fragile alleles are delayed relative to non-fragile ones on the distal side. Although fragility at FRA10B appears to be caused by expansion of an AT-rich repeat in the region, replication time near the repeat was similar in fragile and non-fragile alleles. The FRA16B locus was late replicating and appeared to replicate even later on fragile chromosomes. While these observations are compatible with the hypothesis that delayed replication may play a role in fragile site expression, they suggest that replication delay may not need to occur at the expanded repeat region itself in order to be permissive for fragility.",
"corpus_id": 1694478,
"title": "Analysis of replication timing at the FRA10B and FRA16B fragile site loci"
} | {
"abstract": "Chromosomes of many eukaryotic organisms including humans contain a large number of repetitive sequences. Several types of commonly present DNA repeats have the capacity to adopt hairpin and cruciform secondary structures. Inverted repeats, AT- and GC-rich micro- and minisatellites, comprising this class of sequence motifs, are frequently found in chromosomal regions that are prone for gross rearrangements in somatic and germ cells. Recent studies in yeast and mammals indicate that a double-strand break occurring at the sites of unstable repeats can be an initial event in the generation of chromosome rearrangements. The repeat-induced chromosomal instability is responsible for a number of human diseases and has been implicated in carcinogenesis. In this review, we discuss the molecular mechanisms by which hairpins and cruciforms can trigger chromosomal fragility and subsequent aberrations in eukaryotic cells. We also address the relationship between secondary structure-mediated genetic instability and human pathology.",
"corpus_id": 1731996,
"title": "Hairpin- and cruciform-mediated chromosome breakage: causes and consequences in eukaryotic cells."
} | {
"abstract": "Processes related to electronically excited states are central in many areas of science; however, accurately determining excited-state energies remains a major challenge in theoretical chemistry. Recently, higher energy stationary states of non-linear methods have themselves been proposed as approximations to excited states, although the general understanding of the nature of these solutions remains surprisingly limited. In this letter, we present an entirely novel approach for exploring and obtaining excited stationary states by exploiting the properties of non-Hermitian Hamiltonians. Our key idea centres on performing analytic continuations of conventional quantum chemistry methods. Considering Hartree-Fock theory as an example, we analytically continue the electron-electron interaction to expose a hidden connectivity of multiple solutions across the complex plane, revealing a close resemblance between Coulson-Fischer points and non-Hermitian degeneracies. Finally, we demonstrate how a ground-state wave function can be morphed naturally into an excited-state wave function by constructing a well-defined complex adiabatic connection.",
"corpus_id": 73441434,
"score": 0,
"title": "Complex adiabatic connection: A hidden non-Hermitian path from ground to excited states."
} |
{
"abstract": "نموذج مقترح للتنبؤ بأزمات أسعار الصرف في مصر باستخدام منهج الإشارة المطور رانيا الزرير ربا فهمي كوكش يعتبر سعر الصرف أحد المؤشرات الاقتصادية والمالية التي تعبر عن جودة الأداء الاقتصادي لأية دولة، كما يعتبر مؤشرا شديد الحساسية نظرا للمؤثرات الداخلية والخارجية التي يتعرض لها، لذا حاولت الباحثة وضع تصور مبدئي لأهم محددات سعر الصرف والتي يمكن أن تكون مؤشرات إنذار مبكر لاحتمالية حدوث أزمة سعر صرف. وقد استخدمت الباحثة نموذج الإشارة المطور من قبل (Kaminsky,1998) وقد تم التطبيق على الحالة المصرية خلال الفترة الممتدة بين الربع الأول من العام 2010 والربع الأول من العام 2016. كما استخدمت الباحثة مؤشر Exchange market pressure index كمتغير تابع، وتم رصد حركته خلال فترة الدراسة وتحديد الفترات التي تجاوز بها قيمة العتبة والتي تمثلت بالربع الأول من العام 2010 والربع الأول من العام 2015 والعام 2016 وتعريفها كفترة أزمة سعر صرف[ [1] ]. وقد تم رصد خمسة متغيرات اقتصادية كلية أثبتت معنويتها في التنبؤ بأزمة سعر الصرف وفقا لنسبة NTSR وهي نسبة الائتمان المحلي إلى الناتج المحلي الإجمالي ومعدل نمو كل من الصادرات والواردات والناتج المحلي الإجمالي وحجم الاحتياطيات الأجنبية ومعدل نمو العرض النقدي M0. Exchange rate is considered one of the most important economic and financial indicators that reflects the quality of the economical behavior of any country, furthermore the exchange rate is considered as a high sensitive indicator according to the internal and external effects. For that researcher tried to build an initial vision for the most important determinants of the exchange rate that could be considered an early warning indicator to predict a currency crisis. The researcher used signal approach of (Kaminsky,1998) as a model and applied it on the Egyptian situation during the period from the first quarter of 2010 to the first quarter 2016. 
The researcher also used the exchange market pressure index as an independent variable and studied its movement during the mentioned period and determined the periods when it exceeded the threshold value, which were represented in the first quarter of 2013,2015,2016 and define it as a period of currency crisis [1]. Discovered five economical macro variables proved it’s significant in predicting currency crisis according to NTSR ratio which represents local credit to GDP and the growth rate, imports, exports, GDP and total foreign currency reserves. The total and the growth of money supply M0. * * طالبة دراسات عليا (دكتوراه)- قسم مصارف وتأمين – كلية الاقتصاد- جامعة دمشق – دمشق – سورية- البريد الالكتروني: ruba988@hotmail.com",
"corpus_id": 159137647,
"title": "Proposed model for prediction exchange rate crises in Egypt using the developed signal approach"
} | {
"abstract": "Despite the extensive literature on prediction of banking crises by Early Warning Systems (EWSs), their practical use by policy makers is limited, even in the international financial institutions. This is a paradox since the changing nature of banking risks as more economies liberalise and develop their financial systems, as well as ongoing innovation, makes the use of EWS for informing policies aimed at preventing crises more necessary than ever. In this context, we assess the logit and signal extraction EWS for banking crises on a comprehensive common dataset. We suggest that logit is the most appropriate approach for global EWS and signal extraction for country-specific EWS. Furthermore, it is important to consider the policy maker's objectives when designing predictive models and setting related thresholds since there is a sharp trade-off between correctly calling crises and false alarms.",
"corpus_id": 39650068,
"title": "Comparing early warning systems for banking crises"
} | {
"abstract": "SUMMARY \n \nPlant yield within and between four cultivars of perennial ryegrass infected with ryegrass mosaic virus (RMV) was closely related to symptom severity. \n \n \n \nDistribution of symptom severity was continuous in four perennial ryegrass and four Italian ryegrass cultivars infected with a severe RMV isolate, and also in another perennial ryegrass cultivar infected with a severe isolate of the virus, a mild one and one of intermediate severity. Symptom expression was polygenically inherited in both Italian (cv. RvP) and perennial (cv. S.24) ryegrass. Both additive and non-additive genetic variation was present in RvP, but the variation in S.24 was additive only. No significant maternal inheritance was present in either species.",
"corpus_id": 84692949,
"score": 1,
"title": "Tolerance to ryegrass mosaic virus and its inheritance"
} |
{
"abstract": "The majority opinion of those who have contributed to the literature on conversion in sub-Saharan Africa suggests that Islam has been more ‘successful’ than Christianity in attracting the faithful. The standard inventory of explanations for this state of affairs include the following: first, it has been commonly noted, Islam has proved to be more compatible than Christianity with indigenous customs, cosmology, and morality. A second point that has been argued with some consistency (though evidencing not a small measure of ethno-centric bias) is that ‘it is easier for the African to govern himself by the few rules set forth by Mohammedanism…than…by the all-embracing stringent laws of Christianity’. A third, more encompassing stance, implies that conversion to Islam can be accounted for sociologically, ‘while the acceptance of Christianity involves the recognition of divine truth’, in which case a similar line of analysis is uncalled for. Thus, according to William Arens, a thorough review of the voluminous literature indicates that there is an ‘ideological flavour’ in much of what is accepted as objective and authoritative material on this topic, and that a more balanced understanding of the facts could be realised if greater attention was given to the study of the social context of evangelical Christianity in Black Africa.",
"corpus_id": 155044647,
"title": "Christians, Colonists, and Conversion: a View from the Nilotic Sudan"
} | {
"abstract": "Scholarly opinion on the conversion of the Kingdom of Kongo to Christianity has generally been that it was superficial, diplomatically oriented, impure, dangerous to national sovereignty or rejected by the mass of the population. This article argues that although Christianity in Kongo took a distinctly African form it was widely accepted both in Kongo and in Europe as being the religion of the country. This was possible because Kongo, as a voluntary convert, had considerable leeway to contribute to its particular form of Christianity. Also, European priests were much more tolerant of syncretism in Kongo than in regions like Mexico, where colonial occupation accompanied the propagation of Christianity. Kongo's control over the theological content allowed the religion to gain mass acceptance while its control over the Church organization and finance allowed it never to be an instrument for foreign domination, in spite of Portuguese attempts to use it as a ‘fifth column’. When European priests arrived in Kongo during the Portuguese colonial occupation at the end of the nineteenth century, they rejected the local form of Christianity, thus ending its acceptance among Europeans as Christianity.",
"corpus_id": 162511713,
"title": "The Development of an African Catholic Church in the Kingdom of Kongo, 1491–1750"
} | {
"abstract": "The concept of the objet petit a is central to Lacan's theory of desire, which arguably represents his major contribution to psychoanalysis. It is an expression of the lack inherent in human beings, whose incompleteness and early helplessness produce a quest for fulfillment beyond the satisfaction of biological needs. The objet petit a is a fantasy that functions as the cause of desire; as such, it determines whether desire will be expressed within the limits of the pleasure principle or “beyond,” in pursuit of an unlimited jouissance, an impossible and even deadly enjoyment. Parallels between the objet petit a and Winnicott's transitional object are explored and its functions illustrated through analysis of Pedro Almodovar's film Talk to Her. A clinical case is presented in which the question of desire seemed crucial.",
"corpus_id": 24887059,
"score": 1,
"title": "Rethinking Desire: The Objet Petit A in Lacanian Theory"
} |
{
"abstract": "We have mutated a conserved leucine in the putative membrane‐spanning domain to serine in human GABAA β2 and investigated the actions of a number of GABAA agonists, antagonists and modulators on human α1β2ΔL259Sγ2s compared to wild type α1β2γ2s GABAA receptors, expressed in Xenopus oocytes. The mutation resulted in smaller maximum currents to γ‐aminobutyric acid (GABA) compared to α1β2γ2s receptors, and large leak currents resulting from spontaneous channel opening. As reported, this mutation significantly decreased the GABA EC50 (110 fold), and reduced desensitization. Muscimol and the partial agonists 4,5,6,7‐tetrahydroisoxazolo[5,4‐c]pyridin‐3‐ol (THIP) and piperidine‐4‐sulphonic acid (P4S) also displayed a decrease in EC50. In addition to competitively shifting GABA concentration response curves, the antagonists bicuculline and SR95531 both inhibited the spontaneous channel activity on α1β2ΔL259Sγ2s receptors, with different degrees of maximum inhibition. The effects of a range of allosteric modulators, including benzodiazepines and anaesthetics were examined on a submaximal GABA concentration (EC20). Compared to wild type, none of these modulators potentiated the EC20 response of α1β2ΔL259Sγ2s receptors, however they all directly activated the receptor in the absence of GABA. To conclude, the above mutation resulted in receptors which exhibit a degree of spontaneous activity, and are more sensitive to agonists. Benzodiazepines and other agents modulate constitutive activity, but positive modulation of GABA is lost. The competitive antagonists bicuculline and SR95531 can also act as allosteric channel modulators through the same GABA binding site.",
"corpus_id": 1361115,
"title": "Mutation at the putative GABAA ion‐channel gate reveals changes in allosteric modulation"
} | {
"abstract": "The gamma-aminobutyric acid type A (GABA(A)) receptors are the major inhibitory, postsynaptic, neurotransmitter receptors in the central nervous system. The binding of gamma-aminobutyric acid (GABA) to the GABA(A) receptors induces the opening of an anion-selective channel that remains open for tens of milliseconds before it closes. To understand how the structure of the GABA(A) receptor determines the functional properties such as ion conduction, ion selectivity and gating we sought to identify the amino acid residues that line the ion conducting channel. To accomplish this we mutated 26 consecutive residues (250-275), one at a time, in and flanking the M2 membrane- spanning segment of the rat alpha1 subunit to cysteine. We expressed the mutant alpha1 subunit with wild-type beta1 and gamma2 subunits in Xenopus oocytes. We probed the accessibility of the engineered cysteine to covalent modification by charged, sulfhydryl-specific reagents added extracellularly. We assume that among residues in membrane-spanning segments, only those lining the channel would be susceptible to modification by polar reagents and that such modification would irreversibly alter conduction through the channel. We infer that nine of the residues, alpha1 Val257, alpha1 Thr26l, alpha1 Thr262, alpha1 Leu264, alpha1 Thr265, alpha1 Thr268, alpha1 Ile27l, alpha1 Ser272 and alpha1 Asn275 are exposed in the channel. On a helical wheel plot, the exposed residues, except alpha1 Thr262, lie on one side of the helix in an arc of 120 degrees. We infer that the M2 segment forms an alpha helix that is interrupted in the region of alpha1 Thr262. The modification of residues as cytoplasmic as alpha1 Val257 in the closed state of the channel suggests that the gate is at least as cytoplasmic as alpha1 Val257. 
The ability of the positively charged reagent methanethiosulfonate ethylammonium to reach the level of alpha1 Thr261 suggests that the charge-selectivity filter is at least as cytoplasmic as this residue.",
"corpus_id": 6168100,
"title": "Identification of channel-lining residues in the M2 membrane-spanning segment of the GABA(A) receptor alpha1 subunit"
} | {
"abstract": "DNA replication in Escherichia coli is normally initiated at a single origin, oriC, dependent on initiation protein DnaA. However, replication can be initiated elsewhere on the chromosome at multiple ectopic oriK sites. Genetic evidence indicates that initiation from oriK depends on RNA‐DNA hybrids (R‐loops), which are normally removed by enzymes such as RNase HI to prevent oriK from misfiring during normal growth. Initiation from oriK sites occurs in RNase HI‐deficient mutants, and possibly in wild‐type cells under certain unusual conditions. Despite previous work, the locations of oriK and their impact on genome stability remain unclear. We combined 2D gel electrophoresis and whole genome approaches to map genome‐wide oriK locations. The DNA copy number profiles of various RNase HI‐deficient strains contained multiple peaks, often in consistent locations, identifying candidate oriK sites. Removal of RNase HI protein also leads to global alterations of replication fork migration patterns, often opposite to normal replication directions, and presumably eukaryote‐like replication fork merging. Our results have implications for genome stability, offering a new understanding of how RNase HI deficiency results in R‐loop‐mediated transcription‐replication conflict, as well as inappropriate replication stalling or blockage at Ter sites outside of the terminus trap region and at ribosomal operons.",
"corpus_id": 10818723,
"score": 1,
"title": "Replication of the Escherichia coli chromosome in RNase HI‐deficient cells: multiple initiation regions and fork dynamics"
} |
{
"abstract": "During grouping tasks for data exploration and sense-making, the criteria are normally not well-defined. When users are bringing together data objects thought to be similar in some way, implicit brushing continually detects for groups on the freeform workspace, analyzes the groups' text content or metadata, and draws attention to related data by displaying visual hints and animation. This provides helpful tips for further grouping, group meaning refinement and structure discovery. The sense-making process is further enhanced by retrieving relevant information from a database or network during the brushing. Closely related to implicit brushing, target snapping provides a useful means to move a data object to one of its related groups on a large display. Natural dynamics and smooth animations also help to prevent distractions and allow users to concentrate on the grouping and thinking tasks. Two different prototype applications, note grouping for brainstorming and photo browsing, demonstrate the general applicability of the technique.",
"corpus_id": 2569105,
"title": "Implicit brushing and target snapping: data exploration and sense-making on large displays"
} | {
"abstract": "A method is disclosed for predicting the service life of silicon nitride material under operating conditions of a ceramic turbine engine in whch an article is subjected to high temperatures under oxidizing conditions. The method for selecting a silicon nitride article to be used under these high temperature oxidizing conditions is generally as follows.",
"corpus_id": 14461279,
"title": "SenseMaker: an information-exploration interface supporting the contextual evolution of a user's interests"
} | {
"abstract": "The lodestones and leylines interaction technique simplifies navigation in electronic spaces by coordinating physical and conceptual movement-gently constraining motion to follow automatically computed paths to predicted destinations. This approach simplifies physical movement, ensures that movement leads to interesting locations and supports navigation to locations not visible from the current location. It is illustrated in a spatial multiscale environment where pilot data show reliable performance improvements.",
"corpus_id": 24803975,
"score": 2,
"title": "Predictive targeted movement in electronic spaces"
} |
{
"abstract": "Carbonation presents a good prospect for stabilizing alkaline waste materials. The risk of metal leaching from carbonated waste was investigated in the present study; in particular, the effect of the carbonation process and leachate pH on the leaching toxicity of the alkaline air pollution control (APC) residues from municipal solid waste incinerator was evaluated. The pH varying test was conducted to characterize the leaching characteristics of the raw and carbonated residue over a broad range of pH. Partial least square modeling and thermodynamic modeling using Visual MINTEQ were applied to highlight the significant process parameters that controlled metal leaching from the carbonated residue. By lowering the pH to 8-11, the carbonation process reduced markedly the leaching toxicity of the alkaline APC residue; however, the treated APC residue showed similar potential risk of heavy metal release as the raw ash when subjected to an acid shock. The carbonated waste could, thereby, not be disposed of safely. Nonetheless, carbonation could be applied as a temporary stabilization process for heavy metals in APC residues in order to reduce the leaching risk during its transportation and storage before final disposal.",
"corpus_id": 4797116,
"title": "Temporary stabilization of air pollution control residues using carbonation."
} | {
"abstract": "This work analyzes the performance of an innovative biogas upgrading method, Alkali absorption with Regeneration (AwR) that employs industrial residues and allows to permanently store the separated CO2. This process consists in a first stage in which CO2 is removed from the biogas by means of chemical absorption with KOH or NaOH solutions followed by a second stage in which the spent absorption solution is contacted with waste incineration Air Pollution Control (APC) residues. The latter reaction leads to the regeneration of the alkali reagent in the solution and to the precipitation of calcium carbonate and hence allows to reuse the regenerated solution in the absorption process and to permanently store the separated CO2 in solid form. In addition, the final solid product is characterized by an improved environmental behavior compared to the untreated residues. In this paper the results obtained by AwR tests carried out in purposely designed demonstrative units installed in a landfill site are presented and discussed with the aim of verifying the feasibility of this process at pilot-scale and of identifying the conditions that allow to achieve all of the goals targeted by the proposed treatment. Specifically, the CO2 removal efficiency achieved in the absorption stage, the yield of alkali regeneration and CO2 uptake resulting for the regeneration stage, as well as the leaching behavior of the solid product are analyzed as a function of the type and concentration of the alkali reagent employed for the absorption reaction.",
"corpus_id": 5248395,
"title": "Performance of a biogas upgrading process based on alkali absorption with regeneration using air pollution control residues."
} | {
"abstract": "Abstract Municipal solid waste incinerator (MSWI) bottom ash is predominantly composed of high temperature solids. In a natural atmospheric environment many of these solids are metastable and will alter to form thermodynamically stable assemblages of minerals. Results of our research have revealed that the weathering products of MSWI bottom ash, which has been disposed of in the open, are similar to those found in weathered volcanic ashes and scoriae. The most obvious change is the transformation of glassy constituents into clay-like materials. These secondary products may have a significant influence on the leaching characteristics of contaminants such as heavy metals. Although the influence of this alteration process on total leaching is yet to be determined, our results suggest new topics of future research on geochemical engineering concepts, where natural weathering processes are exploited to minimize deleterious effects of ash disposal and utilization.",
"corpus_id": 129804648,
"score": 2,
"title": "Weathering of MSWI bottom ash with emphasis on the glassy constituents"
} |
{
"abstract": "The paper establishes the mathematical model about the vehicles routing problem (VRP) of transporting dangerous goods in Zhengzhou Coal Material Supply and Marketing Company. Then, use artificial fish swarm algorithm to explore the optimal solution of the VRP. The algorithm first initializes a group of artificial fishes, and a repair operator guarantee the current state of each artificial fish represents a feasible distribution scheme and then these artificial fishes find the globally optimal solution through implementation of the designed random behavior, and behaviors of prey, swarm and follow. At last, it compares with sweep algorithm and genetic algorithm and the results show the validity of artificial fish swarm algorithm.",
"corpus_id": 2063843,
"title": "Resolving Single Depot Vehicle Routing Problem with Artificial Fish Swarm Algorithm"
} | {
"abstract": "On the base of describing the vehicle routing problem with reverse logistics and time windows,this paper constructs a multi-objective mathematical model of this problem based on minimizing the cost.This model jointly considers positive logistics and reverse logistics,and it helps to improve the loading rate of vehicle.An improved particle swarm optimization is proposed to this kind of problem.In the end,this algorithm is tested on a computer,and obtains a good result.",
"corpus_id": 62928856,
"title": "Particle Swarm Optimization to Vehicle Routing Problem with Reverse Logistics"
} | {
"abstract": "Wireless link layer multicast is an important service primitive for emerging applications, such as live video, streaming audio, and other content telecasts. The broadcast nature of the wireless channel is amenable to multicast because a single packet transmission may be received by all clients in the multicast group. However, in view of diverse channel conditions at different clients, the rate of such a transmission is bottlenecked by the rate of the weakest client. Multicast throughput degrades severely. Attempts to increase the data rate result in lower reliability and higher unfairness. This paper utilizes smart beamforming antennas to improve multicast performance in wireless LANs. The main idea is to satisfy the stronger clients with a high-rate omnidirectional transmission, followed by high-rate directional transmission(s) to cover the weaker ones. By selecting an optimal transmission strategy (using dynamic programming), we show that the multicast throughput can be maximized while achieving a desired delivery ratio at all the clients. We use testbed measurements to verify our main assumptions. We simulate our protocol in Qualnet, and observe consistent performance improvements over a range of client topologies and time-varying channel conditions.",
"corpus_id": 818515,
"score": 1,
"title": "Link layer multicasting with smart antennas: No client left behind"
} |
{
"abstract": "*Rheumatology Department, Coimbra University Hospital le. For these reasons, OC are the contraceptive method of choice for the majority of Western world women between the ages of 15 and 44 years. Decision on giving OC to patients with Systemic Lupus Erythematosus (SLE) puts special issues and concerns. SLE is a chronic systemic autoimmune disease which etiology probably involves a complex interaction between environmental, infectious and hormonal factors in a genetically susceptible subject. Despite it can affect any gender at any age, SLE is much more common among women and its incidence is significantly increased during reproductive years. Among the risk factors for SLE OC have been evocated as etiologic factors. During its course, SLE presents a wide range of manifestations alternating periods of exacerbation and remission. OC was also associated with an increased risk of flares with variable severity. Other clinical problems with higher occurrence among SLE patients as thrombotic events can be potentiated when OC are used. Being SLE a disease with major expression among women, OC have been questioned for these patients over time. During periods of active disease pregnancy is contraindicated, due to risks for the patient and the baby, associated both to SLE and its treatment. For these cases, an effective contraception is mandatory but also puts special issues. On the other hand, many SLE patients will be on a low activity or remission state with much less aggressive medication for most of the time. Cumulative damage due to SLE and comorbidities such as cardiovascular disease, antiphospholipid syndrome antibodies also has to be considered for pregnancy and contraception decisions. Physicians who care of SLE women are commonly submitted to questions about these issues, not only for their patients but also from other health professionals. 
Advice on the benefits and risks of exogenous hormones for OC is an important and difficult aspect of the care of women with SLE. This advice should be done based on the best evidence from Abstract",
"corpus_id": 3056779,
"title": "Why did physicians believe in a potential negative role of female hormones in SLE ?"
} | {
"abstract": "The overall goal of this paper is to provide for the first time a comprehensive critical review of the literature on contraceptive failure in developed countries, primarily the United States. The first two sections of our paper lay the groundwork for a critical assessment of the extensive body of studies on this subject, by systematically exploring the concepts and measurement of contraceptive efficacy and the methodological pitfalls that snare many investigators and compromise their results. The next two sections focus on results in the literature. First we provide a method-by-method critique of the available studies and then we summarize our conclusions in a single table that provides efficacy information necessary for women and couples to make an informed choice of a method of contraception. We close with a set of substantive observations and also a set of methodological recommendations intended to improve the quality and comparability of findings from future research.",
"corpus_id": 21281599,
"title": "Contraceptive failure in the United States: a critical review of the literature."
} | {
"abstract": "Tuberculosis has a much shorter incubation period than is widely thought, say Marcel A Behr and colleagues, and this has implications for prioritising research and public health strategies",
"corpus_id": 52069684,
"score": 1,
"title": "Revisiting the timetable of tuberculosis"
} |
{
"abstract": "Abstract There is limited data on the effectiveness of combined medial patellofemoral ligament (MPFL) reconstruction and tibial tubercle transfer (TTT) in patients with patella instability. The aim of our study was to analyze the functional outcome in patients treated with MPFL reconstruction and TTT. Between July 2008 and April 2013, 18 patients (21 knees) underwent combined MPFL reconstruction and TTT; 15 patients (16 knees) with a mean age of 24 years (16‐41) had a mean follow‐up of 30 months (26‐55). There was significant improvement in outcome scores in 12 out of 15 patients. KOOS score improved from 68.25 (44‐93.9) to 77.05 (48.8‐96.4) and KUJALA score improved from 63.3 (41‐88) to 78.06 (45‐99). Nine patients achieved at least a preinstability level of activity. Out of these nine patients, four had activity level better than the preinstability level. The remaining six patients had a lower activity level than preinstability level (2—lack of confidence and 4—lifestyle modification). Fourteen patients were satisfied and happy to recommend this procedure. There were three postoperative complications, with two cases of stiffness and one case of nonunion of the tibial tuberosity. Thus, the restoration of tibial tubercle to trochlear groove distance, patella height, and MPFL reconstruction yields good results in carefully selected patients.",
"corpus_id": 3384756,
"title": "Combined Medial Patellofemoral Ligament Reconstruction and Tibial Tubercle Transfer Results at a Follow‐Up of 2 years"
} | {
"abstract": "PURPOSE\nTo report the outcomes for combined tibial tubercle osteotomy (TTO) and medial patellofemoral ligament (MPFL) reconstruction and assess for potential risk factors for recurrent instability and/or poor outcomes.\n\n\nMETHODS\nThe medical record at our institution was reviewed for patients treated with MPFL reconstruction and TTO for recurrent lateral patellar instability from 1998 to 2014. Preoperative imaging was assessed for trochlear dysplasia according to the Dejour classification (high grade = B, C, D) and the presence of patella alta using the Caton-Deschamps ratio (>1.2). The indication for combined MPFL reconstruction and TTO was MPFL insufficiency and a lateralized tibial tubercle. Outcomes were determined by recurrent instability, return to sport, and Kujala and International Knee Documentation Committee (IKDC) scores.\n\n\nRESULTS\nThirty knees in 28 patients (14 M, 14 F) with a mean age of 22.6 ± 9.1 years (range, 13-51 years) were included with a mean follow-up of 48 ± 28 months (24-123 months). Seventy-three percent (22/30) had high-grade trochlear dysplasia, and 63% (19/30) had patella alta. One patient had a postoperative dislocation and 1 had a subluxation event. The Caton-Deschamps ratio decreased by a mean of 0.2 (P = .001), leaving 30% with postoperative patella alta. The mean postoperative scores were as follows: Tegner = 5 ± 2, Kujala = 89 ± 16 (45-100), and IKDC = 85 ± 17 (44-100). Eighty-three percent (15/18) returned to their preoperative sport. Female gender was a risk factor for lower IKDC (77.3 vs. 92.6, P = .01) and Kujala (82.2 vs. 95.0, P = .03) scores. Medialization greater than 10 mm was directly correlated to lower IKDC (P = .02) and Kujala (P = .01) scores.\n\n\nCONCLUSIONS\nThe combination of MPFL reconstruction and TTO in patients with trochlear dysplasia results in low recurrence of instability. Patients on average had good subjective outcomes and were able to return to sport. 
Female gender and tibial tubercle medialization greater than 10 mm were associated with worse outcomes.\n\n\nLEVEL OF EVIDENCE\nLevel IV, therapeutic case series.",
"corpus_id": 46894578,
"title": "Combined Tibial Tubercle Osteotomy and Medial Patellofemoral Ligament Reconstruction for Recurrent Lateral Patellar Instability in Patients With Multiple Anatomic Risk Factors."
} | {
"abstract": "Data from a 1981 national survey of women's drinking show that women's problem drinking is associated with different configurations of roles at different ages. For women drinkers under 65, risks of problem drinking increase with age-specific patterns of role deprivation: the lack or loss of marital, employment, and childrearing roles. The demands of multiple roles do not appear to be a major cause of women's problem drinking at any age. Close role relationships with other drinkers may become a more important factor in risks of problem drinking among women drinkers after age 50. The complex findings demonstrate the need for age-specific analyses of the effects of women's roles and role relationships on patterns of problem drinking.",
"corpus_id": 146561349,
"score": 1,
"title": "Women's Roles and Problem Drinking Across the Lifespan"
} |
{
"abstract": "Potential changes in the mobility and bioavailability of risk and essential macro- and micro-elements achieved by adding various ameliorative materials were evaluated in a model pot experiment. Spring wheat (Triticum aestivum L.) was cultivated under controlled condition for 60 days in two soils, uncontaminated Chernozem and multi-element contaminated Fluvisol containing 4900 ± 200 mg/kg Zn, 35.4 ± 3.6 mg/kg Cd, and 3035 ± 26 mg/kg Pb. The treatments were all contained the same amount of sulfur and were as follows: (i) digestate from the anaerobic fermentation of biowaste, (ii) fly ash from wood chip combustion, and (iii) ammonium sulfate. Macro- and micro-nutrients Ca, Mg, K, Fe, Mn, Cu, P, and S, and risk elements Cd, Cr, Pb, and Zn were assayed in soil extracts with 0.11 mol/l solution of CH3COOH and in roots, shoots, and grain of wheat after 30 and 60 days of cultivation. Both digestate and fly ash increased levels of macro- and micro-nutrients as well as risk elements (especially Cd and Zn; the mobility of Pb decreased after 30 days of cultivation). The changes in element mobility in ammonium sulfate-treated soils appear to be due to both changes in soil pH level and inter-element interactions. Ammonium sulfate tended to be the most effective measure for increasing nutrient uptake by plants in Chernozem but with opposite pattern in Fluvisol. Changes in plant yield and element uptake in treated plants may have been associated with the higher proline content of wheat shoots cultivated in both soils compared to control. None of the treatments decreased uptake of risk elements by wheat plants in the extremely contaminated Fluvisol, and their accumulation in wheat grains significantly exceeded maximum permissible levels; these treatments cannot be used to enable cereal and other crop production in such soils. 
However, the combination of increased plant growth alongside unchanged element content in plant biomass in pots treated with digestate and fly ash suggests that these treatments have a beneficial impact on yield and may be effective treatments in crops grown for phytoremediation.",
"corpus_id": 2437921,
"title": "The effectiveness of various treatments in changing the nutrient status and bioavailability of risk elements in multi-element contaminated soil"
} | {
"abstract": "On the basis of a previous study performed in our laboratory, the use of organic and inorganic amendments can significantly modify the Hg mobility in soil. We have compared the effectiveness of organic and inorganic amendments such as digestate and fly ash, respectively, reducing the Hg mobility in Chernozem and Luvisol soils differing in their physicochemical properties. Hence, the aim of this work was to compare the impact of digestate and fly ash application on the chemical and biochemical parameters in these two mercury-contaminated soils in a model batch experiment. Chernozem and Luvisol soils were artificially contaminated with Hg and then incubated under controlled conditions for 21 days. Digestate and fly ash were applied to both soils in a dose of 10 and 1.5 %, respectively, and soil samples were collected after 1, 7, 14, and 21 days of incubation. The presence of Hg in both soils negatively affected to processes such as nitrification, provoked a decline in the soil microbial biomass C (soil microbial biomass C (MBC)), and the microbial activities (arylsulfatase, and β-glucosaminidase) in both soils. Meanwhile, the digestate addition to Chernozem and Luvisol soils contaminated with Hg improved the soil chemical properties (pH, dissolved organic carbon (DOC), N (Ntot), inorganic–N forms (N–NH4+ and N–NO3−)), as consequence of high content in C and N contained in digestate. Likewise, the soil MBC and soil microbial activities (dehydrogenase, arylsulfatase, and β-glucosaminidase) were greatly enhanced by the digestate application in both soils. In contrast, fly ash application did not have a remarkable positive effect when compared to digestate in Chernozem and Luvisol soil contaminated with mercury. These results may indicate that the use of organic amendments such as digestate considerably improved the soil health in Chernozem and Luvisol compared with fly ash, alleviating the detrimental impact of Hg. 
Probably, the chemical properties present in digestate may determine its use as a suitable amendment for the assisted-natural attenuation of mercury-polluted soils.",
"corpus_id": 8799651,
"title": "Organic and inorganic amendment application on mercury-polluted soils: effects on soil chemical and biochemical properties"
} | {
"abstract": "Abstract Inferences about leaf anatomical characteristics had largely been made by manually measuring diverse leaf regions, such as cuticle, epidermis and parenchyma to evaluate differences caused by environmental variables. Here we tested an approach for data acquisition and analysis in ecological quantitative leaf anatomy studies based on computer vision and pattern recognition methods. A case study was conducted on Gochnatia polymorpha (Less.) Cabrera (Asteraceae), a Neotropical savanna tree species that has high phenotypic plasticity. We obtained digital images of cross-sections of its leaves developed under different light conditions (sun vs. shade), different seasons (dry vs. wet) and in different soil types (oxysoil vs. hydromorphic soil), and analyzed several visual attributes, such as color, texture and tissues thickness in a perpendicular plane from microscopic images. The experimental results demonstrated that computational analysis is capable of distinguishing anatomical alterations in microscope images obtained from individuals growing in different environmental conditions. The methods presented here offer an alternative way to determine leaf anatomical differences.",
"corpus_id": 45439108,
"score": 1,
"title": "A computer vision approach to quantify leaf anatomical plasticity: a case study on Gochnatia polymorpha (Less.) Cabrera"
} |
{
"abstract": "Prostate brachytherapy is an intraoperative radiotherapy technique for irradiating prostate tumors by placing radioactive sources inside the prostate. CT image is used to calculate a personalized dose distribution (PDD) while the MRI is used to visualize the tumor and the organs at risk. Therefore, a registration of preoperative MRI and CT is essential since it could improve the overall precision of the treatment planning, the placement of radioactive sources inside the prostate as well as the visualization of the dose distribution with respect to the tumor. This registration should compensate for prostate deformations due to changes in size and form between the acquisitions of each modality. In this paper, we present an intensity-based non-rigid registration method that does not require any manual segmentation or visual identification of landmarks. This method is based on the maximization of the mutual information in combination with a deformation field parameterized by cubic B-Spline. The method was validated on clinical patient datasets; the preliminary evaluation shows encouraging results that satisfy the desired clinical accuracy.",
"corpus_id": 249059,
"title": "Non-rigid MRI/CT registration for effective planning of prostate brachytherapy"
} | {
"abstract": "Magnetic Resonance images (MRI) have superior soft tissue contrast compared with CT images. Therefore, MRI might be a better imaging modality to differentiate the prostate from surrounding normal organs. Methods to accurately register MRI to simulation CT images are essential, as we transition the use of MRI into the routine clinic setting. In this study, we present a finite element method (FEM) to improve the performance of a commercially available, B-spline-based registration algorithm in the prostate region. Specifically, prostate contours were delineated independently on ten MRI and CT images using the Eclipse treatment planning system. Each pair of MRI and CT images was registered with the B-spline-based algorithm implemented in the VelocityAI system. A bounding box that contains the prostate volume in the CT image was selected and partitioned into a tetrahedral mesh. An adaptive finite element method was then developed to adjust the displacement vector fields (DVFs) of the B-spline-based registrations within the box. The B-spline and FEM-based registrations were evaluated based on the variations of prostate volume and tumor centroid, the unbalanced energy of the generated DVFs, and the clarity of the reconstructed anatomical structures. The results showed that the volumes of the prostate contours warped with the B-spline-based DVFs changed 10.2% on average, relative to the volumes of the prostate contours on the original MR images. This discrepancy was reduced to 1.5% for the FEM-based DVFs. The average unbalanced energy was 2.65 and 0.38 mJ cm−3, and the prostate centroid deviation was 0.37 and 0.28 cm, for the B-spline and FEM-based registrations, respectively. Different from the B-spline-warped MR images, the FEM-warped MR images have clear boundaries between prostates and bladders, and their internal prostatic structures are consistent with those of the original MR images. 
In summary, the developed adaptive FEM method preserves the prostate volume during the transformation between the MR and CT images and improves the accuracy of the B-spline registrations in the prostate region. The approach will be valuable for the development of high-quality MRI-guided radiation therapy.",
"corpus_id": 23398624,
"title": "An adaptive MR-CT registration method for MRI-guided prostate cancer radiotherapy"
} | {
"abstract": "We compared the prostate volumes and rectal doses calculated by CT and CT-MRI fusion, and verified the usefulness of CT-MRI fusion in three-dimensional (3D) radiotherapy planning for localized prostate cancer. Three observers contoured the prostate and rectum of 13 patients with CT and CT-MRI fusion. Prostate delineations were classified into three sub-parts, and the volumes and distances to the rectum (PR distance) were calculated. 3D radiotherapy plans were generated. A dose-volume histogram (DVH) was constructed for the rectum. The intermodality and interobserver variations were assessed. CT-MRI fusion yielded a significantly lower prostate volume by 31%. In the sub-part analysis, the greatest difference was seen for the apical side. The PR distance was significantly extended by 3.5-mm, and the greatest difference was seen for the basal side. The irradiated rectal volume was reduced in the CT-MRI fusion-based plan. The reduction rates were greater in the relatively high-dose regions. The decrease of the prostate volume and length alteration of the distance between the prostate and rectum were correlated with the decrease of the irradiated rectal volume. The prostate volume delineated by CT-MRI fusion was negatively correlated with the decrease of the irradiated rectal volume. CT showed a tendency towards overestimation of the prostate volume and underestimation of the PR distance as compared to CT-MRI fusion. The rectal dose was significantly reduced in CT-MRI fusion-based plan. Using CT-MRI fusion, especially in cases with a small prostate, the irradiated rectal volume can be reduced, with consequent reduction in rectal complications.",
"corpus_id": 12929604,
"score": 2,
"title": "Usefulness of CT-MRI fusion in radiotherapy planning for localized prostate cancer."
} |
{
"abstract": "We consider the multi armed bandit problem in non-stationary environments. Based on the Bayesian method, we propose a variant of Thompson Sampling which can be used in both rested and restless bandit scenarios. Applying discounting to the parameters of prior distribution, we describe a way to systematically reduce the effect of past observations. Further, we derive the exact expression for the probability of picking sub-optimal arms. By increasing the exploitative value of Bayes' samples, we also provide an optimistic version of the algorithm. Extensive empirical analysis is conducted under various scenarios to validate the utility of proposed algorithms. A comparison study with various state-of-the-arm algorithms is also included.",
"corpus_id": 6511668,
"title": "Taming Non-stationary Bandits: A Bayesian Approach"
} | {
"abstract": "Although many algorithms for the multi-armed bandit problem are well-understood theoretically, empirical confirmation of their effectiveness is generally scarce. This paper presents a thorough empirical study of the most popular multi-armed bandit algorithms. Three important observations can be made from our results. Firstly, simple heuristics such as epsilon-greedy and Boltzmann exploration outperform theoretically sound algorithms on most settings by a significant margin. Secondly, the performance of most algorithms varies dramatically with the parameters of the bandit problem. Our study identifies for each algorithm the settings where it performs well, and the settings where it performs poorly. Thirdly, the algorithms' performance relative each to other is affected only by the number of bandit arms and the variance of the rewards. This finding may guide the design of subsequent empirical evaluations. In the second part of the paper, we turn our attention to an important area of application of bandit algorithms: clinical trials. Although the design of clinical trials has been one of the principal practical problems motivating research on multi-armed bandits, bandit algorithms have never been evaluated as potential treatment allocation strategies. Using data from a real study, we simulate the outcome that a 2001-2002 clinical trial would have had if bandit algorithms had been used to allocate patients to treatments. We find that an adaptive trial would have successfully treated at least 50% more patients, while significantly reducing the number of adverse effects and increasing patient retention. At the end of the trial, the best treatment could have still been identified with a high level of statistical confidence. Our findings demonstrate that bandit algorithms are attractive alternatives to current adaptive treatment allocation strategies.",
"corpus_id": 33115,
"title": "Algorithms for multi-armed bandit problems"
} | {
"abstract": "We consider negotiation settings in which two agents use natural language to bargain on goods. Agents need to decide on both high-level strategy (e.g., proposing \\$50) and the execution of that strategy (e.g., generating\"The bike is brand new. Selling for just \\$50.\"). Recent work on negotiation trains neural models, but their end-to-end nature makes it hard to control their strategy, and reinforcement learning tends to lead to degenerate solutions. In this paper, we propose a modular approach based on coarse di- alogue acts (e.g., propose(price=50)) that decouples strategy and generation. We show that we can flexibly set the strategy using supervised learning, reinforcement learning, or domain-specific knowledge without degeneracy, while our retrieval-based generation can maintain context-awareness and produce diverse utterances. We test our approach on the recently proposed DEALORNODEAL game, and we also collect a richer dataset based on real items on Craigslist. Human evaluation shows that our systems achieve higher task success rate and more human-like negotiation behavior than previous approaches.",
"corpus_id": 52119091,
"score": -1,
"title": "Decoupling Strategy and Generation in Negotiation Dialogues"
} |
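The discounted Thompson Sampling idea summarized in the query abstract above (discounting the parameters of the prior so that old observations fade out) can be sketched in a Bernoulli-bandit setting. This is an illustrative reconstruction, not the authors' algorithm: the Beta-Bernoulli model, the per-round discount of all pseudo-counts, and the floor of 1.0 that keeps the posterior proper are assumptions of this sketch.

```python
import random

def discounted_thompson_step(params, rewards_fn, gamma=0.95):
    """One round of Thompson Sampling with discounted Beta priors.

    params: list of [alpha, beta] pseudo-counts, one pair per arm.
    rewards_fn: callable arm_index -> 0 or 1 (Bernoulli reward).
    gamma: discount applied to all pseudo-counts each round, so past
           observations lose weight and the policy can track drift.
    """
    # Draw one sample from each arm's Beta posterior; play the best arm.
    samples = [random.betavariate(a, b) for a, b in params]
    arm = max(range(len(params)), key=lambda i: samples[i])
    r = rewards_fn(arm)
    # Discount every arm's pseudo-counts (floored at the uniform prior),
    # then add the new observation to the played arm.
    for p in params:
        p[0] = max(1.0, gamma * p[0])
        p[1] = max(1.0, gamma * p[1])
    params[arm][0] += r
    params[arm][1] += 1 - r
    return arm, r
```

Under drift, the discount bounds each arm's effective sample size near 1/(1 - gamma), so the policy keeps a residual level of exploration and can re-detect a change in the best arm.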
{
"abstract": "The basal diameter of the annual shoot (1YD) affects vegetative growth and fruiting of the walnut trees. In order to determine interdependency between the 1YD and the older parent wood, 64 walnut genotypes belonging to four different branching and fruiting habits (morphotypes M-I, M-II, M-III and M-IV) were investigated. Year-to-year stability of 1YD was tested with the architectural analysis of a 3-year-old fruiting branch and its constituents (a 3-year-old bearer + corresponding 2-year-old + annual shoots) during 3 successive years. Based on Pearson's correlation coefficients and the multiple regression analysis of 12 quantitative traits, 12 models (four morphotype in 3 successive years) of 1YD were formed. They were compared with the standard model which was calculated on the basis of 1-year measurements of 1Y with no respect to the branching and fruiting type and comprises three quantitative traits, i.e. basal diameter of a 2-year-old parent shoot (2YD), the length of 2Y shoot (2YL), and the length of annual shoot (1YL). In a single year, the 1YD was influenced by two–five parameters. Five out of 12 models agreed with the standard model: in the lateral fruiting genotypes (M-IV), 1YD was always under the influence of the 2Y diameter, and the 1Y length. In addition, the number of nodes of the 2Y parent shoot had an important influence on 1Y diameter. In the terminal bearers (M-I), the impact of 2YD on the 1YD slightly increased with the tree age, and some other parameters, like 1Ynumber and 1Ynodes, which became to be important for 1YD. In the intermediate genotypes with mezotonic ramification (M-II), the number of vegetative buds per 1Y and angles of 1Y had significant effects on 1YD. In the intermediate bearers with acrotonic ramification (M-III), one to four other parameters were included into the model each year beside the 1Y number. 
Since the traits of a 2-year-old parent shoot have a great influence on the 1YD, the information from year N can be used for the prediction of the annual shoot development in year N+1. Such a prediction is more reliable in M-I and M-IV than in M-II and M-III. When dealing with the intermediate fruiting cultivars, the 1Y number has to be considered in the prediction of 1Y diameter besides 2YD and 1YL.",
"corpus_id": 6764516,
"title": "Stability of the annual shoot diameter in Persian walnut: a case study of different morphotypes and years"
} | {
"abstract": "The agronomic performance of fruit trees is significantly influenced by tree internal organization. Introducing architectural traits in breeding programs could thus lead to select new varieties with a regular bearing and lower input demand in order to reduce training and environmental costs. However, an interaction between tree ontogeny and genetic factors is expected. In this study, we investigated the genetic determinism of architectural traits in the olive tree, accounting for tree development over 5 years until first flowering occurrence. We studied an F1 progeny issued from a cross between two contrasted genotypes, ‘Olivière’ and ‘Arbequina’. Tree architecture was decomposed in quantitative traits, related to (1) growth and branching, (2) first flowering and fruiting. Models, including the year of growth, branching order and genotype effects, were built with variance function and covariance structure when necessary. After a model selection, broad sense heritabilities were calculated. During the first 3 years, both the mean values of vegetative traits and genetic factor significance depended on the shoot within-tree position. Dependencies between consecutive years were revealed for traits related to whole tree form. Whole tree form variables showed medium to high broad sense heritability values, whereas reproductive traits were highly heritable. This study demonstrates the existence of ontogenic trends in the olive tree, which result in traits heritable only at the tree periphery. A phenotyping strategy adapted to its architectural characteristics and a list of relevant traits, such as maximal internode length, is proposed. Transgressive effects suggest that genetic progress could be performed in future selection programs.",
"corpus_id": 17653579,
"title": "Genetic determinism of the vegetative and reproductive traits in an F1 olive tree progeny"
} | {
"abstract": "Bases of orchard productivity were evaluated in four 10-year-old apple orchard systems ('Empire' and 'Redchief Delicious' Malus domestics Borkh. on slender spindle/M.9, Y-trellis/M.26, central leader/M.9/MM.111, and central leader/M.7a). Trunk cross-sectional areas (TCA), canopy dimension and volume, and light interception were measured. Canopy dimension and canopy volume were found to be relatively poor estimators of orchard light inter- ception or yield, especially for the restricted canopy of the Y-trellis. TCA was correlated to both percentage of photosynthetically active radiation (PAR) intercepted and yields. Total light interception during the 7th to the 10th years showed the best correlation with yields of the different systems and explained most of the yield variations among systems. Average light interception was highest with the Y-trellis/M.26 system of both cultivars and approached 70% of available PAR with 'Empire'. The higher light interception of this system was the result of canopy architecture that allowed the tree canopy to grow over the tractor alleys. The central leader/M.7a had the lowest light interception with both cultivars. The efficiency of converting light energy into fruit (conversion efficiency = fruit yield/light intercepted) was significantly higher for the Y-trellis/M.26 system than for the slender spindle/M.9 or central leader/ M.9/MM.111 systems. The central leader/M.7a system bad the lowest conversion efficiency. An index of partitioning was calculated as the kilograms of fruit per square centimeter increase in TCA. The slender spindle/M.9 system had significantly higher partitioning index than the Y-trellis/M.26 or central leader/M.9/MM.111. The central leader/ M.7a system had the lowest partitioning index. The higher conversion efficiency of the Y/M.26 system was not due to increased partitioning to the fruit; however, the basis for the greater efficiency is unknown. 
The poor conversion efficiency of the central leader/M.7a was mostly due to low partitioning to the fruit. The Y-trellis/M.26 system was found to be the most efficient in both intercepting PAR and converting that energy into fruit. Orchard planting systems trials have the inherent limitation of providing information only for the particular set of spacings, rootstock, and tree forms growing in a given climate as deter- mined at the beginning of the experiment. To extend the ability to make inferences from such long-term experiments requires a more fundamental understanding of the principles involved in orchard system performance. The first principle is that total dry matter production and, in many cases, crop yield are related to total light interception (Agha and Buckley, 1986; Hunter and Proctor, 1986; Monteith, 1977; Palmer, 1976, 1989; Palmer and Jackson, 1974). This principle holds for essentially all crops (Monteith, 1977). The slope of this relationship is the net carbon uptake efficiency. In orchard systems that have discontinuous canopies, the spacing and canopy char- acteristics (height, width, shape, and leaf density) control total light interception, dry matter production, and thus potential yield. Consequently, orchard designs, canopy shapes, or incomplete can- opy development that limit total light interception will require pro- portionately higher efficiency of converting light energy into fruit to obtain high yields per unit area. The light interception model of Jackson and Palmer (1980) showed that the same total light interception could be achieved by five systems that varied by as much as 2-fold in height, 5-fold in clear alley width or basal canopy width, 3-fold in canopy volume, 2-fold in canopy surface area, and almost 2-fold in leaf area index. Since none of these factors alone controlled light interception, all must be considered in designing orchard canopies. Although maximum fruit yields ultimately are limited by light",
"corpus_id": 86071016,
"score": 2,
"title": "Bases of Yield and Production Efficiency in Apple Orchard Systems"
} |
{
"abstract": "Abstract Phytochemical investigation of a methanol extract of Panax pseudoginseng flower buds resulted in the isolation of 22 dammarane-type triterpenoid saponins, including three new compounds, pseudoginsenosides A-C (1-3), and 19 known analogs. Their chemical structures were identified by the comprehensive spectroscopic methods, including 1 D and 2 D NMR and mass spectra. In addition, their cytotoxic effects toward three human carcinoma cell lines, including liver (HepG2), breast (MCF7), and lung (A549) were also evaluated. Graphical Abstract",
"corpus_id": 238238123,
"title": "Dammarane-type triterpenoid saponins from the flower buds of Panax pseudoginseng with cytotoxic activity"
} | {
"abstract": "Six new dammarane-type triterpene diglycosides with a hydroperoxide group, floralginsenosides A, B, C, D, E, and F, were isolated from ginseng flower, the flower buds of Panax ginseng C. A. MEYER, together with seven known dammarane-type triterpene oligoglycosides. The structures of new floralginsenosides were elucidated on the basis of chemical and physicochemical evidence.",
"corpus_id": 3137811,
"title": "Medicinal flowers. XI. Structures of new dammarane-type triterpene diglycosides with hydroperoxide group from flower buds of Panax ginseng."
} | {
"abstract": "The ginseng dammarane saponin, ginsenoside-Rg1, and the oleanolic acid saponins, chiku-setsusaponin-IV and pseudo-ginsenoside-RT1, were isolated in yields of 0.02, 0.15 and 0.04%, respectively, from roots and small rhizomes of Panax pseudo-ginseng WALL. subsp. pseudo-ginseng HARA collected at Nielamu, Tibet. A saponin formulated as 3-O-α-L-arabinofuranosyl(1→4)-β-D-glucuronide of oleanolic acid was also isolated in a partially purified state. The chemotaxonomical significance of the results is disccussed.",
"corpus_id": 38251389,
"score": -1,
"title": "Saponins from Panax pseudo-ginseng Wall. subsp. pseudo-ginseng Hara collected at Nielamu, Tibet, China."
} |
{
"abstract": "Focusing on three of the largest coastal cities in the Republic of Ireland, this paper highlights the importance of a historical analysis of flood hazards in contextualising current events and potential future risks. Over the last decade, the cities of Dublin, Cork and Galway have experienced several major coastal, river and pluvial floods. In the aftermath of these floods, two distinct but related narratives have dominated public discourse and official responses. The first narrative presents recent floods as unprecedented and as possible evidence of climate change. The second constructs floods primarily as natural events and assumes that the optimal means of reducing flood losses is to prevent flood events. In this paper, I suggest that these narratives are not supported by a historical analysis of exposure and vulnerability to flood hazards in Irish cities. This paper draws primarily on newspaper archives to construct a record of past flooding that challenges these narratives in several ways and in doing so offers lessons for similar cities in other countries. I contend that these narratives are perpetuated by a narrow form of knowledge production (quantitative risk assessment) and a narrow range of data (numeric instrumental records). Incorporating a broader range of sources and data types into risk and vulnerability assessments may illuminate more creative strategies for reducing both contemporary and future flood losses.",
"corpus_id": 154515380,
"title": "Environmental knowledge and human experience: using a historical analysis of flooding in Ireland to challenge contemporary risk narratives and develop creative policy alternatives"
} | {
"abstract": "Societal adaptation to flooding is a critical component of contemporary flood policy. Using content analysis, this article identifies how two major flooding episodes (2009 and 2014) are framed in the Irish broadsheet news media. The article considers the extent to which these frames reflect shifts in contemporary flood policy away from protection towards risk management, and the possible implications for adaptation to living with flood risk. Frames help us make sense of the social world, and within the media, framing is an essential tool for communication. Five frames were identified: flood resistance and structural defences, politicisation of flood risk, citizen as risk manager, citizen as victim and emerging trade-offs. These frames suggest that public debates on flood management do not fully reflect shifts in contemporary flood policy, with negative implications for the direction of societal adaptation. Greater discussion is required on the influence of the media on achieving policy objectives.",
"corpus_id": 12250083,
"title": "The framing of two major flood episodes in the Irish print news media: Implications for societal adaptation to living with flood risk"
} | {
"abstract": "Our Heritage: A Brief History of School Design in the Nineteenth and Twentieth Centuries. Archecture for Education. State-of-the-Art Schools. color and Light. How to Prevent Obsolete Schools. Transforming the Learning Environment. Site Planning and the Master Plan. The Planning and Building Process. Designing Schools with Character. 20/20 Vision and Choice for the Future. Index.",
"corpus_id": 107604015,
"score": 0,
"title": "Planning and Designing Schools"
} |
{
"abstract": "We present a robust and non-heuristic algorithm that finds all extremum points of the error distribution function of numerically Laplace-transformed orbital energy denominators. The extremum point search is one of the two key steps for finding the minimax approximation. If pre-tabulation of initial guesses is supposed to be avoided, strategies for a sufficiently robust algorithm have not been discussed so far. We compare our non-heuristic approach with a bracketing and bisection algorithm and demonstrate that 3 times less function evaluations are required altogether when applying it to typical non-relativistic and relativistic quantum chemical systems.",
"corpus_id": 1639952,
"title": "Improvements on the minimax algorithm for the Laplace transformation of orbital energy denominators"
} | {
"abstract": "In this paper, we determine efficient imaginary frequency and imaginary time grids for second-order Møller-Plesset (MP) perturbation theory. The least-squares and Minimax quadratures are compared for periodic systems, finding that the Minimax quadrature performs slightly better for the considered materials. We show that the imaginary frequency grids developed for second order also perform well for the correlation energy in the direct random phase approximation. Furthermore, we show that the polarizabilities on the imaginary time axis can be Fourier-transformed to the imaginary frequency domain, since the time and frequency Minimax grids are dual to each other. The same duality is observed for the least-squares grids. The transformation from imaginary time to imaginary frequency allows one to reduce the time complexity to cubic (in system size), so that random phase approximation (RPA) correlation energies become accessible for large systems.",
"corpus_id": 2365126,
"title": "Low Scaling Algorithms for the Random Phase Approximation: Imaginary Time and Laplace Transformations."
} | {
"abstract": "A number of studies in the literature have looked into the use of real-time biometric data to improve one's own physiological performance and wellbeing. However, there is limited research that looks into the effects that sharing biometric data with others could have on one's social network. The video documents the design and development of HeartLink, a system that collects real-time personal biometric data such as heart rate and broadcasts this data online to anyone. Insights gained on designing systems to broadcast real-time biometric data are presented. The video also reports on the key results from testing HeartLink in two studies that were conducted during sport events.",
"corpus_id": 15019249,
"score": 1,
"title": "HeartLink: open broadcast of live biometric data to social networks"
} |
{
"abstract": "Three-dimensional (3D) axisymmetric invisibility cloaks with arbitrary shaped in layered-media background are presented using the transformation optics. The inner and outer boundaries of the cloaks can be non-conformal with arbitrary shapes, which considerably improve the ∞exibility of the cloaking applications. However, such kinds of 3D cloaks cannot be simulated using the commercial softwares due to the tremendous memory requirements and CPU time. By taking advantage of the rotationally symmetrical property, we propose an e-cient flnite-element method (FEM) to simulate and analyze the 3D cloaks, which can greatly reduce the CPU time and memory requirements. The method is based on the electric-fleld formulation, in which the transverse flelds are expanded in terms of second-order edge- based vector basis functions and the azimuth components are expanded using second-order nodal-based scalar basis functions. The FEM mesh is truncated using the absorbing boundary condition. Excellent cloaking performance of the 3D cloaks in layered-media background has been verifled by the proposed method.",
"corpus_id": 2060060,
"title": "Three-dimensional axisymmetric invisibility cloaks with arbitrary shapes in layered-medium background"
} | {
"abstract": "This work examines re∞ection of a light from a semi- inflnite medium which is modifled with an ordered monolayer of spherical nanoparticles placed on or under its surface. We derive analytical expressions for the electric flelds within and outside such structures and verify them with help of strict numerical simulations. We show that nanoparticles layer acts as an imaginary zero-thickness surface having complicated non-Fresnel re∞ection coe-cients with wavelength dependent phase shift. It is shown that such monolayers may reduce re∞ection relative to re∞ection from a pure substrate surface. We derive and analyse a zero-re∞ection condition in the simple intuitive form. It is shown that a single layer of nanocavities near the medium-vacuum interface may increase the transparency of a dielectric medium to values close to 100% in a wide wavelength range.",
"corpus_id": 6017577,
"title": "Optical Antireflection of a Medium by Nanostructural Layers"
} | {
"abstract": "A hybrid edge element approach for the computation of waveguide modes is presented. The electric field is decomposed into its transverse and longitudinal components, which are modeled in terms of two-dimensional edge elements and scalar nodal elements, respectively, thereby satisfying the Dirichlet boundary condition at the perfect electric conductor boundaries and dielectric interfaces. Failure to do so results in the generation of spurious modes. This approach allows for the modeling of a three-dimensional field quantity over a two-dimensional boundary, namely the waveguide cross section. Another approach, the method of moments, serves as an excellent means of verifying the results obtained through use of hybrid edge elements. A comparison of the results obtained from both techniques are presented, along with the associated field plots obtained from the hybrid edge element approach for several geometries.",
"corpus_id": 110684832,
"score": 2,
"title": "Waveguide mode solution using a hybrid edge element approach"
} |
{
"abstract": "Background and Purpose: Klebsiella pneumoniae and Klebsiella oxytoca are the two most common pathogens causing nosocomial infections in humans and are of great concern for developing multidrug resistance. In the present study, K. pneumoniae and K. oxytoca from clinical samples were evaluated for their antibiotic sensitivity patterns against commonly used antibiotics and production of extended-spectrum beta-lactamase (ESBL). Materials and Methods: The isolates were obtained from tracheal swabs, sputum, wound swabs, pus, blood and urine samples of hospitalized patients. Klebsiella pneumoniae and Klebsiella oxytoca were identified by cultural and biochemical methods. Antibiotic sensitivity test was performed by modified Kirby-Bauer disc diffusion technique. ESBL production in Klebsiella spp. was confirmed by double disc synergy test.Results and Conclusion: Out of 500 clinical isolates, 120 were found positive for Klebsiella among which 108 were K. pneumoniae and 12 were K. oxytoca based on indole test. Prevalence rate of Klebsiella was found more prominent in males aged over 50 years, mostly in urine samples. Overall resistance pattern of Klebsiella isolates to Ampicillin, Amoxicillin, Ceftriaxone, Ciprofloxacin, Co-trimoxazole, Gentamicin, Nalidixic acid, Tetracycline was 100%, 90%, 45%, 40%, 45%, 25%, 50%, 35% respectively. Multidrug resistance was found more common in K. pneumoniae (56%) than in K. oxytoca (50%). Prevalence rate of ESBL producing Klebsiella was found 45% among which K. pneumoniae (50%) were found more prominent than K. oxytoca (25%). All the ESBL producing Klebsiella isolates were found to be multidrug resistant, showing 100% resistance to Ampicillin, Amoxicillin, Ceftriaxone and Ciprofloxacin. Keywords: Multi-drug resistant, Klebsiella, extended-spectrum beta-lactamase.",
"corpus_id": 2772974,
"title": "Prevalence, antibiotic susceptibility profiles and ESBL production in Klebsiella pneumoniae and Klebsiella oxytoca among hospitalized patients"
} | {
"abstract": "The thermal profiles of 118 bacterial strains, representing six species of the family Enterobacteriaceae, isolated from a variety of native Australian mammals were determined under in vitro conditions. Each of the bacterial species had a unique thermal profile and differed in their minimum or maximum temperature for growth and in their response to changing temperatures. The taxonomic classification of the host from which the bacterial strains were isolated explained a significant amount of the variation in thermal profile among strains of a species. Host effects were detected at all taxonomic levels: order, family, genus, and species. The locality (State or Territory) or climate zone from which the strain was collected explained a significant amount of the variation in the thermal profile of Citrobacter freundii, Enterobacter cloacae and Klebsiella pneumoniae strains. Genetically similar strains, as determined by allozyme profiles, had similar thermal profiles for the bacterial species Hafnia alvei and Escherichia coli. The results of this study indicate that there are potentially many aspects of host biology that may determine the thermal profile of these bacteria.",
"corpus_id": 7335225,
"title": "Host and geographical factors influence the thermal niche of enteric bacteria isolated from native Australian mammals"
} | {
"abstract": "Abstract Ongoing archaeological site destruction will limit future opportunities to conduct field research in the Basin of Mexico. Preparing for this future requires assessment of the resources that may be lost and the research that might be accomplished. Developing priorities to assist future studies of ancient economies is complicated by the evolving nature of this research. Nonetheless, trends in this research over the past few decades highlight the increasingly heavy data requirements associated with addressing economic issues. Much future research will be undercut if the required data are neither gathered nor collectable. Furthermore, many unresolved issues should be addressed while opportunities for further research remain available. The examples presented here focus on the development of the regional economy and generally reflect the author's concerns with the subsistence economy. Among these issues are characterizing more clearly the role of the site of Coapexco within the Early Formative economy, clarifying Teotihuacan's relationship with the rest of the basin when the city first achieved hegemony, and obtaining a more complete picture of the economic concerns and internal economic organization of the city. These examples represent a broader set of research issues than can be discussed in a single paper, but they illustrate the kind of work archaeologists must consider completing while they can.",
"corpus_id": 162681199,
"score": 0,
"title": "TOWARDS FUTURE ECONOMIC RESEARCH IN THE BASIN OF MEXICO"
} |
{
"abstract": "The interacting continuous and discrete dynamics in hybrid systems may lead to Zeno executions, which are solutions of the system having infinitely many discrete transitions in finite time. Although physical systems do not show Zeno behaviour, models of real systems may be Zeno due to modelling abstraction. It is hard to analyse such models with the existing theory. Since abstraction is an important tool in the hierarchical design of hybrid systems, one would like to determine when it may lead to Zeno models. Zeno hybrid systems are studied in detail in the paper. Necessary and sufficient conditions for the existence of Zeno executions are given. The Zeno set is introduced as the ω limit set of a Zeno execution. Properties of the Zeno set are derived for a fairly large class of hybrid systems. Copyright 2001 © John Wiley & Sons, Ltd.",
"corpus_id": 2057416,
"title": "Zeno hybrid systems"
} | {
"abstract": "This paper studies the existence of solutions to a class of hybrid automata in which the underlying continuous dynamics are represented by inhomogeneous linear time-invariant systems whose inputs are controls that can be determined by the user. The principal result of the paper is a procedure that searches for global periodic nonterminating solutions of systems having a single cycle.",
"corpus_id": 2127774,
"title": "On the Existence of Solutions to Controlled Hybrid Automata"
} | {
"abstract": "We prove that an analytic quasiperiodically forced circle flow with a not super-Liouvillean base frequency and which is close enough to some constant rotation is C∞ rotations reducible, provided its fibered rotation number is Diophantine with respect to the base frequency. As a corollary, we obtain that among such systems, the linearizable ones and those displaying mode-locking are locally dense for the C∞-topology.",
"corpus_id": 253737351,
"score": 1,
"title": "Linearization of Quasiperiodically Forced Circle Flows Beyond Brjuno Condition"
} |
{
"abstract": "Properties of simple models of confined linear polymer chains were studied by means of the Monte Carlo method. Model chains were built of united atoms (statistical segments) and embedded to a simple cubic lattice. Then polymers were put into a slit formed by two parallel impenetrable surfaces. Chain lengths were varied up to 800 segments and the density of the polymer melt was changed up to 0.5. A Metropolis-like sampling Monte Carlo algorithm was used to determine the static properties of this model. The influence of the size of the confinement, the polymer melt concentration and the chain length on the chain’s size and the structure was studied. The universal behavior of all confined polymer linear chains under consideration was found and discussed.",
"corpus_id": 12818582,
"title": "Properties of Confined Polymer Melts"
} | {
"abstract": "A method is described for simulating the dynamical behavior of a linear polymer in dilute solution, subject to random collision with solvent molecules. Equilibrium distributions of various chain dimensions may be obtained by periodic inspection of the chain. Relaxation phenomena in such chains may also be studied. Results are given for equilibrium distribution and relaxation behavior of the end‐to‐end length, for chains of 8, 16, 32, and 64 beads. The equilibrium chain dimensions are in satisfactory accord with the calculations of Wall and his collaborators, while the relaxation times are close to those predicted with the aid of the hydrodynamic theory of Rouse and Zimm.",
"corpus_id": 94069764,
"title": "Monte Carlo Calculations on the Dynamics of Polymers in Dilute Solution"
} | {
"abstract": "Abstract G -value for γ-induced scission of main chains for authentically pure poly(methyl methacrylate) is 1.5. It ia independent of irradiation temperature below 273 K. It is remarkably affected below ca. 200 K by residual monomer or those produced by radiolysis. The temperature dependence of the G -value reported in previous papers is attributed to the monomer present in polymer samples.",
"corpus_id": 96213398,
"score": 1,
"title": "Temperature effect on the radiation-degradation of poly(methyl methacrylate)"
} |
{
"abstract": "Reversible logic gates are implemented over a high scalein the future technologies. Reversible logic is seen as a demandingfield with variegated applications like CMOS designs consumingless power. This paper proposed design of a full Adder/Subtractorcircuitry with the help of fault tolerant based Reversible logic gates. In the given paper, a full adder/subtractor is proposed with help ofMIG (Modified Islam Gate) & COG (Controlled Operation Gate)reversible logic gate comprised of pipelining. As observed from theoutcome session, it is evident that delay will be minimized by around61% by making use of COG & MIG Reversible logic gatescontrasting Feynman Double Gate based Full Adder/Subtractor.",
"corpus_id": 6602973,
"title": "Low Delay Based Full Adder/Subtractor by MIG and COG Reversible Logic Gate"
} | {
"abstract": "A set of p-valued logic gates (primitives) is called universal if an arbitrary p-valued logic function can be realized by a logic circuit built up from a finite number of gates belonging to this set. In this paper, we consider the problem of determining the number of universal single-gate libraries of p-valued reversible logic gates with two inputs and two outputs, under the assumption that constant signals can be applied to an arbitrary number of inputs. We have proved some properties of such gates and established that over 97% of ternary gates are universal.",
"corpus_id": 1122111,
"title": "On universality of general reversible multiple-valued logic gates"
} | {
"abstract": "Today's computers are based on irreversible logic devices, which have been known to be fundamentally energy-inefficient for several decades. Recently, alternative reversible logic technologies have improved rapidly, and are now becoming practical. \nIn traditional models of computation, pure reversibility seems to decrease overall computational efficiency; I provide a proof to this effect. However, traditional models ignore important physical constraints on information processing. \nThis thesis gives the first analysis demonstrating that in a realistic model of computation that accounts for thermodynamic issues, as well as other physical constraints, the judicious use of reversible computing can strictly increase asymptotic computational efficiency, as machine sizes increase. I project real benefits for supercomputing at a large (but achievable) scale in the fairly near term. And with proposed future computing technologies, I show that reversibility will benefit computing at all scales. \nNext, the thesis demonstrates that reversible computing techniques do not make computer design much more difficult. I describe how to design asymptotically efficient processors using an “adiabatic” reversible electronic logic technology that can be built with today's microprocessor fabrication processes. I describe a simple universal reversible parallel processor chip that our group recently fabricated, and a reversible instruction set for a more traditional RISC-style uniprocessor. \nFinally, I describe techniques for programming reversible computers. I present a high-level language and a compiler suitable for coding efficient reversible algorithms, and I describe a variety of example algorithms, including efficient reversible sorting, searching, arithmetic, matrix, and graph algorithms. As an example application, I present a linear-time, constant-space reversible program for simulating the Schrodinger wave equation of quantum mechanics. 
(Copies available exclusively from MIT Libraries, Rm. 14-0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)",
"corpus_id": 20812081,
"score": 2,
"title": "Reversibility for efficient computing"
} |
{
"abstract": "Orbital debris in low Earth orbit (LEO) are now sufficiently dense that the use of LEO space is threatened by runaway collisional cascading. A problem predicted more than thirty years ago, the threat from debris larger than about 1cm demands serious attention. A promising proposed solution uses a high power pulsed laser system on the Earth to make plasma jets on the objects, slowing them slightly, and causing them to re-enter and burn up in the atmosphere. In this paper, we reassess this approach in light of recent advances in low-cost, light-weight segmented design for large mirrors, calculations of laser-induced orbit changes and in design of repetitive, multi-kilojoule lasers, that build on inertial fusion research. These advances now suggest that laser orbital debris removal (LODR) is the most costeffective way to mitigate the debris problem. No other solutions have been proposed that address the whole problem of large and small debris. A LODR system will have multiple uses beyond debris removal. Inte...",
"corpus_id": 8512456,
"title": "Removing orbital debris with pulsed lasers"
} | {
"abstract": "Other papers in this conference discuss the ORION concept for laser space debris mitigation. An alternative approach to removing space debris nicknamed “Catcher’s Mitt” has been proposed. In this concept, a block of low density solid material is placed in a precessing, elliptical, near‐equatorial orbit to sweep out near‐Earth space between about 400 km and 1100 km altitude where the hazardous debris objects reside. The concept could work by vaporizing or trapping the objects, or slowing them enough for re‐entry on passing through the “mitt.” To compete with ORION, an alternative must intercept 300 k objects in two years. We demonstrate two difficulties with the “mitt” idea. The first of these is that even if it is made of aerogel with 1 mg/cm3 density, the required mass is about 2 MT. The second problem is that an elliptical mitt orbit covering the 400–1100 km debris altitude range would suffer ram pressure that would have to be compensated by a 10 kN‐thrust engine operating continuously for the mission d...",
"corpus_id": 129092189,
"title": "“Catcher’s Mitt” as an Alternative to laser Space Debris Mitigation"
} | {
"abstract": "We derive optimum values of parameters for laser-driven flights into low Earth orbit (LEO) using an Earth-based laser, as well as sensitivity to variations from the optima. These parameters are the ablation plasma exhaust velocity v E and specific ablation energy Q * , plus related quantities such as momentum coupling coefficient C m and the pulsed or continuous laser intensity that must be delivered to the ablator to produce these values. Different optima are found depending upon whether it is desired to maximize mass m delivered to LEO, maximize the ratio m / M of orbit to ground mass, or minimize cost in energy per gram delivered. Although it is not within the scope of this report to provide an engineered flyer design, a notional, cone-shaped flyer is described to provide a substrate for the discussion and flight simulations. The flyer design emphasizes conceptually and physically separate functions of light collection at a distance from the laser source, light concentration on the ablator, and autonomous steering. Approximately ideal flight paths to LEO are illustrated beginning from an elevated platform. We believe LEO launch costs can be reduced 100-fold in this way. Sounding rocket cases, where the only goal is to momentarily reach a certain altitude starting from near sea level, are also discussed. Nonlinear optical constraints on laser propagation through the atmosphere to the flyer are briefly considered.",
"corpus_id": 59468305,
"score": 2,
"title": "Optimum parameters for laser launching objects into low Earth orbit"
} |
{
"abstract": "EDITOR—Ghosh et al have written a useful review of ulcerative colitis, but we wish to make some points regarding paediatric practice.1 It is particularly important to make these points in a general journal as a recent survey by the British Paediatric Surveillance Unit showed that inflammatory bowel disease is a common childhood condition, with an incidence of 4.7/100 000/year …",
"corpus_id": 394095,
"title": "Ulcerative colitis should be investigated differently in children"
} | {
"abstract": "Failure of growth and retarded sexual development are serious and common problems in children and teenagers with inflammatory bowel disease, particularly Crohn's disease. Thus height, weight, sexual staging, and bone age should be closely monitored in such patients. In 1989 we reported serious underrecording of these variables of growth in a cohort of Scottish children with inflammatory bowel disease.1 We assessed the situation a decade later.\n\nWe studied 28 boys and 13 girls aged 16 years at first admission to hospital with ulcerative colitis (n=14) or Crohn's disease (n=27). These patients, identified from the Scottish hospitals database of inpatients statistics for 1984-88, were resident in four of the Scottish regions.\n\nWe reviewed the patients' case records and noted whether height, …",
"corpus_id": 27040634,
"title": "Neglect of growth and development in the clinical monitoring of children and teenagers with inflammatory bowel disease: review of case records"
} | {
"abstract": "The prognosis of ulcerative colitis including survival, colectomy rate, activity of disease, and working capacity was estimated from a follow up study of 783 patients with ulcerative colitis comprising all patients from the county of Copenhagen, except for the island of Amager, diagnosed between 1960 and 1978. The period of observation ranged from one to 18 years with a median of 6.7 years for the clinical observations, eight years for survival and 11.6 years for the occurrence of large bowel cancer. The follow up was 100% for both survival and cancer. The survival rate in women did not differ from that in the general population. In men over 40 years of age at diagnosis a slight excess mortality was found, but only in the year of diagnosis (2.1%) and the following year (1.5%). Colonic cancer was seen in only seven out of the 783 patients, corresponding to an annual risk of 0.07% and a cumulative risk after 18 years with ulcerative colitis of 1.4% (95% confidence limits, 0.7-2.8%) independent of the initial extent of disease. The colectomy rate was 9.6% in the year of diagnosis. The cumulative 10- and 18-year colectomy rate was 23% and 31%, respectively. After three years from diagnosis the capacity for work both in those subject to resection and treated conservatively did not differ significantly from that in the background population. At any time about 50% of the patients were without symptoms, in about 30% the disease activity was low and in about 20% moderate or high. Most patients, however, differed in activity from one year to another and almost all patients (97%) experienced at least one relapse during a 10 year time period.",
"corpus_id": 19958647,
"score": 2,
"title": "Long term prognosis in ulcerative colitis--based on results from a regional patient group from the county of Copenhagen."
} |
{
"abstract": "Hilbert-Huang transform (HHT) is proposed to process the seismic response recordings in an 8-story frame-shear wall base-isolated building. Empirical Mode Decomposition (EMD) method is first applied to identify the time variant characteristics and the data series can be decomposed into several components. Hilbert transform is well-behaved in identifying the frequency components. The first 5 intrinsic mode functions (IMFs) are decomposed with their different frequencies. The analytical function is reconstructed and compared with the original signal. They are extremely consistent in amplitude and phase. Based on the IMFs obtained, frequencies of the original signal are inferred at 5 Hz and 1.6 Hz. The higher frequency is regarded as the vibration excited by surface waves. 1.6 Hz is suggested as the dominant frequency of the building. Analysis indicates that HHT is accurate in extracting the dynamic characteristics of structural systems.",
"corpus_id": 155841683,
"title": "HHT based analysis on seismic response recordings for a base-isolated building"
} | {
"abstract": "When measured data contain damage events of the structure, it is important to extract the information of damage as much as possible from the data. In this paper, two methods are proposed for such a purpose. The first method, based on the empirical mode decomposition (EMD), is intended to extract damage spikes due to a sudden change of structural stiffness from the measured data thereby detecting the damage time instants and damage locations. The second method, based on EMD and Hilbert transform is capable of (1) detecting the damage time instants, and (2) determining the natural frequencies and damping ratios of the structure before and after damage. The two proposed methods are applied to a benchmark problem established by the ASCE Task Group on Structural Health Monitoring. Simulation results demonstrate that the proposed methods provide new and useful tools for the damage detection and evaluation of structures.",
"corpus_id": 122058929,
"title": "Hilbert-Huang Based Approach for Structural Damage Detection"
} | {
"abstract": "Structural Health Monitoring (SHM) allows to perform a diagnosis on demand which assists the operator to plan his future maintenance or repair activities. Using structural vibrations to extract damage sensitive features, problems can arise due to variations of the dynamical properties with changing environmental and operational conditions (EOC). The dynamic changes due to changing EOCs like variations in temperature, rotational speed, wind speed, etc. may be of the same order of magnitude as the variations due to damage making a reliable damage detection impossible. In this paper, we show a method for the compensation of changing EOC. The well-known null space based fault detection (NSFD) is used for damage detection. In the first stage, a training is performed using data from the undamaged structure under varying EOC. For the compensation of the EOC-e ects the undamaged state is modeled by different reference data corresponding to different representative EOC conditions. Finally, in the application, the influences of one or other EOC on each incoming data is weighted separately by means of a fuzzy-classiffcation algorithm. The theory and algorithm is successfully tested with data sets from a real wind turbine and with data from a laboratory model.",
"corpus_id": 111334837,
"score": 2,
"title": "Vibration-Based Damage Detection under Changing Environmental and Operational Conditions"
} |
{
"abstract": "This article addresses two main questions: do young people leaving vocational upper-secondary education make more successful transitions to employment than leavers from academic upper-secondary education, or than leavers from lower-secondary education? And does this 'vocational effect' vary systematically across countries? The article distinguishes two ideal types of transition system, based on the strength of linkages between vocational education and employment, and governed respectively by an 'employment logic' and by an 'education logic'. The vocational effect is predicted to be stronger in systems governed by the employment logic. This prediction, together with other hypotheses based on the ideal types, is tested using school-leaver survey data for the Netherlands (representing the employment logic), Scotland (representing the education logic), and Ireland and Sweden (representing intermediate cases). The ideal types are broadly supported, subject to limitations of comparability of the data.",
"corpus_id": 153393640,
"title": "Vocational upper-secondary education and the transition from school"
} | {
"abstract": "In this report we discuss the recent reforms and other innovations within Swedish initial VET since the mid of 1990s. Using a descriptive approach, we will analyse the current state of play for ini ...",
"corpus_id": 6380155,
"title": "Bridging the gaps: Recent reforms and innovations in Swedish VET to handle the current challenges"
} | {
"abstract": "This study investigates the proposition that Japanese companies have a greater propensity than U.S. companies to sustain commitment to R&D in the face of fluctuating profits and liquidity. The anal...",
"corpus_id": 168067739,
"score": 0,
"title": "EFFECTS OF PROFITABILITY AND LIQUIDITY ON R&D INTENSITY: JAPANESE AND U.S. COMPANIES COMPARED"
} |
{
"abstract": "Abstract: Inflammation induces cardiac fibrosis and hypertrophy in multiple cardiovascular diseases, contributing to cardiac dysfunction. We tested the hypothesis that pentoxifylline (PTX), a phosphodiesterase inhibitor with anti-inflammatory property, would attenuate cardiac fibrosis and hypertrophy, and prevent cardiac dysfunction in angiotensin (ANG) II-induced hypertensive rats. Sprague–Dawley rats were divided into control and ANG II-infused groups treated with or without PTX for 2 weeks. PTX had no effect on ANG II-induced hypertension, but significantly attenuated cardiac fibrosis and hypertrophy, and ameliorated cardiac dysfunction in ANG II-induced hypertensive rats. In addition, ANG II-induced increase in circulating and cardiac proinflammatory cytokines were attenuated by PTX, which reduced cardiac nuclear factor-kappa B activity. Furthermore, PTX decreased cardiac expression of genetic markers important for fibrosis, hypertrophy, and endothelial dysfunction, and reduced migration and infiltration of macrophages. In contrast, PTX had no effects on the above parameters in control rats. The findings suggest that PTX ameliorates cardiac fibrosis, pathological hypertrophy, and cardiac dysfunction by suppressing inflammatory responses in angiotensin II-induced hypertension, and that these benefits were independent of the blood pressure lowering effect. The PTX by its anti-inflammatory property may be a potential therapeutic option for the prevention of cardiac remodeling and dysfunction in ANG II-induced hypertension.",
"corpus_id": 2136571,
"title": "Pentoxifylline Ameliorates Cardiac Fibrosis, Pathological Hypertrophy, and Cardiac Dysfunction in Angiotensin II-induced Hypertensive Rats"
} | {
"abstract": "Objective—Interleukin-12 is essential for the differentiation of naïve T cells into interferon-&ggr;–producing T cells, which regulate inflammatory responses. We investigated this process of regulating hypertension-induced cardiac fibrosis. Methods and Results—Mice infused with angiotensin II showed a marked increase in interleukin-12p35 expression in cardiac macrophages. The degree of cardiac fibrosis was significantly enhanced in interleukin-12p35 knockout (p35-KO) mice compared with wild-type (WT) littermates in response to angiotensin II. Fibrotic hearts of p35-KO mice showed increased accumulation of alternatively activated (M2) macrophages and expression of M2 genes such as Arg-1 and Fizz1. Bone marrow–derived macrophages from WT or p35-KO mice did not differ in differentiation in response to angiotensin II treatment; however, in the presence of CD4+ T cells, macrophages from p35-KO mice differentiated into M2 macrophages and showed elevated expression of transforming growth factor-&bgr;. Moreover, CD4+ T-cell–treated p35-KO macrophages could stimulate cardiac fibroblasts to differentiate into &agr;-smooth muscle actin–positive and collagen I–positive myofibroblasts in 3-dimensional nanofiber gels. Neutralizing antibodies against transforming growth factor-&bgr; inhibited myofibroblast formation induced by M2 macrophages. Conclusion—Deficiency in interleukin-12p35 regulates angiotensin II–induced cardiac fibrosis by promoting CD4+ T-cell–dependent differentiation of M2 macrophages and production of transforming growth factor-&bgr;.",
"corpus_id": 7026412,
"title": "Interleukin-12p35 Deletion Promotes CD4 T-Cell–Dependent Macrophage Differentiation and Enhances Angiotensin II–Induced Cardiac Fibrosis"
} | {
"abstract": "When children interpret spoken language in real time, linguistic information drives rapid shifts in visual attention to objects in the visual world. This language-vision interaction can provide insights into children's developing efficiency in language comprehension. But how does language influence visual attention when the linguistic signal and the visual world are both processed via the visual channel? Here, we measured eye movements during real-time comprehension of a visual-manual language, American Sign Language (ASL), by 29 native ASL-learning children (16-53 mos, 16 deaf, 13 hearing) and 16 fluent deaf adult signers. All signers showed evidence of rapid, incremental language comprehension, tending to initiate an eye movement before sign offset. Deaf and hearing ASL-learners showed similar gaze patterns, suggesting that the in-the-moment dynamics of eye movements during ASL processing are shaped by the constraints of processing a visual language in real time and not by differential access to auditory information in day-to-day life. Finally, variation in children's ASL processing was positively correlated with age and vocabulary size. Thus, despite competition for attention within a single modality, the timing and accuracy of visual fixations during ASL comprehension reflect information processing skills that are important for language acquisition regardless of language modality.",
"corpus_id": 4925958,
"score": 0,
"title": "Real-time lexical comprehension in young children learning American Sign Language."
} |
{
"abstract": "We extend the class of dynamic factor yield curve models in order to include macroeconomic factors. Our work benefits from recent developments in the dynamic factor literature related to the extraction of the common factors from a large panel of macroeconomic series and the estimation of the parameters in the model. We include these factors in a dynamic factor model for the yield curve, in which we model the salient structure of the yield curve by imposing smoothness restrictions on the yield factor loadings via cubic spline functions. We carry out a likelihood-based analysis in which we jointly consider a factor model for the yield curve, a factor model for the macroeconomic series, and their dynamic interactions with the latent dynamic factors. We illustrate the methodology by forecasting the U.S. term structure of interest rates. For this empirical study, we use a monthly time series panel of unsmoothed Fama–Bliss zero yields for treasuries of different maturities between 1970 and 2009, which we combine with a macro panel of 110 series over the same sample period. We show that the relationship between the macroeconomic factors and the yield curve data has an intuitive interpretation, and that there is interdependence between the yield and macroeconomic factors. Finally, we perform an extensive out-of-sample forecasting study. Our main conclusion is that macroeconomic variables can lead to more accurate yield curve forecasts.",
"corpus_id": 68583,
"title": "Forecasting the U.S. Term Structure of Interest Rates Using a Macroeconomic Smooth Dynamic Factor Model"
} | {
"abstract": "SUMMARY \n \nThis paper analyzes the predictive content of the term structure components level, slope, and curvature within a dynamic factor model of macroeconomic and interest rate data. Surprise changes of the three components are identified using sign restrictions, and their macroeconomic underpinnings are studied via impulse response analysis. The curvature factor is found to carry predictive information both about the future evolution of the yield curve and the macroeconomy. In particular, unexpected increases of the curvature factor precede a flattening of the yield curve and announce a significant decline of output more than 1 year ahead. Copyright © 2010 John Wiley & Sons, Ltd.",
"corpus_id": 154802666,
"title": "Term structure surprises: the predictive content of curvature, level, and slope"
} | {
"abstract": "E CONOMISTS have expended enormous effort examining the rationale for various contractual arrangements in agriculture, particularly sharecropping. While economists have made considerable theoretical efforts to understand agricultural contracts, few empirical studies have been undertaken. The dearth of empirical analyses of agricultural contracts is particularly striking for modern Western agriculture.' This is an important omission, not only because the existing empirical work tends to focus on the question of efficiency, but also because the theoretical models tend to examine contracts that bear little resemblance to those found in the United States today.",
"corpus_id": 153707520,
"score": 1,
"title": "Contract Choice in Modern Agriculture: Cash Rent versus Cropshare"
} |
{
"abstract": "This paper describes the Tezpur University dataset of online handwritten Assamese characters. The online data acquisition process involves the capturing of data as the text is written on a digitizer with an electronic pen. A sensor picks up the pen-tip movements, as well as pen-up/pen-down switching. The dataset contains 8,235 isolated online handwritten Assamese characters. Preliminary results on the classification of online handwritten Assamese characters using the above dataset are presented in this paper. The use of the support vector machine classifier and the classification accuracy for three different feature vectors are explored in our research.",
"corpus_id": 2150246,
"title": "A Dataset of Online Handwritten Assamese Characters"
} | {
"abstract": "A new feature extraction approach based on elastic meshing and directional decomposition techniques for handwritten Chinese character recognition (HCCR) is proposed in this letter. It is found that decomposing a Chinese character into horizontal, vertical stroke, left slant and right slant directional sub-patterns is very helpful for feature extraction and recognition. Three kinds of decomposition methods are proposed. A minimum distance classifier is trained by 3755 categories of characters using the new features. Testing on a total of 37,550 untrained handwritten samples produces the recognition rate of 92.36%, showing the effectiveness of the proposed approach.",
"corpus_id": 134709,
"title": "HANDWRITTEN CHINESE CHARACTER RECOGNITION WITH DIRECTIONAL DECOMPOSITION CELLULAR FEATURES"
} | {
"abstract": "Over the past 10 years, widespread and concerted research efforts have led to increasingly sophisticated and efficient methods and instruments for detecting exaggeration or fabrication of cognitive dysfunction. Despite these psychometric advances, the process of diagnosing malingering remains difficult and largely idiosyncratic. This article presents a proposed set of diagnostic criteria that define psychometric, behavioral, and collateral data indicative of possible, probable, and definite malingering of cognitive dysfunction, for use in clinical practice and for defining populations for clinical research. Relevant literature is reviewed, and limitations and benefits of the proposed criteria are discussed.",
"corpus_id": 33884673,
"score": -1,
"title": "Diagnostic Criteria for Malingered Neurocognitive Dysfunction: Proposed Standards for Clinical Practice and Research"
} |
{
"abstract": "This paper presents a timing synchronization method with shift-orthogonal constant amplitude zero auto correlation (CAZAC) sequences for the multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM) systems. Utilizing the unique properties of CAZAC sequences, at the receivers, unit pulses can be obtained by the symmetrical correlation to detect time offsets and differentiate inter-transmitter delays (ITDs). The performance of the proposed method is compared with traditional methods at different values of ITDs. Simulations demonstrate that the proposed CAZAC sequence-based method provides high accuracy in detecting the different time offsets caused by the distributed transmitters of the MIMO OFDM systems.",
"corpus_id": 1396666,
"title": "A Novel Timing Synchronization Method for MIMO OFDM Systems"
} | {
"abstract": "This paper proposes a time and frequency synchronizaton algorithm for MIMO-OFDM systems. This algorithm designed a training sequence that based on CAZAC sequence, it has sleep timing measuring function waveform, it shows satisfactory performance of delay estimation of each antenna. Simulation results indicate that this timing synchronization algorithm shows satisfactory performance in Rayleigh and AWGN channel at a low signal-to-noise ratios(SNR).",
"corpus_id": 18207637,
"title": "A timing synchronization method for MIMO-OFDM systems"
} | {
"abstract": "In my book Scale and Scope (1990), I focused on the history of the modern industrial firm from the 1880s, when such firms first appeared, through World War II. I did so by comparing the fortunes of more than 600 enterprises—the 200 largest industrial firms at three points in time (World War I, 1929, and World War II) in each of the three major industrial economies (those of the United States, Britain, and Germany). In this paper, I first describe the similarities in the historical beginnings and continuing evolution of these enterprises and then outline my explanation for these similarities. Next, I relate my explanation of these \"empirical regularities\" to four major economic theories relating to the firm: the neoclassical, the principal-agent, the transaction cost, and the evolutionary. Finally, I suggest the value of the transactions cost and evolutionary theories to historians and economists who are attempting to explain the beginnings and growth of modern industrial enterprises.",
"corpus_id": 153338381,
"score": 0,
"title": "Organizational Capabilities and the Economic History of the Industrial Enterprise"
} |
{
"abstract": "Impulsive-noise (IN) over power-line channels can cause serious performance degradations. As such, many IN mitigation techniques have been proposed in the literature, the most common of which is the blanking technique. The conventional way to implement this technique, however, requires prior knowledge about the IN characteristics to identify the optimal blanking threshold (OBT). When such knowledge cannot be obtained, the performance deteriorates rapidly. To alleviate this, we propose a lookup table (LUT)-based algorithm with uniform quantization to utilize estimates of the peak-to-average power ratio at the receiver to determine the OBT. To fully evaluate the performance of the proposed method, we investigate the impact of quantization bits on the system performance in terms of signal-to-noise ratio (SNR) and symbol error rate under various IN scenarios. The results reveal that a 5-bit LUT is sufficient to achieve a gain of up to 3-dB SNR improvement relative to the conventional blanking method. It will also be shown that to maintain good performance, the resolution of quantization must be increased especially when the IN probability of occurrence is relatively high.",
"corpus_id": 969216,
"title": "Quantized Peak-Based Impulsive Noise Blanking in Power-Line Communications"
} | {
"abstract": "Relaying over power line communication (PLC) channels has the potential to improve the reliability and robustness of many PLC-based applications. In particular, this paper proposes to enhance the energy efficiency (EE) of a dual-hop relaying PLC system in the presence of impulsive noise by considering energy-harvesting (EH) at the relaying modem. Amplify-and-forward (AF) relaying and time-switching relaying EH protocols are deployed in this paper. The PLC modems are assumed to have the capability to go on low-power consumption sleep mode when they are neither transmitting nor receiving. The system performance is evaluated in terms of EE and average outage probability for which analytical expressions are derived. Using the derived expressions, several system parameters are investigated, such as the channel gain, which is related to the number of network branches, EH time factor and impulsive noise characteristics. Particularly, the optimization problem of the EH time is addressed thoroughly in order to maximize the achievable gains. Results reveal that the proposed system can offer considerable improvements compared with the conventional AF relaying scheme.",
"corpus_id": 7949369,
"title": "Enhanced Amplify-and-Forward Relaying in Non-Gaussian PLC Networks"
} | {
"abstract": "The present paper aims to design a fuzzy FACTS controller called STATCOM for voltage regulation of Standalone wind energy conversion systems. The Wind energy conversion system consists of wind turbine drives a permanent magnet synchronous generator PMSG feeding static or dynamic loads. The FACTS Statcom controller is adapted and controlled using Fuzzy Logic approach. The Loading types of Wind Energy Conversion are static and dynamic loads. The Dynamic load is pumping systems for irrigation applications. The model of turbine system, PMSG, Transmission system, Loads, Statcom controllers, and Fuzzy logic control system are modelled. The studied system is simulated using Simulink Matlab software package. The Simulated system is subjected to a varieties of disturbances such as load variations and wind speed variations. The digital simulation results prove the effectiveness and powerful of the suggested Fuzzy logic controllers in terms of Fast voltage regulation.",
"corpus_id": 9553080,
"score": 1,
"title": "Fuzzy FACTS voltage regulator for isolated wind energy conversion systems under different wind speed and loading conditions"
} |
{
"abstract": "AbstractHerein, we represent a simple method for the detection and characterization of molecular species of triacylglycerol monohydroperoxides (TGOOH) in biological samples by use of reversed-phase liquid chromatography with a LTQ Orbitrap XL mass spectrometer (LC/LTQ Orbitrap) via an electrospray ionization source. Data were acquired using high-resolution, high-mass accuracy in Fourier-transform mode. Platform performance, related to the identification of TGOOH in human lipoproteins and plasma, was estimated using extracted ion chromatograms with mass tolerance windows of 5 ppm. Native low-density lipoproteins (nLDL) and native high-density lipoproteins (nHDL) from a healthy donor were oxidized by CuSO4 to generate oxidized LDL (oxLDL) and oxidized HDL (oxHDL). No TGOOH molecular species were detected in the nLDL and nHDL, whereas 11 species of TGOOH molecules were detected in the oxLDL and oxHDL. In positive-ion mode, TGOOH was found as [M + NH4]+. In negative-ion mode, TGOOH was observed as [M + CH3COO]–. TGOOH was more easily ionized in positive-ion mode than in negative-ion mode. The LC/LTQ Orbitrap method was applied to human plasma and three molecular species of TGOOH were detected. The limit of detection is 0.1 pmol (S/N = 10:1) for each synthesized TGOOH.\n FigureAnalysis of triacylglycerol hydroperoxides in human lipoproteins by Orbitrap mass spectrometer",
"corpus_id": 596353,
"title": "Analysis of triacylglycerol hydroperoxides in human lipoproteins by Orbitrap mass spectrometer"
} | {
"abstract": "Abstract1-Palmitoyl-2-linoleoylphosphatidylcholine monohydroperoxide (PC 16:0/18:2-OOH) and 1-stearoyl-2-linoleoylphosphatidylcholine monohydroperoxide (PC 18:0/18:2-OOH) were measured by liquid chromatography/mass spectrometry (LC/MS) using nonendogenous 1-palmitoyl-2-heptadecenoylphosphatidylcholine monohydroperoxide as an internal standard. The calibration curves for synthetic PC 16:0/18:2-OOH and PC 18:0/18:2-OOH, which were obtained by direct injection of the internal standard into the LC/MS system, were linear throughout the calibration range (0.8–12.8 pmol). Within-day and between-day coefficients of variation were less than 10%, and the recoveries were between 86% and 105%. The limit of detection (LOD) and the limit of quantification (LOQ) were determined using synthetic standards. The LOD (signal-to-noise ratio 3:1) was 0.01 pmol, and the LOQ (signal-to-noise ratio 6:1) was 0.08 pmol for both PC 16:0/18:2-OOH and PC 18:0/18:2-OOH. With use of this method, the concentrations of PC 16:0/18:2-OOH and PC 18:0/18:2-OOH in the lipoprotein fractions during copper-mediated oxidation were determined. We prepared oxLDL and oxHDL by incubating native LDL and native HDL from human plasma (n = 10) with CuSO4 for up to 4 h. The time course of the PC 16:0/18:2-OOH and PC 18:0/18:2-OOH levels during oxidation consisted of three phases. For oxidized LDL, both compounds exhibited a slow lag phase and a subsequent rapidly increasing propagation phase, followed by a gradually decreasing degradation phase. In contrast, for oxidized HDL, both compounds initially exhibited a prompt propagation phase with a subsequent plateau phase, followed by a rapid degradation phase. The analytical LC/MS method for phosphatidylcholine hydroperoxides might be useful for the analysis of biological samples.\n Online Abstract FigureQuantitative determination of phosphatidylcholine hydroperoxides during copper-oxidation of LDL and HDL by liquid chromatography/mass spectrometry",
"corpus_id": 206909217,
"title": "Quantitative determination of phosphatidylcholine hydroperoxides during copper oxidation of LDL and HDL by liquid chromatography/mass spectrometry"
} | {
"abstract": "During the Second World War the British West African colonies supplied raw materials and manpower for the war effort. The small peacetime army of the Gold Coast increased to nearly 70,000 men, including technical and service corps, and was used in overseas campaigns. Most soldiers were drawn from the supposed martial peoples of the Northern Territories but recruiting was extended to Asante and the south in mid-1940. Although formal conscription was only applied to drivers and artisans, a large number of recruits were forcibly enlisted through a system of official quotas imposed on districts and through chiefs. Opposition to military service, especially for overseas compaigns, was widespread and is indicated by the attempts to evade recruiting parties and also the large number of desertions. In order to release labour for the military and also conserve scarce supplies of raw materials, some gold mines were closed. Wartime shortages, inflation and the lack of jobs after the war led to discontent in the Gold Coast but there is little evidence to indicate that this resulted in a significant number of ex-servicemen being drawn into political activity.",
"corpus_id": 162421608,
"score": 0,
"title": "Military and Labour Recruitment in the Gold Coast During the Second World War"
} |
{
"abstract": "Recently, many antibacterial agents have been found in the venoms of animals from different sources. However, multidrug-resistant strains of bacteria are an important health problem in need for new antibacterial sources and agents. This study aimed to evaluate the antibacterial activity of several snake crude venoms in Elapidae family against several strains of gram-positive and gram-negative bacteria as new sources of potential antibacterial agents. Current studies revealed that king cobra (Ophiophagus hannah) crude venom showed selective antibacterial activity against methicillinresistant Staphylococcus aureus (MRSA) more efficient than tested antibiotics currently on the market. King cobra crude venom showed the minimum inhibitory concentration (MIC) = 8 μg/ml against MRSA, whereas standard antibiotics (ampicillin, penicillin, chloramphenicol and tetracycline) showed MIC in the range of 8-64 μg/ml. The result of scanning electron microscope revealed that king cobra crude venom exerted antibacterial activity against grampositive bacteria via its membrane-damaging activity and it is a feasible source for exploring antimicrobial prototypes for future design new antibiotics against drug-resistant clinical bacteria.",
"corpus_id": 155368722,
"title": "Antibacterial activity of snake venoms against bacterial clinical isolates"
} | {
"abstract": "An increasing problem in the field of health protection is the emergence of drug-resistant and multi-drug-resistant bacterial strains. They cause a number of infections, including hospital infections, which currently available antibiotics are unable to fight. Therefore, many studies are devoted to the search for new therapeutic agents with bactericidal and bacteriostatic properties. One of the latest concepts is to search for this type of substances among toxins produced by venomous animals. In this approach, however, special attention is paid to snake venom because it contains molecules with antibacterial properties. Thorough investigations have shown that the phospholipases A 2 (PLA 2 ) and l -amino acids oxidases (LAAO), as well as fragments of these enzymes, are mainly responsible for the bactericidal properties of snake venoms. Some preliminary research studies also suggest that fragments of three-finger toxins (3FTx) are bactericidal. It has also been proven that some snakes produce antibacterial peptides (AMP) homologous to human defensins and cathelicidins. The presence of these proteins and peptides means that snake venoms continue to be an interesting material for researchers and can be perceived as a promising source of antibacterial agents.",
"corpus_id": 202718157,
"title": "Antibacterial properties of snake venom components"
} | {
"abstract": "The hormonal changes in the development of pseudohypoparathyroidism ( PSH ) have not, to our knowledge, been previously reported. The male sibling of a child with PSH was studied for 2 1/2 years. At 1 year of age he had generalized subcutaneous calcifications that subsequently migrated over his body. At 3 years of age and over a six-month period, serum calcium levels fell; serum phosphorus, parathyroid hormone (PTH), and 1,25-dihydroxyvitamin D (1,25-[OH]2D) concentrations increased. There was no calcemic, phosphaturic, or urinary cyclic adenosine monophosphate response to PTH. The concentration of serum PTH was suppressed by infusion of calcium and doubled with edetic acid infusion, indicating that the parathyroids were sensitive to changes in calcium levels. Thus, increasing PTH and increased 1,25-(OH)2D concentrations occur in the development of PSH . Migratory skin calcifications may occur. We speculate that increasing the serum PTH level reflects increasing compensatory parathyroid production to overcome a progressive PTH receptor defect and serves, with increased 1,25-(OH)2D concentrations, to prevent severe falls in serum calcium concentrations in the early stage of the disease.",
"corpus_id": 7443277,
"score": 1,
"title": "The development of pseudohypoparathyroidism. Involvement of progressively increasing serum parathyroid hormone concentrations, increased 1,25-dihydroxyvitamin D concentrations, and 'migratory' subcutaneous calcifications."
} |
{
"abstract": "ConclusionThe computerized intermittent scanning system for intensive care unit has been described in detail. This system scans over eight patients consecutively, makes diagnosis, and performs automated treatment through the automatic injectiors. With the introduction of the scanning system with a mini-computer, the entire system cost approximately 100,000 dollars which, we believe, would encourage a widespread use of those based on the similar principle in the future.",
"corpus_id": 2365272,
"title": "Computerized intermittent scanning system for automated treatment in intensive care unit"
} | {
"abstract": "Because of the complexity of living things with which a doctor must deal, perhaps no profession stands to gain more from the use of this tool than medicine. It provides the key by which we may reduce the art to a science and, in the process, eliminate that which is useless and unnecessary in the practice of medicine, and through automation extend to everyone the benefits of new knowledge when acquired. In this presentation I hope to convince you of the truth of these general statements by describing our experiences in the development of a computer-based patient-monitoririg system. In 1952, in the laboratory of Dr. Earl Wood at the Mayo Clinic, we developed a method for calculating stroke volume from the contour of a central arterial pressure waveform.’ Not only has this method become the core of our physiological monitoring system, but it was through work with a project growing out of this study that I first became aware of and interested in computers as a tool for physiologic research. I will briefly describe the sequence of this involvement since I think it illustrates well the way in which computers can provide not just numbers but insight:",
"corpus_id": 13652581,
"title": "Experiences with Computer‐Based Patient Monitoring: Third Becton, Dickinson and Company Oscar Schwidetzky Memorial Lecture"
} | {
"abstract": "The relative importance of individual and groups of haemodynamic parameters was assessed by the use of discriminant function analysis in 20 patients with acute myocardial infarction and shock. Individual measurements reflecting cardiac output and velocity of blood flow were most reliable as indicators of survival. Discriminant function may serve as an indicator of the severity of shock.",
"corpus_id": 32465178,
"score": 2,
"title": "Objective index of haemodynamics status for quantitation of severity and prognosis of shock complicating myocardial infarction."
} |
{
"abstract": "Background: Susac syndrome is a rare disease characterized by the triad of encephalopathy, branch retinal artery occlusion, and sensorineural hearing loss mainly affecting young women. The finding of antibodies against the endothelium in the sera of these patients has supported the hypothesis of an autoimmune endotheliopathy of the brain, inner ear and retina. Because of the rarity of the disease, treatment is based on the knowledge of case reports and small case series. Medical therapy consists of glucocorticoids, immunosuppressants, acetyl salicylic acid, and immunomodulatory agents such as intravenous immunoglobulin. Methods: We present the case histories of 2 young women with Susac syndrome presenting with several episodes of encephalopathy, branch retinal artery occlusions, and hearing loss that were treated with different immunosuppressive drugs, glucocorticoids and intravenous immunoglobulin. In the course of the disease, the treatment was successfully switched to subcutaneous immunoglobulin without any further relapse in both patients. Conclusion: We conclude that the application of subcutaneous immunoglobulin is easy to learn, helps to reduce in-hospital costs and enables a more flexible everyday life. The treatment with subcutaneous immunoglobulin helps to reduce immunosuppressants and appears to prevent relapses.",
"corpus_id": 1686857,
"title": "Susac Syndrome Treated with Subcutaneous Immunoglobulin"
} | {
"abstract": "We hypothesized that subcutaneous administration of immunoglobulins (SCIG) in chronic inflammatory demyelinating polyradiculoneuropathy (CIDP) is feasible, safe and superior to treatment with saline for the performance of muscle strength.",
"corpus_id": 25492581,
"title": "Subcutaneous immunoglobulin in responders to intravenous therapy with chronic inflammatory demyelinating polyradiculoneuropathy"
} | {
"abstract": "Stormwater runoff in urban catchments contains heavy metals (zinc, copper, lead) and suspended solids (TSS) which can substantially degrade urban waterways. To identify these pollutant sources and quantify their loads the MEDUSA (Modelled Estimates of Discharges for Urban Stormwater Assessments) modelling framework was developed. The model quantifies pollutant build-up and wash-off from individual impervious roof, road and car park surfaces for individual rain events, incorporating differences in pollutant dynamics between surface types and rainfall characteristics. This requires delineating all impervious surfaces and their material types, the drainage network, rainfall characteristics and coefficients for the pollutant dynamics equations. An example application of the model to a small urban catchment demonstrates how the model can be used to identify the magnitude of pollutant loads, their spatial origin and the response of the catchment to changes in specific rainfall characteristics. A sensitivity analysis then identifies the key parameters influencing each pollutant load within the stormwater given the catchment characteristics, which allows development of a targeted calibration process that will enhance the certainty of the model outputs, while minimizing the data collection required for effective calibration. A detailed explanation of the modelling framework and pre-calibration sensitivity analysis is presented.",
"corpus_id": 705736,
"score": 0,
"title": "A novel modelling framework to prioritize estimation of non-point source pollution parameters for quantifying pollutant origin and discharge in urban catchments."
} |
{
"abstract": "Intracranial dermoid cysts are rare congenital tumors. Because of a slow growth pattern, these tumors may attain considerable size before discovery. Patients harboring these lesions generally have a long history of symptoms; average time between onset of symptoms and diagnosis is 8.5 years [1, 2]. The mode of presentation depends upon the size and location of the tumor. The clinical picture is nonspecific, and symptoms may include headache, seizures, dementia, and, occasionally, meningitis. Papilledema is often absent, and spinal tap is normal in more than half of the cases [3, 4]. Plain skull x-rays are often abnormal and may show a variety of changes. This case illustrates a previously unreported finding of an intraventricular fat-fluid level secondary to spontaneous rupture of a dermoid cyst.",
"corpus_id": 1660005,
"title": "Intraventricular fat-fluid level secondary to rupture of an intracranial dermoid cyst."
} | {
"abstract": "BACKGROUND\nIntracranial dermoid cysts are rare congenital neoplasms that are believed to arise from ectopic cell rests incorporated in the closing neural tube. The rupture of an intracranial dermoid cyst is a relatively rare event that typically occurs spontaneously. In the past it was believed that rupture is always fatal, a hypothesis that is not supported by more recently reported cases. The symptoms associated with rupture vary from no symptoms to sudden death.\n\n\nMETHODS\nThe present paper analyzes published cases of ruptured intracranial dermoid cysts in terms of their age profile and their clinical presentation and describes an additional case.\n\n\nRESULTS\nAnalysis of published cases revealed headache (14 out of 44 patients; 31.8%) and seizures (13 out of 44 patients; 29.5%), to be the most common signs of rupture followed by, often temporary, sensory or motor hemisyndrome (7 out of 44 patients; 15.9%), and chemical meningitis (3 out of 44 patients; 6.9%).\n\n\nCONCLUSION\nHeadache occurred primarily in younger patients (mean age 23.5 +/- 9.3 years), whereas seizures primarily occurred in older patients (mean age 42.8 +/- 11.3 years). The patients with sensory or motor hemisyndrome associated with rupture of an intracranial dermoid cyst showed a more homogeneous age distribution (mean age 38.4 +/- 23.5 years).",
"corpus_id": 1723308,
"title": "Ruptured intracranial dermoid cysts."
} | {
"abstract": "BACKGROUND\nCombining supervised exercise training (ET) and disease management program (DMP) may benefit people with heart failure (HF) but will require additional resources.\n\n\nOBJECTIVES\nTo assess the 1-year cost-effectiveness of a 24-week ET program added to a post-discharge DMP in patients recently hospitalized with HF.\n\n\nMETHODS\nUsing randomized controlled trial data, within-trial cost-utility analyses were undertaken in the overall population (n = 278), patients aged <70 (n = 180), and those aged ≥70 (n = 98). Incremental net monetary benefits (INMB) were calculated based on quality-adjusted life-years (QALY) and healthcare costs from the perspective of a state health department (Queensland, Australia).\n\n\nRESULTS\nAt the AU$50,000/QALY threshold, ET showed 29.6% and 1.7% probability of being cost-effective in the overall population (INMB AU$ -1,472) and patients aged ≥70 (INMB AU$ -11,469), respectively. In patients aged <70, ET was potentially cost-effective with 83.6% probability (INMB AU$4,059).\n\n\nCONCLUSION\nAdding ET to DMP was not cost-effective overall or in patients aged ≥70 but was relatively cost-effective in those aged <70.",
"corpus_id": 121653669,
"score": 1,
"title": "One-year cost-effectiveness of supervised center-based exercise training in addition to a post-discharge disease management program for patients recently hospitalized with acute heart failure: The EJECTION-HF study."
} |
{
"abstract": "We previously reported that a higher degree of methylation of CpG sites in the promoter (positions 31, 37, 43, 52, and 58) and enhancer site 7862 of human papillomavirus (HPV) 16 was associated with a lower likelihood of being diagnosed with HPV 16–associated CIN 2+. The purpose of this study was to replicate our previous findings and, in addition, to evaluate the influence of plasma concentrations of folate and vitamin B12 on the degree of HPV 16 methylation (HPV 16m). The study included 315 HPV 16-positive women diagnosed with either CIN 2+ or ≤CIN 1. Pyrosequencing technology was used to quantify the degree of HPV 16m. We reproduced the previously reported inverse association between HPV 16m and risk of being diagnosed with CIN 2+. In addition, we observed that women with higher plasma folate and HPV 16m or those with higher plasma vitamin B12 and HPV 16m were 75% (P < 0.01) and 60% (P = 0.02) less likely to be diagnosed with CIN 2+, respectively. With a tertile increase in the plasma folate or vitamin B12, there was a 50% (P = 0.03) and 40% (P = 0.07) increase in the odds of having a higher degree of HPV 16m, respectively. This study provides initial evidence that methyl donor micronutrients, folate and vitamin B12, may play an important role in maintaining a desirably high degree of methylation at specific CpG sites in the HPV E6 promoter and enhancer that are associated with the likelihood of being diagnosed with CIN 2+. Cancer Prev Res; 7(11); 1128–37. ©2014 AACR.",
"corpus_id": 114061,
"title": "Folate and Vitamin B12 May Play a Critical Role in Lowering the HPV 16 Methylation–Associated Risk of Developing Higher Grades of CIN"
} | {
"abstract": "Screening for uterine cervical intraepithelial neoplasia (CIN) followed by aggressive treatment has reduced invasive cervical cancer (ICC) incidence and mortality. However, ICC cases and carcinoma in situ (CIS) continue to be diagnosed annually in the United States, with minorities bearing the brunt of this burden. Because ICC peak incidence and mortality are 10-15 years earlier than other solid cancers, the number of potential years of life lost to this cancer is substantial. Screening for early signs of CIN is still the mainstay of many cervical cancer control programs. However, the accuracy of existing screening tests remains suboptimal. Changes in epigenetic patterns that occur as a result of human papillomavirus infection contribute to CIN progression to cancer, and can be harnessed to improve existing screening tests. However, this requires a concerted effort to identify the epigenomic landscape that is reliably altered by HPV infection specific to ICC, distinct from transient changes.",
"corpus_id": 11218141,
"title": "Disparities in Cervical Cancer Incidence and Mortality: Can Epigenetics Contribute to Eliminating Disparities?"
} | {
"abstract": "The Oligocene Oligopipiza gen. nov. is the first fossil Pipizinae found in the lacustrine outcrop of Céreste (South-East of France). It differs from the other Pipizinae in the male genitalia, with a surstylus without a tooth and shorter than the epandrium, and a long epandrium with a very deep and narrow median theca. It is compared to other extant and fossil Pipizinae. Its position in this clade is supported by its inclusion in a previous morphological phylogenetic analysis of the Syrphidae. Palaeoecological inferences for the paleobiota of Céreste are made based on this taxon and point to the presence of a mixed forest. The taphonomy of these flies is discussed. They were probably embedded in surface microbial mats. The pollinator role of Oligopipiza quadriguttata is also discussed on the basis of the presence of pollen surrounding the fossil flies.",
"corpus_id": 73577210,
"score": 1,
"title": "The first pipizine hoverfly from the Oligocene of Céreste, France"
} |
{
"abstract": "The enhancement of the thermoelectric figure of merit ZT requires either increasing the power factor or reducing the phonon conductance, or both. In graphene, the high phonon thermal conductivity is the main factor limiting the thermoelectric conversion. The common strategy to enhance ZT is therefore to introduce phonon scatterers to suppress the phonon conductance while retaining high electrical conductance and Seebeck coefficient. Although thermoelectric performance is eventually enhanced, all studies based on this strategy show a significant reduction of the electrical conductance. In this study we demonstrate that appropriate sources of disorder, including isotopes and vacancies at the lowest electron density positions, can be used as phonon scatterers to reduce the phonon conductance in graphene ribbons without degrading the electrical conductance, particularly in the low-energy region which is the most important range for device operation. By means of atomistic calculations we show that the natural electronic properties of graphene ribbons can be fully preserved while their thermoelectric efficiency is strongly enhanced. For ribbons of width M = 5 dimer lines, room-temperature ZT is enhanced from less than 0.26 to more than 2.5. This study is likely to set the milestones of a new generation of nano-devices with dual electronic/thermoelectric functionalities.",
"corpus_id": 1322054,
"title": "Optimizing the thermoelectric performance of graphene nano-ribbons without degrading the electronic properties"
} | {
"abstract": "Research on thermoelectrical energy conversion, the reuse of waste heat produced by some mechanical or chemical processes to generate electricity, has recently gained some momentum. The calculation of the electronic parameters entering the figure of merit of this energy conversion, and therefore the discovery of efficient materials, is usually performed starting from Landauer's approach to quantum transport coupled with Onsager's linear response theory. As it is well known, this approach suffers from certain serious drawbacks. Here, we discuss alternative dynamical methods that can go beyond the validity of Landauer's/Onsager's approach for electronic transport. They can be used to validate the predictions of Landauer's/Onsager's approach and to investigate systems for which this approach has been shown to be unsatisfactory.",
"corpus_id": 19802642,
"title": "Towards a dynamical approach to the calculation of the figure of merit of thermoelectric nanoscale devices."
} | {
"abstract": "Objective:\nTo determine overweight and obesity prevalence in preschool children from public education, and to determine their relation to food consumption.\n\n\nMethod:\nCross-sectional study with children aged between 2 and 5 years, of both sexes, enrolled at municipal day care centers. Socioeconomic, demographic and anthropometric data were collected, in order to calculate the body mass index (BMI) for age. Data on food consumption were assessed using a Food Frequency Questionnaire. χ2 test, Kruskal-Wallis test, Student's t-test and Pearson's correlation were used at a significance level of 5%.\n\n\nResults:\nOf 548 children, 52% were male, with mean age of 4.2 years. Most families had incomes between 1 and 2 minimum wages (59.7%), and mothers had 10 years of education. Anthropometric parameters did not differ significantly between sexes. According to the BMI-for-age, it was found that most of the children were well-nourished (85.2%), 8.2% were at risk of becoming overweight, and 4.2% were overweight. The most consumed foods were: rice (100%), beans (99.4%), bread (98.5%), fruit (98.5%), red meat (97.1%), butter and margarine (95.4%), biscuits, cakes and sweet pies (94.1%), dairy products (94.1%), chocolate milk (91.7%), and soft drinks (90.2%). Consumed foods that were strongly correlated (r > 0.7) to the risk of overweight/excess weight were, as follows: bread; biscuits, cakes, sweet pies; dairy products; chocolate milk; sausages.\n\n\nConclusion:\nThere was low prevalence of overweight and absence of obesity among the population assessed. The risk of overweight was greater among girls. Data from the study showed deviations in food consumption.",
"corpus_id": 7902874,
"score": 0,
"title": "Overweight and obesity in preschoolers: Prevalence and relation to food consumption."
} |
{
"abstract": "Peritonsillar abscess (PTA) is a common infection of the oropharynx resulting in painful swallowing, sometimes associated with fever, trismus and a typical voice alteration. Several draining methods have been suggested, including needle aspiration (NA), incision and drainage (ID), or abscess tonsillectomy. However, a gold standard of surgical therapy still does not exist. The aim of this study was to evaluate the outcome in patients who had undergone ID supplemented by cranial tonsillotomy (IDTT) as first-line treatment. A retrospective chart review of all patients who had undergone IDTT at our department in 2015 was performed. Demographic data, clinical findings, pain intensity on a 10-point visual analog scale, operation time and routine bloods before and after IDTT were collected. In addition, a 10-point visual analog scale (VAS) was utilized to measure personal satisfaction 2 weeks and 2 months after surgery. A total of 104 procedures were performed in 65 male and 38 female patients (median age 35 years), including one patient with a contralateral PTA 2 weeks after IDTT. Three patients had experienced abscess formation after admittance for antibiotic treatment of acute tonsillitis. 57.7% of all patients denied intake of antibiotic therapy in their history at initial presentation. Patients were hospitalized for 3 days (median). The median pain intensity (VAS) within the first three postoperative days was 2, 1 and 1, respectively. Two weeks and 2 months after surgery patients were highly satisfied with the procedure (median value 10). Bleeding complications did not occur. IDTT is a novel surgical concept and associated with great patient comfort. It is safe, easy to learn and associated with an early return to normal diet and physical activity. These findings are supported by a rapid normalization of white blood cell count and C-reactive protein. IDTT eliminates the necessity of painful re-draining of the wound cavity and is free of bleeding complications. In contrast to ID and NA, histological examination of tonsillar tissue is feasible to disclose a previously undetected malignant disease. Further analysis is warranted to verify the success rate in the long-term.",
"corpus_id": 1622169,
"title": "Cranial tonsillotomy for peritonsillar abscess: what a relief!"
} | {
"abstract": "Few reports document the coexistence of Peritonsillar Abscess (PTA) and Infectious Mononucleosis (IM). In this paper, we are reporting on two cases that presented to our department with the two conditions simultaneously. We also review the literature and discuss the current theories behind what was considered, for some time, an unusual presentation of a common problem.",
"corpus_id": 7400520,
"title": "Peritonsillar abscess and infectious mononucleosis: an association or a different presentation of the same condition."
} | {
"abstract": "This paper proposes and tests a new formal model of the competition for capital, using the analogy of a “tournament” as a substitute for the “race-to-the-bottom” model. Our key insight is that political costs that accompany legislating have both direct and indirect effects on the likelihood and scale of reforms. While countries with higher political costs are less likely themselves to enact reforms, the presence of these costs also reduces competing countries' incentives to reform regardless of their own political costs. Domestic politics therefore mitigates the pressures for downward convergence of tax policy despite increased capital mobility. We examine the capital tax policies in OECD countries during the period from 1980 to 1997 and find that states are sensitive to tax reforms in competitor countries, although their responses to reforms are mediated by their own domestic costs to reform. We define two potential sources of political costs of reform: transaction costs, due to the presence of multiple veto players in the legislative process, and constituency costs, due to ideological opposition to policy changes that benefit capital. Our evidence reveals that a reduction in these costs either domestically or abroad increases the likelihood that a country enacts tax reforms.",
"corpus_id": 38359366,
"score": 0,
"title": "Remodeling the Competition for Capital: How Domestic Politics Erases the Race to the Bottom"
} |
{
"abstract": "A methodology for improving efficiency of wireless power transfer (WPT) systems, derived from the concept of conjugate image impedances, is presented. For accuracy reasons, enhancement of the system efficiency through adjustment of its geometry parameters is based on full-wave electromagnetic (EM) simulation models. Fast WPT system design is realized by means of surrogate-based optimization exploiting auxiliary equivalent network model, frequency scaling and response correction techniques.",
"corpus_id": 164505,
"title": "Surrogate-based optimization of efficient resonant wireless power transfer links using conjugate image impedances"
} | {
"abstract": "This paper provides tables which contain the conversion between the various common two-port parameters, Z, Y, H, ABCD, S, and T. The conversions are valid for complex normalizing impedances. An example is provided which verifies the conversions to and from S parameters.",
"corpus_id": 110378641,
"title": "Conversions between S, Z, Y, H, ABCD, and T parameters which are valid for complex source and load impedances"
} | {
"abstract": "This article describes the organization and funding of mental health services in Canada for persons accused or convicted of criminal offenses. Initially, the context in which these services are provided is briefly presented. Next, the organizational structures which exist to provide pre-trial and pre-sentence evaluations are described. In subsequent sections, mental health services for persons found incompetent to stand trial (IST) and not guilty by reason of insanity (NGRI) and for inmates of penal institutions are presented. In conclusion, some distinctive features of these structures are identified along with the principal advantages and weaknesses of each. The Canadian Context Several characteristics of the Canadian health and judicial systems are important to note at the outset, as they have major implications for the organization and funding of forensic services. Each Canadian province administers its own health and social welfare system. While each provincial system is slightly different from another, all Canadians are provided with extensive health care and social services. Taxation formulas for supporting these services vary from one province to another. Adults with no children, who do not work and who are not receiving unemployment insurance, receive approximately $480 a month in welfare payments. While health is exclusively a provincial matter, the Criminal Code is a national or federal law. The administration of the justice system is a provincial responsibility, and thus differs to some extent from one province to another. This division of responsibilities and powers between the federal and provincial governments further complicates the difficult interfacing of the health and judicial systems, but it allows regional variants of the same system to emerge. The Canadian Criminal Code does not define incompetency to stand trial but generally, it is considered to be an inability to understand the court proceedings",
"corpus_id": 31798116,
"score": 0,
"title": "The organization of forensic services in Canada."
} |
{
"abstract": "Studying the phenotypic manifestations of increased genetic liability for schizophrenia can increase our understanding of this disorder. Specifically, information from alleles identified in genome-wide association studies can be collapsed into a polygenic risk score (PRS) to explore how genetic risk is manifest within different samples. In this systematic review, we provide a comprehensive assessment of studies examining associations between schizophrenia PRS (SZ-PRS) and several phenotypic measures. We searched EMBASE, Medline and PsycINFO (from August 2009-14th March 2016) plus references of included studies, following PRISMA guidelines. Study inclusion was based on predetermined criteria and data were extracted independently and in duplicate. Overall, SZ-PRS was associated with increased risk for psychiatric disorders such as depression and bipolar disorder, lower performance IQ and negative symptoms. SZ-PRS explained up to 6% of genetic variation in psychiatric phenotypes, compared to <0.7% in measures of cognition. Future gains from using the PRS approach may be greater if used for examining phenotypes that are more closely related to biological substrates, for scores based on gene-pathways, and where PRSs are used to stratify individuals for study of treatment response. As it was difficult to interpret findings across studies due to insufficient information provided by many studies, we propose a framework to guide robust reporting of PRS associations in the future.",
"corpus_id": 2549646,
"title": "The use of polygenic risk scores to identify phenotypes associated with genetic risk of schizophrenia: Systematic review"
} | {
"abstract": "BACKGROUND\nDespite evidence from twin and family studies for an important contribution of genetic factors to both childhood and adult onset psychiatric disorders, identifying robustly associated specific DNA variants has proved challenging. In the pregenomics era the genetic architecture (number, frequency and effect size of risk variants) of complex genetic disorders was unknown. Empirical evidence for the genetic architecture of psychiatric disorders is emerging from the genetic studies of the last 5 years.\n\n\nMETHODS AND SCOPE\nWe review the methods investigating the polygenic nature of complex disorders. We provide mini-guides to genomic profile (or polygenic) risk scoring and to estimation of variance (or heritability) from common SNPs; a glossary of key terms is also provided. We review results of applications of the methods to psychiatric disorders and related traits and consider how these methods inform on missing heritability, hidden heritability and still-missing heritability.\n\n\nFINDINGS\nGenome-wide genotyping and sequencing studies are providing evidence that psychiatric disorders are truly polygenic, that is they have a genetic architecture of many genetic variants, including risk variants that are both common and rare in the population. Sample sizes published to date are mostly underpowered to detect effect sizes of the magnitude presented by nature, and these effect sizes may be constrained by the biological validity of the diagnostic constructs.\n\n\nCONCLUSIONS\nIncreasing the sample size for genome wide association studies of psychiatric disorders will lead to the identification of more associated genetic variants, as already found for schizophrenia. These loci provide the starting point of functional analyses that might eventually lead to new prevention and treatment options and to improved biological validity of diagnostic constructs. Polygenic analyses will contribute further to our understanding of complex genetic traits as sample sizes increase and as sample resources become richer in phenotypic descriptors, both in terms of clinical symptoms and of nongenetic risk factors.",
"corpus_id": 4841181,
"title": "Research review: Polygenic methods and their application to psychiatric traits."
} | {
"abstract": "Genome-wide association studies (GWAS) have demonstrated a significant polygenic contribution to bipolar disorder (BD) where disease risk is determined by the summation of many alleles of small individual magnitude. Modelling polygenic risk scores may be a powerful way of identifying disrupted brain regions whose genetic architecture is related to that of BD. We determined the extent to which common genetic variation underlying risk to BD affected neural activation during an executive processing/language task in individuals at familial risk of BD and healthy controls. Polygenic risk scores were calculated for each individual based on GWAS data from the Psychiatric GWAS Consortium Bipolar Disorder Working Group (PGC-BD) of over 16 000 subjects. The familial group had a significantly higher polygene score than the control group (P=0.04). There were no significant group by polygene interaction effects in terms of association with brain activation. However, we did find that an increasing polygenic risk allele load for BD was associated with increased activation in limbic regions previously implicated in BD, including the anterior cingulate cortex and amygdala, across both groups. The findings suggest that this novel polygenic approach to examine brain-imaging data may be a useful means of identifying genetically mediated traits mechanistically linked to the aetiology of BD.",
"corpus_id": 9484616,
"score": 2,
"title": "The influence of polygenic risk for bipolar disorder on neural activation assessed using fMRI"
} |
{
"abstract": "Intrusive images related to adverse experiences are an important feature of a number of psychological disorders and a hallmark symptom of posttraumatic stress disorder (PTSD). Depression, anxiety, and PTSD are all common reactions following a burn injury. However, the nature of burn-related trauma memories and associated intrusions and their contribution to psychological disorders is not well understood. The aim of the study was to take a broad look at the nature of imagery experienced by people who have sustained a burn injury. Nineteen participants completed self-report questionnaires assessing depression, anxiety, and PTSD symptoms and were administered a semi-structured interview which explored the characteristics (vividness, sensory modalities, intrusions, emotion intensity) of imagery formed in relation to their burn injuries. Ongoing intrusive imagery was reported by over half the participants and there were significant correlations between frequency of intrusive images and posttraumatic symptoms, and between intensity of emotions associated with intrusive images and depression and posttraumatic symptoms. A thematic analysis of the memory narratives revealed four main themes: threat to self, view of the world, view of others, and positive psychological change. These results are discussed in relation to existing trauma theory and burn injury literature. Implications for clinical practice and recommendations for further research are proposed.",
"corpus_id": 385271,
"title": "Investigating the phenomenology of imagery following traumatic burn injuries."
} | {
"abstract": "The paper examines the relevant professional literature in order to explore how adjustment after burn injury may be enhanced. For this purpose, the unique characteristics of burn injury, and particularly the psychological meaning of the skin injury, are examined. An attempt is made to understand why some researchers find that a majority of this population suffers psychological disturbance, while others show that it is a 'normal' population, with no premorbid psychopathology. The ways of enhancing the psychological adjustment of burn victims, beginning with the acute phase of hospitalization and until long-term adjustment in the community, are discussed. These include, mainly, integrative team work to create a 'cover' as a skin substitute around the patient, social support, different techniques of psychotherapy when necessary, and job placement. In an attempt to learn what happens to burn patients a year after injury and later, we reviewed studies of their situation in terms of work, the family (including sexual functioning) and social interaction. In light of all this, the possibility of predicting long-term psychological adjustment among burn victims and the variables that may be relevant to this, such as, size of the burn or, rather, the individual's personality traits, are discussed.",
"corpus_id": 6033341,
"title": "Long-term psychosocial adjustment after burn injury."
} | {
"abstract": "This was a cross-sectional study which looked into the interaction between situational factors, role stressors, hazard exposure and personal factors among 135 nurses in the Philippine General Hospital. More than half (58.5%) of the respondents reported being ill due to work in the past year, and 59.3% missed work because of an illness. Regression showed factors associated with burnout were organizational role stress, hazard exposure, self-efficacy, age, number of working years, illness in the past 12 months, migraine, dizziness, sleep disorder, cough and colds, and diarrhea. After multiple regression analysis, organizational role stress (p = .000), migraine (p = .001), age (p = .018) and illness in the past 12 months (p = .000) were found to be significant predictors of burnout. The contribution of the study is in advancing new concepts in the already existing framework of burnout, and thus, assisting nurses and hospital administration in controlling this problem.",
"corpus_id": 9334995,
"score": 1,
"title": "Multiple interactions of hazard exposures, role stressors and situational factors, and burnout among nurses."
} |
{
"abstract": "In this communication, we propose an original small-size multiband antenna with a coupling feed for LTE/WWAN/HIPERLAN2/IEEE 802.11a operation in smart phones. The obtained impedance bandwidth of the proposed antenna (S11 < -6 dB) over the operating bands can attain about 520/1545 MHz for the smart phone application, respectively. Ultimately, up to four resonances are achieved to cover the desired 698-960/1710-2690 MHz and the HIPERLAN2/IEEE 802.11a band. With an additional parasitic part, coverage of the HIPERLAN2/IEEE 802.11a band is successfully achieved. In general, the simulated results are consistent with the measured results. At the same time, the radiation efficiency, antenna gain and body SAR of the proposed antenna make it good enough for the practical smart phone.",
"corpus_id": 20512902,
"title": "Printed small-size monopole antenna for LTE/WWAN smartphone application"
} | {
"abstract": "A new internal multiband mobile phone antenna formed by two printed monopole slots of different lengths cut at the edge of the system ground plane of the mobile phone is presented. The antenna can generate two wide bands centered at about 900 and 2100 MHz to cover the GSM850/GSM900/DCS/PCS/UMTS bands and the 2.4-GHz WLAN band. Further, the antenna has a simple planar structure and occupies a small area of only. It is also promising to bend the antenna into an L shape to reduce its volume occupied inside the mobile phone. Good radiation characteristics are obtained over the two wide operating bands.",
"corpus_id": 20859089,
"title": "Printed Monopole Slot Antenna for Internal Multiband Mobile Phone Antenna"
} | {
"abstract": "Early detection of pulmonary cancer is the most promising way to enhance a patient’s chance for survival. Accurate pulmonary nodule detection in computed tomography (CT) images is a crucial step in diagnosing pulmonary cancer. In this paper, inspired by the successful use of deep convolutional neural networks (DCNNs) in natural image recognition, we propose a novel pulmonary nodule detection approach based on DCNNs. We first introduce a deconvolutional structure to Faster Region-based Convolutional Neural Network (Faster R-CNN) for candidate detection on axial slices. Then, a three-dimensional DCNN is presented for the subsequent false positive reduction. Experimental results of the LUng Nodule Analysis 2016 (LUNA16) Challenge demonstrate the superior detection performance of the proposed approach on nodule detection (average FROC-score of 0.893, ranking the 1st place over all submitted results), which outperforms the best result on the leaderboard of the LUNA16 Challenge (average FROC-score of 0.864).",
"corpus_id": 5740617,
"score": -1,
"title": "Accurate Pulmonary Nodule Detection in Computed Tomography Images Using Deep Convolutional Neural Networks"
} |
{
"abstract": "Frequently occurring congestion has prevented the deregulated power market from achieving its objective, cheaper energy for consumers, because congestion cost has been added to consumers' locational marginal price (LMP) besides generation cost. Some measures are needed to solve this problem. Otherwise, benefits brought by deregulation will be incomplete. As an alternative to building new transmission lines, which is not preferred due to many factors, Flexible AC Transmission System (FACTS) was introduced to solve the congestion problem (G. Huang et al., 2002) and in turn reduce LMP (S.C. Srivastava et al., 2000). However, a key issue is still missing, which is the pricing scheme for the utilization of FACTS devices and the penalty for users operating at their limits. Unless this issue is addressed, no proper incentives can be provided to market participants for new constructions. This paper proposes a pricing scheme for FACTS devices in congestion management, which addresses both the penalty and the utilization issues. Numerical examples are also used to demonstrate our ideas.",
"corpus_id": 153498383,
"title": "Establishing pricing schemes for FACTS devices in congestion management"
} | {
"abstract": "One way to solve the congestion problem in a deregulated power system is re-dispatching the generation. This paper investigates the impacts of thyristor controlled series capacitor (TCSC) and static VAr compensator (SVC) on this re-dispatch method with the objective of minimizing the total amount of transactions being curtailed. An algorithm of optimal power flow (OPF) to reduce the transaction curtailment through installing TCSC/SVC in the system is proposed in this paper. Both the pool type transaction and the bilateral type transaction can be taken care using this method. This paper also investigates the improvement of total transfer capability (TTC) by using TCSC and SVC with the consideration of transaction patterns. With the increase of TTC, the possibility of congestion occurrence will be reduced. The TTC calculation is solved by OPF. Numerical examples are used to demonstrate the effect of TCSC/SVC on transaction curtailment and TTC improvements. The test results show that the effect is significant. Finally, this paper suggests some potential future research.",
"corpus_id": 111017899,
"title": "TCSC and SVC as re-dispatch tools for congestion management and TTC improvement"
} | {
"abstract": "The drag and momentum fluxes produced by gravity waves generated in flow over orography are reviewed, focusing on adiabatic conditions without phase transitions or radiation effects, and steady mean incoming flow. The orographic gravity wave drag is first introduced in its simplest possible form, for inviscid, linearized, non-rotating flow with the Boussinesq and hydrostatic approximations, and constant wind and static stability. Subsequently, the contributions made by previous authors (primarily using theory and numerical simulations) to elucidate how the drag is affected by additional physical processes are surveyed. These include the effect of orography anisotropy, vertical wind shear, total and partial critical levels, vertical wave reflection and resonance, non-hydrostatic effects and trapped lee waves, rotation and nonlinearity. Frictional and boundary layer effects are also briefly mentioned. A better understanding of all of these aspects is important for guiding the improvement of drag parametrization schemes.",
"corpus_id": 16838561,
"score": 0,
"title": "The physics of orographic gravity wave drag"
} |
{
"abstract": "We present a study of full area emitter phototransistors with different optical window sizes implemented in a SiGe Bipolar technology. Extracted responsivity of 3.5 A/W and an opto-microwave cut-off frequency of 739 MHz were observed.",
"corpus_id": 301159,
"title": "Full area emitter SiGe phototransistor for opto-microwave circuit applications"
} | {
"abstract": "We present a new approach to obtain low-cost and high-performance SiGe phototransistors in a commercial BiCMOS process. Photoresponsivity of 2.7 A/W was obtained for 850-nm detection due to the transistor gain, corresponding to 393% quantum efficiency. Responsivities of 0.13 A/W and 0.07mA/W were achieved for 1060 and 1310 nm with SiGe absorption. With V/sub ce/=2 V, we measure a -3-dB bandwidth of up to 5.3 GHz for phototransistors with a 4-/spl mu/m/sup 2/ active area and 2.0 GHz for phototransistors with 60-/spl mu/m/sup 2/ active area and finger contacts. This high-efficiency and high-speed phototransistor is an enabling device for monolithic receiver integration.",
"corpus_id": 22160827,
"title": "Low-cost, high-efficiency, and high-speed SiGe phototransistors in commercial BiCMOS"
} | {
"abstract": "A partnership between Government agencies and the information technologies research community has succeeded in the past for the benefit of the Nation. The most notable example is the emergence of the Internet as the basis for broad scientific, cultural, civic, and commercial discourse, evolving from what was originally a Government-supported networking research project. The collaborative development of a new applied research domain is critical to help meet the Nation's growing information service demands. Applied research that considers real world operating constraints can provide valuable new problems and insights for the academic research domain, leading to new demonstrable and deployable systems. This applied research domain is a National Challenge to provide a transition strategy for migrating Federal Information Services from legacy systems, through the interoperable systems of the Internet, and toward more advanced integrated global systems. A unique opportunity exists for a new paradigm for interaction between Government and citizen; an opportunity to invent the",
"corpus_id": 22113295,
"score": 1,
"title": "Towards the digital government of the 21st century: a report from the workshop on research and development opportunities in federal information services"
} |
{
"abstract": "It is well known that the (present) value of rural land is dependent upon anticipated future net benefits appropriately discounted. In his Principles of Economics in the late 1890s, Marshall noted that the capital value of land \"is the actuarial 'discounted' value of all the net incomes which it is likely to afford\" (1898, p. 718-emphasis added). Modern researchers like Harris and Nehring (1976), Lee and Rask (1976), Melichar (1979), Reinsel and Reinsel (1979), and many others readily acknowledge the importance of expectations in their theoretic models. Nevertheless, expectations apparently have been incorporated in only one cross-sectional, econometric analysis of land values. Reynolds and Timmons (1969) included two variables in their empirical study to proxy future expectations: expected capital gains and expected net farm income. These variables were calculated as a weighted average of historical returns. No empirical analyses using actual market transactions have incorporated variables to quantify the role of expectations based on the subjective beliefs of the land market participants. The role of subjective expectations is especially important in a rapidly developing county where much of the growth is in the form of relatively small residential lots scattered throughout the urban fringe. In such counties, there are many \"speculators\" in the urban fringe land market. In an area with fewer \"speculators,\" less demand for building sites, and a lower probability of conversion, expectations may not be quite as important. Nevertheless, subjective expectations should be a significant determinant of price variations in any rural land market via their impact on an-",
"corpus_id": 152557622,
"title": "A Case Study of Rural Land Prices at the Urban Fringe Including Subjective Buyer Expectations"
} | {
"abstract": "In this study hedonic modelling methods beyond the ordinary least squares estimator are investigated in explaining and predicting the land prices in the two submarkets (Espoo and Nurmijarvi) of the Finnish land markets. The first paper deals with the estimation of several parametric hedonic models, including dynamic responses, using recursive estimation technique. The second paper examines the applicability of semiparametric structural time series methods to the optimal estimation of spatio-temporal movements of land prices. The third paper focuses on the robust nonparametric estimation using local polynomial modelling approach in explaining and predicting the land prices. The fourth paper investigates flexible wavelet transforms in the estimation of long-run temporal land price movements (cycles and trends). The final fifth paper uses robust parametric estimator, the three-stage MM-estimator, to explicitly address the problem of outlying and influential data points. The key observation of this study is that there is much scope for methods beyond the ordinary least squares estimator in explaining and predicting the land prices in local markets. This is especially true in the submarket of Espoo, where the use of unconventional methods of the study showed that significant improvements could be achieved in hedonic models’ explanatory power and/or predictive validity when the methods of this research are used instead of the orthodox least squares estimator. In the Espoo case structural time series models, local polynomial regression and robust MM-estimation all generated more precise results in terms of post-sample prediction power than the conventional least squares estimator. The empirical experimentation quite strongly indicated that the determination of land prices in the municipality of Nurmijarvi could be best explained by the use of unobserved component models. The flexible local polynomial modelling and three-stage MM-estimation surprisingly added no value in terms of greater post-sample precision in the Nurmijarvi case.",
"corpus_id": 26342645,
"title": "ON THE HEDONIC MODELLING OF LAND PRICES"
} | {
"abstract": "This paper addresses one area where perhaps historians have been particularly remiss, namely how employers responded and organized to neutralize the growing power of labour and the penetration of socialist ideas and communist influence amongst the working class in the inter-war years.",
"corpus_id": 154417994,
"score": 1,
"title": "'A Crusade for Capitalism': The Economic League, 1919-39:"
} |
{
"abstract": "We derive the equations for the nonsupersymmetric vacua of D3-branes in the presence of nonperturbative moduli stabilization in type IIB flux compactifications, and solve and analyze them in the case of two particular 7-brane embeddings at the bottom of the warped deformed conifold. In the limit of large volume and long throat, we obtain vacua by imposing a constraint on the 7-brane embedding. These vacua fill out continuous spaces of higher dimension than the corresponding supersymmetric vacua, and have negative effective cosmological constant. Perturbative stability of these vacua is possible but not generic. Finally, we argue that -branes at the tip of the conifold share the same vacua as D3-branes.",
"corpus_id": 6833468,
"title": "Nonsupersymmetric brane vacua in stabilized compactifications"
} | {
"abstract": "We study the dynamics of a D3 brane in generic IIB warped compactifications, using the Hamiltonian formulation discussed in arXiv:0805.3700. Taking into account of both closed and open string fluctuations, we derive the warped Kahler potential governing the motion of a probe D3 brane. By including the backreaction of D3, we also comment on how the problem of defining a holomorphic gauge coupling on wrapped D7 branes in warped background can be resolved.",
"corpus_id": 14722450,
"title": "ON D3-BRANE DYNAMICS AT STRONG WARPING"
} | {
"abstract": "Abstract The detection efficiencies of two fission counter assemblies (235U and 238U) designed to measure the neutron flux at JET, have been determined for monoenergetic and radioactive neutron sources. The results are in good agreement with calculations using Monte Carlo neutron transport codes, when care is used in setting up the computer models. The mean difference between calculation and experiment is (7±7%).",
"corpus_id": 119789936,
"score": 1,
"title": "Calculation and measurement of 235U and 238U fission counter assembly detection efficiency"
} |
{
"abstract": "Background: Lobosphaera incisa, formerly known as Myrmecia incisa and then Parietochloris incisa, is an oleaginous unicellular green alga belonging to the class Trebouxiophyceae (Chlorophyta). It is the richest known plant source of arachidonic acid, an ω-6 poly-unsaturated fatty acid valued by the pharmaceutical and baby-food industries. It is therefore an organism of high biotechnological interest, and we recently reported the sequence of its chloroplast genome. Results: We now report the complete sequence of the mitochondrial genome of L. incisa from high-throughput Illumina short-read sequencing. The circular chromosome of 69,997 bp is predicted to encode a total of 64 genes, some harboring specific self-splicing group I and group II introns. Overall, the gene content is highly similar to that of the mitochondrial genomes of other Trebouxiophyceae, with 34 protein-coding, 3 rRNA, and 27 tRNA genes. Genes are distributed in two clusters located on different DNA strands, a bipartite arrangement that suggests expression from two divergent promoters yielding polycistronic primary transcripts. The L. incisa mitochondrial genome contains families of intergenic dispersed DNA repeat sequences that are not shared with other known mitochondrial genomes of Trebouxiophyceae. The most peculiar feature of the genome is a repetitive palindromic repeat, the LIMP (L. Incisa Mitochondrial Palindrome), found 19 times in the genome. It is formed by repetitions of an AACCA pentanucleotide, followed by an invariant 7-nt loop and a complementary repeat of the TGGTT motif. Analysis of the genome sequencing reads indicates that the LIMP can be a substrate for large-scale genomic rearrangements. We speculate that LIMPs can act as origins of replication. Deep sequencing of the L. incisa transcriptome also suggests that the LIMPs with long stems are sites of transcript processing. The genome also contains five copies of a related palindromic repeat, the HyLIMP, with a 10-nt motif related to that of the LIMP. Conclusions: The mitochondrial genome of L. incisa encodes a unique type of repetitive palindromic repeat sequence, the LIMP, which can mediate genome rearrangements and play a role in mitochondrial gene expression. Experimental studies are needed to confirm and further characterize the functional role(s) of the LIMP.",
"corpus_id": 349732,
"title": "The complete mitochondrial genome sequence of the green microalga Lobosphaera (Parietochloris) incisa reveals a new type of palindromic repetitive repeat"
} | {
"abstract": "Microalgae hold great promise with regards to the production of valuable products such as PUFAs and biofuel. They are a highly interesting group of organisms for investigating lipid metabolism and while some insight has been gained from comparison of C. reinhardtii to other well characterized model organisms, it is becoming increasingly clear that substantial diversity exists between algal species. Among them, the terrestrial green microalga L. incisa is unique in its ability to accumulate high levels of ARA and sequester it in neutral lipids within LBs, especially when deprived of nitrogen. \nIn order to understand the unique mechanisms of sequestering ARA in neutral lipids, LB biogenesis was analyzed on a protein level in L. incisa strain SAG 2468. Following 3 d of nitrogen limitation, a state characterized by TAG and ARA accumulation, a multitude of proteins could be identified in LB isolates by means of LC-MS/MS. Semi-quantitative enrichment analysis through comparison with other cellular fractions was carried out and yielded a number of candidate LB associated proteins. For a subset of these candidates, the subcellular localization was confirmed by heterologous expression in tobacco pollen tubes along with confocal microscopy. Additionally, gene expression was analyzed in L. incisa cultures subjected to nitrogen starvation and subsequently rescued by nitrogen resupply, a time course during which TAG is first accumulated and then remobilized. \nThe proteins g555, g15430 and g13747 were found to be putative structural components of the lipid storage organelle based on similarities to known algal proteins, strong enrichment in the L. incisa LB fraction and hydrophobicity of the amino acid sequence, respectively. \nFurthermore, two putative lipases were investigated in this study, one of them LB-associated. Even though TAG lipase activity could not be established for either of them in this study, they may still play a role in L. incisa LB homeostasis. \nAn additional lipase candidate, LiSDP1, was demonstrated to hydrolyze TAG when the gene was expressed in an A. thaliana mutant lacking both plant homologs. The protein appears to localize to LBs in tobacco pollen tubes and is postulated to be involved in the degradation of L. incisa LBs during recovery from nitrogen starvation. \nAltogether, this study saw the successful isolation and confirmation of LB proteins from L. incisa as well as the identification of a TAG lipase that is most likely involved in storage lipid degradation, thereby contributing to the elucidation of LB biogenesis in this unique microalga.",
"corpus_id": 90474200,
"title": "Biogenesis of Lipid Bodies in Lobosphaera incisa"
} | {
"abstract": "The 2008 Principled Consensus between China and Japan to jointly develop a small area in the northern part of the East China Sea and cooperatively exploit the Chinese Chunxiao oil and gas field is positive progress in the settlement of maritime boundary disputes in the East China Sea. This article examines the negotiating context and content of the document.",
"corpus_id": 154775958,
"score": 0,
"title": "A Note on the 2008 Cooperation Consensus Between China and Japan in the East China Sea"
} |
{
"abstract": "Lasing from Ge was achieved by highly n-type doping and biaxially tensile strain to overcome free carrier absorption. High n-type doping and efficient carrier injection remain the most important issues for electrical excitation of lasing.",
"corpus_id": 8287915,
"title": "A germanium-on-silicon laser for on-chip applications"
} | {
"abstract": "Monolithic lasers on Si are ideal for high-volume and large-scale electronic-photonic integration. Ge is an interesting candidate owing to its pseudodirect gap properties and compatibility with Si complementary metal oxide semiconductor technology. Recently we have demonstrated room-temperature photoluminescence, electroluminescence, and optical gain from the direct gap transition of band-engineered Ge-on-Si using tensile strain and n-type doping. Here we report what we believe to be the first experimental observation of lasing from the direct gap transition of Ge-on-Si at room temperature using an edge-emitting waveguide device. The emission exhibited a gain spectrum of 1590-1610 nm, line narrowing and polarization evolution from a mixed TE/TM to predominantly TE with increasing gain, and a clear threshold behavior.",
"corpus_id": 15885757,
"title": "Ge-on-Si laser operating at room temperature."
} | {
"abstract": "We report the first room‐temperature sharp line electroluminescence of an erbium‐doped silicon light‐emitting diode at λ=1.54 μm. The electroluminescence originates from an internal f‐shell transition of Er3+. The wavelength and linewidth are relatively independent of temperature. The light intensity saturates at a drive current density of 5 A/cm2 due to the long excited state lifetime of Er3+. As the temperature increases from 100 K to room temperature, the light intensity decreases significantly.",
"corpus_id": 121463386,
"score": 2,
"title": "Room‐temperature sharp line electroluminescence at λ=1.54 μm from an erbium‐doped, silicon light‐emitting diode"
} |
{
"abstract": "A consistency relation between project elements arises as a result of optimal project configuration synthesis; IT projects for newly created and evolving distributed information systems were studied. The following concepts were formalized: components of a project's product, project element, project element characteristic, and consistency relation. It was shown that a consistency relation holds between project element characteristics and that it is anti-reflexive and anti-symmetric. A project configuration showing the consistency relations between project element characteristics was presented.",
"corpus_id": 9249852,
"title": "Managing projects configuration in development distributed information systems"
} | {
"abstract": "A conceptual model of the configuration management (CM) process in projects is developed, showing that to achieve the goal of this process it is necessary to manage the configuration of the project, the product, and the project environment. The relationship between the tasks of synthesis and of configuration management in a project throughout its life cycle is shown. The composition of the overall CM process in projects is described, and the role and place of each of its components are shown.",
"corpus_id": 111006456,
"title": "CONCEPTUAL MODEL OF THE CONFIGURATION MANAGEMENT PROCESS IN PROJECTS"
} | {
"abstract": "From the Publisher: \nBest selling author and world-renowned software development expert Robert C. Martin shows how to solve the most challenging problems facing software developers, project managers, and software project leaders today. \n \nThis comprehensive, pragmatic tutorial on Agile Development and eXtreme programming, written by one of the founding fathers of Agile Development: \nTeaches software developers and project managers how to get projects done on time, and on budget using the power of Agile Development. \nUses real-world case studies to show how to plan, test, refactor, and pair program using eXtreme programming. \nContains a wealth of reusable C++ and Java code. \nFocuses on solving customer oriented systems problems using UML and Design Patterns. \n \n \nRobert C. Martin is President of Object Mentor Inc. Martin and his team of software consultants use Object-Oriented Design, Patterns, UML, Agile Methodologies, and eXtreme Programming with worldwide clients. He is the author of the best-selling book Designing Object-Oriented C++ Applications Using the Booch Method (Prentice Hall, 1995), Chief Editor of Pattern Languages of Program Design 3 (Addison Wesley, 1997), Editor of More C++ Gems (Cambridge, 1999), and co-author of XP in Practice, with James Newkirk (Addison-Wesley, 2001). He was Editor in Chief of the C++ Report from 1996 to 1999. He is a featured speaker at international conferences and trade shows.",
"corpus_id": 60690699,
"score": 2,
"title": "Agile Software Development, Principles, Patterns, and Practices"
} |
{
"abstract": "Heavy-ion irradiation is a new method of mutation breeding to produce new cultivars. We established the application of this method in rice plants to obtain mutants. Rice seeds were irradiated by C or Ne ions (135 MeV/u) with a LET (linear energy transfer) of 22.7 or 64.2 keV/microm, respectively. Chlorophyll-deficient mutants (CDM) segregated in M2 progeny were albino, pale-green, yellow or striped-leaf phenotypes. The highest rate of CDM with C-ion irradiation, 7.31%, was obtained at 40 Gy among the doses examined. Ne-ion irradiation gave the highest rate, 11.6%, at 20 Gy. We used the RLGS (Restriction Landmark Genomic Scanning) method to analyze DNA deletion in an albino mutant genome. NotI-landmark RLGS profiles detected about 2000 spots in rice. We found that one of the polymorphic spots was strongly linked to the albino phenotypic mutant derived from deletion of a DNA fragment, and demonstrated the high ability of the RLGS method to detect polymorphic regions.",
"corpus_id": 1422400,
"title": "Chlorophyll-deficient mutants of rice demonstrated the deletion of a DNA fragment by heavy-ion irradiation."
} | {
"abstract": "A novel chlorophyll-deficient chd6 mutant of F1 hybrids from Vitis venifera was selected to study its primarily physiological characteristics and leaf ultrastructures under culturing in vitro. The results showed that although increasing Fe2+ and Mg2+ concentration could improve growth of the mutant in vitro, the effect was limited. In addition, it was determined that relatively lower Fe2+ and Mg2+ concentrations would be beneficial to the survival in vitro on GS medium. Chlorophyll contents of the mutant were significantly lower than those of its parents (4.53–13.76% of those of the higher-value parent Red globe). The chlorophyll a/b ratio was greatly increased (up to 4.34) to approximately twofold greater than that of its parents. Some critically successive enzymes for converting ALA to chlorophyll a could be inhibited to a variable extent in the chd6 mutant, and more serious inhibition could happen in critical enzymes converting Mg-proto to Pchlide or Mg-proto to chlorophyll b. The mutant also showed not only poor Rubisco activities, lower percentage of dry matter, and soluble carbohydrate content, but also lower IAA and GA (GA1 + GA4) content and higher ABA content. In leaf ultrastructures, the mutant presented larger stomata size, higher percentages of stomata opening, and lower stomata density with more stoma approximately a ring shape. Most chloroplasts of the chd6 mutant developed as asymmetric ellipses with deficient and irregular lamella.",
"corpus_id": 202282,
"title": "Physiological Characteristics and Leaf Ultrastructure of a Novel Chlorophyll-deficient chd6 Mutant of Vitis venifera Cultured in vitro"
} | {
"abstract": "Proton rotating frame relaxation times [T 1ρ (H)] were used to characterize the molecular dynamics and structural homogeneity in waxy corn starch, wheat gluten, and mixtures of both. Single-phase relaxation of T 1ρ (H) was found in native starch, indicating a relatively small dimension of structural heterogeneity in terms of spin-diffusion. Heating of the starch samples decreased the T 1ρ (H) to 3.2-3.4 msec, as compared to raw starch samples at 5.3-6.2 msec, possibly due to the presence of more amorphous domains. The native wheat gluten displayed a slightly inhomogeneous T 1ρ (H) of 4.9-6.3 msec, suggesting the presence of a structural inhomogeneity, different from that of native waxy corn starch. Heating of gluten decreased the T 1ρ (H), which was also dependent on moisture. When mixed at a 1 :1 starch-to-gluten ratio and heated, the T 1ρ (H) associated with the gluten were similar to those for pure gluten at 20% moisture content (mc). However, when dried to 2% mc, the gluten T 1ρ (H) increased to 9.3-9.6 msec. The T 1ρ (H) values for starch in the mixture were slightly increased to 5.7 msec. The different T 1ρ (H) values for starch and gluten suggested a limited miscibility of the two components. Compared to starch, gluten T 1ρ (H) was far more sensitive to moisture content.",
"corpus_id": 12823628,
"score": 1,
"title": "Proton relaxation of starch and gluten by solid-state nuclear magnetic resonance spectroscopy"
} |
{
"abstract": "Epitaxial double perovskite La2CoMnO6 (LCMO) films were grown by metalorganic aerosol deposition on SrTiO3(111) substrates. A high Curie temperature, TC = 226 K, and large magnetization close to saturation, MS(5 K) = 5.8μB/f.u., indicate a 97% degree of B-site (Co,Mn) ordering within the film. The Co/Mn ordering was directly imaged at the atomic scale by scanning transmission electron microscopy with energy-dispersive X-ray spectroscopy (STEM-EDX). Local electron-energy-loss spectroscopy (EELS) measurements reveal that the B-sites are predominantly occupied by Co(2+) and Mn(4+) ions in quantitative agreement with magnetic data. Relatively small values of the (1/2 1/2 1/2) superstructure peak intensity, obtained by X-ray diffraction (XRD), point out the existence of ordered domains with an arbitrary phase relationship across the domain boundary. The size of these domains is estimated to be in the range 35-170 nm according to TEM observations and modelling the magnetization data. These observations provide important information towards the complexity of the cation ordering phenomenon and its implications on magnetism in double perovskites, and similar materials.",
"corpus_id": 11214357,
"title": "Phase problem in the B-site ordering of La2CoMnO6: impact on structure and magnetism."
} | {
"abstract": "$B$-site ordered thin films of double perovskite ${\\mathrm{Sr}}_{2}{\\mathrm{CoIrO}}_{6}$ were epitaxially grown by a metalorganic aerosol deposition technique on various substrates, actuating different strain states. X-ray diffraction, transmission electron microscopy, and polarized far-field Raman spectroscopy confirm the strained epitaxial growth on all used substrates. Polarization-dependent Co ${L}_{2,3}$ x-ray absorption spectroscopy reveals a change of the magnetic easy axis of the antiferromagnetically ordered (high-spin) ${\\mathrm{Co}}^{3+}$ sublattice within the strain series. By reversing the applied strain direction from tensile to compressive, the easy axis changes abruptly from in-plane to out-of-plane orientation. The low-temperature magnetoresistance changes its sign respectively and is described by a combination of weak antilocalization and anisotropic magnetoresistance effects.",
"corpus_id": 56075363,
"title": "Strain-induced changes of the electronic properties of B -site ordered double-perovskite Sr 2 CoIrO 6 thin films"
} | {
"abstract": "We have prepared 14 new AA'BB'O{sub 6} perovskites which possess a rock salt ordering of the B-site cations and a layered ordering of the A-site cations. The compositions obtained are NaLnMnWO{sub 6} (Ln=Ce, Pr, Sm, Gd, Dy, and Ho) and NaLnMgWO{sub 6} (Ln=Ce, Pr, Sm, Eu, Gd, Tb, Dy, and Ho). The samples were structurally characterized by powder X-ray diffraction which has revealed metrically tetragonal lattice parameters for compositions with Ln=Ce, Pr and monoclinic symmetry for compositions with smaller lanthanides. Magnetic susceptibility vs. temperature measurements have found that all six NaLnMnWO{sub 6} compounds undergo antiferromagnetic ordering at temperatures between 10 and 13 K. Several compounds show signs of a second magnetic phase transition. One sample, NaPrMnWO{sub 6}, appears to pass through at least three magnetic phase transitions within a narrow temperature range. All eight NaLnMgWO{sub 6} compounds remain paramagnetic down to 2 K revealing that the ordering of the Ln{sup 3+} cations in the NaLnMnWO{sub 6} compounds is induced by the ordering of the Mn{sup 2+} sub-lattice. - Graphical abstract: Evidence for multiple magnetic phase transitions in the A and B-site ordered perovskite NaPrMnWO{sub 6}.",
"corpus_id": 95447327,
"score": 2,
"title": "Magnetic and structural properties of NaLnMnWO6 and NaLnMgWO6 perovskites"
} |
{
"abstract": "In the Red-Blue Nonblocker problem, the input is a bipartite graph \\(G=(R \\uplus B, E)\\) and an integer k, and the question is whether one can select at least k vertices from R so that every vertex in B has a neighbor in R that was not selected. While the problem is W[1]-complete for parameter k, a related problem, Nonblocker, is FPT for parameter k. In the Nonblocker problem, we are given a graph H and an integer k, and the question is whether one can select at least k vertices so that every selected vertex has a neighbor that was not selected. There is also a simple reduction from Nonblocker to Red-Blue Nonblocker, creating two copies of the vertex set and adding an edge between two vertices in different copies if they correspond to the same vertex or to adjacent vertices. We give FPT algorithms for Red-Blue Nonblocker instances that are the result of this transformation – we call these instances symmetric. This is not achieved by playing back the entire transformation, since this problem is NP-complete, but by a kernelization argument that is inspired by playing back the transformation only for certain well-structured parts of the instance. We also give an FPT algorithm for almost symmetric instances, where we assume the symmetry relation is part of the input.",
"corpus_id": 4708713,
"title": "When is Red-Blue Nonblocker Fixed-Parameter Tractable?"
} | {
"abstract": "We show that if the two parts of a finite bipartite graph have the same degree sequence, then there is a bipartite graph, with the same degree sequences, which is symmetric, in that it has an involutive graph automorphism that interchanges its two parts. To prove this, we study the relationship between symmetric bipartite graphs and graphs with loops.",
"corpus_id": 335395,
"title": "Symmetric Bipartite Graphs and Graphs with Loops"
} | {
"abstract": "The problem of calculating the distribution function of a general quadratic form in normal random variables is examined. Two numerical integration methods for inverting the characteristic function are presented. Both make use of paths of integration that pass through, or near to, a suitable saddle-point. It is assumed that a computer is available for the calculation of functions of complex variables and for the performance of various matrix computations. Approximations for special cases are stated and examples are given.",
"corpus_id": 121576858,
"score": 1,
"title": "Distribution of Quadratic Forms in Normal Random Variables—Evaluation by Numerical Integration"
} |
{
"abstract": "Identification and extraction of singing voice from within musical mixtures is a key challenge in source separation and machine audition. Recently, deep neural networks (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capable of generalizing to the discrimination of voice and non-voice in the context of musical mixtures. Here, we trained a convolutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation of vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.",
"corpus_id": 8811169,
"title": "Deep Karaoke: Extracting Vocals from Musical Mixtures Using a Convolutional Deep Neural Network"
} | {
"abstract": "Content-based music information retrieval tasks have traditionally been solved using engineered features and shallow processing architectures. In recent years, there has been increasing interest in using feature learning and deep architectures instead, thus reducing the required engineering effort and the need for prior knowledge. However, this new approach typically still relies on mid-level representations of music audio, e.g. spectrograms, instead of raw audio signals. In this paper, we investigate whether it is possible to apply feature learning directly to raw audio signals. We train convolutional neural networks using both approaches and compare their performance on an automatic tagging task. Although they do not outperform a spectrogram-based approach, the networks are able to autonomously discover frequency decompositions from raw audio, as well as phase-and translation-invariant feature representations.",
"corpus_id": 13094557,
"title": "End-to-end learning for music audio"
} | {
"abstract": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability.",
"corpus_id": 2740197,
"score": -1,
"title": "Deep learning for healthcare: review, opportunities and challenges."
} |
{
"abstract": "A cloud point extraction coupled with high performance liquid chromatography (HPLC/UV) method was developed for the determination of Δ9-tetrahydrocannabinol (THC) in micellar phase. The nonionic surfactant “Dowfax 20B102” was used to extract and pre-concentrate THC from cannabis resin, prior to its determination with a HPLC–UV system (diode array detector) with isocratic elution. The parameters and variables affecting the extraction were investigated. Under optimum conditions (1 wt.% Dowfax 20B102, 1 wt.% Na2SO4, T = 318 K, t = 30 min), this method yielded a quite satisfactory recovery rate (~81 %). The limit of detection was 0.04 μg mL−1, and the relative standard deviation was less than 2 %. Compared with conventional solid–liquid extraction, this new method avoids the use of volatile organic solvents, therefore is environmentally safer.",
"corpus_id": 609031,
"title": "Cloud point extraction of Δ9-tetrahydrocannabinol from cannabis resin"
} | {
"abstract": "A rapid and simple procedure using liquid-liquid extraction and subsequent gas chromatographic mass-spectrometric detection has been developed for determination of Delta9-tetrahydrocannabinol (THC), cannabidiol (CBD) and cannabinol (CBN) in different hemp foods. After addition of Delta8-tetrahydrocannabinol as internal standard, both solid and liquid specimens were extracted with two volumes of 2 ml of hexane/isopropanol (9:1): Chromatography was performed on a fused silica capillary column and analytes were determined in the selected-ion-monitoring (SIM) mode. The method was validated in the range 1-50 ng/ml liquid samples or 1-50 ng/g solid samples for THC and CBN, and 2-50 ng/ml or ng/g for CBD. Mean recoveries ranged between 78.8 and 90.2% for the different analytes in solid and liquid samples. The quantification limits were 1 ng/ml or ng/g for THC and CBN and 2 ng/ml or ng/g CBD. The method was applied to analysis of various hemp foods. THC content in different products varied 50-fold, whereas CBN and CBD were absent in some samples and achieved hundreds of ng/ml or ng/g in others. The concentration ratio (THC + CBN)/CBD was used to differentiate between the phenotypes of cannabis plants in different specimens. Products possibly originating from drug-type cannabis plants were found in the majority of analyzed specimens.",
"corpus_id": 24863723,
"title": "A rapid and simple procedure for the determination of cannabinoids in hemp food products by gas chromatography-mass spectrometry."
} | {
"abstract": "Study Design. Human intervertebral disc cells were cultured in monolayer and treated with adenovirus-containing marker genes to determine the susceptibility of the cells to adenovirus-mediated gene transfer. Objectives. To test the efficacy of the adenovirus-mediated gene transfer technique for transferring exogenous genes to human intervertebral disc cells in vitro. Summary of Background Data. Upregulated proteoglycan synthesis after direct in vivo adenovirus-mediated transfer of growth factor genes to the rabbit intervertebral disc has previously been reported. Before contemplating extending this approach to the treatment of human disc disease, it is necessary to demonstrate that human intervertebral disc cells are indeed susceptible to adenovirus-mediated gene transduction. Methods. Human intervertebral disc cells were isolated from disc tissue obtained from 15 patients during surgical disc procedures. The cells were cultured in monolayer and treated with saline containing five different doses of adenovirus carrying the lacZ gene (Ad/CMV-lacZ), saline containing adenovirus carrying the luciferase gene (Ad/CMV-luciferase), or saline alone. Transgene expression was analyzed by 5-bromo-4-chloro-3-indolyl-&bgr;-galactosidase (X-Gal) staining and luciferase assay. Results. Adenovirus efficiently transferred lacZ and luciferase marker genes to cells from degenerated discs as well as to cells from nondegenerated discs. A minimum dose of 150 MOI Ad/CMV-lacZ was found to be sufficient to achieve transduction of approximately 100% of disc cells—regardless of patient age, sex, surgical indication, disc level, and degeneration grade. No statistically significant difference in the luciferase activities could be detected in disc cell cultures from degenerated and nondegenerated discs treated with Ad/CMV-luciferase. Conclusions. In vitro transducibility of human intervertebral disc cells by adenovirus is relatively insensitive to disc degeneration grade. 
Because the rate-limiting step for successful gene therapy is the ability to transfer genes efficiently to the target tissue, the achievement of efficient gene transfer to human intervertebral disc cells (using a direct, adenovirus-mediated approach) is an important and necessary step in the development of gene therapy strategies for the management of human intervertebral disc disorders.",
"corpus_id": 11604162,
"score": 0,
"title": "Human Intervertebral Disc Cells Are Genetically Modifiable by Adenovirus-Mediated Gene Transfer: Implications for the Clinical Management of Intervertebral Disc Disorders"
} |
{
"abstract": "Improvements in parallel computing hardware usually involve increments in the number of available resources for a given application such as the number of computing cores and the amount of memory. In the case of shared-memory computers, the increase in computing resources and available memory is usually constrained by the coherency protocol, whose overhead rises with system size, limiting the scalability of the final system. In this paper we propose an efficient and cost-effective way to increase the memory available for a given application by leveraging free memory in other computers in the cluster.\n Our proposal is based on the observation that many applications benefit from having more memory resources but do not require more computing cores, thus reducing the requirements for cache coherency and allowing a simpler implementation and better scalability.\n Simulation results show that, when additional mechanisms intended to hide remote memory latency are used, execution time of applications that use our proposal is similar to the time required to execute them in a computer populated with enough local memory, thus validating the feasibility of our proposal. We are currently building a prototype that implements our ideas.",
"corpus_id": 1203374,
"title": "A practical way to extend shared memory support beyond a motherboard at low cost"
} | {
"abstract": "Presents a collection of slides covering the following topics: AMD Opteron processor; x86 64-bit architecture evolution; Magny-Cours silicon; MCM 2.0 logical view; HyperTransport technology; HT assist; probe filter entry; cache coherence protocol; probe filter transaction scenarios; probe filter coverage ratio; and memory latency.",
"corpus_id": 22522094,
"title": "Blade computing with the AMD Opteron™ processor (\"magny-cours\")"
} | {
"abstract": "In recent years, different processing technologies have been engineered to fabricate capsules or particles with peculiar properties (e.g., swelling, pH-sensitive response) at the micro and sub-micrometric size scale, to be used as carriers for controlled drug and molecular release. Herein, the development of cellulose acetate (CA) micro-carriers with mono- (MC) or bi-phasic (BC) composition is proposed, fabricated via electrohydrodynamic atomization (EHDA)—an electro-dropping technology able to micro-size polymer solution by the application of high voltage electrostatic forces. Image analysis allows identification of the process parameters to optimize morphology, in terms of size distribution and shape. Meanwhile, an accurate rheological study has enabled investigating the interface between CA solutions with different viscosities to optimize BC systems. Release tests have confirmed that BC carriers can retain the drug more efficiently in acidic conditions, also providing a more gradual and sustained release until six days, with respect to MC carriers. Hence, all these results have proven that biphasic architecture significantly improves the capability of CA microcarriers to release ketoprofen lysinate, thus suggesting a new route to design core/shell systems for the retarded oral administration of anti-inflammatory drugs.",
"corpus_id": 73457232,
"score": 0,
"title": "Mono- and Bi-Phasic Cellulose Acetate Micro-Vectors for Anti-Inflammatory Drug Delivery"
} |
{
"abstract": null,
"corpus_id": 18814101,
"title": "Performance Measurement and Management ( PMM ) for SMEs : a literature review and a reference framework for PMM design"
} | {
"abstract": "The significance of aligning IT with corporate strategy is widely recognized, but the lack of appropriate methodologies prevented practitioners from integrating IT projects with competitive strategies effectively. This article addresses the issue of deploying Web services strategically using the concept of a widely accepted management tool, the balanced scorecard. A framework is developed to match potential benefits of Web services with corporate strategy in four business dimensions: innovation and learning, internal business process, customer, and financial. It is argued that the strategic benefits of implementing Web services can only be realized if the Web services initiatives are planned and implemented within the framework of an IT strategy that is designed to support the business strategy of a firm.",
"corpus_id": 26271377,
"title": "Integrating Web Services With Competitive Strategies: The Balanced Scorecard Approach"
} | {
"abstract": "In this paper, we study the problem of keyword search with access control (KSAC) over encrypted data in cloud computing. We first propose a scalable framework where user can use his attribute values and a search query to locally derive a search capability, and a file can be retrieved only when its keywords match the query and the user’s attribute values can pass the policy check. Using this framework, we propose a novel scheme called KSAC, which enables keyword search with access control over encrypted data. KSAC utilizes a recent cryptographic primitive called hierarchical predicate encryption to enforce fine-grained access control and perform multi-field query search. Meanwhile, it also supports the search capability deviation, and achieves efficient access policy update as well as keyword update without compromising data privacy. To enhance the privacy, KSAC also plants noises in the query to hide users’ access privileges. Intensive evaluations on real-world dataset are conducted to validate the applicability of the proposed scheme and demonstrate its protection for user’s access privilege.",
"corpus_id": 36222417,
"score": -1,
"title": "Keyword Search With Access Control Over Encrypted Cloud Data"
} |
{
"abstract": "Mexican American adolescents face disparities in mental health and academic achievement, perhaps in part because of discrimination experiences. However, culturally-related values, fostered by ethnic pride and socialization, may serve to mitigate the negative impact of discrimination. Guided by the Stress Process Model, the current study examined risk and protective processes using a 2-wave multi-informant study with 750 Mexican American families. Specifically, we examined two possible mechanisms by which Mexican American values may support positive outcomes in the context of discrimination; as a protective factor (moderator) or risk reducer (mediator). Analyses supported the role of Mexican American values as a risk reducer. This study underscores the importance of examining multiple mechanisms of protective processes in understanding Mexican American adolescent resilience.",
"corpus_id": 45427380,
"title": "Discrimination and adjustment for Mexican American adolescents: A prospective examination of the benefits of culturally-related values."
} | {
"abstract": null,
"corpus_id": 332539,
"title": "Acculturation, Internalizing Mental Health Symptoms, and Self-Esteem: Cultural Experiences of Latino Adolescents in North Carolina"
} | {
"abstract": "Guided by ecodevelopmental theory (Szapocznik & Coatsworth, 1999), this study investigated the influence of multiple social ecological levels (microsystem, mesosystem, macrosystem), domains (family, school, peers), and processes (support, conflict) on the development of externalizing and internalizing behavior problems in a sample of 150 middle-school age Hispanic females. Support and conflict in the family microsystem showed the strongest and most consistent bivariate and unique relations with behavior problems. Two mesosystem level variables, conflict between parents and daughters' peers and support between parent and school personnel, were also significantly related to mother and daughter reports of externalizing behavior, respectively. Acculturation interacted with family conflict to predict both internalizing and externalizing behavior, and with support between family and school to predict externalizing behavior. These results highlight the importance of accounting for multiple dimensions and levels of social context when investigating the development of psychopathology. Implications for interventions are discussed.",
"corpus_id": 144118370,
"score": -1,
"title": "Ecodevelopmental Correlates of Behavior Problemsin Young Hispanic Females"
} |
{
"abstract": "In any power plants, it is crucial to perform a preventive maintenance to avoid unexpected breakdown of machinery, e.g., circulating water pump, using data collected from various sensors. There have been prior attempts using just traditional prediction techniques. In this paper, we propose a two-stage model that employs a technique from time series analysis to predict when the machine tends to be failed for one day in advance. The first stage focuses on forecasting trends of each sensor using “Auto-Regression Integrated Moving Average (ARIMA).” Then, the second stage aims to classify failure mode using the predicted sensor values. The experiment was conducted on data collected from eight sensors within one year. The result is shown that our proposed algorithm significantly outperforms an existing technique, Regression Artificial Neural Network.",
"corpus_id": 38748491,
"title": "Fault detection for circulating water pump using time series forecasting and outlier detection"
} | {
"abstract": null,
"corpus_id": 16418454,
"title": "Using GIS to Explore the Relationship between Socioeconomic Status and Demographic Variables and Crime in Pittsburgh, Pennsylvania"
} | {
"abstract": "The study involved three experiments. The first, a parametric investigation of nictitating membrane conditioning with eight constant intertrial intervals (ITIs) between 5 and 120 sec, orthogonal to interstimulus intervals (ISIs) of 250 and 750 msec plus three temporal conditioning control groups, revealed that performance improved rapidly with increasing ITI but stabilized at relatively low ITI values. At 750-msec ISI, a decrement in performance was found at 60-sec ITI. Experiment II, using constant ITIs of 45–75 sec in 5-sec steps, at 750-msec ISI confirmed the trend toward a performance decrement around 60 sec, although the trend was weak and highly variable. Experiment III evaluated the differences in performance between constant and variable ITI, using three ITI values and three conditions of variation at each value. Findings were discussed in terms of differences in conditioning resulting from both length and degree of variation of ITI and some subtle effects which may emerge only when constant ITIs are used.",
"corpus_id": 143930451,
"score": -1,
"title": "Conditioning of the nictitating membrane response of the rabbit (Oryctolagus cuniculus) as a function of length and degree of variation of intertrial interval"
} |
{
"abstract": "The signal amplification technique of peptide nucleic acid (PNA)-based electrochemical DNA sensor was developed in a label-free and one-step method utilizing enzymatic catalysis. Electrochemical detection of DNA hybridization on a PNA-modified electrode is based on the change of surface charge caused by the hybridization of negatively charged DNA molecules. The negatively charged mediator, ferrocenedicarboxylic acid, cannot diffuse to the DNA hybridized electrode surface due to the charge repulsion with the hybridized DNA molecule while it can easily approach the neutral PNA-modified electrode surface without the hybridization. By employing glucose oxidase catalysis on this PNA-based electrochemical system, the oxidized mediator could be immediately reduced leading to greatly increased electrochemical signals. Using the enzymatic strategy, we successfully demonstrated its clinical utility by detecting one of the mutation sequences of the breast cancer susceptibility gene BRCA1 at a sample concentration lower than 10(-9) M. Furthermore, a single base-mismatched sample could be also discriminated from a perfectly matched sample.",
"corpus_id": 742420,
"title": "Enzyme-catalyzed signal amplification for electrochemical DNA detection with a PNA-modified electrode."
} | {
"abstract": "A homothymine PNA decamer bearing four lysine residues has been synthesized as a probe for the development of amperometric sensors. On one hand, the four amino groups introduced make this derivative nine times more soluble than the corresponding homothymine PNA decamer and, on the other hand, allow the stable anchoring of this molecule on Au nanostructured surface through the terminal -NH2 moieties. In particular, XPS and electrochemical investigations performed with hexylamine, as a model molecule, indicate that the stable deposition of primary amine derivatives on such a nanostructured surface is possible and involves the free electron doublet on the nitrogen atom. This finding indicates that this PNA derivative is suitable to act as the probe molecule for the development of amperometric sensors. Thanks to the molecular probe chosen and to the use of a nanostructured surface as the substrate for the sensor assembly, the device proposed makes possible the selective recognition of the target oligonucleotide sequence with very high sensitivity.",
"corpus_id": 11254709,
"title": "Peptide nucleic acids tagged with four lysine residues for amperometric genosensors"
} | {
"abstract": "Factors contributing to the establishment of the earliest college health programs are reviewed. The author considers the evolution of these programs for two periods: the first 100 years (1860-1960) and the next 30 years (1960-1990). The changing emphases in college health programs during these two periods are seen as responses to contemporaneous events, including the development of vaccines and other advances in science and medicine, the emergence of intercollegiate athletics--first as a significant element in the college experience and subsequently as a major business--and the expansion of higher education in response to the arrival of the baby boomers in the mid-1960s. Contemporary healthcare reform is briefly reviewed, and the author concludes with an assessment of the probable impact of current healthcare reform proposals on the future of college health programs and on campus-controlled health centers.",
"corpus_id": 29639049,
"score": 0,
"title": "The evolution of medical services for students at colleges and universities in the United States."
} |
{
"abstract": "This unit provides thorough coverage of the most useful chemical and enzyme probes that can be used to examine RNA secondary and tertiary structure. Footprinting methods are presented using dimethyl sulfate, diethyl pyrocarbonate, ethylnitrosourea, kethoxal, CMCT, and nucleases. For chemical probes, both strand scission and primer extension detection protocols are included.",
"corpus_id": 6485724,
"title": "Probing RNA Structure with Chemical Reagents and Enzymes"
} | {
"abstract": "The structures of RNA molecules are often important for their function and regulation, yet there are no experimental techniques for genome-scale measurement of RNA structure. Here we describe a novel strategy termed parallel analysis of RNA structure (PARS), which is based on deep sequencing fragments of RNAs that were treated with structure-specific enzymes, thus providing simultaneous in vitro profiling of the secondary structure of thousands of RNA species at single nucleotide resolution. We apply PARS to profile the secondary structure of the messenger RNAs (mRNAs) of the budding yeast Saccharomyces cerevisiae and obtain structural profiles for over 3,000 distinct transcripts. Analysis of these profiles reveals several RNA structural properties of yeast transcripts, including the existence of more secondary structure over coding regions compared with untranslated regions, a three-nucleotide periodicity of secondary structure across coding regions and an anti-correlation between the efficiency with which an mRNA is translated and the structure over its translation start site. PARS is readily applicable to other organisms and to profiling RNA structure in diverse conditions, thus enabling studies of the dynamics of secondary structure at a genomic scale.",
"corpus_id": 4344837,
"title": "Genome-wide measurement of RNA secondary structure in yeast"
} | {
"abstract": "This paper reviews the existing literature on venture capital and private equity. The paper emphasises the importance of examining venture capital in the light of recent developments in corporate finance and its distinctiveness from other forms of finance. In order to understand current developments, the paper adopts a framework which combines industry/market and firm levels of analysis. Existing literature is reviewed using this framework. Industry level issues relate to rivalry between firms, the power of suppliers and customers, and the threats from new entrants and substitutes. Firm level issues concern deal generation, initial and second screening, valuation and due diligence, deal approval and structuring, post-contractual monitoring, investment realisation, and entrepreneurs' exit and recontracting with venture capitalists. This is followed by a review of the evidence on the performance of venture capital firms. The paper suggests potentially fruitful areas for further research including the extension of analysis to cover all stages of venture capital investment, examination of the inter-linkages between industry and firm level issues and between stages in the venture capital process, as well as further analysis of deal structuring issues and investment realisation and recontracting. Copyright Blackwell Publishers Ltd 1998.",
"corpus_id": 154028463,
"score": 0,
"title": "Venture Capital and Private Equity: A Review and Synthesis"
} |
{
"abstract": "In this paper we suggest advanced IEEE 802.11ax TCP-aware scheduling strategies for optimizing the AP operation under transmission of unidirectional TCP traffic. Our scheduling strategies optimize the performance using the capability for Multi User transmissions over the Uplink, first introduced in IEEE 802.11ax, together with Multi User transmissions over the Downlink. They are based on Transmission Opportunities (TXOP) and we suggest three scheduling strategies determining the TXOP formation parameters. In one of the strategies one can control the achieved Goodput vs. the delay. We also assume saturated WiFi transmission queues. We show that with minimal Goodput degradation one can avoid considerable delays.",
"corpus_id": 4389923,
"title": "Advanced IEEE 802.11ax TCP aware scheduling under unreliable channels"
} | {
"abstract": "The current family of 802.11 protocols are based on the Carrier Sense Multiple Access (CSMA) mechanism which is a simple and robust means of sharing a channel. However, two current trends in wireless networks point towards a situation where CSMA fails to perform better than pure random access solutions such as ALOHA. The first trend is the ever increasing raw data rate in each generation of 802.11 which is set to continue with the current 802.11ax standardisation. The second is the move towards smaller frames as end users increasingly use mobile devices instead of desktop computers. We show that as the ratio of propagation delay to packet transmission time increases, the performance of CSMA degrades correspondingly, to the point where ALOHA outperforms CSMA.",
"corpus_id": 1396302,
"title": "The failure of CSMA in emerging wireless network scenarios"
} | {
"abstract": null,
"corpus_id": 14439187,
"score": -1,
"title": "Low Crest Factor Modulation Techniques for Orthogonal Frequency Division Multiplexing ( OFDM )"
} |
{
"abstract": "Background ::: Computer-aided diagnosis (CAD) for colonoscopy may help endoscopists distinguish neoplastic polyps (adenomas) requiring resection from nonneoplastic polyps not requiring resection, potentially reducing cost. ::: ::: ::: Objective ::: To evaluate the performance of real-time CAD with endocytoscopes (×520 ultramagnifying colonoscopes providing microvascular and cellular visualization of colorectal polyps after application of the narrow-band imaging [NBI] and methylene blue staining modes, respectively). ::: ::: ::: Design ::: Single-group, open-label, prospective study. (UMIN [University hospital Medical Information Network] Clinical Trial Registry: UMIN000027360). ::: ::: ::: Setting ::: University hospital. ::: ::: ::: Participants ::: 791 consecutive patients undergoing colonoscopy and 23 endoscopists. ::: ::: ::: Intervention ::: Real-time use of CAD during colonoscopy. ::: ::: ::: Measurements ::: CAD-predicted pathology (neoplastic or nonneoplastic) of detected diminutive polyps (≤5 mm) on the basis of real-time outputs compared with pathologic diagnosis of the resected specimen (gold standard). The primary end point was whether CAD with the stained mode produced a negative predictive value (NPV) of 90% or greater for identifying diminutive rectosigmoid adenomas, the threshold required to \"diagnose-and-leave\" nonneoplastic polyps. Best- and worst-case scenarios assumed that polyps lacking either CAD diagnosis or pathology were true- or false-positive or true- or false-negative, respectively. ::: ::: ::: Results ::: Overall, 466 diminutive (including 250 rectosigmoid) polyps from 325 patients were assessed by CAD, with a pathologic prediction rate of 98.1% (457 of 466). 
The NPVs of CAD for diminutive rectosigmoid adenomas were 96.4% (95% CI, 91.8% to 98.8%) (best-case scenario) and 93.7% (CI, 88.3% to 97.1%) (worst-case scenario) with stained mode and 96.5% (CI, 92.1% to 98.9%) (best-case scenario) and 95.2% (CI, 90.3% to 98.0%) (worst-case scenario) with NBI. ::: ::: ::: Limitation ::: Two thirds of the colonoscopies were conducted by experts who had each experienced more than 200 endocytoscopies; 186 polyps not assessed by CAD were excluded. ::: ::: ::: Conclusion ::: Real-time CAD can achieve the performance level required for a diagnose-and-leave strategy for diminutive, nonneoplastic rectosigmoid polyps. ::: ::: ::: Primary Funding Source ::: Japan Society for the Promotion of Science.",
"corpus_id": 51986244,
"title": "Real-Time Use of Artificial Intelligence in Identification of Diminutive Polyps During Colonoscopy: A Prospective Study"
} | {
"abstract": null,
"corpus_id": 31369642,
"title": "Symptomatic Adults Colorectal Neoplasms in Asymptomatic and Prevalence of Nonpolypoid ( Flat and Depressed )"
} | {
"abstract": "This work presents methods to automatically find optimal parameter settings for convolutional neural networks (CNNs) by using an evolutionary algorithm called particle swarm optimization (PSO). Even though the parameter space is extremely large (> 10 20), we experimentally show that a better parameter setting can be found for Alexnet configuration for five different image datasets. We have also developed two candidate pruning algorithms for efficient evolutionary process. In the experiments, we achieved 0.7-5.7% improvements from the original parameter sets in Caffe, while requiring only 2-4% of processing cost of the naive PSO-based approach.",
"corpus_id": 2783101,
"score": -1,
"title": "Efficient Optimization of Convolutional Neural Networks Using Particle Swarm Optimization"
} |
{
"abstract": "Although neuropsychological testing has proven to be a valuable tool in concussion management, it is most useful when administered as part of a comprehensive assessment battery that includes grading of symptoms and clinical balance tests. A thorough sideline and clinical examination by the certified athletic trainer and team physician is considered an important first step in the management of concussion. The evaluation should be conducted in a systematic manner, whether on the field or in the clinical setting. The evaluation should include obtaining a history for specific details about the injury (eg, mechanism, symptomatology, concussion history), followed by assessing neurocognitive function and balance, which is the focus of this article. The objective measures from balance testing can provide clinicians with an additional piece of the concussion puzzle, remove some of the guesswork in uncovering less obvious symptoms, and assist in determining readiness to return safely to participation.",
"corpus_id": 990248,
"title": "Balance assessment in the management of sport-related concussion."
} | {
"abstract": "Sport concussion (SC) has emerged as a major health concern in the medical community and general public owing to increased research and media attention, which has primarily focused on male athletes. Female athletes have an equal, if not increased, susceptibility to SC. An ever-growing body of research continues to compare male and female athletes in terms of SC before and after an injury. Clinicians must be cognizant of this literature to make evidence-based clinical decision when providing care to female athletes and discern between dated and/or unsupported claims in terms of SC.",
"corpus_id": 1850374,
"title": "Sport Concussion and the Female Athlete."
} | {
"abstract": "We publish below a piece by Raphael Lemkin (1901^1959) on the genocide of Ukrainians perpetrated, according to Lemkin, by the Soviet authorities between 1926 and 1946. This document was kindly brought to our attention by Roman Serbyn, Proffessor of History at the University of Que¤ bec at Montreal, who also supplied a transcript of the original text and wrote an introductory note. The document was known to Lemkin specialists and experts in genocide, although most scholars have tended to ignore it, or to play it down (notable exceptions are J. Cooper, Ralph Lemkin and the Struggle for the Genocide Convention (NewYork: Palgrave Macmillan, 2007), 253, as well as J.-L. Panne¤ , ‘Rafae« l Lemkin ou le pouvoir d’un sans-pouvoir’, in Rafae« l Lemkin, Qu’est-ce qu’un ge¤ nocide? Pre¤ sentation par Jean-Louis Panne¤ (Monaco: E¤ dition du Rocher, 2008)). It seemed to us that this short article by Lemkin sheds much light on his view of genocide as the annihilation of a ‘national group’. a.c.",
"corpus_id": 145785211,
"score": 0,
"title": "Lemkin on Genocide of Nations"
} |
{
"abstract": "We address the problem of automatic analysis of geometrically proximate nets in VLSI layout by presenting a framework (named FASCL) which supports pairwise analysis of nets based on a geometric kernel. The exact form of the analysis function can be specified to the kernel, which assumes a coupling function based on pairwise interaction between geometrically proximate nets. The user can also attach these functions to conditions and FASCL will automatically apply the function to all pairs of nets which satisfy a condition. Our method runs with sub-quadratic time complexity, O(N/sup 1+k/), where N is the number of nets and we have proved that k<1. We have successfully used the program to analyze circuits for bridging faults, coupling capacitance extraction, crosstalk analysis, signal integrity analysis and delay fault testing.",
"corpus_id": 432678,
"title": "On automatic analysis of geometrically proximate nets in VLSI layout"
} | {
"abstract": "Noise analysis and avoidance is an increasingly critical step in deep submicron design. Ever increasing requirements on performance have led to widespread use of dynamic logic circuit families and its other derivatives. These aggressive circuit families trade off noise margin for timing performance making them more susceptible to noise failure and increasing the need for noise analysis. Currently, noise analysis is performed either through circuit or timing simulation or through model order reduction. These techniques in use are still inefficient for analyzing massive amount of interconnect data found in present day integrated circuits. This paper presents efficient techniques for estimation of coupled noise in on-chip interconnects. This noise estimation metric is an upper bound for RC circuits, being similar in spirit to Elmore delay in timing analysis. Such an efficient noise metric is especially useful for noise criticality pruning and physical design based noise avoidance techniques.",
"corpus_id": 14179320,
"title": "Efficient coupled noise estimation for on-chip interconnects"
} | {
"abstract": "Targeted advertising benefits consumers by delivering them only the messages that match their interests, and also helps advertisers by identifying only the consumers interested in their messages. Although targeting mechanisms for online advertising are well established, pervasive computing environments lack analogous approaches. This paper explores the application of activity inferencing to targeted advertising. We present two mechanisms that link activity descriptions with ad content: direct keyword matching using an online advertising service, and \"human computation\" matching, which enhances keyword matching with help from online workers. The direct keyword approach is easier to engineer and responds more quickly, whereas the human computation approach has the potential to target more effectively.",
"corpus_id": 4325657,
"score": 1,
"title": "An Exploration into Activity-Informed Physical Advertising Using PEST"
} |
{
"abstract": "Dense ceramic materials can form in nature under mild temperatures in water. By contrast, man-made ceramics often require sintering temperatures in excess of 1,400 °C for densification. Chemical strategies inspired by biomineralization processes have been demonstrated but remain limited to the fabrication of thin films and particles. Besides biomineralization, the formation of dense ceramic-like materials such as limestone also occurs in nature through large-scale geological processes. Inspired by the geological compaction of mineral sediments in nature, we report a room-temperature method to produce dense and strong ceramics within timescales comparable to those of conventional manufacturing processes. Using nanoscale powders and high compaction pressures, we show that such cold sintering process can be realized with water at room temperature to result in centimetre-sized bulk parts with specific strength that is comparable to, and occasionally even higher than, that of traditional structural materials like concrete.",
"corpus_id": 3513178,
"title": "Geologically-inspired strong bulk ceramics made with water at room temperature"
} | {
"abstract": "Despite lower hardness, stiffness, and resistance to harsh environments, heavy metallic parts and soft polymer-based composites are often preferred to ceramics because they offer higher resilience. By contrast, highly mineralized biomaterials combine these properties through hierarchical and heterogeneous architecture. Reproducing these internal designs into synthetic highly mineralized materials would therefore widen their range of application. To this aim, external fields have been used to control the orientation and position of microparticles and build complex architectures. This approach is compatible with most manufacturing processes and provides large flexibility in design. Here, I present an overview of these processes and describe how they can augment the properties of the materials produced. Theoretical and experimental descriptions are detailed to determine the strengths and limitations of each technique. With this knowledge, potential areas of improvement and future research directions will lead to the creation of highly mineralized materials with unprecedented functionalities.",
"corpus_id": 76651539,
"title": "External fields for the fabrication of highly mineralized hierarchical architectures"
} | {
"abstract": "A cloud parametrization scheme which allows for low, medium, high and convective clouds has been developed from GATE data for use in the Meteorological Office 11-layer tropical model. The problems involved in using synoptic observations to derive methods of predicting clouds are discussed. Only limited success was obtained in relating observed cloud amounts to relative humidity and atmospheric temperature structure. The restrictions imposed on the cloud scheme by the model's resolution and by its inability to produce a perfect simulation are considered. In the light of these difficulties a simple approach was adopted based on the assumption that condensation on the smallest scales is part of a larger-scale condensation regime related to the synoptic scale situation. The scheme has been designed to reproduce the main features of a cloud field by relating the large-scale meteorological features associated with a cloud distribution to model variables. Low, medium and high cloud amounts are determined from a quadratic relationship with relative humidity. Low cloud has also been related to the temperature lapse rate in an attempt to model the persistent areas of sub-tropical stratocumulus occurring under inversions. A relative humidity relationship is inappropriate for convective cloud which has, therefore, been related to the convective mass flux calculated in the convection scheme of the model. The scheme has been reasonably successful in predicting the cloudiness associated with the ITCZ and the NE. and SE. trades. The cloud fields showed a good degree of coherence from day to day and there were no signs of unrealistic feedbacks between radiation, cloud and dynamics.",
"corpus_id": 123473714,
"score": 2,
"title": "A cloud parametrization scheme derived from GATE data for use with a numerical model"
} |
{
"abstract": "Knowledge discovery in high dimensional data is a challenging enterprise, but new visual analytic tools appear to offer users remarkable powers if they are ready to learn new concepts and interfaces. Our 3-year effort to develop versions of the Hierarchical Clustering Explorer (HCE) began with building an interactive tool for exploring clustering results. It expanded, based on user needs, to include other potent analytic and visualization tools for multivariate data, especially the rank-by-feature framework. Our own successes using HCE provided some testimonial evidence of its utility, but we felt it necessary to get beyond our subjective impressions. This paper presents an evaluation of the Hierarchical Clustering Explorer (HCE) using three case studies and an email user survey (n=57) to focus on skill acquisition with the novel concepts and interface for the rank-by-feature framework. Knowledgeable and motivated users in diverse fields provided multiple perspectives that refined our understanding of strengths and weaknesses. A user survey confirmed the benefits of HCE, but gave less guidance about improvements. Both evaluations suggested improved training methods.",
"corpus_id": 61792,
"title": "Knowledge Discovery in High Dimensional Data: Case Studies and a User Survey for an Information Visualization Tool"
} | {
"abstract": "BACKGROUND\nBioinformatics visualization tools are often not robust enough to support biomedical specialists’ complex exploratory analyses. Tools need to accommodate the workflows that scientists actually perform for specific translational research questions. To understand and model one of these workflows, we conducted a case-based, cognitive task analysis of a biomedical specialist’s exploratory workflow for the question: What functional interactions among gene products of high throughput expression data suggest previously unknown mechanisms of a disease?\n\n\nRESULTS\nFrom our cognitive task analysis four complementary representations of the targeted workflow were developed. They include: usage scenarios, flow diagrams, a cognitive task taxonomy, and a mapping between cognitive tasks and user-centered visualization requirements. The representations capture the flows of cognitive tasks that led a biomedical specialist to inferences critical to hypothesizing. We created representations at levels of detail that could strategically guide visualization development, and we confirmed this by making a trial prototype based on user requirements for a small portion of the workflow.\n\n\nCONCLUSIONS\nOur results imply that visualizations should make available to scientific users “bundles of features†consonant with the compositional cognitive tasks purposefully enacted at specific points in the workflow. We also highlight certain aspects of visualizations that: (a) need more built-in flexibility; (b) are critical for negotiating meaning; and (c) are necessary for essential metacognitive support.",
"corpus_id": 15222179,
"title": "A cognitive task analysis of a visual analytic workflow: Exploring molecular interaction networks in systems biology"
} | {
"abstract": "Timeboxes are rectangular widgets that can be used in direct-manipulation graphical user interfaces (GUIs) to specify query constraints on time series data sets. Timeboxes are used to specify simultaneously two sets of constraints: given a set of N time series profiles, a timebox covering time periods x 1…x 2 (x 1 ≤ x 2) and values y 1…y 2 (y 1 ≤ y 2) will retrieve only those n√N that have values y 1 ≤ y 2 during all times x 1 ≤ x ≤ x 2. TimeSearcher is an information visualization tool that combines timebox queries with overview displays, query-by-example facilities, and support for queries over multiple time-varying attributes. Query manipulation tools including pattern inversion and ‘leaders & laggards’ graphical bookmarks provide additional support for interactive exploration of data sets. Extensions to the basic timebox model that provide additional expressivity include variable time timeboxes, which can be used to express queries with variability in the time interval, and angular queries, which search for ranges of differentials, rather than absolute values. Analysis of the algorithmic requirements for providing dynamic query performance for timebox queries showed that a sequential search outperformed searches based on geometric indices. Design studies helped identify the strengths and weaknesses of the query tools. Extended case studies involving the analysis of two different types of data from molecular biology experiments provided valuable feedback and validated the utility of both the timebox model and the TimeSearcher tool. Timesearcher is available at http://www.cs.umd.edu/hcil/timesearcher",
"corpus_id": 5628923,
"score": 2,
"title": "Dynamic Query Tools for Time Series Data Sets: Timebox Widgets for Interactive Exploration"
} |
{
"abstract": "Epigraph is an efficient graph-based algorithm for designing vaccine antigens to optimize potential T-cell epitope (PTE) coverage. Epigraph vaccine antigens are functionally similar to Mosaic vaccines, which have demonstrated effectiveness in preliminary HIV non-human primate studies. In contrast to the Mosaic algorithm, Epigraph is substantially faster, and in restricted cases, provides a mathematically optimal solution. Epigraph furthermore has new features that enable enhanced vaccine design flexibility. These features include the ability to exclude rare epitopes from a design, to optimize population coverage based on inexact epitope matches, and to apply the code to both aligned and unaligned input sequences. Epigraph was developed to provide practical design solutions for two outstanding vaccine problems. The first of these is a personalized approach to a therapeutic T-cell HIV vaccine that would provide antigens with an excellent match to an individual’s infecting strain, intended to contain or clear a chronic infection. The second is a pan-filovirus vaccine, with the potential to protect against all known viruses in the Filoviradae family, including ebolaviruses. A web-based interface to run the Epigraph tool suite is available (http://www.hiv.lanl.gov/content/sequence/EPIGRAPH/epigraph.html).",
"corpus_id": 1977846,
"title": "Epigraph: A Vaccine Design Tool Applied to an HIV Therapeutic Vaccine and a Pan-Filovirus Vaccine"
} | {
"abstract": "Malaria is a global health burden, and a major cause of mortality and morbidity in Africa. Here we designed a putative malaria epitope ensemble vaccine by selecting an optimal set of pathogen epitopes. From the IEDB database, 584 experimentally-verified CD8+ epitopes and 483 experimentally-verified CD4+ epitopes were collected; 89% of which were found in 8 proteins. Using the PVS server, highly conserved epitopes were identified from variability analysis of multiple alignments of Plasmodium falciparum protein sequences. The allele-dependent binding of epitopes was then assessed using IEDB analysis tools, from which the population protection coverage of single and combined epitopes was estimated. Ten conserved epitopes from four well-studied antigens were found to have a coverage of 97.9% of the world population: 7 CD8+ T cell epitopes (LLMDCSGSI, FLIFFDLFLV, LLACAGLAYK, TPYAGEPAPF, LLACAGLAY, SLKKNSRSL, and NEVVVKEEY) and 3 CD4+ T cell epitopes (MRKLAILSVSSFLFV, KSKYKLATSVLAGLL and GLAYKFVVPGAATPYE). The addition of four heteroclitic peptides - single point mutated epitopes - increased HLA binding affinity and raised the predicted world population coverage above 99%.",
"corpus_id": 24561389,
"title": "In silico design of knowledge-based Plasmodium falciparum epitope ensemble vaccines."
} | {
"abstract": "The problem of an anti-plane Griffith crack moving along the interface of dissimilar piezoelectric materials is solved by using the integral transform technique. It is shown from the result that the intensity factors of anti-plane stress and electric displacement are dependent on the speed of the Griffith crack as well as the material coefficients. When the two piezoelectric materials are identical, the present result will reduce to the result for the problem of an anti-plane moving Griffith crack in homogeneous piezoelectric materials.",
"corpus_id": 134672944,
"score": 0,
"title": "Griffith crack moving along the interface of two dissimilar piezoelectric materials"
} |
{
"abstract": "Academically, quantitative measurement of texture is essential for the study of the chemical and physiological mechanisms of texture. Commercially, quantitative measurement of texture is essential to ensure the quality of produce at packout. The diversity of tissues involved, the variety of attributes required to fully describe textural properties, and the changes in these attributes as the product ripens and senesces contribute to the complexity of texture measurement. Texture is a human assessment of the structural elements of a food. It is generally accepted that texture relates primarily to mechanical properties, so instrumental measurements relate mainly to mechanical properties. Fruits and vegetables exhibit viscoelastic behavior under mechanical loading, which means that the force, distance, and time involved in loading determine the value of any measurement. Because of their viscoelastic character, every effort should be made to hold the speed of the test constant in manual texture measurements and the rate of loading should be specified and controlled in mechanized measurements. There are many types of mechanical loading: puncture, compression, shearing, torsion (twisting), extrusion, crushing, tension, bending, vibration, and impact. The most widely used texture measurement for fruits and vegetables, after manual squeezing of course, is the Magness-Taylor fruit firmness test, which measures the maximum force to puncture the product in a specified way. The Kramer shear or shear-compression test is widely used in the processed foods industry, but is less commonly used by horticulturists. Nondestructive methods are highly desired both for sorting and for postharvest research. Compression tests of excised tissue pieces are frequently used in research. Nondestructive testing using impact, vibrational behavior, light scattering, and optical methods are being investigated but none has been widely accepted to date. 
Multiple instrumental measurements may be necessary to adequately characterize the diversity of textural attributes sensed by the human consumer.",
"corpus_id": 7873014,
"title": "Textural quality assessment for fresh fruits and vegetables."
} | {
"abstract": "Tomato (Lycopersicon esculentum Mill.) genotypes varying in intrinsic firmness were examined to determine the quantitative relationships between polygalacturonase (EC 3.2.1.15) activity, firmness and other ripening parameters including rate (days from mature-green to full red) and intensity (rate of ethylene production at climacteric peak) of ripening. Texture, respiration and ethylene production were monitored in the immature-green through the red (ripe) stages of development. Polygalacturonase activity was measured by direct assay of salt-extractable wall protein or by monitoring the release of pectins from isolated, enzymically active wall. In all fruit, polygalacturonase activity was highly correlated with pericarp softening, but only moderately correlated with softening of whole fruit (r = 0.920 and 0.757, respectively). Polygalacturonase activity was positively correlated with cell-wall autolytic activity in pink (r = 0.969) and red (r = 0.900) fruit. Firmer genotypes exhibited lower rates of respiration and ethylene production during ripening. Polygalacturonase activity in isolates prepared from fruit at the climacteric peak was positively correlated with ethylene production and respiration, and negatively correlated with days to ripening (r = 0.929, 0.805, and -0.791, respectively). The data demonstrate the importance of selecting the appropriate method of firmness determination and are consistent with the hypothesis that pectin fragments released by polygalacturonase contribute to the production of autocatalytic (system II) ethylene.",
"corpus_id": 86251760,
"title": "Physiology and firmness determination of ripening tomato fruit"
} | {
"abstract": "The net energy (NE) system takes into account the metabolic utilisation of energy and has been proposed as a superior system for characterising the energy value of feeds. In growing pigs, the inefficiency of ME utilisation for NE (or the heat increment, HI) is dependent on many factors, among them the genotype, which implies that published NE prediction equations may not apply across all genotypes. We conducted a study to investigate the effect of two genotypes (Yorkshire-Hampshire♀ × Duroc♂; YH × D) and Large white♀ × Landrace♂; LW × LR) on heat production (HP) and NE value of a corn soybean meal-based diet fed to growing pigs. The diet met or exceeded the nutrient specifications of 20–50 kg b.w. pigs according to NRC (1998). A total of sixteen barrows were used, eight of each genotype (initial b.w. of 20.1 ± 1.1 and 19.0 ± 0.9 kg for YH ×D and LW × LR, respectively). Pigs were initially fed at 550 kcal/kg b.w.–0.60/day (high ME intake) for determination of DE and ME in metabolism crates. Thereafter, HP was measured using an indirect calorimeter at either high ME or 330 kcal/kg b.w.–0.60/day (low ME intake) to estimate fasting HP (FHP) by regression. Pigs were allowed a 3-d adaptation period at low ME intake before measurement of HP. Irrespective of the genotype, a reduction of ME intake resulted in a decrease (P < 0.0001) of HP (352 for high ME vs. 292 kcal/kg b.w.–0.60/day for low ME). Pigs of LW × LR tended (P = 0.07) to have higher HP than those of YH× D and their estimated FHP was 175 and 103 kcal/kg b.w.–0.60/day, respectively. The determined diet NE value was lower for the YHxD genotype (2,307 vs. 2633 kcal/kg DMI, P = 0.01) than for the LW × LR genotype. Pigs of LW × LR genotype showed lower (179 vs. 226 kcal/kg b.w.–0.60/day, P = 0.003) HI than YH × D genotype and were determined to retain less energy as protein (100 vs. 123 kcal/kg b.w.–0.60/day, P =0.04) and more energy as fat (73 vs. 42 kcal/kg b.w.–0.60/day, P = 0.04). 
The diet NE value was 96% (LW × LR) and 81% (YH × D) of the predicted NE from published equations. In conclusion, a corn-soybean meal fed at equal amounts resulted in different HP and NE value depending on genotype.",
"corpus_id": 8067536,
"score": 1,
"title": "Effect of genotype on heat production and net energy value of a corn-soybean meal-based diet fed to growing pigs."
} |
{
"abstract": "Systems based on synchronous grammars and tree transducers promise to improve the quality of statistical machine translation output, but are often very computationally intensive. The complexity is exponential in the size of individual grammar rules due to arbitrary re-orderings between the two languages, and rules extracted from parallel corpora can be quite large. We devise a linear-time algorithm for factoring syntactic re-orderings by binarizing synchronous rules when possible and show that the resulting rule set significantly improves the speed and accuracy of a state-of-the-art syntax-based machine translation system.",
"corpus_id": 2506060,
"title": "Synchronous Binarization For Machine Translation"
} | {
"abstract": null,
"corpus_id": 15193589,
"title": "An Evaluation Exercise for Word Alignment"
} | {
"abstract": "This paper proposes an overview of waste-to-energy conversion by gasification processes based on thermal plasma. In the first part, basic aspects of the gasification process have been discussed: chemical reaction in gasification, main reactor configuration, chemical conversion performances, tar content in syngas and performances in function of the design and the operation conditions (temperature, pressure, oxidizing agent…). In the second part of the paper are compared the performances, available in the scientific literature, of various waste gasification processes based on thermal plasma (DC or AC plasma torches) at lab scale versus typical performances of waste autothermal gasification: LHV of the syngas, cold gas efficiency and net electrical efficiency. In the last part, a review has been done on the various torch technologies used for waste gasification by plasma at industrial scale, the major companies on this market and the perspectives of the industrial development of the waste gasification by thermal plasma. The main conclusions are that plasma technology is considered as a highly attractive route for the processing of waste-to-energy and can be easily adapted to the treatment of various wastes (municipal solid wastes, heavy oil, used car tires, medical wastes…). The high enthalpy, the residence time and high temperature in plasma can advantageously improve the conditions for gasification, which are inaccessible in other thermal processes and can allow reaching, due to low tar content in the syngas, better net electrical efficiency than autothermal processes.",
"corpus_id": 11909884,
"score": -1,
"title": "Waste Gasification by Thermal Plasma : A Review"
} |
{
"abstract": "Aiming at the varying cutting features of depth and thickness in high-speed milling, using mathematical methods to model the theoretical three-dimensional model of calculating cutting forces based on the machining principle. First of all, according to the oblique cutting model, a cutting force model of flank edge was presented. The differential method was used in this process. The model was approached with calculating instantaneous chip thickness based on real tooth trajectory. Secondly, the chisel edge for differential along the vertical direction of cutting edges according to the orthogonal cutting model, calculating the cutting force of the infinitesimal. The cutting force model of chisel edge was constructed by the integral method. Merging the both upon, then the three-dimensional cutting force model is established. In the end, the model was programmed by means of the software Matlab. The result indicates that the numerical results agree well with experimental data, and the foundation of the cutter’s stress field can be laid by this model.",
"corpus_id": 582618,
"title": "The 3D Modeling of Cutting Force in High-speed Milling for Flat Mill"
} | {
"abstract": "In order to study the chip formation mechanism in metal cutting process, based on finite element software ABAQUS, establish finite element model, and carry out numerical simulation on serrated chip formation of Ni-base superalloy GH4169 and ribbon chip formation of 45# steel respectively.In addition, analyze the influence law of three factors (cutting speed, feed rate, back cutting depth) on cutting force and the distribution rule of cutting heat in serrated chip formation of GH4169. DOI: http://dx.doi.org/10.11591/telkomnika.v10i3.608",
"corpus_id": 9389676,
"title": "Numerical Simulation of Chip Formation in Metal Cutting Process"
} | {
"abstract": "The mechanistic and unified mechanics of cutting approaches to the prediction of forces in milling operations are briefly described and compared. The mechanistic approach is shown to depend on milling force coefficients determined from milling tests for each cutter geometry. By contrast the unified mechanics of cutting approach relies on an experimentally determined orthogonal cutting data base (i.e., shear angle, friction coefficient and shear stress), incorporating the tool geometrical variables, and milling models based on a generic oblique cutting analysis. It is shown that the milling force coefficients for all force components and cutter geometrical designs can be predicted from an orthogonal cutting data base and the generic oblique cutting analysis for use in the predictive mechanistic milling models. This method eliminates the need for the experimental calibration of each milling cutter geometry for the mechanistic approach to force prediction and can be applied to more complex cutter designs. This method of milling force coefficient prediction has been experimentally verified when milling Ti 6 Al 4 V titanium alloy for a range of chatter, eccentricity and run-out free cutting conditions and cutter geometrical specifications.",
"corpus_id": 108507937,
"score": 2,
"title": "Prediction of Milling Force Coefficients From Orthogonal Cutting Data"
} |
{
"abstract": "Magnetic Penrose process (MPP) is not only the most exciting and fascinating process mining the rotational energy of black hole but it is also the favored astrophysically viable mechanism for high energy sources and phenomena. It operates in three regimes of efficiency, namely low, moderate and ultra, depending on the magnetization and charging of spinning black holes in astrophysical setting. In this paper, we revisit MPP with a comprehensive discussion of its physics in different regimes, and compare its operation with other competing mechanisms. We show that MPP could in principle foot the bill for powering engine of such phenomena as ultra-high-energy cosmic rays, relativistic jets, fast radio bursts, quasars, AGNs, etc. Further, it also leads to a number of important observable predictions. All this beautifully bears out the promise of a new vista of energy powerhouse heralded by Roger Penrose half a century ago through this process, and it has today risen in its magnetically empowered version of mid 1980s from a purely thought experiment of academic interest to a realistic powering mechanism for various high-energy astrophysical phenomena.",
"corpus_id": 153312978,
"title": "Fifty Years of Energy Extraction from Rotating Black Hole: Revisiting Magnetic Penrose Process"
} | {
"abstract": "Using axisymetric general relativistic magnetohydrodynamics simulations we study evolution of accretion torus around black hole endowed with different initial magnetic field configurations. Due to accretion of material onto black hole, parabolic magnetic field will develop in accretion torus funnel around vertical axis, for any initial magnetic field configuration.",
"corpus_id": 215786205,
"title": "Simulations of black hole accretion torus in various magnetic field configurations"
} | {
"abstract": "Abstract In previous papers (A.D. Erlykin, A.W. Wolfendale, Astropart. Phys. 7 (1997) 1, 203; 8 (1998) 265; J. Phys. G 23 (1997) 979), we presented evidence for structure in the size spectrum of cosmic ray air showers which we interpreted as due to the presence of oxygen and iron nuclei from a local, recent, supernova remnant. Although the energies in question are 3 × 1015 eV and 1.2 × 1016 eV, well above those where direct measurements are possible, the direct measurements are, in fact, relevant. We find that the direct measurements are quite consistent with an extrapolation back of our spectra. Indeed, taken alone, the direct measurements themselves provide strong evidence for the existence of an extra, single source contribution to the total energy spectrum. The paper also includes a discussion of the high energy electron spectrum, anisotropies and the likely site of the local SNR.",
"corpus_id": 123476193,
"score": 2,
"title": "High energy cosmic ray spectroscopy. IV. The evidence from direct observations at lower energies and directional anisotropies"
} |
{
"abstract": "Random surface roughness very often can occur during the growth or etching of films under non-equilibrium conditions. Several competing mechanisms such as noise, surface diffusion, and shadowing all play a role in the evolution of surface roughness. However, recent results obtained in many growth and etching processes exhibit an unusual tendency: the morphology is very rough where it is expected to be smooth and vice versa. The origin, we believe, is due to the fact that during the deposition and etching processes the atoms very often do not stick to the surface upon their first strikes. Atoms actually bounce around before all settle on surface sites. This non-unity sticking probability can lead to a very rough surface during etching and a very smooth surface during sputter or chemical vapor deposition that cannot be explained by the conventional mechanisms.",
"corpus_id": 1223809,
"title": "Novel Mechanisms on the Growth Morphology of Films"
} | {
"abstract": "A parametric study of single‐crystal silicon roughness induced by an SF6 plasma has been carried out by means of atomic force microscopy. An helicon source (also called resonant inductive plasma etcher) has been used to study the relation between plasma parameters and subsequent surface damage. The surface damage has been examined in terms of height roughness analysis and in terms of spatial (lateral) extent of the surface roughness. The central result is that roughness scales with the ratio of the ion flux over the reactive neutral flux (J+/JF), showing the combined role of both ionic and neutral species. At low ion flux, the neutrals smooth the surface, while at higher ion flux, they propagate the ion‐induced defects, allowing the roughness to be enhanced. Influences of other parameters such as exposure duration, ion energy, or substrate temperature have also been quantified. It is shown that the roughness growth is well described by an empirical law: rms∝(1/√E)(J+/JF)ηtβ, with η≊0.45 and β≊1 (rms is th...",
"corpus_id": 120067052,
"title": "SILICON ROUGHNESS INDUCED BY PLASMA ETCHING"
} | {
"abstract": "This article explores the strategies used by Israeli students to resolve the Israeli-Palestinian conflict in the interactive computer game, PeaceMaker. Students played PeaceMaker in the roles of both the Israeli Prime Minister and the Palestinian President in random order. Students must take actions satisfying constituents on both sides of the conflict in order to win the game. The diversity of actions taken in each role was measured. Several hypotheses test the degree to which Israeli students, depending on which role they played and their own demographic variables, exploited a consistent set of actions or explored a more diverse range of actions across three main types: construction, political, and security. The results show that (1) greater action diversity increases success in both roles, (2) Israeli students engaged in less diverse actions when playing the Israeli role than when playing the Palestinian role, (3) students' religiosity and political Hawkishness negatively predicted action diversity when playing the Palestinian role, and (4) action diversity mediates the relationship between a student's background knowledge about the conflict and success in the Israeli role. The significance of these findings for understanding attitudes about the Israeli-Palestinian conflict are discussed, including implications for conflict resolution more generally.",
"corpus_id": 17952143,
"score": 0,
"title": "Action diversity in a simulation of the Israeli-Palestinian conflict"
} |
{
"abstract": "This research shows how two teacher educators, one from Canada and one from the United States, have attempted to imbue their preservice and graduate education practices with a sense of social justice, despite the downgrading of the importance of social justice by accreditation agencies such as National Council for the Accreditation of Teacher Education (NCATE) in the United States. Most specifically, narrative exemplars (Lyons & LaBoskey, 2002) are presented and reflectively analyzed through Connell's (1993) three principles of curricular justice, which lead toward or away from social justice in preservice and graduate education. As a consequence of this inquiry into lived educational practice, rich insights into teaching, learning, and human interaction emerge.",
"corpus_id": 153738297,
"title": "Social Justice in Preservice and Graduate Education: A Reflective Narrative Analysis"
} | {
"abstract": "Contents Preface Contributors Part I: Reflection and Reflective Inquiry Chapter 1: Reflection and Reflective Inquiry: Critical Issues, Evolving Conceptualizations, Contemporary Claims and Future Possibilities. Nona Lyons Part II: Foundational Issues: Needed Conceptual Frameworks Chapter 2. Foundational Issues - 'a deepening of conscious life.' Nona Lyons Chapter 3. The Role of Descriptive Inquiry in Building Presence and Civic Capacity. Carol Rodgers Part III: Reflective Inquiry in the Professions Chatper 4. Teacher Education: A Critical Analysis of Reflection as a Goal for Teacher Education. Ken Zeichner & Katrina Yan Liu Chapter 5. Education for the Law: Reflective Education for the Law. Filippa Marullo Anzalone Chapter 6. Medical Education: Reflective Inquiry in the Medical Profession. C. Anthony Ryan Chapter 7. Occupational Therapy: Occupational Therapy as Reflective Practice. Ellen S. Cohn, Barbara A. Boyt Shell, & Elizabeth Blesedell Crepeau Chapter 8. Nursing Education: Application of Critical Reflective Inquiry in Nursing Education. Hesook Suzie Kim, Laurie M. Lauzon Clabo, Patricia Burbank, Mary Leveillee, Diane Martins. Chapter 9. Social Work Education: Reflective Inquiry in Social Work Education. Marian Murphy, Maria Dempsey, Carmel Halton Chapter 10. Teaching: Reflective Practice in the Professions: Teaching. Cheryl Craig Chapter 11. Adult Education: Critical Reflection as an Adult Learning Process. Stephen Brookfield Chapter 12. Education for Probation Services: Fostering Reflective Practice in the Public Service: A Study of the Probation Service in the Republic of Ireland. Carmel Halton Part IV. Facilitating and Scaffolding Reflective Engagement: Considering Institutional Contexts Chapter 13. A Child Study/Lesson Study: Developing Minds to Understand & Teach Children. Joan V. Mast & Herbert P. Ginsburg Chapter 14. Within K-12 Schools for School Reform: What does it take? Michaelann Kelley, Paul D. 
Gray, Donna Reid, & Cheryl Craig Chapter 15. Reflective Inquiry in the Round. Steve Seidel Part V. Professional Pedagogies and Research Practices: Teaching and Researching Reflective Inquiry, Cheryl Craig Chapter 16. As Inquiry. Anna Richert & Clare Bove Chapter 17. As Self-Study: 'Doing as I Do': The Role of Teacher Educator Self-Study in Educating for Reflective Inquiry. Vicki Kubler LaBoskey & Mary Lynn Hamilton Chapter 18. Through a Portfolio Process: Professional Pedagogies and Research Practices: Teaching & Researching Reflective Inquiry through a Medical Portfolio Process. Martina Kelly Chapter 19. As Narrative Inquiry: Narrative Inquiry as Reflective Practice: Tensions and Possibilities. Charles Aidan Downey & D. Jean Clandinin Chapter 20. Reflection Through Collaborative Action Research and Inquiry. John Loughran Chapter 21. Through Curriculum Design: Developing Transformative Curriculum Leaders Through Reflective Inquiry. Chen Ai Yen & Da",
"corpus_id": 142201388,
"title": "Handbook of reflection and reflective inquiry : mapping a way of knowing for professional reflective inquiry"
} | {
"abstract": "olutionary intelligentsia a theocratic “Khomeini government” was indeed in the cards for Iran? Let’s just say that if he was deceived about this, he was in good company. The first prime minister of revolution, Mehdi Bazargan, to whom Foucault’s famous letter of protest is addressed to (pp. 260–63), complained in a private session (summer 1992) to the author of this review that Khomeini “deceived us all, that is, he deceived all but the devil himself.” Afary and Anderson don’t expect Foucault to have suddenly reversed a career of critiquing modernity at the threshold of this massive, anti-modern revolution. What they do fault him for is missing the dying canary in the mine—the trampling of women’s rights in Iran should have alarmed Foucault. It did not. The authors trace this failure to the blind spot that Foucault had allowed to grow in his privileged, male homosexual field of vision. Maxim Rodnison (whose critical essays along with the compendium of Foucault’s writings on Iran are included in an informative appendix to the present book) lays the blame on Foucault’s lack of familiarity with the hidden authoritarianism of an Islamic state. For the sources of Foucault’s naiveté, however, one needs not to look even that far. In an interview conducted in 1978 in Tehran (p.186), Foucault called the liberal democratic industrial capitalism “the harshest, most savage, most selfish, most dishonest, oppressive society one could possibly imagine.” The poverty of imagination underlying such a statement is breathtaking. Without abandoning one line of Foucault’s voluminous critiques of modernity and with all the due respect to postmodernism, it would not be hard for non-Westerner intellectuals to imagine a harsher, more savage, more selfish, more dishonest and, more oppressive society. 
And once they got over their idealism, they too would choose boring, slightly oppressive, slightly mendacious leafy suburbs of Paris, London, New York or Los Angeles over a utopian “political spirituality” that “takes nothing from Western philosophy, from its juridical and revolutionary foundations” (pp. 186–7). The Holocaust and Memory in the Global Age, by Daniel Levy and Natan Sznaider, translated by Assenka Oksiloff. Philadelphia, PA: Temple University Press, 2005. 240 pp. $24.95 paper. ISBN: 1592132758.",
"corpus_id": 145453861,
"score": 0,
"title": "The Holocaust and Memory in the Global Age"
} |
{
"abstract": "To the Editor: Uncommon mutations in the 5′ untranslated region (UTR) of the b-globin gene have been described in b-thalassemia. The 5′UTR is composed of 50 nucleotides between the cap site and the initiation codon and has been demonstrated to be important to the regulation of b-globin expression (1–3). We herein describe two unrelated families with the b mutation 5′UTR +20(C?T), without the association of IVSII-745(C?G) in cis, leading to a thalassemia major phenotype when coinherited with a b 39(C?T) mutation.",
"corpus_id": 457105,
"title": "Thalassemia major phenotypes secondary to the association of β 5′UTR +20(C→T) allele with β 39(C→T)"
} | {
"abstract": "We have evaluated the mutation profile in a sample of 127 unrelated beta-thalassemia (beta thal) individuals, diagnosed through A2 and fetal hemoglobin quantification by high-performance liquid chromatography (HPLC) from the Brazilian southernmost state, where a flow of Italian immigrants had occurred in the late 19th century, mainly from Northern Italy. The molecular analysis was performed by DNA sequencing of the most common mutations found in the Mediterranean region. The beta 0 codon 39 nonsense mutation was the most frequent alteration (50.9%), followed by beta+ IVSI 110 G>A (18.1%), beta 0 IVSI 1 G>A (12.9%), beta+ IVSI 6 T>C (9.5%), and other rare mutations (8.6%). The chosen gene sequence was able to identify 91% beta-thal mutations in the population studied, showing some similarity with allele frequencies of the mainly colonizing countries of Rio Grande do Sul state. The comparison of our results to other Brazilian studies has shown significant differences. Therefore, we can conclude that the genotypic profile of beta-thal shows great variability. Hence, it would be arbitrary to infer regional study results as being representative of the Brazilian whole population. Brazilian researchers of different regions should identify their most frequent genotypes to provide better understanding on this disease and state adequate public health policies.",
"corpus_id": 45758368,
"title": "Identification of beta thalassemia mutations in South Brazilians."
} | {
"abstract": "Purpose: SAR245408 is a pan-class I phosphoinositide 3-kinase (PI3K) inhibitor. This phase I study determined the maximum tolerated dose (MTD) of two dosing schedules [first 21 days of a 28-day period (21/7) and continuous once-daily dosing (CDD)], pharmacokinetic and pharmacodynamic profiles, and preliminary efficacy. Experimental Design: Patients with refractory advanced solid malignancies were treated with SAR245408 using a 3 + 3 design. Pharmacokinetic parameters were determined after single and repeated doses. Pharmacodynamic effects were evaluated in plasma, hair sheath cells, and skin and tumor biopsies. Results: Sixty-nine patients were enrolled. The MTD of both schedules was 600 mg; dose-limiting toxicities were maculopapular rash and hypersensitivity reaction. The most frequent drug-related adverse events included dermatologic toxicities, diarrhea, nausea, and decreased appetite. Plasma pharmacokinetics showed a median time to maximum concentration of 8 to 22 hours, mean terminal elimination half-life of 70 to 88 hours, and 5- to 13-fold accumulation after daily dosing (first cycle). Steady-state concentration was reached between days 15 and 21, and exposure was dose-proportional with doses up to 400 mg. SAR245408 inhibited the PI3K pathway (∼40%–80% reduction in phosphorylation of AKT, PRAS40, 4EBP1, and S6 in tumor and surrogate tissues) and, unexpectedly, also inhibited the MEK/ERK pathway. A partial response was seen in one patient with advanced non–small cell lung cancer. Eight patients were progression-free at 6 months. Pharmacodynamic and clinical activity were observed irrespective of tumor PI3K pathway molecular alterations. Conclusions: SAR245408 was tolerable at doses associated with PI3K pathway inhibition. The recommended phase II dose of the capsule formulation is 600 mg administered orally with CDD. Clin Cancer Res; 20(1); 233–45. ©2013 AACR.",
"corpus_id": 35405459,
"score": 1,
"title": "Phase I Safety, Pharmacokinetic, and Pharmacodynamic Study of SAR245408 (XL147), an Oral Pan-Class I PI3K Inhibitor, in Patients with Advanced Solid Tumors"
} |
{
"abstract": "Pancreatic ductal adenocarcinoma (PDAC) is strikingly resistant to conventional therapeutic approaches. We previously demonstrated that the histone deacetylase-associated protein SIN3B is essential for oncogene-induced senescence in cultured cells. Here, using a mouse model of pancreatic cancer, we have demonstrated that SIN3B is required for activated KRAS-induced senescence in vivo. Surprisingly, impaired senescence as the result of genetic inactivation of Sin3B was associated with delayed PDAC progression and correlated with an impaired inflammatory response. In murine and human pancreatic cells and tissues, levels of SIN3B correlated with KRAS-induced production of IL-1α. Furthermore, evaluation of human pancreatic tissue and cancer cells revealed that Sin3B was decreased in control and PDAC samples, compared with samples from patients with pancreatic inflammation. These results indicate that senescence-associated inflammation positively correlates with PDAC progression and suggest that SIN3B has potential as a therapeutic target for inhibiting inflammation-driven tumorigenesis.",
"corpus_id": 3044466,
"title": "Senescence-associated SIN3B promotes inflammation and pancreatic cancer progression."
} | {
"abstract": "Distinguishing between indolent and aggressive prostate adenocarcinoma remains a priority to accurately identify patients who need therapeutic intervention. SIN3B has been implicated in the initiation of senescence in vitro Here we show that in a mouse model of prostate cancer, SIN3B provides a barrier to malignant progression. SIN3B was required for PTEN-induced cellular senescence and prevented progression to invasive prostate adenocarcinoma. Furthermore, SIN3B was downregulated in human prostate adenocarcinoma correlating with upregulation of its target genes. Our results suggest a tumor suppressor function for SIN3B that limits prostate adenocarcinoma progression, with potential implications for the use of SIN3B and its target genes as candidate diagnostic markers to distinguish indolent from aggressive disease. Cancer Res; 77(19); 5339-48. ©2017 AACR.",
"corpus_id": 4856202,
"title": "Chromatin-Associated Protein SIN3B Prevents Prostate Cancer Progression by Inducing Senescence."
} | {
"abstract": "Significance Regulation of iron homeostasis is perturbed in numerous pathologic states. Thus, identifications of mechanisms responsible for iron metabolism have broad implications for disease modification. Here, we link the sulfur assimilation pathway to iron-deficiency anemia. Deletion of bisphosphate 3′-nucleotidase (Bpnt1), a key component of the sulfur assimilation pathway, leads to accumulation of phosphoadenosine phosphate (PAP), causing iron deficiency anemia in part due to inhibition of hypoxia-inducible factor 2-α. Reduction of PAP through introduction of a hypomorphic mutation in 3′-phosphoadenosine 5-phosphosulfate synthase 2 gene (Papss2, the enzyme responsible for PAP production) rescues the iron deficiency phenotype. Sulfur assimilation is an evolutionarily conserved pathway that plays an essential role in cellular and metabolic processes, including sulfation, amino acid biosynthesis, and organismal development. We report that loss of a key enzymatic component of the pathway, bisphosphate 3′-nucleotidase (Bpnt1), in mice, both whole animal and intestine-specific, leads to iron-deficiency anemia. Analysis of mutant enterocytes demonstrates that modulation of their substrate 3′-phosphoadenosine 5′-phosphate (PAP) influences levels of key iron homeostasis factors involved in dietary iron reduction, import and transport, that in part mimic those reported for the loss of hypoxic-induced transcription factor, HIF-2α. Our studies define a genetic basis for iron-deficiency anemia, a molecular approach for rescuing loss of nucleotidase function, and an unanticipated link between nucleotide hydrolysis in the sulfur assimilation pathway and iron homeostasis.",
"corpus_id": 3681838,
"score": 1,
"title": "Modulation of intestinal sulfur assimilation metabolism regulates iron homeostasis"
} |
{
"abstract": "Through linkage of administrative databases, this study highlights the characteristic of and health care costs for children over the last year of life. BACKGROUND AND OBJECTIVES: Heath care use and cost for children at the end of life is not well documented across the multiple sectors where children receive care. The study objective was to examine demographics, location, cause of death, and health care use and costs over the last year of life for children aged 1 month to 19 years who died in Ontario, Canada. METHODS: We conducted a population-based retrospective cohort study using administrative databases to determine the characteristics of and health care costs by age group and cause of death over a 3-year period from 2010 to 2013. RESULTS: In our cohort of 1620 children, 41.6% died of a chronic disease with wide variation across age groups. The mean health care cost over the last year of life was $78 332 (Canadian) with a median of $18 450, reflecting the impact of high-cost decedents. The mean costs for children with chronic or perinatal/congenital illnesses nearly tripled over the last 4 months of life. The majority of costs (67.0%) were incurred in acute care settings, with 88.0% of children with a perinatal/congenital illness and 79.7% with a chronic illness dying in acute care. Only 33.4% of children received home care in the last year of life. CONCLUSIONS: Children in Ontario receive the majority of their end-of-life care in acute care settings at a high cost to the health care system. Initiatives to optimize care should focus on early discussion of the goals of care and assessment of whether the care provided fits with these goals.",
"corpus_id": 1747886,
"title": "Children’s End-of-Life Health Care Use and Cost"
} | {
"abstract": "* Abbreviation:\n CCC — : complex chronic condition\n\nIn this issue of Pediatrics , Johnston et al,1 in their article “Disparities in Inpatient Intensity of End-of-Life Care for Complex Chronic Conditions,” add to our growing awareness of the impact that family sociodemographic characteristics can have on health care for children with complex chronic conditions (CCCs). Family race, ethnicity, income, geography, and education have been shown to alter other aspects of medical care and health outcomes for this vulnerable pediatric population.2,3 Because >80% of children with CCCs die in inpatient settings,4 it is important to examine how such factors may also be associated with end-of-life care.\n\nJohnston et al1 analyzed all deaths in California between 2000 and 2013 for children 1 to 21 years old with an International Classification of Diseases, Ninth and 10th Revisions code for a CCC5 on their death certificate or on hospital discharge documentation within 1 year of … \n\nAddress correspondence to Renee D. Boss, MD, MHS, Division of Neonatology, Department of Pediatrics, School of Medicine, Berman Institute of Bioethics, Johns Hopkins University, 1809 Ashland Ave, Baltimore, MD 21287. E-mail: rboss1{at}jhmi.edu",
"corpus_id": 108293804,
"title": "Social Disparities and Death Among Children With Complex Chronic Conditions"
} | {
"abstract": "The direct control of carrier spin by an electric field at room temperature is one of the most important challenges in the field of spintronics. For this purpose, we here propose a quaternary Heusler alloy FeVTiSi. Based on first principles calculations, the FeVTiSi alloy is found to be an intrinsic bipolar magnetic semiconductor in which the valence band and conduction band approach the Fermi level through opposite spin channels. Thus the FeVTiSi alloy can conduct completely spin-polarized currents with a tunable spin-polarization direction simply by applying a gate voltage. Furthermore, by Monte Carlo simulations based on the classical Heisenberg Hamiltonian, the Curie temperature of the FeVTiSi alloy is predicted to be well above the room temperature. The bipolar magnetic semiconducting character and the room temperature magnetic ordering endow the FeVTiSi alloy with great potential for developing electrically controllable spintronic devices working at room temperature.",
"corpus_id": 119234819,
"score": 0,
"title": "Electrical control of carriers' spin orientation in the FeVTiSi Heusler alloy†"
} |
{
"abstract": "Data from 311 rural and urban family firms from the National Family Business Panel (NFBP) were used to investigate the relative contributions of human, social, and financial capital resources, normative and non-normative disruptions, and federal disaster assistance on family firm resilience. Results indicate that the sets of social capital and disruption variables were significantly and negatively related to firm resilience for rural firms, while perceiving the business as a way of life was significantly and positively related to firm resilience for urban firms. Federal disaster assistance was negatively related to firm resilience for both rural and urban firms. Additional findings, conclusions, and implications of findings are discussed.",
"corpus_id": 154711115,
"title": "Determinants of rural and urban family firm resilience"
} | {
"abstract": "Using a systematic literature review, we address the topic of social capital in family firms. Based on 69 studies, we analyze the main findings, sampling and methodologies, theoretical approaches, definitions, and measurements of social capital in family firms. We also present how social capital is used as a model variable and present a conceptual framework of social capital in family firms. Subsequently, we identify the research gaps and develop research questions for further research.",
"corpus_id": 252642062,
"title": "Social Capital in the Family Business Literature: A Systematic Review and Future Research Agenda"
} | {
"abstract": "Abstract The Hopkins Symptom Checklist (HSCL) is a widely used measure of symptom distress and in particular is a valuable criterion measure in psychotherapeutic drug trials. Its reliability, validity, and sensitivity to change have been well established. However, its factor structure has been subject to much debate. In previous studies a wide range of different factor structures have been found by various researchers. The aim of the present study was to produce a short, less arduous, but acceptably reliable version of HSCL with a replicable factor structure. The factor structure which was based on a previously described, robust three-factor version of the HSCL, was established using a two-step process which began with a two-factor analysts of the largest subscales, General Feelings of Distress (GFD) and Somatic Distress (SD). This was followed by a three-factor analysis of seven items from each of three subscales. The robustness of the factor structure of the resulting scale was revealed by the factor co...",
"corpus_id": 145731199,
"score": 2,
"title": "Development and evaluation of a 21-item version of the hopkins symptom checklist with New Zealand and united states respondents"
} |
{
"abstract": "Certain biophysical characteristics of the DNA from each of the five nondefective adenovirus 2 (Ad2)-simian virus 40 (SV40) hybrid viruses (Ad2+ND1, Ad2+ND2, Ad2+ND3, Ad2+ND4, Ad2+ND5) have been determined. The guanine plus cytosine content varied from 55 to 57% and was not significantly different from that of nonhybrid Ad2 (56%), and the hybrid DNA molecules had mean molecular lengths which were similar to that of the standard, Ad2. The Ad2 and SV40 components of each hybrid were linked by alkali-resistant, presumably covalent bonds. The percentage of SV40 DNA in each hybrid virus was determined by hybridization with SV40 complementary RNA in a calibrated system. The results indicate that each hybrid virus DNA contains a different percentage of SV40 nucleotide sequences. The estimated size of the SV40 DNA component varies from 48,000 daltons for Ad2+ND3 to 840,000 daltons for Ad2+ND4, the latter being equivalent to between one-fourth and one-third of the SV40 genome.",
"corpus_id": 2647704,
"title": "Studies of Nondefective Adenovirus 2-Simian Virus 40 Hybrid Viruses. VI. Characterization of the DNA from Five Nondefective Hybrid Viruses"
} | {
"abstract": "CELLS productively infected with the double-stranded DNA tumour virus SV40 synthesise viral RNA in two phases. Before viral DNA replication begins, only early genes are transcribed (early RNA); after the onset of viral DNA replication, both early and late gene sequences are transcribed (early-plus-late RNA)1, 2. Using the technique of hydroxyapatite (HA) chromatography to separate the strands of SV40 DNA3–5, or an RNA-RNA hybridisation method6, it has been shown that the early template occupies about one third of one strand (the (−) strand), while the late template occupies about two thirds of the opposite strand (the (+) strand). In cells transformed by 5V40, most or all of the early template is transcribed, and additional regions of the (−) strand may be transcribed as well5, 7; however, little if any transcription occurs from the (+) strand7. Furthermore, the synthesis of SV40 T antigen in SV40-transformed cells is totally resistant to interferon8. These observations suggest that, in the transformed cell, SV40 gene expression is under the control of the host rather than the viral genome. This apparent alteration in control may be the result of the integration of SV40 DNA into chromosomal DNA9.",
"corpus_id": 9701490,
"title": "Strand orientation of SV40 transcription in cells infected by non-defective adenovirus 2-SV40 hybrid viruses."
} | {
"abstract": "Abstract DNA extracted from the Escherichia coli bacteriophage P2, as studied in the electron microscope, consists of linear monomers of uniform length (13.1 to 13.2 μ, corresponding to a molecular weight of 2.2 × 107), in addition to dimers and circles. These disappear when the temperature is raised just above that at which denaturation begins. The presence in P2 DNA of cohesive ends like those described for phage λ DNA is thus confirmed. A direct comparison with λ DNA in the electron microscope, and the high temperature at which P2 DNA infectivity is recovered, both show that P2 DNA cohesive ends are much more stable to heat than those of λ. Electron microscopy of partially denatured P2 DNA reveals several highly characteristic zones along the DNA “melting map”, recognizable by the temperature at which they start denaturing. Changes in ultraviolet absorption of P2 DNA at 260 nm wavelength during heat denaturation (melting curves), and also the accompanying spectral changes, indicate that P2 DNA contains at least three fractions of different base pair contents (fraction I: 20% of mass with 34% (G + C) content; II: 16% with 45% (G + C); III: 64% with 58% (G + C). The electron microscopical observations permit one to conclude that this compositional heterogeneity is intra-molecular. A correlation between the DNA fractions defined by the melting curves and the denaturation zones of the melting map is tentatively established. The DNA of P2 Hy, a phage believed to be genetically identical to P2 except for a small segment of its chromosome, appears to be identical to P2 except that fraction I is over-represented in P2 Hy DNA. The electron microscopic data are compatible with this finding, but the effect is very small. On this basis it is possible tentatively to localize on the melting map of P2 DNA the genetic region which differentiates it from P2 Hy, and thereby orient the physical in respect to the genetic representation of the P2 chromosome.",
"corpus_id": 27752244,
"score": 2,
"title": "Heat denaturation of P2 bacteriophage DNA: compositional heterogeneity."
} |
{
"abstract": "In cases of systemic inflammatory response syndrome, sepsis, and septic shock, the activity of glycosylphosphatidylinositol-specific phospholipase D (GPI-PLD) in serum amounts to 20 to 25% of the activity found in a healthy control group. The activity of serum GPI-PLD is positively correlated with inflammatory markers and counts of monocytes and stab cells (bands) and negatively correlated with polymorphonuclear neutrophils and lymphocytes in severe diseases. This indicates a yet unknown involvement of the inflammatory system in GPI-PLD liberation and suggests that the liver is not the only source of the plasma enzyme. Plasma was shown to contain an effective inhibitor of GPI-PLD which is soluble in organic solvents. Its concentration in capillary plasma is 20-fold higher than in venous plasma. To find possible other sources of plasma GPI-PLD besides the liver, the GPI-degrading activity was measured in different organs of the rat. Product formation was analysed using [125I]TID-labeled GPI-AP.",
"corpus_id": 742575,
"title": "Glycosylphosphatidylinositol-specific phospholipase D in blood serum: is the liver the only source of the enzyme?"
} | {
"abstract": "Insulin-mimetic species of low molecular weight are speculated to mediate some intracellular insulin actions. These inositol glycans, which are generated upon insulin stimulation from glycosylphosphatidylinositols, might control the activity of a multitude of insulin effector enzymes. Acylated inositol glycans (AIGs) are generated by cleavage of protein-free GPI precursors through the action of GPI-specific phospholipase C (GPI-PLC) and D (GPI-PLD). We synthesized AIGs (IG-1, IG-2, IG-13, IG-14, and IG-15) and then evaluated their insulin-mimicking bioactivities. IG-1 significantly stimulated glycogen synthesis and lipogenesis in 3T3-L1 adipocytes and rat isolated adipocytes dose-dependently. IG-2 significantly stimulated lipogenesis in rat isolated adipocytes dose-dependently. IG-15 also enhanced glycogen synthesis and lipogenesis in 3T3-L1 adipocytes. The administration of IG-1 decreased plasma glucose, increased glycogen content in liver and skeletal muscles and improved glucose tolerance in C57B6N mice with normal diets. The administration of IG-1 decreased plasma glucose in STZ-diabetic C57B6N mice. The treatment of IG-1 decreased plasma glucose, increased glycogen content in liver and skeletal muscles and improved glucose tolerance in C57B6N mice with high fat-diets and db/db mice. The long-term treatment of IG-1 decreased plasma glucose and reduced food intake and body weight in C57B6N mice with high fat-diets and ob/ob mice. Thus, IG-1 has insulin-mimicking bioactivities and improves glucose tolerance in mice models of diabetes with or without obesity.",
"corpus_id": 16529451,
"title": "Insulin-Mimicking Bioactivities of Acylated Inositol Glycans in Several Mouse Models of Diabetes with or without Obesity"
} | {
"abstract": "Urinary tract infections (UTIs) mainly due to uropathogenic Escherichia coli (UPEC) are one of the most frequent complications in kidney-transplanted patients, causing significant morbidity. However, the mechanisms underlying UTI in renal grafts remain poorly understood. Here, we analysed the effects of the potent immunosuppressive agent cyclosporine A (CsA) on the activation of collecting duct cells that represent a preferential site of adhesion and translocation for UPEC. CsA induced the inhibition of lipopolysaccharide- induced activation of collecting duct cells due to the downregulation of the expression of TLR4 via the microRNA Let-7i. Using an experimental model of ascending UTI, we showed that the pretreatment of mice with CsA prior to infection induced a marked fall in cytokine production by collecting duct cells, neutrophil recruitment, and a dramatic rise of bacterial load, but not in infected TLR4-defective mice kidneys. This effect was also observed in CsA-treated infected kidneys, where the expression of Let-7i was increased. Treatment with a synthetic Let-7i mimic reproduced the effects of CsA. Conversely, pretreatment with an anti-Let-7i antagonised the effects of CsA and rescued the innate immune response of collecting duct cells against UPEC. Thus, the utilisation of an anti-Let-7i during kidney transplantation may protect CsA-treated patients from ascending bacterial infection.",
"corpus_id": 3591289,
"score": 1,
"title": "Cyclosporine A Induces MicroRNAs Controlling Innate Immunity during Renal Bacterial Infection"
} |
{
"abstract": "ABSTRACT \n Widespread planting of crops genetically engineered to produce insecticidal toxins from the bacterium Bacillus thuringiensis (Bt) imposes selection on many key agricultural pests to evolve resistance to Bt. Fitness costs can slow the evolution of Bt resistance. We examined effects of entomopathogenic nematodes on fitness costs of Bt resistance in the pink bollworm, Pectinophora gossypiella (Saunders) (Lepidoptera: Gelechiidae), a major pest of cotton, Gossypium hirsutum L., in the southwestern United States that is currently controlled by transgenic cotton that produces Bt toxin Cry1Ac. We tested whether the entomopathogenic nematodes Steinernema riobrave Cabanillas, Poinar, and Raulston (Rhabditida: Steinernematidae) and Heterorhabditis bacteriophora Poinar (Rhabditida: Heterorhabditidae) affected fitness costs of resistance to Cry1Ac in two laboratory-selected hybrid strains of pink bollworm reared on non-Bt cotton bolls. The nematode S. riobrave imposed a recessive fitness cost for one strain, and H. bacteriophora imposed a fitness cost affecting heterozygous resistant individuals for the other strain. Activity of phenoloxidase, an important component of insects' immune response, did not differ between Bt-resistant and Bt-susceptible families. This suggests phenoloxidase does not affect susceptibility to entomopathogenic nematodes in Bt-resistant pink bollworm. Additionally, phenoloxidase activity does not contribute to Bt resistance, as has been found in some species. We conclude that other mechanisms cause higher nematode-imposed mortality for pink bollworm with Bt resistance genes. Incorporation of nematode-imposed fitness costs into a spatially explicit simulation model suggests that entomopathogenic nematodes in non-Bt refuges could delay resistance by pink bollworm to Bt cotton.",
"corpus_id": 3108336,
"title": "Effects of Pink Bollworm Resistance to Bacillus thuringiensis on Phenoloxidase Activity and Susceptibility to Entomopathogenic Nematodes"
} | {
"abstract": "ABSTRACT \n Fitness costs can delay pest resistance to crops that produce insecticidal toxins derived from the bacterium Bacillus thuringiensis (Bt), and past research has found that entomopathogens impose fitness costs of Bt resistance. In addition, entomopathogens can be used for integrated pest management by providing biological control of pests. The western corn rootworm, Diabrotica virgifera virgifera LeConte (Coleoptera: Chrysomelidae), is a major pest of maize and is currently managed by planting of Bt maize. We tested whether entomopathogenic nematodes and fungi increased mortality of western corn rootworm and whether these entomopathogens increased fitness costs of resistance to Cry3Bb1 maize. We exposed western corn rootworm larvae to two species of nematodes, Heterorhabditis bacteriophora Poinar (Rhabditida: Heterorhabditidae) and Steinernema feltiae Filipjev (Rhabditida: Steinernematidae), and to two species of fungi, Beauveria bassiana (Balsamo) Vuillemin (Hypocreales: Cordycipitaceae) (strain GHA) and Metarhizium brunneum (Metschnikoff) Sorokin (Hypocreales: Clavicipitaceae) (strain F52) in two assay types, namely, seedling mat and small cup. Larval mortality increased with the concentration of H. bacteriophora and S. feltiae in the small cup assay, and with the exception of S. feltiae and B. bassiana in the seedling mat assay, mortality from entomopathogens was significantly greater than zero for the remaining entomopathogens in both assays. However, no fitness costs were observed in either assay type for any entomopathogen. Increased mortality of western corn rootworm larvae caused by these entomopathogens supports their potential use in biological control; however, the lack of fitness costs suggests that entomopathogens will not delay the evolution of Bt resistance in western corn rootworm.",
"corpus_id": 11853632,
"title": "Effects of Entomopathogens on Mortality of Western Corn Rootworm (Coleoptera: Chrysomelidae) and Fitness Costs of Resistance to Cry3Bb1 Maize"
} | {
"abstract": "Pink bollworm, Pectinophora gossypiella (Saunders), mating began 6 to 7 h after the last light in controlled-temperature cabinets. Moth pairs remained in copula 56 to 77 min. The average number of spermatophores transferred to females ranged from 0.9 for pairs confined for 1 night to 3.3 for pairs confined for 6 nights, and 84 to 100% of the pairs mated. Native moth pairs mated less frequently than laboratory-reared moth pairs. Laboratory-reared and native males mated an average of 8.9 and 0.9 times, respectively, during their life span when confined with virgin females each night. Male moths mated only once each night, whereas females (8%) mated twice each night when confined with two males. Female pink bollworm moths were less receptive to mating a second time on the night after the first mating.",
"corpus_id": 84929994,
"score": 2,
"title": "Pink Bollworm (Lepidoptera: Gelechiidae): Comparative Mating Frequencies of Laboratory-Reared and Native Moths"
} |
{
"abstract": "The usage of convex hulls for classification is discussed with a practical algorithm, in which a sample is classified according to the distances to convex hulls. Sometimes convex hulls of classes are too close to keep a large margin. In this paper, we discuss a way to keep a margin larger than a specified value. To do this, we introduce a concept of ``expanded convex hull'' and confirm its effectiveness.",
"corpus_id": 1778719,
"title": "Margin Preserved Approximate Convex Hulls for Classification"
} | {
"abstract": "In this work, a new method for one-class classification based on the Convex Hull geometric structure is proposed. The new method creates a family of convex hulls able to fit the geometrical shape of the training points. The increased computational cost due to the creation of the convex hull in multiple dimensions is circumvented using random projections. This provides an approximation of the original structure with multiple bi-dimensional views. In the projection planes, a mechanism for noisy points rejection has also been elaborated and evaluated. Results show that the approach performs considerably well with respect to the state the art in one-class classification.",
"corpus_id": 13595411,
"title": "Approximate Convex Hulls Family for One-Class Classification"
} | {
"abstract": "Abstract This paper describes a number of experiments to compare and validate the performance of machine learning classifiers. Creating machine learning models for data with wide varieties has huge applications in predictive modelling across multiple domain of science. This work reviews state of the art techniques in machine learning classifiers methods with several extent of magnitude in statistics and key findings that will be helpful in establishing best methodological practices for class predictions. Comprehensive comparative review analysis with statistical validations for various machine learning algorithm for SVM, Bagging, Boosting, Decision Trees and Nearest Neighborhood algorithm on multiple data sets is carried out. Focus on the statistical analysis of the results using Friedman-Test and Wilcoxon Test as well as other interpretative metrics like classification rate, ROC, F-measure are evaluated to benchmark results.",
"corpus_id": 64948498,
"score": 1,
"title": "Friedman and Wilcoxon Evaluations Comparing SVM, Bagging, Boosting, K-NN and Decision Tree Classifiers"
} |
{
"abstract": "Objective Pheochromocytomas are uncommon tumours arising from chromaffin cells of the adrenal medulla and related paraganglia. So far, one of the few reported markers to discriminate malignant from benign tumours is the βB‐subunit of inhibin and activin, members of the transforming growth factor (TGF)‐β superfamily of growth and differentiation factors.",
"corpus_id": 1562056,
"title": "Expression of activin and inhibin subunits, receptors and binding proteins in human pheochromocytomas: a study based on mRNA analysis and immunohistochemistry"
} | {
"abstract": "Many studies have tried to discriminate malignant from benign phaeochromocytomas, but until now no widely accepted histological, immunohistochemical, or molecular methods have been available. In this study of 29 malignant and 85 benign phaeochromocytomas from 102 patients, immunohistochemistry was performed with antibodies to the tumour suppressor gene product p53 and the proto‐oncogene products bcl‐2 and c‐erbB‐2, using the avidin–biotin complex method. Malignant phaeochromocytomas showed a statistically significant higher frequency of p53 (p=0·042) and bcl‐2 (p=0·037) protein expression than their benign counterparts. The combination of both markers showed an even higher significance (p=0·004), to which both markers contributed equally. Overexpression of c‐erbB‐2 was associated with the occurrence of familial phaeochromocytomas (p=0·001), but no difference was found between benign and malignant cases. In conclusion, p53, bcl‐2, and c‐erbB‐2 all appear to be involved in the pathogenesis of a proportion of phaeochromocytomas. Immunoreactivity to p53 and bcl‐2 proteins may help to predict the clinical behaviour of phaeochromocytomas. Copyright © 1999 John Wiley & Sons, Ltd.",
"corpus_id": 10484582,
"title": "Prognostic value of p53, bcl‐2, and c‐erbB‐2 protein expression in phaeochromocytomas"
} | {
"abstract": "Australia is currently undergoing a change in the Casemix payment environment. This is the result of an agreement to move to a more nationally-consistent approach to activity-based funding (ABF) for services provided in public hospitals. ABF for acute inpatients will be based on the Australian Refined DRG Casemix system, which is derived from the coded clinical data from each hospital admission. Thus, there will be a need to audit the clinical coding to assess the quality of the data in order to determine if the payments based on that coding are correct. \n \nIn 2009 and 2010, Pavilion Health conducted two major audits of clinical coding in NSW and Queensland. Together, these audits included 55 hospitals and 6,300 records. This paper discusses the insights gained from these two audits.",
"corpus_id": 1373389,
"score": 1,
"title": "Coded data quality for Casemix payment: insights from two external audits"
} |
{
"abstract": "This review of current acne treatments begins with the crucial discovery in 1979 of isotretinoin treatment for nodulocystic acne. This drug s approval in 1982 revolutionized therapy, since it was the first oral acne-specific drug, and it provided prolonged remissions. In addition, it may prevent the emergence of resistant bacteria, a problem linked to the traditional use of antibiotics for acne. Patients who are not candidates for isotretinoin therapy may benefit from one of the other drugs or drug combinations reviewed, including the third-generation topical retinoids adapalene and tazarotene, retinoic acid reformulated in new vehicles, azelaic acid, and topical antibiotics. Proper selection and education of patients are essential, since serious consequences may result from poorly monitored use of antibiotics and retinoid.",
"corpus_id": 189744,
"title": "The modern age of acne therapy: a review of current treatment options."
} | {
"abstract": "The present article gives a concise survey of contemporary opinions on acne vulgaris, its etiopathogenesis, clinical forms and laboratory diagnostics. In particular, the value of microbiological diagnostics and possibilities of local as well as general therapy are discussed. Moreover, our experience is described with vaccinotherapy to manage serious clinical forms and cases when current therapy fails.",
"corpus_id": 2893063,
"title": "A microbiological approach to acne vulgaris."
} | {
"abstract": "Hirsutism, polycystic ovaries, and elevated levels of plasma testosterone are characteristic clinical features in women with extreme insulin resistance and acanthosis nigricans. Extreme insulin resistance resulting from autoantibodies to the insulin receptor (type B extreme insulin resistance) had been considered an exception to this generalization. A woman with type B extreme insulin resistance developed clinical evidence of masculinization in association with a markedly elevated level of plasma testosterone (1000 ng/dL). In nine women with autoantibodies to the insulin receptor, excessive ovarian production of testosterone was a common feature among the premenopausal patients. Postmenopausal patients rarely developed elevated levels of plasma testosterone, presumably as a result of ovarian failure. Overproduction of testosterone may result from a direct effect of hyperinsulinemia on the ovary.",
"corpus_id": 8121480,
"score": 2,
"title": "Insulin resistance associated with androgen excess in women with autoantibodies to the insulin receptor."
} |
{
"abstract": "Molecular and genetic studies of the yeast Saccharomyces cerevisiae isolated at distinct stages of sherry making (young wine, solera, and criadera) in various winemaking regions of Spain demonstrated that sherry yeasts diverged from primary winemaking yeasts according to several physiological and molecular markers. All sherry strains, regardless of the place and time of their isolation, carry a 24-bp deletion in the ITS1 region of ribosomal DNA, whereas the yeasts of primary winemaking lack this deletion. Molecular karyotypes of sherry yeasts from different populations were found to be very similar.",
"corpus_id": 1191706,
"title": "Genetic Differentiation of the Sherry Yeasts Saccharomyces cerevisiae"
} | {
"abstract": "Wine biological aging is a wine making process used to produce specific beverages in several countries in Europe, including Spain, Italy, France, and Hungary. This process involves the formation of a velum at the surface of the wine. Here, we present the first large scale comparison of all European flor strains involved in this process. We inferred the population structure of these European flor strains from their microsatellite genotype diversity and analyzed their ploidy. We show that almost all of these flor strains belong to the same cluster and are diploid, except for a few Spanish strains. Comparison of the array hybridization profile of six flor strains originating from these four countries, with that of three wine strains did not reveal any large segmental amplification. Nonetheless, some genes, including YKL221W/MCH2 and YKL222C, were amplified in the genome of four out of six flor strains. Finally, we correlated ICR1 ncRNA and FLO11 polymorphisms with flor yeast population structure, and associate the presence of wild type ICR1 and a long Flo11p with thin velum formation in a cluster of Jura strains. These results provide new insight into the diversity of flor yeast and show that combinations of different adaptive changes can lead to an increase of hydrophobicity and affect velum formation.",
"corpus_id": 4333793,
"title": "Population Structure and Comparative Genome Hybridization of European Flor Yeast Reveal a Unique Group of Saccharomyces cerevisiae Strains with Few Gene Duplications in Their Genome"
} | {
"abstract": "Tendon lesions are among the most frequent musculoskeletal pathologies. Vascular endothelial growth factor (VEGF) is known to regulate angiogenesis. VEGF-111, a biologically active and proteolysis-resistant splice variant of this family, was recently identified. This study aimed at evaluating whether VEGF-111 could have a therapeutic interest in tendon pathologies. Surgical section of one Achilles tendon of rats was performed before a local injection of either saline or VEGF-111. After 5, 15 and 30 days, the Achilles tendons of 10 rats of both groups were sampled and submitted to a biomechanical tensile test. The force necessary to induce tendon rupture was greater for tendons of the VEGF-111 group (p<0.05) while the section areas of the tendons were similar. The mechanical stress was similar at 5 and 15 days in the both groups but was improved for the VEGF-111 group at day 30 (p <0.001). No difference was observed in the mRNA expression of collagen III, tenomodulin and MMP-9. In conclusion, we observed that a local injection of VEGF-111 improves the early phases of the healing process of rat tendons after a surgical section. Further confirmatory experimentations are needed to consolidate our results.",
"corpus_id": 1709634,
"score": 1,
"title": "Vascular Endothelial Growth Factor-111 (VEGF-111) and tendon healing: preliminary results in a rat model of tendon injury."
} |