| query dict | pos dict | neg dict |
|---|---|---|
{
"abstract": "The extant paper has been compiled with the objective of “investigating the relationship between organizational culture and application of knowledge management processes from the viewpoint of Education Dept employees”. This study with respect to the objective is an applied survey and the data has been collected by descriptive-correlative methods. The data has been collected by means of organizational cultural profilequestionnaire of Sarros et al (2002) with Cronbach’s alpha coefficient equal to 0.93 and knowledge management questionnaire of Lawson (2003) with reliability of 0.94. The population of this study consists of all education dept employees of Ahvaz (Iran) and the sample size according to Morgan table equaled 175 persons. To analyze the data, descriptive indices including mean value, standard deviation, frequency and inferential statistics, Pearson’s coefficient of correlation, simultaneous regression and t test were used. The results obtained from data analysis indicated a positive and significant association between seven factors of organizational culture and knowledge management (r=0.82, p<0.01). Also, the summary of simultaneous regression analysis indicated that the variable organizational culture is an appropriate predictor for knowledge management meaning that upon simultaneous intervening of all organizational culture factors, about 71% of variations related to knowledge management implementationare predictable. Therefore, considering the importance of organizational culture in the knowledge management processes, the education officials should provide the grounds for more effective application of knowledge in an appropriate organizational culture bed.",
"corpus_id": 35339294,
"title": "The effect of organizational culture on the knowledge management implementation processes from the viewpoint of Education Dept employees"
} | {
"abstract": "The objective of the article is to highlight the role of environmental values in corporate pro-environmental behaviour. Among the five components of corporate environmental awareness, environmental values are of special importance, as is illustrated by the organisational culture of a Hungarian company showing consistent pro-environmental behaviour regarding all awareness components except values. Empirical research findings – arrived at with the help of Q-methodology – indicate the need for a stable and unambiguous integration of environmental values into organisational culture in order to achieve consistent pro-environmental behaviour at companies.",
"corpus_id": 153344276,
"title": "The role of organisational culture in the environmental awareness of companies"
} | {
"abstract": "Corporations are moral persons to the extent that they have rights and duties, but their moral personality is severely limited. As artificial persons, they lack the emotional make-up that allows natural persons to show virtues and vices. That fact, taken with the representative function of management, places significant limitations on what constitutes ethical behavior by management. A common misunderstanding of those limitations can lead ethical managers to behave unethically and can lead the public to have improper expectations of corporations.",
"corpus_id": 143661190,
"score": -1,
"title": "The moral status of the corporation"
} |
{
"abstract": "Cooling with sound is a physical phenomenon allowed by Thermo-Acoustics in which acoustic energy is transformed into a negative heat transfer, in other words: into cooling! Without needing any harmful gas, the transformation is environmentally friendly and can respond to many needs in terms of air conditioning, food refrigeration for domestic use, and cooling medical samples for example. To explore the possibilities of this cooling solution on a small scale, the TACS prototype has been designed, consisting of a low cost thermoacoustic refrigerant “pipe” able to lower the temperature by a few degrees. The obtained results are providing an interesting element for possible future of thermo-acoustic refrigeration. Keywords—Domestic Scale Cooling System, Thermoacoustic, Environmental Friendly Refrigeration.",
"corpus_id": 22408980,
"title": "TACS : Thermo Acoustic Cooling System"
} | {
"abstract": "A thermoacoustic refrigerator has been designed, tested, and certified by NASA to be flown aboard the space‐shuttle get‐away‐special (GAS) payload program on STS‐42 in January 1992. This thermoacoustic heat pump uses acoustic power to pump heat from a low‐temperature source to a high‐temperature sink. It has only 15 g of moving mass operating at 400 Hz, is entirely self‐contained, and operates autonomously. Using an inert gas mixture of helium and xenon, a temperature ratio of Tcold/Thot=0.74 has been achieved while a maximum coefficient of performance relative to Carnot of 16% was observed while pumping a load of 3.3 W at a temperature ratio of 0.85. The greatest observed temperature span with no external heat load was ΔT=76 °C.",
"corpus_id": 14422764,
"title": "A thermoacoustic refrigerator for space applications."
} | {
"abstract": "The scope of this work is the investigation of the interactions that appear in molecular switches on noble metal surfaces. The adsorption of two different types of switching molecules is studied by means of near edge X-ray absorption fine structure (NEXAFS) and X-ray photoelectron spectroscopy (XPS). Samples of molecular powders and evaporated molecular multilayers are used as references in which the molecules are decoupled from the surface. The interpretation of the measured data is supported by quantum-chemical calculations based on density-functional theory (DFT), resulting in a deep understanding of the interaction mechanisms. In the first part of this work dimetacyano azobenzene and dimetacarboxymethylester azobenzene are chosen as simple model systems representing the class of conformal switches. These two compounds exhibit the same adsorption behavior and provide complementary information about the adsorption state. Au(111) and Cu(001) are used as substrates with different surface reactivities. On Au(111) at room temperature, azobenzene physisorbs flat in its trans configuration up to a saturation coverage of one monolayer, while the electronic structure of the adsorbate resembles the one calculated for the free molecule. In contrast, on Cu(001) we find that the substrate temperature and the molecule coverage have an influence on the adsorption state. Below half a monolayer evaporated on Cu(001) at 150 K, the majority of azobenzene molecules are found to be in the same physisorbed state as on Au(111). After annealing the substrate above 250 K most of the molecules chemisorb via their azobenzene center, where the frontier orbitals at the azo bridge rehybridize with the substrate orbitals. This interaction forces the molecule to a butterfly-like bent molecular geometry in which the outer aromatic groups are tilted out of the surface plane. In this conformation the lone-pair electrons can participate in the chemical binding, leading to a higher stabilization of the bent structure. The structural reorientation is accompanied by the deoccupation of the bonding and the occupation of the antibonding molecular orbital, which",
"corpus_id": 98159225,
"score": -1,
"title": "Switchable molecules on metallic surfaces studied by core-level spectroscopies"
} |
{
"abstract": "ABSTRACT Used heavily in food animal production, quinolones occur widely in food products of animal origin. Development of highly sensitive and selective analytical techniques for the detection of quinolone residues, often at trace levels, in food samples is necessary to ensure food safety and understand their public health risk. With complex matrices, food samples typically require a series of pre-treatment steps, such as powdering, homogenization, deproteinization, and filtration. This review summarizes the recent advances in extraction, concentration, and detection techniques for quinolones in food product samples, and briefly compares the advantages and limitations of major techniques. Recent development of quick, easy, cheap, effective, rugged, and safe (QuEChERS) extraction method and immunoassay- and biosensor-based detection methods for the determination of quinolones residues in food products is discussed in details. A perspective on the trends and needs of future research is also presented.",
"corpus_id": 23822894,
"title": "Recent Development in Sample Preparation and Analytical Techniques for Determination of Quinolone Residues in Food Products"
} | {
"abstract": "An amperometric magneto-immunosensor (AMIS) for the detection of residues of fluoroquinolone antibiotics in milk samples is described for the first time. The immunosensor presented combines magnetic beads biomodified with an antibody with a broad recognition profile of fluoroquinolones, a haptenized enzyme and a magnetic graphite–epoxy composite (m-GEC) electrode. After the immunochemical reaction with specific enzyme tracer, the antibody biomodified magnetic beads are easily captured by an electrode made of graphite-epoxy composite containing a magnet, which also acts as transducer for the electrochemical detection. In spite of the complexity of milk, the use of magnetic beads allows elimination of potential interferences caused by the matrix components; hence the AMIS could perform quantitative measurements, directly in these samples, without any additional sample cleanup or extraction step. The immunosensor is able to detect up to seven different fluoroquinolones far below the MRLs defined by the UE for milk; for example ciprofloxacin is detected directly in milk with an IC50 of 0.74 μg/L and a LOD of 0.009 μg/L. This strategy offers great promise for rapid, simple, cost-effective, and on-site analysis fluoroquinolones in complex samples.",
"corpus_id": 24284,
"title": "Electrochemical Detection of Fluoroquinolone Antibiotics in Milk Using a Magneto Immunosensor"
} | {
"abstract": "This study describes the synthesis of Ag nanoparticles using DNA templates with polymorphic structures including the G-quadruplex, the I-motif, and the Duplex and their application for the catalytic reduction of 4-nitrophenol by NaBH4. Interactions between Ag+ and polymorphic DNA were studied through circular dichroism, polyacrylamide gels, and UV spectroscopy. Ag nanoparticles with narrow size distributions were prepared through the reduction of Ag+ by NaBH4 under different DNA templates and Ag+/base ratios. These DNA-templated Ag nanoparticles demonstrated excellent catalytic performance in the reduction reaction of 4-nitrophenol. The rate constants depended on the structure of DNA template, with the decreasing order: I-motif-Ag > G-quadruplex-Ag > Duplex-Ag. The results obtained here suggest a promising pathway to adjust physical–chemical properties of metal nanoparticles through the template of polymorphic DNA.Graphical Abstract",
"corpus_id": 97044633,
"score": -1,
"title": "Catalytic Performance of Ag Nanoparticles Templated by Polymorphic DNA"
} |
{
"abstract": "We propose a framework to learn a structured latent space to represent 4D human body motion, where each latent vector encodes a full motion of the whole 3D human shape. On one hand several data-driven skeletal animation models exist proposing motion spaces of temporally dense motion signals, but based on geometrically sparse kinematic representations. On the other hand many methods exist to build shape spaces of dense 3D geometry, but for static frames. We bring together both concepts, proposing a motion space that is dense both temporally and geometrically. Once trained, our model generates a multi-frame sequence of dense 3D meshes based on a single point in a low-dimensional latent space. This latent space is built to be structured, such that similar motions form clusters. It also embeds variations of duration in the latent vector, allowing semantically close sequences that differ only by temporal unfolding to share similar latent vectors. We demonstrate experimentally the structural properties of our latent space, and show it can be used to generate plausible interpolations between different actions. We also apply our model to 4D human motion completion, showing its promising abilities to learn spatiotemporal features of human motion. Code is available at https://github.com/mmarsot/A_structured_latent_space.",
"corpus_id": 247292261,
"title": "A Structured Latent Space for Human Body Motion Generation"
} | {
"abstract": "Marker-based motion capture (mocap) is widely criticized as producing lifeless animations. We argue that important information about body surface motion is present in standard marker sets but is lost in extracting a skeleton. We demonstrate a new approach called MoSh (Motion and Shape capture), that automatically extracts this detail from mocap data. MoSh estimates body shape and pose together using sparse marker data by exploiting a parametric model of the human body. In contrast to previous work, MoSh solves for the marker locations relative to the body and estimates accurate body shape directly from the markers without the use of 3D scans; this effectively turns a mocap system into an approximate body scanner. MoSh is able to capture soft tissue motions directly from markers by allowing body shape to vary over time. We evaluate the effect of different marker sets on pose and shape accuracy and propose a new sparse marker set for capturing soft-tissue motion. We illustrate MoSh by recovering body shape, pose, and soft-tissue motion from archival mocap data and using this to produce animations with subtlety and realism. We also show soft-tissue motion retargeting to new characters and show how to magnify the 3D deformations of soft tissue to create animations with appealing exaggerations.",
"corpus_id": 13940793,
"title": "MoSh: motion and shape capture from sparse markers"
} | {
"abstract": "This paper describes how to obtain accurate 3D body models and texture of arbitrary people from a single, monocular video in which a person is moving. Based on a parametric body model, we present a robust processing pipeline achieving 3D model fits with 5mm accuracy also for clothed people. Our main contribution is a method to nonrigidly deform the silhouette cones corresponding to the dynamic human silhouettes, resulting in a visual hull in a common reference frame that enables surface reconstruction. This enables efficient estimation of a consensus 3D shape, texture and implanted animation skeleton based on a large number of frames. We present evaluation results for a number of test subjects and analyze overall performance. Requiring only a smartphone or webcam, our method enables everyone to create their own fully animatable digital double, e.g., for social VR applications or virtual try-on for online fashion shopping.",
"corpus_id": 2744065,
"score": -1,
"title": "Video Based Reconstruction of 3D People Models"
} |
{
"abstract": "The Power Electronic Converter (PEC) interfacing with the Renewable Energy Hybrid System (REHS) like solar, wind, etc is very much important and its implementations and validations are most required to reduce the harmonic distortions, unity power factor, DC-link voltage regulation, and lower frequency with high-speed operations. The PEC is like us three phase Pulse Width Modulation of Bidirectional Power Electronic Converter (BPEC). This paper proposes the novel Tracking Controller based Predictive (TCP) with PI controller is implemented and validated using dSpace TMS 1102 Digital Signal Processor in the VSC. This processor is used to generate the reference currents and a novel tracking controller based predictive Instantaneous Reference Current Generation using Synchronous Detection Method (IRCGSDM), based on this controller the real time data are sensed through current and voltage transducer from 1KVA inverter realized and software (MATLAB Simulink) simulations are validated through MATLAB Software, This novel TCP controller algorithm achieved a fast response in the DC-link voltage regulation, reactive power compensation with unity power factor and nearest sinusoidal grid currents using predetermination of cost function minimization.",
"corpus_id": 250031322,
"title": "Analysis of predictive control algorithm based PWM boost converter-based hybrid networks"
} | {
"abstract": "A novel active clamping zero-voltage switching three-phase boost pulsewidth modulation (PWM) rectifier is analyzed and a modified minimum-loss space vector modulation (SVM) strategy suitable for the novel zero-voltage switching (ZVS) rectifier is proposed in this paper. The topology of the novel ZVS rectifier only adds one auxiliary active switch, one resonant inductor, and one clamping capacitor to the traditional hard-switched three-phase boost PWM rectifier. With the proposed SVM strategy, the novel ZVS rectifier can achieve ZVS for all the main and auxiliary switches. In addition, the antiparallel diodes can be turned OFF softly, so the reverse recovery current is eliminated. Besides, the voltage stress of all the switches is equal to the dc-link voltage. The operation principle and soft-switching condition of the novel ZVS rectifier are analyzed. The design guidelines of the soft switched circuit parameters are described in detail. A DSP controlled 30 kW prototype is implemented to verify the theory.",
"corpus_id": 25894003,
"title": "A Novel DC-Side Zero-Voltage Switching (ZVS) Three-Phase Boost PWM Rectifier Controlled by an Improved SVM Method"
} | {
"abstract": "The dual-active-bridge (DAB) topology is ideally suited for high-power dc-dc conversion, especially when bidirectional power transfer is required. However, it has the drawback of high circulating currents and hard switching at light loads, if wide variation in input and output is expected. To address these issues, this paper presents a comprehensive analysis and experimental results with pulsewidth-modulation (PWM) control of the DAB. The PWM control is in addition to phase-shift modulation between the two H-bridges. The analysis addresses PWM of one bridge at a time and of both bridges simultaneously. In the latter, five distinct modes arise based on the choice of PWM and load condition. The possibilities are analyzed for optimizing power density and efficiency for low-load operation. Finally, a composite scheme combining single and dual PWM is proposed that extends the soft-switching range down to zero-load condition, reduces rms and peak currents, and results in significant size reduction of the transformer. Experimental results are presented with a 10-kW prototype.",
"corpus_id": 9733537,
"score": -1,
"title": "PWM control of dual active bridge: comprehensive analysis and experimental verification"
} |
{
"abstract": "Previous studies documented that type of investor significantly affects the performance of bonds and sukuk. These studies showed that the yield to maturity (YTM) of bonds and sukuk are significantly associated with institutional investors. This association is because institutional investors actively monitor the performance of bonds and sukuk. Apart from the type of investor, the roles played by the board of directors (BOD) in decision making significantly influence the performance of bonds and sukuk, especially the YTM. This study aims to investigate the relationship between institutional ownerships and the BOD and yield spreads of longand medium-term corporate bonds and sukuk. Data are obtained from firm issuers’ annual reports, Bond Info Hub of Malaysia Central Bank, Department of Malaysia Statistics and Bloomberg from 2000 to 2014. The study employed unbalanced panel data approach for multivariate robust regression, OLS, fixed-effect, and random-effect models. Results revealed that the presence of top-six institutional investors and characteristics of the BOD exert a significant negative effect on the yield spreads. The findings are also consistent with the agency cost of debt theory, which suggests that long-term bonds carry a lower cost of defaults than medium-term bonds.",
"corpus_id": 149318081,
"title": "Institutional ownership, board of directors and yield spreads in Long- and medium-term corporate bonds and sukuk"
} | {
"abstract": "In this paper, the relationship between ownership concentration and financial performance of companies in Singapore and Vietnam is investigated in a dynamic framework. By focusing on two different types of national governance systems (well-developed vs. under-developed), we observe how the relationship is moderated by the national governance quality. We find that the performance effect of concentrated ownership persists in these markets even after the dynamic nature of the ownership concentration–performance relationship is taken into consideration. Our finding supports the prediction of agency theory about the efficient monitoring effect of large shareholders in markets with highly concentrated ownership. In addition, we find that national governance quality does matter when explaining the ownership concentration–performance relationship. The positive effect of concentrated ownership on performance of firms operating in the under-developed national governance system (Vietnam) tends to be stronger than that in the well-established system (Singapore). This finding is consistent with the argument that ownership concentration is an efficient corporate governance mechanism which can substitute for weak national governance quality. Econometrically, our findings still hold even after controlling for dynamic endogeneity, simultaneity, and unobserved time-invariant heterogeneity, inherent in the corporate governance–performance relationship.",
"corpus_id": 152973953,
"title": "Ownership concentration and corporate performance from a dynamic perspective: Does national governance quality matter?"
} | {
"abstract": "New ventures lack resources, are buffeted by environmental factors, and often experience rapid growth and organizational transformations that can have profound effects on performance and survival. This indicates that factors at multiple levels and across time affect new venture outcomes. Research examining these outcomes often address relationships that cross levels or time, but rarely both. Because scholars potentially can make rich theoretical contributions by simultaneously investigating temporal relationships that cross levels, the authors illustrate multiyear, multilevel model building with random coefficient modeling (RCM) using language that is accessible to entrepreneurship scholars. Specifically, they model the effects of strategic growth actions on new venture performance using a longitudinal data set of young, IPO-stage firms. Their illustration demonstrates the statistical advantages of modeling levels and time simultaneously and offers a roadmap for entrepreneurship scholars interested in examining these effects, including a step-by-step guide with SAS code for working with these data. They also describe some specific research questions to help advance theory development using RCM.",
"corpus_id": 154488034,
"score": -1,
"title": "Modeling Levels and Time in Entrepreneurship Research"
} |
{
"abstract": "Radio observations suggest that 3C 75, located in the dumbbell shaped galaxy NGC 1128 at the center of Abell 400, hosts two colliding jets. Motivated by this source, we perform three-dimensional hydrodynamical simulations using a modified version of the GPU-accelerated Adaptive-MEsh-Refinement hydrodynamical parallel code (GAMER) to study colliding extragalactic jets. We find that colliding jets can be cast into two categories: (1) bouncing jets, in which case the jets bounce off each other keeping their identities, and (2) merging jets, when only one jet emerges from the collision. Under some conditions the interaction causes the jets to break up into oscillating filaments of opposite helicity, with consequences for their downstream stability. When one jet is significantly faster than the other and the impact parameter is small, the jets merge; the faster jet takes over the slower one. In the case of merging jets, the oscillations of the filaments, in projection, may show a feature that resembles a double helix, similar to the radio image of 3C 75. Thus we interpret the morphology of 3C 75 as a consequence of the collision of two jets with distinctly different speeds at a small impact parameter, with the faster jet breaking up into two oscillating filaments.",
"corpus_id": 56104535,
"title": "HYDRODYNAMICAL SIMULATIONS OF COLLIDING JETS: MODELING 3C 75"
} | {
"abstract": "Context. We report the first X-ray detection of a proto supermassive binary black hole at the centre of Abell 400. Using the Chandra Advanced CCD Imaging Spectrometer , we are able to clearly resolve the two active galactic nuclei in 3C 75, the well known double radio source at the centre of Abell 400. Aims. Through analysis of the new Chandra observation of Abell 400 along with 4.5 GHz and 329 MHz Very Large Array radio data, we will show new evidence that the active galactic nuclei in 3C 75 are a bound system. Methods. Using the high quality X-ray data, we map the temperature, pressure, density and entropy of the inner regions as well as the cluster profile properties out to ${\\sim}18\\arcmin$. We compare features in the X-ray and radio images to determine the interaction between the intracluster medium and extended radio emission. Results. The Chandra image shows an elongation of the cluster gas along the northeast-southwest axis; aligned with the initial bending of 3C 75's jets. Additionally, the temperature profile shows no cooling core, consistent with a merging system. There is an apparent shock to the south of the core consistent with a Mach number of ${\\cal M}\\sim1.4$ or speed of $v\\sim1200$ km s -1 . Both active galactic nuclei, at least in projection, are located in the low entropy, high density core just north of the shock region. We find that the projected path of the jets does not follow the intra-cluster medium surface brightness gradient as expected if their path were due to buoyancy. We also find that both central active galactic nuclei are extended and include a thermal component. Conclusions. Based on this analysis, we conclude that the active galactic nuclei in 3C 75 are a bound system from a previous merger. They are contained in a low entropy core moving through the intracluster medium at 1200 km s -1 . The bending of the jets is due to the local intracluster medium wind.",
"corpus_id": 672714,
"title": "X-ray detection of the proto supermassive binary black hole at the centre of Abell 400"
} | {
"abstract": "We model the electromagnetic signatures of massive black hole binaries (MBHBs) with an associated gas component. The method comprises numerical simulations of relativistic binaries and gas coupled with calculations of the physical properties of the emitting gas. We calculate the UV/X-ray and the Hα light curves and the Hα emission profiles. The simulations are carried out with a modified version of the parallel tree SPH code Gadget. The heating, cooling, and radiative processes are calculated for two different physical scenarios, where the gas is approximated as a blackbody or a solar metallicity gas. The calculation for the solar metallicity scenario is carried out with the photoionization code Cloudy. We focus on subparsec binaries that have not yet entered the gravitational radiation phase. The results from the first set of calculations, carried out for a coplanar binary and gas disk, suggest that there are pronounced outbursts in the X-ray light curve during pericentric passages. If such outbursts persist for a large fraction of the lifetime of the system, they can serve as an indicator of this type of binary. The predicted Hα emission line profiles may be used as a criterion for selection of MBHB candidates from existing archival data. The orbital period and mass ratio of a binary may be inferred after carefully monitoring the evolution of the Hα profiles of the candidates. The discovery of subparsec binaries is an important step in understanding of the merger rates of MBHBs and their evolution toward the detectable gravitational wave window.",
"corpus_id": 1571900,
"score": -1,
"title": "Modeling of Emission Signatures of Massive Black Hole Binaries. I. Methods"
} |
{
"abstract": "Myeloid differentiation (MD)-2 is linked to the cell surface as a Toll-like receptor (TLR) 4-bound protein though may also function as a soluble receptor to enable the lipopolysaccharide (LPS)-driven response. We recently demonstrated the importance of MD-2 either as a cell-associated or as a soluble receptor in the control of intestinal epithelial cell response toward LPS. High levels of circulating MD-2 were recently proposed as a risk factor for infectious/ inflammatory diseases as septic shock. We hypothesized that MD-2 might be present in sera from patients with inflammatory bowel disease and have pathogenic consequences. We analysed MD-2 activity in sera from patients with inflammatory bowel disease or from healthy subjects. We measured MD-2 activity as the capacity to mediate LPS-driven stimulation of intestinal epithelial cells (HT29). We found that sera from patients with inflammatory bowel disease, particularly Crohn’s disease, endowed HT29 cells with a markedly higher LPS-dependent stimulating capacity as compared to sera from healthy subjects. The effect of sera was specific for LPS activation and was reduced in the presence of anti-MD-2, and anti-TLR4 antibodies. We conclude that sera from patients with inflammatory bowel disease might contain increased MD-2. This might result in higher local availability of the protein leading to a loss of tolerance toward gut microbiota.",
"corpus_id": 13061485,
"title": "Sera from patients with Crohn’s disease break bacterial lipopolysaccharide tolerance of human intestinal epithelial cells via MD-2 activity"
} | {
"abstract": "They question whether we were able to empty the balloon between inflations. Because we measured the volume inserted into the balloon on each occasion and also the volume recovered we were able to ensure that accumulation of fluid in the balloon did not occur. They criticise the use of a fluid filled catheter to detect intraballoon pressure (standard practice by most other workers). We agree that different absolute values could be obtained using different catheters but find it difficult to see how the varying inter and intraindividual results we obtained could be the result of using different catheters. They also question whether our normal subject group, all ofwhom were symptom free and without past or current history of bowel symptoms, could have inadvertently included subjects with irritable bowel syndrome. If it is possible for an adult who feels healthy and who is not aware of any symptoms referable to the gastrointestinal tract to be suffering from irritable bowel syndrome then we have to admit guilt, but then anybody studying normal human physiology of the gut will commit the same mistake. Varma and Smith also make further points concerning irritable bowel syndrome, suggesting that there may be subgroups who could perform differently. We would not disagree with this possibility and indeed two of our patients with predominant diarrhoea (cases 4 and 24) had maximal tolerable volumes to distension which were at the bottom of the patient range. The point that we made in our paper, however, is that neither of these two patients fell outside the 95% confidence limits for our normal range and while there may indeed be a difference between irritable bowel syndrome patients who are at the extremes of symptoms, neither of these two extremes can be regarded as being 'abnormal.' They question the past history of the patients and their age: we also considered this and mentioned it in our discussion. It seems likely that failure to control for differences between age and sex of the two groups would have increased the differences between the groups rather than reduced them. They comment that rectal volumes tolerated by subjects would vary with the confidence and experience of the subjects. We wholeheartedly agree with this statement and indeed this was one of the aims of our study. Our results showed that variability progressively reduced in an individual as the study continued. This is discussed in detail in our paper. They go on to mention that individual variability would not be so great if other sensory or motor indices of rectal distension were used. The major difficulty identified and discussed in our paper is that the proctometrogram as traditionally employed relies on sensory end points, which are subject to individual interpretation rather than independent measurement. They comment that the tracings which we published of the pressure-volume relations seem to be consistent in each individual and indeed we would agree with this. Our point again, however, is that it is the sensory end points that are variable, even though the pressure-volume curve is consistent. The major problem in analysis of pressure-volume curves, however, is to adequately define in mathematical terms the pressure-volume curve itself. It is of doubtful use to talk about compliance (as do many workers in the field), since as is evident from all published data, the slope of the pressure-volume curve varies during inflation so that there are a number of compliancies rather than a single value. 
Sun et al also criticise our selection of patients and suggest that our sample was biased towards constipation. Thirteen of 26 patients were constipated, two were predominantly diarrhoea sufferers, five had both diarrhoea and constipation, and the rest were unaffected by major alterations in stool consistency. These patients therefore do not seem too different from those reported in other recent studies of irritable bowel syndrome such as that of Prior et al (Sun et al, reference 1) in which 27 of 55 irritable bowel patients were constipated. We would not argue with their suggestion that patients with diarrhoea predominant symptoms may differ from those with constipation predominant symptoms and also are aware that those individuals with diarrhoea in our study had the lowest tolerated volumes of the groups. We point out again, however, that none of these individuals fell outside the range of normality as defined by our volunteer data, so that while proctometrography may differentiate between irritable bowel syndrome patients with diarrhoea and constipation, it is difficult to see how the technique could be used to distinguish the irritable bowel syndrome from normal. From the interest which our article has stimulated it is evident that more data are required to understand the basic factors influencing pressure-volume relations in the rectum. We look forward to seeing more data from both correspondents to help clarify these difficulties. D G THOMPSON Department of Medicine, Section of Gastroenterology, Hope Hospital, Eccles Old Road, Manchester M6 8HD",
"corpus_id": 1458253,
"title": "Reply"
} | {
"abstract": "Almost 50 years ago ulcerative colitis was included among the seven classical psychosomatic diseases. The psychodynamics and personality structures specific to ulcerative colitis sufferers were sought and the main-stay of treatment was psychotherapy. However, for the past decade the psychogenic approach to this disorder has been replaced by physiological and immunological explanations and treatments. The history of medical and psychogenic explanations and treatments of ulcerative colitis has been traced to the present. Ulcerative colitis remains a \"riddle,\" as it was described almost 50 years ago, a complex disorder whose pattern is to flare up and subside, its cause and cure still unknown despite almost 100 years of study.",
"corpus_id": 5581437,
"score": -1,
"title": "Psychological factors in ulcerative colitis."
} |
{
"abstract": "This research is an attempt to assess the quality of the arrangement of the public organizations auditing in Benin, regarding the challenges in this field. Indeed, the frequent changes in this auditing arrangement raise questions about its quality. Our research is theoretically grounded in DeAngelo (1981) contribution that analyzed auditing quality in terms of auditor competences and auditor independence. Through a survey with auditors in public administrations (103 respondants), we collected then some data related to auditors comptences and independence. These data show that globally beninese auditors seem to be competent and independent. Beyond, based on these results, it is important now to research on some other factors that impact the quality of public organizations auditing.",
"corpus_id": 169174768,
"title": "Quality of public organization auditing arrangement and control in Benin [Qualité de l'audit comptable et financier et du contrôle des structures publiques du Bénin]"
} | {
"abstract": "This study investigates how experience-related differences in the content and structure of auditors' knowledge of financial statement errors can contribute to the effectiveness and efficiency of their audit decisions. Different audit decision tasks are performed by auditors with differing levels of training and experience. Such differential assignment is specifically mandated in the first standard of fieldwork (AU Section 210), and observations of practice (e.g., Haskins [1987]) indicate some consensus in assignment patterns. The appropriateness of these differential assignments has economic consequences because of the higher salaries paid to more experienced auditors. The existence of differential costs and presumed performance differences increases the importance of appropriately linking training, experience, and the promotion selection process. Nevertheless, researchers have yet to document both reliable experience-related performance differences and the ability' and knowledge differences2",
"corpus_id": 153330114,
"title": "Experience And The Ability To Explain Audit Findings"
} | {
"abstract": "We examine whether attention deficits underlie developmental dyslexia, or certain types of dyslexia, by presenting double dissociations between the two. We took into account the existence of distinct types of dyslexia and of attention deficits, and focused on dyslexias that may be thought to have an attentional basis: letter position dyslexia (LPD), in which letters migrate within words, attentional dyslexia (AD), in which letters migrate between words, neglect dyslexia, in which letters on one side of the word are omitted or substituted, and surface dyslexia, in which words are read via the sublexical route. We tested 110 children and adults with developmental dyslexia and/or attention deficits, using extensive batteries of reading and attention. For each participant, the existence of dyslexia and the dyslexia type were tested using reading tests that included stimuli sensitive to the various dyslexia types. Attention deficit and its type was established through attention tasks assessing sustained, selective, orienting, and executive attention functioning. Using this procedure, we identified 55 participants who showed a double dissociation between reading and attention: 28 had dyslexia with normal attention and 27 had attention deficits with normal reading. Importantly, each dyslexia with suspected attentional basis dissociated from attention: we found 21 individuals with LPD, 13 AD, 2 neglect dyslexia, and 12 surface dyslexia without attention deficits. Other dyslexia types (vowel dyslexia, phonological dyslexia, visual dyslexia) also dissociated from attention deficits. Examination of 55 additional individuals with both a specific dyslexia and a certain attention deficit found no attention function that was consistently linked with any dyslexia type. Specifically, LPD and AD dissociated from selective attention, neglect dyslexia dissociated from orienting, and surface dyslexia dissociated from sustained and executive attention. These results indicate that visuospatial attention deficits do not underlie these dyslexias.",
"corpus_id": 1202709,
"score": -1,
"title": "Dissociations between developmental dyslexias and attention deficits"
} |
{
"abstract": "In recent years, the use of electroporation process has attracted much attention, due to its application in various industrial and medical fields. Electroporation is a microbiology technique which creates tiny holes in the cell membrane by the applied electric field. The electroporation process needs high-voltage pulses to provide the required electric field. To generate high-voltage pulses, a pulse generator device must be used. High-voltage pulse generators can be mainly divided into two major groups: Classical pulse generators and power electronics-based pulse generators. As their name suggests, the first group is associated with the primary and elementary pulse generators like Marx generators, and the second group is associated with the pulse generators that have been updated with the advancement of power electronics like Modular Multilevel Converters. These two major groups are also divided into several subgroups which are reviewed in detail in this paper. This study reviews the literature presented in the field of pulse power and pulse generators proper for the electroporation process and addresses their strengths and weaknesses. Several tables are provided to highlight and discuss the characteristics of each subgroup. Finally, a comparative study among different groups of pulse generators is performed which is followed by a classification performance analysis.",
"corpus_id": 249912554,
"title": "High-Voltage Pulse Generators for Electroporation Applications: A Systematic Review"
} | {
"abstract": "High-voltage (HV) pulse generators (PGs) are the core of pulsed electric field applications. Applying HV pulses produces electrical pores in a biological cell membrane, in which if the size of the pores increases beyond a critical size, the cell will not survive. This paper proposes a new HV-PG based on the modular multilevel converter with full-bridge submodules (FB-SMs). In order to alleviate the need of complicated sensorless or sensor-based voltage balancing techniques for the FB-SM capacitors, a dedicated self-regulating charging circuit is connected across each FB-SM capacitor. The individual capacitor charging voltage level is obtained from three successive stages, namely, convert the low-voltage dc input voltage to a high-frequency square ac voltage, increase the ac voltage level via a nanocrystalline step-up transformer, and rectify the secondary transformer ac voltage via a diode FB rectifier. The HV bipolar pulses are formed across the load in a fourth stage through series connected FB-SMs. The flexibility of inserting and bypassing the FB-SM capacitors allows the proposed topology to generate different pulse-waveform shapes, including rectangular waveforms with specifically reduced ${dv/dt}$ and ramp pulses. The practical results, from a scaled-down experimental rig with five FB-SMs and a 1-kV peak-to-peak pulse output, validate the proposed topology.",
"corpus_id": 6173114,
"title": "Full-Bridge Modular Multilevel Submodule-Based High-Voltage Bipolar Pulse Generator With Low-Voltage DC, Input for Pulsed Electric Field Applications"
} | {
"abstract": "The synthesis of a novel divalent silicon compound by debromination of the corresponding dibromosilyl precursor is reported. The silylene possesses a unique reactivity toward electrophiles of the type R-X (R = H, silyl; X = halogen, triflate) in comparison with the germanium congener. DFT calculations suggest that this is due to a much higher basicity of the silylene versus that of germylene lone-pair electrons. Thus, addition of Me3SiX to the silylene (X = OSO2CF3, triflate) furnishes the corresponding (kinetically favored) 1,4-adduct which subsequently rearranges to the thermodynamically favored 1,1-adduct.",
"corpus_id": 30192980,
"score": -1,
"title": "A new type of N-heterocyclic silylene with ambivalent reactivity."
} |
{
"abstract": "Ischemic stroke (IS) is a serious cerebrovascular disease with high morbidity and disability worldwide. Despite the great efforts that have been made, the prognosis of patients with IS remains unsatisfactory. Notably, recent studies indicated that mesenchymal stem cell (MSCs) therapy is becoming a novel research hotspot with large potential in treating multiple human diseases including IS. The current article is aimed at reviewing the progress of MSC treatment on IS. The mechanism of MSCs in the treatment of IS involved with immune regulation, neuroprotection, angiogenesis, and neural circuit reconstruction. In addition, nutritional cytokines, mitochondria, and extracellular vesicles (EVs) may be the main mediators of the therapeutic effect of MSCs. Transplantation of MSCs-derived EVs (MSCs-EVs) affords a better neuroprotective against IS when compared with transplantation of MSCs alone. MSC therapy can prolong the treatment time window of ischemic stroke, and early administration within 7 days after stroke may be the best treatment opportunity. The deliver routine consists of intraventricular, intravascular, intranasal, and intraperitoneal. Furthermore, several methods such as hypoxic preconditioning and gene technology could increase the homing and survival ability of MSCs after transplantation. In addition, MSCs combined with some drugs or physical therapy measures also show better neurological improvement. These data supported the notion that MSC therapy might be a promising therapeutic strategy for IS. And the application of new technology will promote MSC therapy of IS.",
"corpus_id": 235721180,
"title": "Progress in Mesenchymal Stem Cell Therapy for Ischemic Stroke"
} | {
"abstract": "Transplantation of bone marrow stromal cells (BMSCs) is a promising therapy for ischemic stroke, but the poor oxygen environment in brain lesions limits the efficacy of cell-based therapies. Here, we tested whether hypoxic preconditioning (HP) could augment the efficacy of BMSC transplantation in a rat ischemic stroke model and investigated the underlying mechanism of the effect of HP. In vitro, BMSCs were divided into five passage (P0, P1, P2, P3, and P4) groups, and HP was applied to the groups by incubating the cells with 1% oxygen for 0, 4, 8, 12, and 24 h, respectively. We demonstrated that the expression of hypoxia-inducible factor-1α (HIF-1α) was increased in the HP-treated BMSCs, while their viability was unchanged. We also found that HP decreased the apoptosis of BMSCs during subsequent simulated ischemia–reperfusion (I/R) injury, especially in the 8-h HP group. In vivo, a rat transient focal cerebral ischemia model was established. These rats were administered normal cultured BMSCs (N-BMSCs), HP-treated BMSCs (H-BMSCs), or DMEM cell culture medium (control) at 24 h after the ischemic insult. Compared with the DMEM control group, the two BMSC-transplanted groups exhibited significantly improved functional recovery and reduced infarct volume, especially the H-BMSC group. Moreover, HP decreased neuronal apoptosis and enhanced the expression of BDNF and VEGF in the ischemic brain. Survival and differentiation of transplanted BMSCs were also increased by HP, and the quantity of engrafted BMSCs was significantly correlated with neurological function improvement. These results suggest that HP may enhance the therapeutic efficacy of BMSCs in an ischemic stroke model. The underlying mechanism likely involves the inhibition of caspase-3 activation and an increasing expression of HIF-1α, which promotes angiogenesis and neurogenesis and thereby reduces neuronal death and improves neurological function.",
"corpus_id": 433688,
"title": "Hypoxic Preconditioning Augments the Therapeutic Efficacy of Bone Marrow Stromal Cells in a Rat Ischemic Stroke Model"
} | {
"abstract": "Remodeling is an important long-term determinant of cardiac function throughout the progression of heart disease. Numerous biomolecular pathways for mechanosensing and transduction are involved. However, we hypothesize that biomechanical factors alone can explain changes in myocardial volume and chamber size in valve disease. A validated model of the human vasculature and the four cardiac chambers was used to simulate aortic stenosis, mitral regurgitation, and aortic regurgitation. Remodeling was simulated with adaptive feedback preserving myocardial fiber stress and wall shear stress in all four cardiac chambers. Briefly, the model used myocardial fiber stress to determine wall thickness and cardiac chamber wall shear stress to determine chamber volume. Aortic stenosis resulted in the development of concentric left ventricular hypertrophy. Aortic and mitral regurgitation resulted in eccentric remodeling and eccentric hypertrophy, with more pronounced hypertrophy for aortic regurgitation. Comparisons with published clinical data showed the same direction and similar magnitudes of changes in end-diastolic volume index and left ventricular diameters. Changes in myocardial wall volume and wall thickness were within a realistic range in both stenotic and regurgitant valvular disease. Simulations of remodeling in left-sided valvular disease support, in both a qualitative and quantitative manner, that left ventricular chamber size and hypertrophy are primarily determined by preservation of wall shear stress and myocardial fiber stress. NEW & NOTEWORTHY Cardiovascular simulations with adaptive feedback that normalizes wall shear stress and fiber stress in the cardiac chambers could predict, in a quantitative and qualitative manner, remodeling patterns seen in patients with left-sided valvular disease. This highlights how mechanical stress remains a fundamental aspect of cardiac remodeling. This in silico study validated with clinical data paves the way for future patient-specific predictions of remodeling in valvular disease.",
"corpus_id": 73443426,
"score": -1,
"title": "Cardiac remodeling in aortic and mitral valve disease: a simulation study with clinical validation."
} |
{
"abstract": "The American Recovery and Reinvestment Act of 2009 was an attempt to “jump-start the economy to create and save jobs” by inducing state spending on an enormous scale. 787 billion US dollars were allocated to the act, which included tax cuts and extension of benefits under Medicaid, but also major investment programs. Under the recovery act, 28 government agencies were each allocated a portion of the available funds, and then decided how to spend the money. Most of the money was awarded as grants, loans or contracts to state governments, which then distributed it further to specific projects. However, while the recovery act may have avoided an even deeper recession, it has largely failed to jump-start the American economy in the intended way. Could it be that the stimulus had less effect than it could have had, because of corruption? Research shows that corruption increases costs of public investment, and reduces the efficiency of public spending. In this paper, I attempt to gauge the effects of corruption on the stimulus package by comparing projects awarded grants in the 50 US states, using a two-level modeling strategy. First, for each state, the cost of a project is modelled as a function of the number of people employed in the project, which yields a job cost coefficient. The assumption is that a lower coefficient implies more efficient spending, since projects with the same amount of labor cost more when the coefficient is higher. Second, the job cost coefficient is modelled as a function of corruption in the state, controlling for other state-level factors. Corruption is measured as the number of convictions for corruption in the state 1976-2009 (Glaeser & Saks 2006). The empirical analysis shows that the job cost coefficient is higher in states where more public officials have been convicted for corruption, implying that corruption may have impaired the possible effect of the stimulus package. Anders Sundell Department of Political Science University of Gothenburg anders.sundell@pol.gu.se",
"corpus_id": 54171514,
"title": "The American Recovery and Reinvestment Act: less stimulating in corrupt states"
} | {
"abstract": "Why did corruption and cronyism impede growth in some Asian countries but not in others? Building upon theoretical advances in the fields of rent-seeking, transaction costs, and the new institutional economics, this study finds that if there is a balance of power among a small and stable number of government and business actors, cronyism can actually reduce transaction costs and minimize deadweight losses; while either too few or too many actors leads to bandwagoning politics that increases deadweight losses from corruption. By examining corruption and cronyism through the lens of transaction costs, and showing why a particular set of government-business relations- although corrupt - also lowered transaction costs and made investment more credible while another set of relations did not, this study provides the outlines of a story that can both explain one aspect of corruption and also yields a theoretically-grounded causal mechanism that lets us distinguish between types of corruption.",
"corpus_id": 152961252,
"title": "Transaction Costs and Crony Capitalism in East Asia"
} | {
"abstract": "In this paper we respond to calls for an institution-based perspective on strategy. With its emphasis upon mimetic, coercive, and normative isomorphism, institutional theory has earned a deterministic reputation and seems an unlikely foundation on which to construct a theory of strategy. However, a second movement in institutional theory is emerging that gives greater emphasis to creativity and agency. We develop this approach by highlighting co-evolutionary processes that are shaping the varieties of capitalism (VoC) in Asia. To do so, we examine the extent to which the VoC model can be fruitfully applied in the Asian context. In the spirit of the second movement of institutional theory, we describe three processes in which firm strategy collectively and intentionally feeds back to shape institutions: (1) filling institutional voids, (2) retarding institutional innovation, and (3) deploying institutional escape. We outline the key contributions contained in the articles of this Special Issue and discuss a research agenda generated by the VoC perspective.",
"corpus_id": 5085061,
"score": -1,
"title": "Varieties of Asian capitalism: Toward an institutional theory of Asian enterprise"
} |
{
"abstract": "In the last decades, the percutaneous interventional approach for the treatment of central venous obstructions (CVO) has become increasingly popular as the treatment of first choice because of its minimal invasiveness and reported success rates. CVOs are caused by a diverse spectrum of diseases which can be broadly categorized into two principal eliciting genera, either benign or malignant obstructions. The large group of benign venous obstructions includes the increasing number of end-stage renal disease patients with vascular access related complications. Due to the invasiveness and complexity of thoracic surgery for benign CVOs, the less invasive percutaneous interventional therapy can generally be considered the preferred treatment option. Initially, the radiological intervention consisted of balloon angioplasty alone, subsequently additional stent placement was applied. This was advocated as either primary placement or secondary in cases of elastic recoil or residual stenosis after percutaneous transluminal angioplasty (PTA). The efficacy of angioplasty of CVO in patients with vascular accesses, either with or without stenting, has been addressed by various studies. Overall, reports indicate an initial technical and clinical success rate above 95% and satisfactory patency rates. However, systematic follow-up and frequent re-interventions are necessary to maintain vascular patency to achieve long-term success.(J Vasc Access",
"corpus_id": 208063748,
"title": "Radiological Central Vein Treatment in Vascular Access"
} | {
"abstract": "CTA has become an important diagnostic tool in the evaluation of vascular diseases in virtually all parts of the body. Whereas CTA is able to provide images depicting exquisite anatomic detail, careful scanning technique and selection of scan parameters are critical for high quality studies. The choices to be made when prescribing a scan can seem daunting at first, but if one applies the principles outlined previously, CTA can be a relatively easy, fast, and safe diagnostic technique that is effective in the majority of patients with vascular disease.",
"corpus_id": 710756,
"title": "CT angiography of the arterial system."
} | {
"abstract": "We report a case of arteriovenous fistula and pseudoaneurysm formation following endopyelotomy. Presentation, successful management with interventional radiology techniques, and the relationship between variant renal artery anatomy and endopyelotomy are discussed.",
"corpus_id": 37137350,
"score": -1,
"title": "Arteriovenous fistula complicating endopyelotomy."
} |
{
"abstract": "Abstract Volatilome analysis is growing in attention for the diagnosis of diseases in animals and humans. In particular, volatilome analysis in fecal samples is starting to be proposed as a fast, easy and noninvasive method for disease diagnosis. Volatilome comprises volatile organic compounds (VOCs), which are produced during both physiological and patho-physiological processes. Thus, VOCs from a pathological condition often differ from those of a healthy state and therefore the VOCs profile can be used in the detection of some diseases. Due to their strengths and advantages, feces are currently being used to obtain information related to health status in animals. However, they are complex samples, that can present problems for some analytical techniques and require special consideration in their use and preparation before analysis. This situation demands an effort to clarify which analytic options are currently being used in the research context to analyze the possibilities these offer, with the final objectives of contributing to develop a standardized methodology and to exploit feces potential as a diagnostic matrix. The current work reviews the studies focused on the diagnosis of animal diseases through fecal volatilome in order to evaluate the analytical methods used and their advantages and limitations. The alternatives found in the literature for sampling, storage, sample pretreatment, measurement and data treatment have been summarized, considering all the steps involved in the analytical process.",
"corpus_id": 226853435,
"title": "Analytical Tools for Disease Diagnosis in Animals via Fecal Volatilome"
} | {
"abstract": "IntroductionDisturbance to the hindgut microbiota can be detrimental to equine health. Metabolomics provides a robust approach to studying the functional aspect of hindgut microorganisms. Sample preparation is an important step towards achieving optimal results in the later stages of analysis. The preparation of samples is unique depending on the technique employed and the sample matrix to be analysed. Gas chromatography mass spectrometry (GCMS) is one of the most widely used platforms for the study of metabolomics and until now an optimised method has not been developed for equine faeces.ObjectivesTo compare a sample preparation method for extracting volatile organic compounds (VOCs) from equine faeces.MethodsVolatile organic compounds were determined by headspace solid phase microextraction gas chromatography mass spectrometry (HS-SPME-GCMS). Factors investigated were the mass of equine faeces, type of SPME fibre coating, vial volume and storage conditions.ResultsThe resultant method was unique to those developed for other species. Aliquots of 1000 or 2000 mg in 10 ml or 20 ml SPME headspace were optimal. From those tested, the extraction of VOCs should ideally be performed using a divinylbenzene-carboxen-polydimethysiloxane (DVB-CAR-PDMS) SPME fibre. Storage of faeces for up to 12 months at − 80 °C shared a greater percentage of VOCs with a fresh sample than the equivalent stored at − 20 °C.ConclusionsAn optimised method for extracting VOCs from equine faeces using HS-SPME-GCMS has been developed and will act as a standard to enable comparisons between studies. This work has also highlighted storage conditions as an important factor to consider in experimental design for faecal metabolomics studies.",
"corpus_id": 3356342,
"title": "A comparison of sample preparation methods for extracting volatile organic compounds (VOCs) from equine faeces using HS-SPME"
} | {
"abstract": "To evaluate the evidence for the use of probiotics in the prevention of acute diarrhoea, we did a meta-analysis of the available data from 34 masked, randomised, placebo-controlled trials. Only one trial was community based and carried out in a developing country. Most of the remaining 33 studies were carried out in a developed country in a health-care setting. Evaluating the evidence by types of acute diarrhoea suggests that probiotics significantly reduced antibiotic-associated diarrhoea by 52% (95% CI 35-65%), reduced the risk of travellers' diarrhoea by 8% (-6 to 21%), and that of acute diarrhoea of diverse causes by 34% (8-53%). Probiotics reduced the associated risk of acute diarrhoea among children by 57% (35-71%), and by 26% (7-49%) among adults. The protective effect did not vary significantly among the probiotic strains Saccharomyces boulardii, Lactobacillus rhamnosus GG, Lactobacillus acidophilus, Lactobacillus bulgaricus, and other strains used alone or in combinations of two or more strains. Although there is some suggestion that probiotics may be efficacious in preventing acute diarrhoea, there is a lack of data from community-based trials and from developing countries evaluating the effect on acute diarrhoea unrelated to antibiotic usage. The effect on acute diarrhoea is dependent on the age of the host and genera of strain used.",
"corpus_id": 24339186,
"score": -1,
"title": "Efficacy of probiotics in prevention of acute diarrhoea: a meta-analysis of masked, randomised, placebo-controlled trials."
} |
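The probiotics meta-analysis above reports pooled effects as percentage risk reductions with confidence intervals. As a minimal, illustrative sketch of the arithmetic behind a single such estimate - not the paper's pooling method - the snippet below computes a relative risk and its 95% CI from one hypothetical 2x2 trial table (all counts are invented):

```python
import math

# Hypothetical 2x2 trial: events / totals in each arm.
# These counts are invented for illustration, not taken from the meta-analysis.
events_tx, n_tx = 12, 100      # diarrhoea cases in the probiotic arm
events_ctl, n_ctl = 25, 100    # diarrhoea cases in the placebo arm

rr = (events_tx / n_tx) / (events_ctl / n_ctl)

# Standard error of log(RR) for a single 2x2 table.
se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

# Risk reduction as quoted in the abstract: (1 - RR) * 100%.
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
print(f"risk reduction = {(1 - rr) * 100:.0f}%")
```

Pooling across trials would additionally weight each log-RR by its inverse variance; the sketch stops at the single-trial step.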
{
"abstract": "In this paper we compare different matrix-based numerical methods computing the Greatest Common Divisor (GCD) of several polynomials. More particularly we compare numerically and symbolically resultant type, ERES and MP methods in respect of their complexity and effectiveness. The combination of numerical and symbolic operations suggests a new approach in software mathematical computations denoted as hybrid computations. For some of the above methods their hybrid nature is presented. Finally the notion of approximate GCD is described and a useful criterion estimating the strength of approximation of a computed GCD is also developed.",
"corpus_id": 44453534,
"title": "Numerical and symbolical comparison of resultant type, ERES and MP methods computing the Greatest Common Divisor of several polynomials"
} | {
"abstract": "The computation of the Singular Value Decomposition (SVD) of structured matrices has become an important line of research in numerical linear algebra. In this work the problem of inversion in the context of the computation of curve intersections is considered. Although this problem has usually been dealt with in the field of exact rational computations and in that case it can be solved by using Gaussian elimination, when one has to work in finite precision arithmetic the problem leads to the computation of the SVD of a Sylvester matrix, a different type of structured matrix widely used in computer algebra. In addition only a small part of the SVD is needed, which shows the interest of having special algorithms for this situation. 1. Introduction and basic results. The singular value decomposition (SVD) is one of the most valuable tools in numerical linear algebra. Nevertheless, in spite of its advantageous features it has not usually been applied in solving curve intersection problems. Our aim in this paper is to consider the problem of inversion in the context of the computation of the points of intersection of two plane curves, and to show how this problem can be solved by computing the SVD of a Sylvester matrix, a type of structured matrix widely used in computer algebra. It must also be stressed that only a small part of the SVD will be needed. Our approach to the intersection of parametric curves uses algebraic methods, and an important step of the process consists of the implicitization of one of the curves by using resultants. The process leads to the computation of the roots of a polynomial of (usually) high degree, and so these roots will usually be floating point numbers. Therefore, although we employ algebraic methods the use of floating point arithmetic is unavoidable, contrarily to the idea expressed in the last section of (20), where it is indicated that the algebraic methods assume the procedure is carried out using exact (integer or rational number) arithmetic. In fact, it is clear that the solution of practical problems of not small size needs the combination of symbolic computations and numerical computations, although historically those two classes of algorithms were developed by two distinct groups of people having very little interaction with each other. In connection with the problem we are dealing with, it can be read in (20) that inversion -computing the parameter for a point known to lie on the curve- can be performed using Gauss elimination or Cramer's rule. While that assert may be correct when working with exact arithmetic and problems of small size, the suggested procedure is far from being adequate with problems of large size which lead to polynomial roots given as floating point numbers. As it will be seen in Section , the use of algebraic methods will allow us to formulate the problem of inversion in terms of the computation of the nullspace of a Sylvester matrix of",
"corpus_id": 1131848,
"title": "A NEW SOURCE OF STRUCTURED SINGULAR VALUE DECOMPOSITION PROBLEMS"
} | {
"abstract": "This paper provides a brief review of the state-of-the-art of neural networks in off-line text recognition. We discuss the role that neural networks have played in text recognition. We also assess the state of the art of neural networks in character and word recognition. Despite the success of neural networks in character and word recognition, there are still many challenging problems.",
"corpus_id": 7632195,
"score": -1,
"title": "Neural networks in off-line text recognition: a review"
} |
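Both abstracts in this row reduce polynomial GCD questions to structured-matrix linear algebra. As a hedged sketch of that connection - not the ERES or MP algorithms themselves - the snippet below builds the Sylvester matrix of two polynomials with numpy and reads the degree of their GCD off its numerical rank, via deg gcd(p, q) = deg p + deg q − rank S(p, q):

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of polynomials p, q given as coefficient
    lists in descending order of powers."""
    n, m = len(p) - 1, len(q) - 1
    s = np.zeros((n + m, n + m))
    for i in range(m):                 # m shifted copies of p
        s[i, i:i + n + 1] = p
    for i in range(n):                 # n shifted copies of q
        s[m + i, i:i + m + 1] = q
    return s

# p = (x-1)(x-2) = x^2 - 3x + 2,  q = (x-1)(x-3) = x^2 - 4x + 3
p = [1.0, -3.0, 2.0]
q = [1.0, -4.0, 3.0]
S = sylvester(p, q)

# deg gcd(p, q) = n + m - rank(S); computing a *numerical* rank via the
# SVD is what connects this to the approximate-GCD notion in the abstract.
rank = np.linalg.matrix_rank(S, tol=1e-8)
print("deg gcd =", S.shape[0] - rank)   # -> 1 (shared root x = 1)
```

Loosening `tol` turns the exact rank test into an approximate-GCD criterion for perturbed coefficients, in the spirit of the first abstract's "strength of approximation" discussion.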
{
"abstract": "From its beginning, transmit power has always placed a significant constraint on the performance of wireless radio systems. The transmit power control problem can be characterized as that of maintaining adequate power in each transmitted waveform so as to increase the expectation that the minimum required SIR at the receiver will at least be reached. This has been shown not to be a trivial endeavor due to the variability of the physical channel with time as well as interference and other practical constraints on “infinitely” increasing transmit power. Several power control algorithms have been proposed, of which the class of distributed and autonomous transmit power control algorithms have been shown in literature to perform quite satisfactorily when compared to centralized schemes due to the moderate complexity that is achievable; and the vast control and signaling overhead that is saved. This thesis work explores the application of fuzzy control to the subject of modeling uplink transmit power control in code division multiple access system. A possible implementation scenario of an SIR-based fully distributed constrained transmit power control algorithm in a multiservice network by applying fuzzy proportional-plus-integral control with a two-input (error and error change) and one-output (transmit power adjustment command) fuzzy rule base and inference engine is proposed.",
"corpus_id": 107246755,
"title": "Fuzzy Modeling of Uplink Transmit Power Control in a CDMA Network"
} | {
"abstract": "Efficiently sharing the spectrum resource is of paramount importance in wireless communication systems, in particular in Personal Communications where large numbers of wireless subscribers are to be served. Spectrum resource sharing involves protecting other users from excessive interference as well as making receivers more tolerant to this interference. Transmitter power control techniques fall into the first category. In this paper we describe the power control problem, discuss its major factors, objective criteria, measurable information and algorithm requirements. We attempt to put the problem in a general framework and propose an evolving knowledge‐bank to share, study and compare between algorithms.",
"corpus_id": 492876,
"title": "Toward a framework for power control in cellular systems"
} | {
"abstract": "In the original paper, it was conjectured that the system performance in the up-link and down-link in a cellular radio system under optimum transmitter power control should be statistically similar. In this comment we show that the achievable signal-to-interference ratios in the up- and down-links are, in fact, identical at every instant. >",
"corpus_id": 60741635,
"score": -1,
"title": "Comment on \"Performance of optimum transmitter power control in cellular radio systems\""
} |
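The thesis abstract describes SIR-based, fully distributed power control, where each link adjusts its own transmit power from a local SIR measurement. The sketch below implements the classical distributed SIR-balancing update (the Foschini-Miljanic iteration), not the fuzzy PI controller the thesis proposes; the link gains, noise level and SIR target are invented:

```python
import numpy as np

# Link gains G[i, j]: gain from transmitter j to receiver i (invented values).
G = np.array([[1.0, 0.2, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])
noise = 0.01
gamma_target = 2.0            # required SIR on every link
p = np.ones(3)                # initial transmit powers

def sir(p):
    signal = np.diag(G) * p
    interference = G @ p - signal + noise
    return signal / interference

# Distributed SIR-balancing update (Foschini-Miljanic): each link scales
# its own power by target/achieved SIR, using only a local measurement.
for _ in range(50):
    p = gamma_target / sir(p) * p

print("powers:", p.round(3), "SIR:", sir(p).round(3))
```

When the target is feasible (spectral radius condition on the normalized interference matrix), the iteration converges geometrically to the minimal power vector meeting the SIR target, which is the baseline the fuzzy PI scheme refines.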
{
"abstract": "ABSTRACT Water resources have fundamental importance both for living beings life and for human activities. The lack of water can be a limiting factor for occupation and development of a certain region. Photovoltaic pumping systems are an alternative solution for remote locations, with electric generation from photovoltaic panels being a locally pollutant-free option and less maintenance when compared to combustion-generation engines. This work aims the analysis of a photovoltaic pumping system with two cell technologies, amorphous and polycrystalline silicon. The research shown an average volume pumped of 2,255.44 and 2,397.01 litres per day in the amorphous and polycrystalline systems, respectively. The best efficiency results in both systems were at 0.8 kWhm−2, 4.69% for amorphous and 7.75% polycrystalline. However, the first one presented lower voltage drop with panel’s temperature increasing. It obtained higher flow and overall efficiency in the polycrystalline system.",
"corpus_id": 226340492,
"title": "Stand–alone water pumping system powered by amorphous and polycrystalline photovoltaic panels in Paraná - Brazil"
} | {
"abstract": "Abstract Excess heat on photovoltaic panels can limit their efficiency owning to an increase in internal resistance and consequent energy losses. This work aims to evaluate the effect of a water-cooling sprinkler system underneath polycrystalline photovoltaic modules. Voltage, current, power, and efficiency were monitored to quantify system performance. Three periods of analysis were defined. The first had intermittent cooling, the second had no cooling and the third had continuous cooling in the hottest hours of the day. The analysis occurred at two levels of irradiation – high and low. The use of the cooling system at a high level of irradiation resulted in a 12.26% relative increase in power and a 12.17% relative increase in efficiency. At a low irradiation level, the relative power and efficiency increased by 8.48% and 9.09%, respectively. The cooling system had a significant effect on the power and efficiency of the studied photovoltaic panel.",
"corpus_id": 158965671,
"title": "Performance and effect of water-cooling on a microgeneration system of photovoltaic solar energy in Paraná, Brazil"
} | {
"abstract": "The rapidly increasing demand for silicon panels has resulted in severe pollution from its manufacture in the underdeveloped regions of China, where environmental control usually concedes to economic development. The study aimed to estimate the external costs of silicon production to inform policy on the sustainable exploitation of solar energy. Specifically, the study estimated the cost of health damage and agricultural loss resulting from the pollution of silicon production. The data was collected from a silicon industrial zone in Southwest China. The cost of agricultural loss was estimated using a production function method, while the cost of health damage was estimated using dose-response functions, in which the air pollutant concentration was estimated based on a Gaussian dispersion model. The results show that the annual external cost of agricultural loss is CNY 0.76–4.13 million and that of health damage is CNY 26.53–44.63 million, or a total external cost is CNY 27.29–48.76 million. The average external cost is CNY 447.35 (CNY 321.1–473.82) per ton of metallurgical silicon. This information has implication for making policies on the control of pollution and the sustainable development of solar energy industry.",
"corpus_id": 157980953,
"score": -1,
"title": "External cost of photovoltaic oriented silicon production: A case in China"
} |
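The pumping-system abstract reports overall efficiencies at a given irradiance. A minimal worked computation of that figure of merit - hydraulic output power over incident solar power, η = ρgQH/(G·A) - with an invented operating point loosely in the range of the reported daily volumes:

```python
RHO_WATER = 1000.0   # kg/m^3
G_ACCEL = 9.81       # m/s^2

def pumping_efficiency(flow_m3_s, head_m, irradiance_w_m2, panel_area_m2):
    """Overall efficiency: hydraulic output power / incident solar power."""
    hydraulic_w = RHO_WATER * G_ACCEL * flow_m3_s * head_m
    incident_w = irradiance_w_m2 * panel_area_m2
    return hydraulic_w / incident_w

# Invented operating point: ~2,300 L pumped over 8 h, 30 m head,
# 800 W/m^2 irradiance on a 1 m^2 panel (not the paper's setup).
flow = 2.3 / (8 * 3600)     # m^3/s
print(f"efficiency = {pumping_efficiency(flow, 30.0, 800.0, 1.0):.2%}")
```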
{
"abstract": "In order to reach Europe’s 2020 and 2050 targets on greenhouse gas emissions, geothermal resources have to contribute substantially to carbon-free energy needs. Deep geothermal developments, however, are often accompanied by induced seismicity due to stimulation. The induced seismicity can be a threat for the development of future large scale application of deep geothermal power plants. Therefore, understanding the physical processes at the origin of the seismicity induced by forced fluid circulation in geothermal fields is essential, and this paper reviews the current knowledge in connection with field cases. The driving force of a seismic event is a change of the stress state in the crust. To asses this quantitatively, one needs to know the initial stress state, the spatio-temporal stress changes, the failure criterion, and the rupture dynamics that describes how a seismic event is produced. Several existing geomechanical-numerical models are, in theory, capable of predicting the spatio-temporal changes of the stress state and few of their effects on induced seismicity. They consider coupling between geomechanical, fluid flow, and heat transport processes, with different levels of complexity. The characteristics of the recorded seismicity induced in geothermal fields play a major role to calibrate and to assess the models. The Soultz-sous-Forets enhanced geothermal system in France, where thousands of seismic events were induced during stimulations, is an example representative for fields developed in deep crystalline rocks. In such formations, induced seismicity mainly occurs on a network of pre-existing faults and fractures oriented in accordance with the stress field, and shearing on these structures is apparently the dominating failure mechanism. Several physics-based models have been tested on this well-documented field to reproduce the observations. By contrast, the hydraulic fracturing operations carried out in Gros Schonebeck (Germany) geothermal reservoir, which is located in sedimentary formations at a depth similar to Soultz-sous-Forets, induced very few and weak seismic events. This behavior is consistent with a less seismogenic tensile fracture opening as being the dominant failure mechanism. Interestingly, the few recorded and located seismic events at this site likely occurred on a pre-existing fault. To characterize a geothermal field with regards to the expected induced seismicity, the following key factors are proposed: natural seismicity at a field scale, stress field, structural fracture/fault characterization, rock type, history of pressure and injection/circulation rates, and past induced seismicity. To mitigate induced seismicity during major hydraulic stimulations, and to prevent large-magnitude event occurrence once injection stopped, early-warning systems and decision support systems are required. To feed these systems, we advocate the application of hybrid methods and the development of fast models. Hybrid methods would combine the best a priori knowledge of the expected behavior of the field underground, inherited from geomechanical-numerical modeling, with a statistical approach based on the real-time observation of the induced seismicity. Fast models would capture the essential physics while minimizing computing time. The quantitative understanding of induced seismicity, however, remains a challenging and complex matter. 
Only an integration of all current research and development efforts, in the fields of modeling, measuring, monitoring, and matching, will make a chance on success.",
"corpus_id": 56575701,
"title": "Induced Seismicity in Geothermal Reservoirs : Physical Processes and Key Parameters"
} | {
"abstract": "We describe a new numerical approach to constrain the three‐dimensional (3‐D) pattern of fault reactivation. Taking advantage of the knowledge of the tectonic stress field, the ratio of the resolved shear and normal stresses (slip tendency) as well as the direction of the shear stress is calculated at every location on the faults modelled by triangulated surfaces. Although the calculated contact stresses represent only a first order approximation of the real stresses, comparison of the 3‐D pattern of slip tendency with the frictional resistance of the fault can provide useful constraints on the probability of fault reactivation. The method was applied to 3‐D geometrical fault models in the Roer Valley Rift System (southeast Netherlands) which is presently characterized by pronounced tectonic activity. The input stress tensors were constrained by published stress indicators. The analysis demonstrated that the observed fault activity could be explained within a reasonable range of frictional parameters and input stress magnitudes. In addition a fairly good correlation was found between the predicted slip directions and the focal mechanisms of local earthquakes. This suggests that in the study area, fault models being valid in the uppermost part of the crust are suitable to constrain fault reactivation even in the deeper part of the seismogenic layer. The analysis further demonstrated that fault hierarchy and the regional tectonic contexts of the fault system are important factors in fault reactivation. Therefore they always should be taken into account during evaluation of the calculated slip tendency and slip direction patterns.",
"corpus_id": 10310882,
"title": "Slip tendency analysis as a tool to constrain fault reactivation: A numerical approach applied to three‐dimensional fault models in the Roer Valley rift system (southeast Netherlands)"
} | {
"abstract": "In this article, the many definitions of research by design are used to build a coherent model for a research by design process. Three phases are identified, each with their own characteristics and types of activities: the pre-design, the design and the post-design phase. In combination with several practical examples of design-led research projects and design studios, these phases are adhered to practical activities and outcomes. Using all this information, the article concludes with proposing a renewed definition of research by design.",
"corpus_id": 15384815,
"score": -1,
"title": "Research by Design: Proposition for a Methodological Approach"
} |
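The slip-tendency abstract defines its central quantity as the ratio of resolved shear to normal stress on a fault surface, compared against a frictional threshold. A small numpy sketch of that calculation under an assumed, invented normal-faulting stress tensor:

```python
import numpy as np

def slip_tendency(stress, normal):
    """Slip tendency Ts = resolved shear stress / resolved normal stress.

    stress: 3x3 stress tensor (compression positive, principal axes here),
    normal: unit normal of the fault plane.
    """
    n = normal / np.linalg.norm(normal)
    traction = stress @ n
    sigma_n = traction @ n                        # normal stress on the plane
    tau = np.linalg.norm(traction - sigma_n * n)  # resolved shear stress
    return tau / sigma_n

# Invented principal stresses (MPa), normal-faulting regime:
# vertical sigma_v = 60, horizontal sigma_H = 45 (x), sigma_h = 30 (y).
stress = np.diag([45.0, 30.0, 60.0])

# Plane striking along x and dipping 60 degrees: its normal is tilted
# 60 degrees from vertical, toward the dip direction (+y).
dip = np.radians(60.0)
normal = np.array([0.0, np.sin(dip), np.cos(dip)])

ts = slip_tendency(stress, normal)
print(f"slip tendency Ts = {ts:.2f}")   # ~0.35; compare against mu ~ 0.6
```

Comparing Ts against an assumed friction coefficient (e.g. mu around 0.6) is the reactivation test the paper applies across whole triangulated fault surfaces.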
{
"abstract": "Program errors are hard to detect and are costly both to programmers who spend significant efforts in debugging, and for systems that are guarded by runtime checks. Static verification techniques have been applied to imperative and object-oriented languages, like Java and C#, but few have been applied to a higher-order lazy functional language, like Haskell. In this paper, we describe a sound and automatic static verification framework for Haskell, that is based on contracts and symbolic execution. Our approach is modular and gives precise blame assignments at compile-time in the presence of higher-order functions and laziness.",
"corpus_id": 6223557,
"title": "Static contract checking for Haskell"
} | {
"abstract": "Assertion-based contracts provide a powerful mechanism for stating invariants at module boundaries and for enforcing them uniformly. In 2002, Findler and Felleisen showed how to add contracts to higher-order functional languages, allowing programmers to assert invariants about functions as values. Following up in 2004, Blume and McAllester provided a quotient model for contracts. Roughly speaking, their model equates a contract with the set of values that cannot violate the contract. Their studies raised interesting questions about the nature of contracts and, in particular, the nature of the any contract. \n \nIn this paper, we develop a model for software contracts that follows Dana Scott's program by interpreting contracts as projections. The model has already improved our implementation of contracts. We also demonstrate how it increases our understanding of contract-oriented programming and design. In particular, our work provides a definitive answer to the questions raised by Blume and McAllester's work. The key insight from our model that resolves those questions is that a contract that puts no obligation on either party is not the same as the most permissive contract for just one of the parties.",
"corpus_id": 407896,
"title": "Contracts as Pairs of Projections"
} | {
"abstract": "A formalization of the sequential, parallel, and continuous rewriting based on a uniform underlying concept of selective substitution grammars is presented. Each of the rewriting modes is formalized through a universal derivation restriction characterizing the rewriting in question. It is shown that whichever of the three rewriting modes is formalized, the resulting grammars generate precisely the family of context sensitive languages. Moreover, when erasing productions are allowed, these grammars generate all recursively enumerable languages.",
"corpus_id": 123335730,
"score": -1,
"title": "A formalization of sequential, parallel, and continuous rewriting"
} |
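The two contract papers concern higher-order contracts with blame assignment. As a loose, dynamically checked analogue in Python - emphatically not the static Haskell framework of the first abstract - a minimal pre/postcondition decorator that reports which side of the boundary to blame:

```python
from functools import wraps

def contract(pre=lambda *args: True, post=lambda result: True):
    """Attach a precondition on the arguments and a postcondition on the
    result. Violations name the party to blame, loosely echoing the blame
    assignment the abstracts discuss (checked dynamically, not statically)."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args):
            if not pre(*args):
                raise AssertionError(f"blame the caller of {fn.__name__}: precondition")
            result = fn(*args)
            if not post(result):
                raise AssertionError(f"blame {fn.__name__}: postcondition")
            return result
        return wrapper
    return decorate

@contract(pre=lambda xs: len(xs) > 0, post=lambda r: r >= 0)
def head_abs(xs):
    """Absolute value of the first element; requires a non-empty list."""
    return abs(xs[0])

print(head_abs([-3, 1]))       # 3
try:
    head_abs([])               # precondition fails -> the caller is blamed
except AssertionError as err:
    print(err)
```

The point of the papers is to discharge such checks at compile time and to give contracts a semantics (projections); the runtime decorator only illustrates the blame discipline.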
{
"abstract": "Ontology learning refers to an automatic extraction of ontology to produce the ontology learning layer cake which consists of five kinds of output: terms, concepts, taxonomy relations, non-taxonomy relations and axioms. Term extraction is a prerequisite for all aspects of ontology learning. It is the automatic mining of complete terms from the input document. Another important part of ontology is taxonomy, or the hierarchy of concepts. It presents a tree view of the ontology and shows the inheritance between subconcepts and superconcepts. In this research, two methods were proposed for improving the performance of the extraction result. The first method uses particle swarm optimization in order to optimize the weights of features. The advantage of particle swarm optimization is that it can calculate and adjust the weight of each feature according to the appropriate value, and here it is used to improve the performance of term and taxonomy extraction. The second method uses a hybrid technique that uses multi-objective particle swarm optimization and fuzzy systems that ensures that the membership functions and fuzzy system rule sets are optimized. The advantage of using a fuzzy system is that the imprecise and uncertain values of feature weights can be tolerated during the extraction process. This method is used to improve the performance of taxonomy extraction. In the term extraction experiment, five extracted features were used for each term from the document. These features were represented by feature vectors consisting of domain relevance, domain consensus, term cohesion, first occurrence and length of noun phrase. For taxonomy extraction, matching Hearst lexico-syntactic patterns in documents and the web, and hypernym information form WordNet were used as the features that represent each pair of terms from the texts. These two proposed methods are evaluated using a dataset that contains documents about tourism. For term extraction, the proposed method is compared with benchmark algorithms such as Term Frequency Inverse Document Frequency, Weirdness, Glossary Extraction and Term Extractor, using the precision performance evaluation measurement. For taxonomy extraction, the proposed methods are compared with benchmark methods of Feature-based and weighting by Support Vector Machine using the f-measure, precision and recall performance evaluation measurements. For the first method, the experiment results concluded that implementing particle swarm optimization in order to optimize the feature weights in terms and taxonomy extraction leads to improved accuracy of extraction result compared to the benchmark algorithms. For the second method, the results concluded that the hybrid technique that uses multi-objective particle swarm optimization and fuzzy systems leads to improved performance of taxonomy extraction results when compared to the benchmark methods, while adjusting the fuzzy membership function and keeping the number of fuzzy rules to a minimum number with a high degree of accuracy.",
"corpus_id": 86741843,
"title": "Hybrid fuzzy multi-objective particle swarm optimization for taxonomy extraction"
} | {
"abstract": "Classification problems often have a large number of features in the data sets, but not all of them are useful for classification. Irrelevant and redundant features may even reduce the performance. Feature selection aims to choose a small number of relevant features to achieve similar or even better classification performance than using all features. It has two main conflicting objectives of maximizing the classification performance and minimizing the number of features. However, most existing feature selection algorithms treat the task as a single objective problem. This paper presents the first study on multi-objective particle swarm optimization (PSO) for feature selection. The task is to generate a Pareto front of nondominated solutions (feature subsets). We investigate two PSO-based multi-objective feature selection algorithms. The first algorithm introduces the idea of nondominated sorting into PSO to address feature selection problems. The second algorithm applies the ideas of crowding, mutation, and dominance to PSO to search for the Pareto front solutions. The two multi-objective algorithms are compared with two conventional feature selection methods, a single objective feature selection method, a two-stage feature selection algorithm, and three well-known evolutionary multi-objective algorithms on 12 benchmark data sets. The experimental results show that the two PSO-based multi-objective algorithms can automatically evolve a set of nondominated solutions. The first algorithm outperforms the two conventional methods, the single objective method, and the two-stage algorithm. It achieves comparable results with the existing three well-known multi-objective algorithms in most cases. The second algorithm achieves better results than the first algorithm and all other methods mentioned previously.",
"corpus_id": 529726,
"title": "Particle Swarm Optimisation for Feature Selection in Classification : A Multi-Objective Approach"
} | {
"abstract": "Abstract Support vector machines (SVMs) have shown great potential for learning classification functions that can be applied to object recognition. In this work, we extend SVMs to model the appearance of human faces which undergo non-linear change across multiple views. The approach uses inherent factors in the nature of the input images and the SVM classification algorithm to perform both multi-view face detection and pose estimation.",
"corpus_id": 19020788,
"score": -1,
"title": "Composite support vector machines for detection of faces across views and pose estimation"
} |
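Both optimisation abstracts build on particle swarm optimisation. The sketch below is a minimal continuous, global-best PSO loop minimising a toy sphere function with commonly used inertia and acceleration constants - not the taxonomy-extraction or multi-objective feature-selection variants the papers develop:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                 # toy objective to minimise
    return float(np.sum(x ** 2))

dim, n_particles, iters = 5, 20, 100
w, c1, c2 = 0.72, 1.49, 1.49   # inertia / cognitive / social constants

pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    # Velocity update: inertia + pull toward personal best + global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("best value:", pbest_val.min())   # approaches 0
```

The papers replace the single objective with multiple ones (Pareto fronts of feature subsets) and use the swarm to tune feature weights or fuzzy memberships, but the update rule above is the common core.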
{
"abstract": "Collaborative filtering (CF) is a successful technology for building recommender systems. Unfortunately, it suffers from three limitations - sparsity, scalability and cold start problem. To address these problems, a recommendation algorithm combining user grade-based collaborative filtering and probabilistic relational models (UGCF-PRM) is presented. UGCF-PRM integrates user information, item information and user-item rating data, and uses an adaptive recommendation strategy for each user. In UGCF-PRM a user grade function is defined and a collaborative filtering based on this function is used, which can find neighbors for the target user efficiently. Because of the first-order character of probabilistic relational models, UGCF-PRM can solve the cold start problem. The experiment results on the MovieLens data set show that UGCF-PRM performs better than a pure CF approach in both recommendation quality and recommendation efficiency.",
"corpus_id": 34858467,
"title": "A Recommendation Algorithm Combining User Grade-Based Collaborative Filtering and Probabilistic Relational Models"
} | {
"abstract": "With the development of e-commerce and the proliferation of easily accessible information, recommender systems have become a popular technique to prune large information spaces so that users are directed toward those items that best meet their needs and preferences. A variety of techniques have been proposed for performing recommendations, including content-based and collaborative techniques. Content-based filtering selects information based on semantic content, whereas collaborative filtering combines the opinions of other users to make a prediction for a target user. In this paper, we describe a new filtering approach that combines the content-based filter and collaborative filter to capitalize on their respective strengths, and thereby achieves a good performance. We present a series of recommendations on the selection of the appropriate factors and also look into different techniques for calculating user-user similarities based on the integrated information extracted from user profiles and user ratings. Finally, we experimentally evaluate our approach and compare it with classic filters, the result of which demonstrate the effectiveness of our approach.",
"corpus_id": 823273,
"title": "A new approach for combining content-based and collaborative filters"
} | {
"abstract": "Abstract The effect of Si doping on defect density in AlN layers grown on sapphire was analysed. Si concentration in the range of 10 19 cm −3 leads to dislocation line inclination in AlN layers with a threading dislocation density of 3×10 10 cm −2 . Overgrowth of Si doped AlN layers by non-intentionally doped AlN results in a reduction of threading dislocation density by a factor of two. In contrast, an increase of the Si concentration to an order of 10 20 cm −3 leads to a structural degradation of the AlN layers. The degradation process takes place through transformation to columnar-like growth. In a second experiment the AlN/AlN:Si/AlN layers with a decreased defect density were trench-patterned and used for subsequent epitaxial lateral overgrowth. In comparison to the epitaxial lateral overgrowth of non-intentionally doped AlN templates, the use of the AlN templates containing an AlN:Si interlayer allows to reduce the threading dislocation density in the defect-rich regions above the ridges in 6 µm thick epitaxial laterally overgrown AlN by a factor of 2.5.",
"corpus_id": 100438079,
"score": -1,
"title": "Silicon induced defect reduction in AlN template layers for epitaxial lateral overgrowth"
} |
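Both recommender abstracts extend plain collaborative filtering. As a baseline sketch of the technique they build on - not UGCF-PRM or the hybrid content/collaborative filter themselves - the snippet below performs user-based CF with cosine similarity over co-rated items on an invented toy rating matrix:

```python
import numpy as np

# Toy user-item rating matrix (0 = unrated); all values invented.
R = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 2, 4, 5],
], dtype=float)

def cosine_sim(a, b):
    """Cosine similarity restricted to items both users have rated."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask]
                 / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

def predict(user, item, k=2):
    """Similarity-weighted mean of the k most similar raters of the item."""
    sims = np.array([
        cosine_sim(R[user], R[v]) if v != user and R[v, item] > 0 else 0.0
        for v in range(len(R))
    ])
    top = sims.argsort()[::-1][:k]
    if sims[top].sum() == 0.0:
        return float(R[R[:, item] > 0, item].mean())   # fall back to item mean
    return float(sims[top] @ R[top, item] / sims[top].sum())

print(f"predicted rating of user 0 on item 2: {predict(0, 2):.2f}")
```

The sparsity and cold-start limitations named in the first abstract are visible even here: a user with no co-rated items gets similarity 0 and falls back to the item mean.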
{
"abstract": "This paper reviews the effectiveness of unconventional monetary policies and their relevance for emerging markets. Such policies may be useful either when interbank rates fall to zero, or when a credit crunch or rise in risk premium impairs the normal transmission mechanism of monetary policy. Unconventional monetary policy measures encompass three broad categories: (i) commitment effect, i.e., verbal commitments to maintain very low interest rates for a certain period, either conditionally or unconditionally; (ii) quantitative easing, i.e., targeting the level of current account balances of the central bank; and (iii) qualitative or credit easing, which involves purchases of targeted assets to lower rates and/or increase liquidity in the target market. It also examines issues related to the exit strategy from unconventional policy, and assesses the applicability of unconventional policies for Asian economies other than Japan. Most studies of the commitment effect (or duration effect) suggest that statements by a central bank regarding the duration of a policy of very low or zero interest rates also affect market expectations of interest rates, but the impact is mainly limited to shorter-term rates. The literature on the effects of quantitative easing monetary policy is less conclusive, especially when one accounts for other announcements by the central bank. Regarding qualitative easing (credit easing) policy, the effect of expanding outright purchases of government bonds on bond yields looks limited. However, other kinds of asset purchase interventions do seem to have been more successful in relieving market stresses. For Asian countries aside from Japan, unconventional policies look most attractive as a way to relieve funding blockages in specific markets rather than to stimulate overall growth. Only India; Republic of Korea; Singapore; and Taipei,China adopted unconventional measures, and those of the middle two were chiefly related to their use of the Fed's swap line for United States dollars to ease dollar shortages in the region. However, if growth of United States consumption slows structurally, this may force Asian economies to rely more on unconventional monetary policy measures during future downturns.",
"corpus_id": 153701884,
"title": "The Role and Effectiveness of Unconventional Monetary Policy"
} | {
"abstract": "We live in extraordinarily challenging times for the global economy and for economic policymakers, not least for central banks such as the Federal Reserve. As you know, the recent economic statistics have been dismal, with many economies, including ours, having fallen into recession. And behind those statistics, we must never forget, are millions of people struggling with lost jobs, lost homes, and lost confidence in their economic future. In examples that resonate with me personally, the unemployment rate in the small town in South Carolina where I grew up has risen to 14 percent, and I learned the other day that what had once been my family home was recently put through foreclosure.",
"corpus_id": 152693453,
"title": "Federal Reserve policies to ease credit and their implications for the Fed's balance sheet: a speech at the National Press Club Luncheon, National Press Club, Washington, D.C., February 18, 2009"
} | {
"abstract": "This paper reviews the propagation of the 2008-2009 financial crisis through China, Japan and the United States (the \"triad\") and analyzes the responses of these countries from the viewpoints of achieving recovery and returning to a growth path that will be free of the imbalances that limit long-term sustainability. For a variety of reasons, the triad countries are both key protagonists in the crisis, and central agents in the recovery from it. Greater cooperation among them will be important to ensure that the recovery that emerges leads to a sustainable growth path. Their actions are important also because they can have great influence on the responses of other countries. The paper concludes with an overview of areas in which cooperation is especially desirable.",
"corpus_id": 154882887,
"score": -1,
"title": "The triad in crisis: What we learned and how it will change global cooperation"
} |
{
"abstract": "Software plays a vital role in most of the embedded systems including safety and mission-critical systems in avionics, automotive, nuclear and medical applications. Along with the functional complexity of software, the quality of software-intensive systems has become a crucial concern. Numerous techniques are being developed to evaluate the quality of a software system from its architecture in terms of quality attributes such as reliability, safety and performance, and to automate the search for alternative designs which provides good trade-offs with respect to those quality attributes of interest. However, the results of these quantitative architecture evaluations depend on design-time estimates for a series of model parameters, which may not be accurate and can change at run-time. Conventional approaches use numerical values (point estimates) as design-time estimates, where the uncertainty in the parameter estimation is not part of the evaluation. As a result, architecture-based quality evaluations at design-time can be inaccurate and thus, sub-optimal design decisions may be taken. To overcome this problem, this thesis presents a novel design-time architecture evaluation and optimisation approach that incorporates parameter uncertainties. The work specifically focuses on architecture-based reliability evaluation models, where a number of parameters have to be estimated subject to heterogeneous uncertain factors. Instead of using point-estimates for architecture-based reliability evaluation models, this work proposes to incorporate heterogeneous and diverse uncertainty information into the reliability evaluation and architecture optimisation. A framework is devised which can capture uncertainty information associated with parameters and use them for the search for robust and optimal candidate architectures. This approach is able to find good architecture solutions that can tolerate the impact of the uncertainties, and thus provides better decision support. The accuracy and scalability of the presented approach is validated with an industrial case study and a series of experiments with generated examples in different problem sizes and characteristics.",
"corpus_id": 38830646,
"title": "Architecture Optimisation of Embedded Systems under Uncertainty in Probabilistic Reliability Evaluation Model Parameters"
} | {
"abstract": "Recent work in the area of model-based safety analysis has demonstrated key advantages of this methodology over traditional approaches, for example, the capability of automatic generation of safety artifacts. Since safety analysis requires knowledge of the component faults and failure modes, one also needs to formalize and incorporate the system fault behavior into the nominal system model. Fault behaviors typically tend to be quite varied and complex, and incorporating them directly into the nominal system model can clutter it severely. This manual process is error-prone and also makes model evolution difficult. These issues can be resolved by separating the fault behavior from the nominal system model in the form of a \"fault model\", and providing a mechanism for automatically combining the two for analysis. Towards implementing this approach we identify key requirements for a flexible behavioral fault modeling notation. We formalize it as a domain-specific language based on Lustre, a textual synchronous dataflow language. The fault modeling extensions are designed to be amenable for automatic composition into the nominal system model.",
"corpus_id": 16331116,
"title": "Behavioral Fault Modeling for Model-based Safety Analysis"
} | {
"abstract": "We present Crayon, a library and runtime system that reduces display power dissipation by acceptably approximating displayed images via shape and color transforms. Crayon can be inserted between an application and the display to optimize dynamically generated images before they appear on the screen. It can also be applied offline to optimize stored images before they are retrieved and displayed. Crayon exploits three fundamental properties: the acceptability of small changes in shape and color, the fact that the power dissipation of OLED displays and DLP pico-projectors is different for different colors, and the relatively small energy cost of computation in comparison to display energy usage. We implement and evaluate Crayon in three contexts: a hardware platform with detailed power measurement facilities and an OLED display, an Android tablet, and a set of cross-platform tools. Our results show that Crayon's color transforms can reduce display power dissipation by over 66% while producing images that remain visually acceptable to users. The measured whole-system power reduction is approximately 50%. We quantify the acceptability of Crayon's shape and color transforms with a user study involving over 400 participants and over 21,000 image evaluations.",
"corpus_id": 9569178,
"score": -1,
"title": "Crayon: saving power through shape and color approximation on next-generation displays"
} |
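The thesis abstract argues for propagating parameter uncertainty through architecture-based reliability models instead of evaluating point estimates. A hedged Monte Carlo sketch of that idea: the reliability of a simple series architecture where each component's failure probability is drawn from a Beta distribution rather than fixed (the architecture and all distribution shapes are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Each component's failure probability is uncertain: Beta(alpha, beta)
# priors instead of point estimates (shape parameters are invented).
component_priors = [(2, 198), (1, 99), (3, 297)]   # means around 0.01 each

n_samples = 100_000
fail_p = np.column_stack([
    rng.beta(a, b, n_samples) for a, b in component_priors
])

# Series architecture: the system works only if every component works.
system_reliability = np.prod(1.0 - fail_p, axis=1)

print(f"mean reliability : {system_reliability.mean():.4f}")
print(f"5th percentile   : {np.percentile(system_reliability, 5):.4f}")
# A point-estimate evaluation reports only a single central value and
# hides the lower tail that the percentile exposes.
```

The gap between the mean and the lower percentile is exactly the information a point-estimate evaluation discards, and it is what lets an optimiser prefer architectures that tolerate the uncertainty.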
{
"abstract": "Ability to clearly delineate the nuclei of microscopic cancer cells is crucial to the accuracy and efficiency of image-based approaches to cancer diagnosis and treatment. Oftentimes, however, such cells contain overlapped (or touched) nuclei. The study proposed in this work presents a hybrid trichotomic technique that combines the Gram-Schmidt method (GSM), handling of relevant geometric features of the cell nuclei, and application of the K-means clustering algorithm to segment, detect, and separate touched nuclei in microscopic cancer images. Using a dataset of microscopic images from two datasets comprising of breast cancer cells and acute lymphoblastic leukemia the proposed technique achieves average mean square error (MSE) of 0.087 and 0.075 for the two datatypes, respectively. Utilising the K-means clustering algorithm in the separation phase of the proposed technique ensures an average normalized accuracy of 0.73 and 0.91 respectively in terms of the nuclei separation for the microscopic breast cancer and acute lymphocyte leukemia cell images in comparison to manual approaches.",
"corpus_id": 15280159,
"title": "A trichotomie technique to separate overlapped nuclei in microscopic cancer images"
} | {
"abstract": "A combination of Gram-Schmidt method and cluster validation algorithm based Bayesian is proposed for nuclei segmentation on microscopic breast cancer image. Gram-Schmidt is applied to identify the cell nuclei on a microscopic breast cancer image and the cluster validation algorithm based Bayesian method is used for separating the touching nuclei. The microscopic image of the breast cancer cells are used as dataset. The segmented cell nuclei results on microscopic breast cancer images using Gram-Schmidt method shows that the most of MSE values are below 0.1 and the average MSE of segmented cell nuclei results is 0.08. The average accuracy of separated cell nuclei counting using cluster validation algorithm is 73% compares with the manual counting.",
"corpus_id": 16139984,
"title": "Nuclei segmentation of microscopic breast cancer image using Gram-Schmidt and cluster validation algorithm"
} | {
"abstract": "The current method of grading prostate cancer on histology uses the Gleason system, which describes five increasingly malignant stages of cancer according to qualitative analysis of tissue architecture. The Gleason grading system has been shown to suffer from inter- and intra-observer variability. In this paper we present a new method for automated and quantitative grading of prostate biopsy specimens. A total of 102 graph-based, morphological, and textural features are extracted from each tissue patch in order to quantify the arrangement of nuclei and glandular structures within digitized images of histological prostate tissue specimens. A support vector machine (SVM) is used to classify the digitized histology slides into one of four different tissue classes: benign epithelium, benign stroma, Gleason grade 3 adenocarcinoma, and Gleason grade 4 adenocarcinoma. The SVM classifier was able to distinguish between all four types of tissue patterns, achieving an accuracy of 92.8% when distinguishing between Gleason grade 3 and stroma, 92.4% between epithelium and stroma, and 76.9% between Gleason grades 3 and 4. Both textural and graph-based features were found to be important in discriminating between different tissue classes. This work suggests that the current Gleason grading scheme can be improved by utilizing quantitative image analysis to aid pathologists in producing an accurate and reproducible diagnosis",
"corpus_id": 1049985,
"score": -1,
"title": "AUTOMATED GRADING OF PROSTATE CANCER USING ARCHITECTURAL AND TEXTURAL IMAGE FEATURES"
} |
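The nuclei-separation abstracts apply K-means clustering to split touching nuclei after segmentation. The sketch below clusters the foreground pixel coordinates of a synthetic two-nuclei mask with a plain numpy k-means - a stand-in for the papers' full Gram-Schmidt pipeline, with the mask, seed and cluster count all invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic binary mask: two overlapping disc-shaped "nuclei".
yy, xx = np.mgrid[0:60, 0:90]
mask = (((yy - 30) ** 2 + (xx - 30) ** 2 < 15 ** 2)
        | ((yy - 30) ** 2 + (xx - 52) ** 2 < 15 ** 2))

pts = np.column_stack(np.nonzero(mask)).astype(float)  # (row, col) pixels

def kmeans(pts, k, iters=20):
    """Plain Lloyd's algorithm on pixel coordinates."""
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then recentre.
        d = np.linalg.norm(pts[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([pts[labels == j].mean(axis=0) for j in range(k)])
    return labels, centers

labels, centers = kmeans(pts, k=2)
print("estimated nuclei centres (row, col):")
print(centers.round(1))   # near (30, 30) and (30, 52)
```

In the papers, the cluster count and initialisation come from the detected geometric features rather than being fixed by hand, which is the part this toy sketch omits.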