id stringlengths 7-7 | title stringlengths 3-578 | abstract stringlengths 0-16.7k | keyphrases sequence | prmu sequence |
---|---|---|---|---|
1nmej7g | Fast inspection for size-based analysis in aggregate processing | As rocks are transported along the conveyor belt of a quarry, the maximum dimension of the rocks exiting the crushers should not exceed a size threshold specific to each crusher. If the rocks are too large, they can pose a threat to equipment and lead to large repair costs and lost production. A 2D vision system is presented, which is capable of estimating the size distribution of the rocks and of monitoring for oversize rocks. Image segmentation is performed, followed by a process that classifies the segments as valid or invalid using a support vector machine. A novel split algorithm is presented, which attempts to split segments that result from undersegmentation. This allows the system to constantly monitor for oversize rocks without stopping the conveyor belt. For the experiments presented in this paper, a set of images was taken of rocks on a moving conveyor. In testing, it was found that 81.31% of the segments output by the system correctly captured the maximum dimension of the rocks they represented. | [
"conveyor belt",
"quarry",
"size distribution",
"oversize rocks",
"support vector machine",
"2d image segmentation"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
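
A minimal sketch of the valid/invalid segment classification step described in the record above, using scikit-learn. The three segment features (area, elongation, mean intensity) and the synthetic training data are illustrative assumptions; the paper does not list its actual feature set here.

```python
# Hedged sketch: classify candidate rock segments as valid/invalid with an SVM.
# Feature choice and data are assumptions for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Synthetic training data: rows = segments, cols = (area, elongation, mean intensity)
X = rng.normal(size=(200, 3))
y = rng.integers(0, 2, size=200)          # 1 = valid segment, 0 = invalid

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)

new_segment = np.array([[0.4, -1.2, 0.7]])
print("valid" if clf.predict(new_segment)[0] == 1 else "invalid")
```
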
vRhSVrw | The automatic selection of an optimal wavelet filter and its enhancement by the new sparsogram for bearing fault detection: Part 2 of the two related manuscripts that have a joint title as "Two automatic vibration-based fault diagnostic methods using the novel sparsity measurement: Parts 1 and 2" | A new optimal wavelet filter based on a genetic algorithm is designed. Optimal parameters of the complex Morlet wavelet can be automatically determined. Convergence of the optimal Morlet filter has been enhanced by a new sparsogram. The sparsity measurement value is maximized by the genetic algorithm. A non-linear function is introduced to suppress noise that is embedded in the inspected signals. | [
"sparsogram",
"sparsity measurement",
"genetic algorithm",
"rolling element bearings",
"fault diagnosis",
"morlet wavelet filter"
] | [
"P",
"P",
"P",
"M",
"M",
"R"
] |
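
A hedged sketch of the scoring step that the genetic algorithm in the record above would maximize: filter a vibration signal with one candidate complex Morlet wavelet and measure the sparsity of the envelope. The l2/l1 ratio used here is a common sparsity proxy, not necessarily the paper's exact sparsity measurement, and the signal and parameters are synthetic.

```python
# Score one candidate (fc, fb) pair of a complex Morlet filter by envelope sparsity.
# A GA would maximize this score over the (fc, fb) parameter space.
import numpy as np

def morlet_filter_sparsity(x, fs, fc, fb, width=512):
    t = (np.arange(width) - width // 2) / fs
    wavelet = np.exp(2j * np.pi * fc * t) * np.exp(-t**2 / fb)  # complex Morlet
    env = np.abs(np.convolve(x, wavelet, mode="same"))          # filtered envelope
    return np.linalg.norm(env, 2) / np.linalg.norm(env, 1)      # l2/l1: larger = sparser

fs = 12_000
t = np.arange(fs) / fs
# Synthetic stand-in for a bearing signal: sparse impacts buried in noise
x = np.sin(2 * np.pi * 3_000 * t) * (np.random.rand(fs) > 0.999) + 0.1 * np.random.randn(fs)
print(morlet_filter_sparsity(x, fs, fc=3_000, fb=1e-4))
```
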
-sBa89P | Online Anomaly Detection for Hard Disk Drives Based on Mahalanobis Distance | A hard disk drive (HDD) failure may cause serious data loss and catastrophic consequences. Online health monitoring provides information about the degradation trend of the HDD, and hence early warning of failures, which gives us a chance to save the data. This paper develops an approach for HDD anomaly detection using the Mahalanobis distance (MD). Critical parameters were selected using failure modes, mechanisms, and effects analysis (FMMEA), and the minimum redundancy maximum relevance (mRMR) method. A self-monitoring, analysis, and reporting technology (SMART) data set is used to evaluate the performance of the developed approach. The result shows that about 67% of the anomalies of failed drives can be detected with a zero false alarm rate, and most of them can provide users with at least 20 hours during which to back up the data. | [
"hard disk drive",
"mahalanobis distance, online anomaly detection, self-monitoring, analysis, and reporting technology"
] | [
"P",
"R"
] |
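
A minimal sketch of the MD-based detection idea in the record above: build a baseline from healthy samples, then flag readings whose Mahalanobis distance exceeds a threshold. The attribute count, synthetic data, and threshold rule are illustrative assumptions, not the paper's FMMEA/mRMR selection.

```python
# Hedged sketch of Mahalanobis-distance anomaly detection on SMART-like attributes.
import numpy as np

rng = np.random.default_rng(1)
healthy = rng.normal(size=(500, 4))            # rows = samples, cols = SMART attributes

mu = healthy.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Set the threshold high on healthy data to keep the false alarm rate near zero.
threshold = np.percentile([mahalanobis(row) for row in healthy], 99.9)
sample = np.array([4.0, -3.5, 2.8, 5.1])       # a degraded drive's reading
print(mahalanobis(sample) > threshold)         # True -> raise early warning
```
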
3z1&JbS | teaching the abstract data type in cs2 | Teaching the abstract data type in CS2 is made difficult by the fact that the topic is intertwined with issues of language support, dynamic data structures and implementation techniques for dynamic data structures. When we switched to Ada to teach CS2, details of the language support for data abstraction caused us to restructure the CS2 course. By pushing the topic of the abstract data type toward the beginning of the course, we have found that it is covered more successfully. | [
"teaching",
"abstract data type",
"abstraction",
"data",
"language",
"support",
"dynamic",
"data structures",
"implementation"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
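
Purely illustrative of the concept the course in the record above restructures around: an abstract data type exposes an interface while hiding the dynamic data structure that implements it. The course used Ada; Python serves here only as a sketch language.

```python
# ADT interface (push, pop, is_empty) vs. hidden implementation (a dynamic list).
class Stack:
    """Clients use the operations and never touch the representation."""
    def __init__(self):
        self._items = []          # hidden dynamic data structure

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def is_empty(self):
        return not self._items

s = Stack()
s.push(1); s.push(2)
print(s.pop())  # 2
```
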
BGTYy6V | agent-based support for human/agent teams | In this paper, we present an interface agent, MokSAF, which facilitates time-critical team-planning tasks for teams of both humans and heterogeneous software agents. This agent assists in the formation of teams of humans (via other MokSAF agents) and task agents that can autonomously perform team subtasks. It provides a suitable interaction mechanism to instruct the various task agents in the team and, by monitoring the human's progress, reallocates or modifies the subtasks if the human fails to achieve a subtask. A military domain has been used to investigate this interface agent. The task consists of three military (human) commanders that each assemble a platoon and plan routes so that all three platoons arrive at a given rendezvous by a specified time. An experimental study has been conducted to evaluate MokSAF and the assistance provided by one of three different task agents, and the results are summarized. | [
"interface agents",
"multi-agent systems",
"functional substitutability",
"team coordination"
] | [
"P",
"U",
"U",
"M"
] |
2jxX7KR | Dynamic slicing: A generic analysis based on a natural semantics format | Slicing analyses have been proposed for different programming languages. Rather than defining a new analysis from scratch for each programming language, we would like to specify such an analysis once and for all, in a language-independent way, and then specialize it for different programming languages. In order to achieve this goal, we propose a notion of natural semantics format and a dynamic slicing analysis format. The natural semantics format formalizes a class of natural semantics and the analysis format is a generic, language-independent, slicing analysis. The correctness of the generic analysis is established as a relation between the derivation trees of the original program and the slice. This generic analysis is then instantiated to several programming languages conforming to the semantics format (an imperative language, a logic programming language and a functional language), yielding a dynamic slicing analyzer for each of these languages. | [
"natural semantics",
"dynamic slicing analysis",
"correctness",
"proof tree",
"systematic derivation"
] | [
"P",
"P",
"P",
"M",
"M"
] |
3HjCEAn | Foster impedance data modeling via singly terminated LC ladder networks | In this work, a lossless model is developed for the given Foster impedance data. In the model, a 2-port short- or open-terminated LC ladder is used. After applying the proposed algorithm, a realizable driving-point reactance function that fits the given data is obtained. Next, this function is synthesized, resulting in the desired model. In the algorithm, there is no need to select a circuit topology for the model. Two examples are given to illustrate the utilization of the proposed modeling algorithm. | [
"foster impedance",
"modeling",
"lc ladder networks",
"lossless circuits",
"network synthesis"
] | [
"P",
"P",
"P",
"R",
"M"
] |
2gxLtrE | Socio-cognitive engineering: A methodology for the design of human-centred technology | We describe a general methodology, socio-cognitive engineering, for the design of human-centred technology. It integrates software, task, knowledge and organizational engineering and has been refined and tested through a series of projects to develop computer systems to support training and professional work. In this paper we describe the methodology and illustrate its use through a project to develop a computer-based training system for neuro-radiology. | [
"human-centred technology",
"organizational engineering",
"computer-based training",
"software engineering",
"humancomputer interaction",
"radiology"
] | [
"P",
"P",
"P",
"R",
"U",
"U"
] |
3r2uCpz | variable-breadth k-best detector for mimo systems | Tree search detection algorithms can provide Maximum-Likelihood detection over Gaussian MIMO channels with lower complexity than the exhaustive search. Furthermore, the performance of MIMO detectors is highly influenced by the channel matrix condition number. In this paper, the impact of the 2-norm condition number on data detection is exploited in order to decrease the complexity of already proposed algorithms. A suboptimal tree search method called K-Best is combined with a channel matrix condition number estimator and a threshold selection method. This approach leads to a variable-breadth K-Best detector with predictable average performance that is suitable for hardware implementation. The results show that the proposed scheme has lower complexity, i.e. it is less power-consuming, than a fixed K-Best detector of similar performance. | [
"k-best",
"condition number",
"mimo detection"
] | [
"P",
"P",
"R"
] |
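
A hedged sketch of the breadth-selection idea in the record above: estimate the 2-norm condition number of the channel matrix and pick the K-Best breadth from it. The thresholds and K values below are invented for illustration; the paper derives its own threshold selection method.

```python
# Pick the K-Best breadth from the channel's 2-norm condition number.
import numpy as np

def select_breadth(H, thresholds=(3.0, 10.0), breadths=(2, 4, 8)):
    kappa = np.linalg.cond(H, 2)      # 2-norm condition number
    for t, k in zip(thresholds, breadths):
        if kappa < t:
            return k                  # well-conditioned channel -> narrow search
    return breadths[-1]               # ill-conditioned channel -> widest search

H = np.random.randn(4, 4) + 1j * np.random.randn(4, 4)   # 4x4 MIMO channel
print(select_breadth(H))
```
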
4wn-qUZ | Indoor Hovering Control of Small Ducted-fan Type OAV Using Ultrasonic Positioning System | This study focused on the development of an attitude stabilization and position (hovering) control scheme for the SFR (Small Flying Robot). The platform type of the SFR is a ducted-fan type OAV having a diameter of 30 cm. The attitude of the SFR was successfully stabilized by a PD control method. In the attitude control loop, the feedback states were the angle and angular rates, which were outputs from an AHRS (Attitude Heading Reference System). The position of the SFR was controlled precisely by the LQI (Linear Quadratic Integration) method with a Kalman filter. In the position control loop, an ultrasonic positioning system, developed in a previous research effort, was used for the 3D position reference. The proposed simple mathematical model could analyze the dynamics and stability of the SFR. Also, the stability of the attitude control and the precision of the position control were verified by comparing and analyzing data from experiments and simulations. | [
"hovering control",
"ultrasonic positioning system",
"small flying robot",
"attitude control",
"position control",
"ducted-fan oav"
] | [
"P",
"P",
"P",
"P",
"P",
"R"
] |
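
A minimal sketch of the PD attitude loop described in the record above: feedback of angle and angular rate, as an AHRS would supply. The gains and the unit-inertia single-axis double-integrator model are assumptions for illustration, not the SFR's identified dynamics.

```python
# Single-axis PD attitude stabilization with Euler integration.
kp, kd = 8.0, 3.0          # assumed PD gains
dt, steps = 0.01, 1000
angle, rate = 0.3, 0.0     # initial tilt of 0.3 rad

for _ in range(steps):
    torque = -kp * angle - kd * rate   # PD law on AHRS-style outputs
    accel = torque                     # unit-inertia single-axis model
    rate += accel * dt
    angle += rate * dt

print(f"final angle: {angle:.5f} rad")  # settles near zero
```
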
3KqFcpY | A note on the learning automata based algorithms for adaptive parameter selection in PSO | PSO, like many stochastic search methods, is very sensitive to efficient parameter setting, such that modifying a single parameter may cause a considerable change in the result. In this paper, we study the ability of learning automata for adaptive PSO parameter selection. We introduce two classes of learning automata based algorithms for adaptive selection of the values of the inertia weight and acceleration coefficients. In the first class, all particles of a swarm use the same parameter values adjusted by learning automata. In the second class, each particle has its own characteristics and sets its parameter values individually. In addition, for both classes of proposed algorithms, two approaches for changing the values of the parameters have been applied. In the first approach, named adventurous, the value of a parameter is selected from a finite set, while in the second approach, named conservative, the value of a parameter either changes by a fixed amount or remains unchanged. Experimental results show that the proposed learning automata based algorithms, compared to other schemes such as SPSO, PSOIW, PSO-TVAC, PSOLP, DAPSO, GPSO, and DCPSO, have the same or even higher ability to find better solutions. In addition, the proposed algorithms converge to the stopping criteria for some of the highly multimodal functions significantly faster. | [
"learning automata",
"particle swarm optimization",
"parameter adaptation"
] | [
"P",
"M",
"R"
] |
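
A hedged sketch of the "adventurous" scheme in the record above: a learning automaton keeps a probability vector over a finite set of inertia-weight values, samples one per iteration, and reinforces it when the swarm improved. The action set, learning rate, reward signal, and the linear reward-inaction update are illustrative assumptions; the paper's exact automaton is not reproduced here.

```python
# Learning automaton selecting a PSO inertia weight from a finite action set.
import numpy as np

actions = np.array([0.4, 0.6, 0.8, 1.0])      # candidate inertia weights
p = np.full(len(actions), 1 / len(actions))   # action probabilities
a_lr = 0.1                                    # reward learning rate

rng = np.random.default_rng(2)
for it in range(100):
    i = rng.choice(len(actions), p=p)
    w = actions[i]
    improved = rng.random() < 0.5             # stand-in for "global best improved with w"
    if improved:                              # linear reward-inaction update
        p = (1 - a_lr) * p
        p[i] += a_lr

print(dict(zip(actions, p.round(3))))
```
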
4EGj3d8 | The Caribbean Animal Health Network: New Tools for Harmonization and Reinforcement of Animal Disease Surveillance | The Caribbean Animal Health Network (CaribVET) is a collaboration of veterinary services, diagnostic laboratories, research institutes, universities, and regional/international organizations to improve animal health in the Caribbean. New tools were used by the network to develop regional animal health activities: (1) A steering committee, a coordination unit, and working groups on specific diseases or activities were established. The working group on avian influenza used a collaborative Web site to develop a regionally harmonized avian influenza surveillance protocol and performance indicators. (2) A specific network was implemented on West Nile virus (WNV) to describe the WNV status of the Caribbean countries, to perform a technology transfer of WNV diagnostics, and to establish a surveillance system. (3) The CaribVET Web site (http://www.caribvet.net) encompasses information on surveillance systems, diagnostic laboratories, conferences, bibliography, and diseases of major concern in the region. It is a participatory Web site allowing registered users to add or edit information, pages, or data. An online notification system of sanitary information was set up for Guadeloupe to improve knowledge on animal diseases and facilitate early alert. | [
"caribbean",
"veterinary services",
"epidemiosurveillance",
"participatory tools"
] | [
"P",
"P",
"U",
"R"
] |
1yghV&Z | Intelligent road adaptive suspension system design using an experts based hybrid genetic algorithm | Intelligent road adaptive suspension system. Performance optimization with minimum actuator size. Automate design using hybrid genetic algorithm. | [
"road adaptive",
"hybrid genetic algorithms",
"intelligent suspension system"
] | [
"P",
"P",
"R"
] |
1rfFaGc | Screening for Marijuana and Cocaine Abuse by Immunoanalysis and Gas Chromatography | Drug abuse among college students is characterized by lower academic performance and long-term negative consequences. Screening to detect students at high risk of consuming drugs is of primary importance to ensure early identification and appropriate levels of care. Accordingly, this study aimed to determine the current or past use of drugs of abuse through a questionnaire applied to a student population at the Universidad Autónoma del Estado de Morelos. The results were confirmed by immunoanalysis and gas chromatography of urine. We interviewed 181 students aged 15 to 21 (gender was not considered in this study), and urine samples were collected for analytical analysis. For detection of the metabolites Δ9-THCA-A and benzoylecgonine from marijuana and cocaine, respectively, a homogeneous enzymatic immunoanalysis was used; subsequent samples were analyzed by a mass spectrometer with a quadrupole detector. Seven samples of the total (181) did not completely fit the inclusion criteria and were eliminated. The results showed 0.50% and 1.16% positive samples for benzoylecgonine and Δ9-THCA-A, respectively. These results are not different from those of the National Questionnaire on Addiction. Based on these results, we can establish a program for detecting drug consumption in our students. This kind of study is important in order to implement programs that can help us to decrease the abuse of drugs in our college population. | [
"marijuana",
"cocaine",
"inmmunoanalysis",
"college population",
"gas chromatographymass spectrometry"
] | [
"P",
"P",
"P",
"P",
"M"
] |
y8k362J | A REASONED APPROACH TO ERROR HANDLING Position Paper on Work-in-Progress | It is widely acknowledged that Enterprise Resource Planning (ERP) systems are difficult to use. Our own studies have revealed that one of the largest sources of frustration for ERP users is the inadequate support in error situations afforded by these systems. We propose an approach to error handling in which reasoning on the part of the system enables it to behave as a collaborative partner in helping its users understand the causes of errors and, whenever possible, make the necessary corrections. While our focus here is on ERP systems, this approach could be applied to any system to improve its error handling capabilities. | [
"reasoning",
"error handling",
"planning",
"collaboration",
"erp systems"
] | [
"P",
"P",
"P",
"P",
"R"
] |
pTziXyw | Facilitating the rapid development and scalable orchestration of composite Web services | The development of new Web services through the composition of existing ones has gained a considerable momentum as a means to realise business-to-business collaborations. Unfortunately, given that services are often developed in an ad hoc fashion using manifold technologies and standards, connecting and coordinating them in order to build composite services is a delicate and time-consuming task. In this paper, we describe the design and implementation of a system in which services are composed using a model-driven approach, and the resulting composite services are orchestrated following a peer-to-peer paradigm. The system provides tools for specifying composite services through statecharts, data conversion rules, and multi-attribute provider selection policies. These specifications are interpreted by software components that interact in a peer-to-peer way to coordinate the execution of the composite service. We report results of an experimental evaluation showing the relative advantages of this peer-to-peer approach with respect to a centralised one. | [
"web service",
"statechart",
"web service composition",
"web service orchestration",
"dynamic provider selection",
"peer-to-peer interaction"
] | [
"P",
"P",
"R",
"R",
"M",
"R"
] |
34eYyLE | The HASCASL prologue: Categorical syntax and semantics of the partial lambda-calculus | We develop the semantic foundations of the specification language HASCASL, which combines algebraic specification and functional programming on the basis of Moggi's partial lambda-calculus. Generalizing Lambek's classical equivalence between the simply typed lambda-calculus and cartesian closed categories, we establish an equivalence between partial cartesian closed categories (pccc's) and partial lambda-theories. Building on these results, we define (set-theoretic) notions of intensional Henkin model and syntactic lambda-algebra for Moggi's partial lambda-calculus. These models are shown to be equivalent to the originally described categorical models in pccc's via the global element construction. The semantics of HASCASL is defined in terms of syntactic lambda-algebras. Correlations between logics and classes of categories facilitate reasoning both on the logical and on the categorical side; as an application, we pinpoint unique choice as the distinctive feature of topos logic (in comparison to intuitionistic higher-order logic of partial functions, which by our results is the logic of pccc's with equality). Finally, we give some applications of the model-theoretic equivalence result to the semantics of HASCASL and its relation to first-order CASL. (c) 2005 Elsevier B.V. All rights reserved. | [
"casl",
"partial lambda-calculus",
"algebraic specification",
"categorical logic"
] | [
"P",
"P",
"P",
"R"
] |
4qSepsE | Some spherical boundary elements and a discretization error indicator for acoustic problems with spherical surfaces | In this paper, three types of quadratic interpolated boundary elements are presented to investigate acoustic problems. These elements, called spherical elements, are specially designed for the discretization of spherical surfaces, and are directly constructed on the real geometry of the spherical surface without any error due to geometrical approximation. The first one is a triangular spherical element, while the other two are quadrilateral spherical elements. These spherical elements are applied to analyze acoustic problems, wherein the Burton-Miller formulation is used to build the boundary element formulation. Furthermore, in order to guide the mesh refinement and evaluate the accuracy of numerical results, an error indicator based on the gradient distribution of the sound pressure is presented. Several examples are presented to illustrate the applicability of these elements, including the pulsating sphere, scattering of a rigid sphere and multi-sphere scattering. Numerical results indicate that not only are the spherical elements effective and accurate, but the proposed error indicator is also quite helpful for evaluating the accuracy of numerical results. | [
"discretization error",
"acoustic problems",
"spherical elements",
"boundary element method"
] | [
"P",
"P",
"P",
"M"
] |
3WtDVge | Studies on the effect of non-isothermal mixing on water-using network's energy performance | This study explores the effect of different types of non-isothermal mixing on water-using network's utility consumption target, and some non-isothermal mixing rules are deduced, which can be used to estimate if utility will increase, decrease or remain unchanged after non-isothermal mixing. The energy penalty caused by heterogeneous mixing can be eliminated by decreasing the temperature approach between hot and cold streams through indirect heat transfer before mixing, so that the mixing can remain as a means of direct heat transfer when synthesizing heat-integrated water networks. Based on the non-isothermal mixing rules, one can make full use of direct heat transfer by mixing to obtain a simpler network structure and avoid the possibility of an energy penalty caused by improper non-isothermal mixing. (C) 2011 Elsevier Ltd. All rights reserved. | [
"non-isothermal mixing",
"mixing rules",
"energy penalty",
"heat-integrated water networks"
] | [
"P",
"P",
"P",
"P"
] |
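
The core arithmetic behind the mixing rules discussed in the record above can be shown in a toy worked example; the stream flows and temperatures below are invented for illustration.

```python
# Direct-contact (non-isothermal) mixing: the mixed stream's temperature is the
# flow-weighted mean, so mixing destroys the temperature driving force that
# indirect heat transfer between the streams could otherwise have exploited.
m_hot, T_hot = 2.0, 80.0     # kg/s, deg C
m_cold, T_cold = 3.0, 20.0

T_mix = (m_hot * T_hot + m_cold * T_cold) / (m_hot + m_cold)
print(f"mixed stream: {m_hot + m_cold} kg/s at {T_mix:.1f} C")  # 44.0 C
```
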
4dXYs4F | APPLICATION OF HIGHER ORDER CUMULANT FEATURES FOR CARDIAC HEALTH DIAGNOSIS USING ECG SIGNALS | The electrocardiogram (ECG) is the electrical activity of the heart, indicated by the P, Q-R-S and T waves. Minute changes in the amplitude and duration of the ECG depict a particular type of cardiac abnormality. It is very difficult to decipher the hidden information present in this nonlinear and nonstationary signal. An automatic diagnostic system that characterizes cardiac activities in ECG signals would provide more insight into these phenomena, thereby revealing important clinical information. Various methods have been proposed to detect cardiac abnormalities in ECG recordings. Application of higher order spectra (HOS) features is a seemingly promising approach because it can capture the nonlinear and dynamic nature of the ECG signals. In this paper, we have automatically classified five types of beats using HOS features (higher order cumulants) using two different approaches. The five types of ECG beats are normal (N), right bundle branch block (RBBB), left bundle branch block (LBBB), atrial premature contraction (APC) and ventricular premature contraction (VPC). In the first approach, cumulant features of the segmented ECG signal were used for classification, whereas in the second approach cumulants of discrete wavelet transform (DWT) coefficients were used as features for the classifiers. In both approaches, the cumulant features were subjected to data reduction using principal component analysis (PCA) and classified using three-layer feed-forward neural network (NN) and least squares support vector machine (LS-SVM) classifiers. In this study, we obtained the highest average accuracy of 94.52%, sensitivity of 98.61% and specificity of 98.41% using the first approach with the NN classifier. The developed system is clinically ready to run on large datasets. | [
"cumulants",
"electrocardiogram",
"classifier",
"principal component analysis (pca)",
"higher order statistics"
] | [
"P",
"P",
"P",
"P",
"M"
] |
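
A sketch of the zero-lag higher-order cumulant features the record above uses as classifier inputs: for a zero-mean segment x, c2 = E[x^2], c3 = E[x^3], and c4 = E[x^4] - 3 E[x^2]^2. Lagged cumulants and the paper's full HOS feature set are omitted; the synthetic "beat" is illustrative.

```python
# Second-, third- and fourth-order zero-lag cumulants of a signal segment.
import numpy as np

def cumulant_features(x):
    x = x - x.mean()                 # work with the zero-mean segment
    c2 = np.mean(x**2)
    c3 = np.mean(x**3)
    c4 = np.mean(x**4) - 3 * c2**2   # fourth cumulant removes the Gaussian part
    return c2, c3, c4

beat = np.sin(np.linspace(0, 2 * np.pi, 360)) + 0.05 * np.random.randn(360)
print(cumulant_features(beat))
```
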
2QG2bMB | Chemopreventive effects of resveratrol and resveratrol derivatives | Resveratrol is considered to have a number of beneficial effects. Recently, our group modified the molecule and synthesized a number of compounds with different biochemical effects. Polymethoxy and polyhydroxy derivatives of resveratrol were shown to inhibit tumor cell growth in various cell lines and inflammation pathways (cyclooxygenase activity), in part more effectively than resveratrol itself. One lead compound (hexahydroxystilbene, M8) turned out to be the most effective inhibitor of tumor cell growth and of cyclooxygenase 2 activity. M8 was then studied in two different human melanoma mouse models. This novel resveratrol analog was able to inhibit melanoma tumors in a primary tumor model, both alone and in combination with dacarbazine, an anticancer compound that is used for melanoma treatment. We also tested the development of lymph node metastasis in a second melanoma model, and again M8 successfully inhibited the tumor as well as the size and weight of lymph node metastases. Hydroxylated resveratrol analogs therefore represent a novel class of anticancer compounds and promising candidates for in vivo studies. | [
"resveratrol",
"melanoma",
"resveratrol metabolism",
"free radicals",
"antitumor effects",
"polyhydroxyphenols",
"ribonucleotide reductase"
] | [
"P",
"P",
"M",
"U",
"M",
"U",
"U"
] |
SRT:Z&: | A numerical method for two-phase flows with an interface | One of the challenges in modelling multiphase fluid systems is to capture accurately the discontinuous-interface phenomenon. In this paper a numerical model for two-phase flows with a varying density is presented, in which a modified volume of fluid (VOF) method is combined with a semi-implicit algorithm (simple) and a higher-order advection scheme in a collocated grid. The improved volume tracking method allows interfaces to be captured and maintained compactly in one cell, without imposing restrictions on the topological complexity or the number of interfaces that can be represented. The surface tension force is modelled by a continuum surface force approximation. An efficient solver is used for the resulting system of the linear equations. Example problems simulated in this paper are the buoyancy-driven motion of multiple bubbles in a viscous liquid, and bubble-rise towards an interface. The complex topological changes that occur during bubble rise are well predicted. The result is verified by experimental data in the literature. | [
"two-phase flow",
"volume of fluid (vof) method",
"bubble coalescence",
"bubble deformation",
"convection scheme",
"interface fragmentation",
"interface tracking"
] | [
"P",
"P",
"M",
"M",
"M",
"M",
"R"
] |
2&8gSMU | Context reasoning using extended evidence theory in pervasive computing environments | Most existing context reasoning approaches implicitly assume that contexts are precise and complete. This assumption cannot be held in pervasive computing environments, where contexts are often imprecise and incomplete due to unreliable connectivity, user mobility and resource constraints. To this end, we propose an approach called CRET: Context Reasoning using extended Evidence Theory. CRET applies the evidence theory to context reasoning in pervasive computing environments. Because evidence theory is limited by two fundamental problemscomputation-intensiveness and Zadeh paradox, CRET presents evidence selection and conflict resolution strategies. Empirical study shows that CRET is desirable for pervasive applications. | [
"context reasoning",
"evidence theory",
"pervasive computing"
] | [
"P",
"P",
"P"
] |
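
A sketch of Dempster's rule of combination over a frame {A, B, C}, using Zadeh's classic example: two highly conflicting sources end up giving certainty to C, the hypothesis both considered nearly impossible. This is the paradox that the conflict-resolution strategy mentioned in the record above is designed to avoid; the mass values are the standard textbook ones, not data from the paper.

```python
# Dempster's rule for two mass functions; illustrates the Zadeh paradox.
from itertools import product

def dempster(m1, m2):
    combined, conflict = {}, 0.0
    for (s1, v1), (s2, v2) in product(m1.items(), m2.items()):
        inter = s1 & s2
        if inter:
            combined[inter] = combined.get(inter, 0.0) + v1 * v2
        else:
            conflict += v1 * v2                 # mass assigned to the empty set
    return {s: v / (1 - conflict) for s, v in combined.items()}, conflict

m1 = {frozenset("A"): 0.99, frozenset("C"): 0.01}
m2 = {frozenset("B"): 0.99, frozenset("C"): 0.01}
fused, k = dempster(m1, m2)
print(fused, f"conflict={k:.4f}")   # C gets mass 1.0 despite near-zero support
```
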
-Da3U2S | A component-oriented framework for experimental computer graphics | This paper provides a report about a framework that uses a variety of standards. Readers interested in 3D computer graphics or component-oriented technology in C++ will find a report about the integration of various standards by relying on yet another standard for component-oriented software engineering. The highly successful Java standard called Open Services Gateway initiative (OSGi) is employed in a C++ implementation called Open Service Platform. The application of this standard, which is primarily focused on network-centric software and embedded systems, in the field of real-time 3D computer graphics, provides novel insights into the usability of the OSGi standard. | [
"computer graphics",
"c++",
"osgi",
"software components"
] | [
"P",
"P",
"P",
"M"
] |
48R-nEk | Learning the optimal parameter of the Hamacher t-norm applied for fuzzy-rule-based model extraction | Mamdani-type inference systems with trapezoidal-shaped fuzzy membership functions play a crucial role in a wide variety of engineering systems, including real-time control, transportation and logistics, network management, etc. The automatic identification or construction of such fuzzy systems from input-output data is one of the key problems in modeling. In past years, the authors have investigated several different fuzzy t-norms, among others algebraic and trigonometric ones, and the Hamacher product, substituting them for the standard min t-norm operation in order to achieve better model fitting. In the present paper, the focus is on examining the general parametric Hamacher t-norm, whose free parameter quite essentially influences the quality of modeling and the learning capability of the model identification system. Based on a wide scope of simulation experiments, a quasi-optimal interval for the value of the Hamacher parameter is proposed. | [
"hamacher t-norm",
"fuzzy-rule-based model",
"aggregation operators",
"mamdani inference system",
"improved bacterial memetic algorithm",
"modified bacterial memetic algorithm"
] | [
"P",
"P",
"M",
"M",
"U",
"U"
] |
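
The parametric Hamacher t-norm whose free parameter the record above tunes; at gamma = 1 it reduces to the algebraic product, and replacing min with it in a Mamdani rule base changes only the conjunction step. The sample values are illustrative.

```python
# Parametric Hamacher t-norm T(a, b) = ab / (gamma + (1 - gamma)(a + b - ab)).
def hamacher_t(a, b, gamma):
    if a == b == 0:
        return 0.0                   # avoids 0/0 when gamma = 0
    return (a * b) / (gamma + (1 - gamma) * (a + b - a * b))

print(hamacher_t(0.6, 0.7, gamma=1.0))   # 0.42 = algebraic product
print(hamacher_t(0.6, 0.7, gamma=0.5))   # ~0.447, larger than the product
```
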
3k6S5:C | Homodyne characterization of continuous variable bipartite states | We suggest a scheme to reconstruct the covariance matrix of continuous variable two-mode states using a single homodyne detector and a few polarization elements. Our method can be used to fully characterize the bipartite Gaussian entangled states obtained at the output of a (nondegenerate) optical parametric amplifier driven below threshold, as well as to extract relevant information on generic bipartite states made of two frequency degenerate but orthogonally polarized modes. | [
"entanglement",
"homodyne detection",
"gaussian states"
] | [
"P",
"M",
"R"
] |
57QhHPV | CH5M3D: an HTML5 program for creating 3D molecular structures | While a number of programs and web-based applications are available for the interactive display of 3-dimensional molecular structures, few of these provide the ability to edit these structures. For this reason, we have developed a library written in JavaScript to allow for the simple creation of web-based applications that should run on any browser capable of rendering HTML5 web pages. While our primary interest in developing this application was for educational use, it may also prove useful to researchers who want a light-weight application for viewing and editing small molecular structures. | [
"3d",
"html5",
"visualization",
"molecular editor",
"molecular graphics"
] | [
"P",
"P",
"U",
"M",
"M"
] |
3huFm:v | Skeleton-based parallel programming: Functional and parallel semantics in a single shot? | The semantics of skeleton-based parallel programming languages usually comes as two distinct items: a functional semantics, modeling the function computed by the skeleton program, and a parallel semantics describing the ways used to exploit parallelism during the execution of the skeleton program. The former is usually expressed using some kind of semantic formalism, while the latter is almost always given in an informal way. Such a separation of functional and parallel semantics seriously impairs the ability of programmers to use semantic tools to prove properties of programs. In this work, we show how a formal semantic framework can be set up that handles both functional and parallel aspects of skeleton-based parallel programs. The framework is based on a labeled transition system. We show how different properties related to skeleton programs can be proved using such a system. We use Lithium, a skeleton-based full Java parallel programming environment, as the case study. | [
"parallel semantics",
"functional semantics",
"labeled transition systems",
"algorithmical skeletons",
"structured parallel programming"
] | [
"P",
"P",
"P",
"M",
"M"
] |
23jbvWt | Visualization of gridded scalar data with uncertainty in geosciences | Characterization of the earth's subsurface involves the construction of 3D models from sparse data and so leads to simulation results that involve some degree of uncertainty. This uncertainty is often neglected in the subsequent visualization, due to the fact that no established methods or available software exist. We describe a visualization method to render scalar fields with a probability density function at each data point. We render these data as isosurfaces and make use of a colour scheme, which intuitively gives the viewer an idea of which parts of the surface are more reliable than others. We further show how to extract an envelope that indicates within which volume the isosurface will lie with a certain confidence, and augment the isosurfaces with additional geometry in order to show this information. The resulting visualization is easy and intuitive to understand and is suitable for rendering multiple distinguishable isosurfaces at a time. It can moreover be easily used together with other visualized objects, such as the geological context. Finally we show how we have integrated this into a visualization pipeline that is based on the Visualization Toolkit (VTK) and the open source scenegraph OpenSG, allowing us to render the results on a desktop and in different kinds of virtual environments. | [
"visualization",
"uncertainty",
"3d",
"scalar fields",
"visualisation",
"monte carlo simulation"
] | [
"P",
"P",
"P",
"P",
"U",
"M"
] |
1xLWRYV | Introduction to the Special Section on Search and Mining User-Generated Content | The primary goal of this special section of ACM Transactions on Intelligent Systems and Technology is to foster research in the interplay between Social Media, Data/Opinion Mining and Search, aiming to reflect the actual developments in technologies that exploit user-generated content. | [
"search",
"user-generated contents",
"social media",
"opinion mining",
"documentation",
"experimentation",
"algorithms",
"data mining",
"text mining",
"information retrieval"
] | [
"P",
"P",
"P",
"P",
"U",
"U",
"U",
"R",
"M",
"U"
] |
-W5sz2Y | Domination of aggregation operators and preservation of transitivity | Aggregation processes are fundamental in any discipline where the fusion of information is of vital interest. For aggregating binary fuzzy relations such as equivalence relations or fuzzy orderings, the question arises which aggregation operators preserve specific properties of the underlying relations, e.g. T-transitivity. It will be shown that preservation of T-transitivity is closely related to the domination of the applied aggregation operator over the corresponding t-norm T. Furthermore, basic properties for dominating aggregation operators, not only in the case of dominating some t-norm T, but dominating some arbitrary aggregation operator, will be presented. Domination of isomorphic t-norms and ordinal sums of t-norms will be treated. Special attention is paid to the four basic t-norms (minimum t-norm, product t-norm, Lukasiewicz t-norm, and the drastic product). | [
"domination",
"aggregation operators",
"fuzzy relations",
"t-transitivity"
] | [
"P",
"P",
"P",
"P"
] |
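
A numerical check of the property at stake in the record above: a relation R is T-transitive for T = min when its sup-min composition with itself never exceeds R. The example illustrates the domination link: min dominates itself, so pointwise min preserves min-transitivity, while the arithmetic mean does not dominate min and can break it. The relations are toy crisp equivalences chosen for illustration.

```python
# Check min-transitivity of fuzzy relations and the effect of aggregation.
import numpy as np

def is_min_transitive(R, tol=1e-12):
    # sup-min composition: comp[i, j] = max_k min(R[i, k], R[k, j])
    comp = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
    return bool(np.all(comp <= R + tol))

R1 = np.array([[1, 1, 0], [1, 1, 0], [0, 0, 1]], dtype=float)  # crisp equivalence
R2 = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1]], dtype=float)  # crisp equivalence
print(is_min_transitive(R1), is_min_transitive(R2))   # True True
print(is_min_transitive((R1 + R2) / 2))               # False: mean does not dominate min
print(is_min_transitive(np.minimum(R1, R2)))          # True: min dominates itself
```
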
2ze3oB5 | Pedagogical-research designs to capture the symbiotic nature of professional knowledge and learning about e-learning in initial teacher education in the UK | This paper argues that if new communications technologies and online spaces are to yield 'new relationship[s] with learners' (DfES, 2005, p. 11) then research that is tuned to recognize, capture and explain the pedagogical processes at the centre of such interactions is vital. This has implications for the design of pedagogical activities within Initial Teacher Education (ITE) intended to develop student teachers' professional knowledge and understanding of e-learning strategies. A case study is presented of an intervention, which attempted to synthesize a face-to-face and online school-based experience with University-based lectures, in order to develop student teachers' capacity to theorize and reflect upon the development of their online pedagogical practice. Theory that focuses on the complex and symbiotic nature of professional knowledge and learning was developed to analyse data in the form of interviews with student teachers and archived extracts from their online interactions with the children. The aim was to evaluate the effectiveness of a pedagogical-research design based upon the authentic and situated use of e-learning strategies and technologies for developing student teachers' professional knowledge and understanding of online pedagogy. Ultimately the paper concludes that, from the perspective of a dynamic conceptualisation of e-learning as continuously emerging (Andrews & Haythornthwaite, 2007) then a pedagogical-research design that develops and captures student teachers' capacity to reflect upon the development of their own online pedagogy and professional knowledge and understanding in relation to e-learning is vital. (C) 2009 Elsevier Ltd. All rights reserved. | [
"pedagogy",
"professional knowledge",
"e-learning",
"virtual learning environment",
"learning platform"
] | [
"P",
"P",
"P",
"M",
"M"
] |
3rUzV7J | A CANbus-based safety-critical distributed aeroengine control systems architecture demonstrator | Recent advances in microelectronics coupled with ever decreasing costs mean that it is now possible to produce very compact and cheap intelligent modules. For instance, it is now quite common for cars to use a number of intelligent units with intercommunication to implement complex functions such as traction control. There has also been a move towards embedding processing in sensors and actuators directly with application to the process control, automotive and aerospace sectors. When considering aerospace applications there are major benefits to be gained by adopting a distributed controller. However, this has to be carried out within the strict design constraints for safety-critical systems. This paper discusses design tools and a distributed system demonstrator that has been developed to explore future distributed control systems. | [
"distributed systems",
"fault tolerance",
"databuses",
"automatic code generation"
] | [
"P",
"U",
"U",
"U"
] |
1KEqkJG | Genetic and Environmental Influences on the Development of Alcoholism | The physiological changes of adolescence may promote risk-taking behaviors, including binge drinking. Approximately 40% of alcoholics were already drinking heavily in late adolescence. Most cases of alcoholism are established by the age of 30 years, with the peak prevalence at 18-23 years of age. Therefore the key time frame for the development, and prevention, of alcoholism lies in adolescence and young adulthood. Severe childhood stressors have been associated with increased vulnerability to addiction; however, not all stress-exposed children go on to develop alcoholism. Origins of resilience can be both genetic (variation in alcohol-metabolizing genes, increased susceptibility to alcohol's sedative effects) and environmental (lack of alcohol availability, positive peer and parental support). Genetic vulnerability is likely to be conferred by multiple genes of small to modest effects, possibly only apparent in gene-environment interactions. For example, it has been shown that childhood maltreatment interacts with a monoamine oxidase A (MAOA) gene variant to predict antisocial behavior that is often associated with alcoholism, and an interaction between early life stress and a serotonin transporter promoter variant predicts alcohol abuse in nonhuman primates and depression in humans. In addition, a common Met158 variant in the catechol-O-methyltransferase (COMT) gene can confer both risk and resilience to alcoholism in different drinking environments. It is likely that a complex mix of gene(s)-environment(s) interactions underlies addiction vulnerability and development. Risk-resilience factors can best be determined in longitudinal studies, preferably starting during pregnancy. This kind of research is important for planning future measures to prevent harmful drinking in adolescence. | [
"adolescents",
"maoa",
"comt",
"httlpr",
"polymorphism"
] | [
"P",
"P",
"P",
"U",
"U"
] |
4&GPCHY | Cultural repercussions An analysis of management behaviour through the lens of European cultural variations | Purpose - This pilot study complements the ongoing culture-management behaviour discourse by systematically investigating two novel dimensions through which culture can be measured and compared between four European Union (EU) countries. The purpose of this paper is to examine to what extent these cultural dimensions influence management behaviour in different countries. Design/methodology/approach - The results pertaining to the cultural dimensions "Authority driven" and "Capitalistic driven" are derived from European values study data sets. The results pertaining to variances in management behaviour are derived from an empirical questionnaire-based study. Spearman rank correlation fitted with confidence intervals yield several significant correlations which are discussed. Findings - The results, based on country specific samples from Slovenia, Germany, Austria and Denmark, confirm that there exists considerable differences in cultural manifestations between the four countries and that these differences have an impact on management behaviour. Most notably, a strong positive correlation was found between the comparatively highly authoritative cultures of Slovenia and Germany to thwart decentralization. Further evidence was found that the highly subordinate driven cultures of Denmark and Austria tend to have a predilection towards two-way vertical knowledge flows. Mixed results were found on capitalistic driven cultures' impact on control mechanisms and use of motivational factors. Research limitations/implications - The results are limited to companies within the manufacturing industry of the four focus countries. It is, however, highly probable that the results lend themselves to companies in other countries with similar cultural manifestations, albeit this remains to be empirically proven. Practical implications - The results provide a deeper understanding of why and how management models continue to differ throughout Europe. Managers as well as academics benefit from this discussion. Originality/value - The cultural dimensions are innovative, and specifically designed to probe culture differences between elder EU countries and a transition economy. This digression from mainstream cultural manifestations provides a refreshing perspective on management implications and rejuvenates the culture debate. | [
"culture",
"behaviour",
"europe",
"authority",
"capitalist systems"
] | [
"P",
"P",
"P",
"P",
"M"
] |
3ZHdAt- | An active contour computer algorithm for the classification of cucumbers | The cucumber is one of the most important crops worldwide and, because it is generally consumed fresh, it must be classified into quality categories. The European classification system includes a parameter that relates the degree of curvature to the length. Until now, this classification could not be achieved with an automatic system due to the difficulty associated with correctly calculating the axis of a cucumber. This article describes a computer algorithm that uses active contours or snakes to classify cucumbers by length and curvature. This algorithm demonstrates an advantage in the determination of the central line of each cucumber, based on an iterative process that is quick and carries out the classification process efficiently. The method was validated against human classification for 360 cucumbers and was also compared with an ellipsoid approximation method. The active contour method reduced the classification error by 15 percentage points compared with the ellipsoid approximation method, to 1%, with no serious errors (i.e., misclassification of Class Extra and I into Class II or vice versa). Meanwhile, the ellipsoid approximation method led to a 16% error rate, of which 2% were serious errors (an error of two classes). The developed approach is applicable to fresh cucumber commercial classification lines to meet the requirements of the European regulations for cucumber classification. | [
"cucumber",
"curvature",
"length",
"artificial vision",
"grading",
"shape"
] | [
"P",
"P",
"P",
"U",
"U",
"U"
] |
-tup8ag | Dealing with Transient Faults in the Interconnection Network of CMPs at the Cache Coherence Level | The importance of transient faults is predicted to grow due to current technology trends of increased scale of integration. One of the components that will be significantly affected by transient faults is the interconnection network of chip multiprocessors (CMPs). To deal efficiently with these faults and differently from other authors, we propose to use fault-tolerant cache coherence protocols that ensure the correct execution of programs when not all messages are correctly delivered. We describe the extensions made to a directory-based cache coherence protocol to provide fault tolerance and provide a modified set of token counting rules which are useful to design fault-tolerant token-based cache coherence protocols. We compare the directory-based fault-tolerant protocol with a token-based fault-tolerant one. We also show how to adjust the fault tolerance parameters to achieve the desired level of fault tolerance and measure the overhead achieved to be able to support very high fault rates. Simulation results using a set of scientific, multimedia, and commercial applications show that the fault tolerance measures have virtually no impact on execution time with respect to a non-fault-tolerant protocol. Additionally, our protocols can support very high rates of transient faults at the cost of slightly increased network traffic. | [
"transient faults",
"interconnection network",
"cache coherence",
"fault tolerance"
] | [
"P",
"P",
"P",
"P"
] |
11kXSX6 | ADPLL design parameters determinations through noise modeling | This paper presents a methodology to determine all-digital phase-locked loop (ADPLL) circuit variables based on required design specifications, including output phase noise, fractional spur and locking time. An analytical model is developed to characterize the effects of different noise sources on ADPLL output phase noise and fractional spur. Applying the proposed noise model, circuit variables in the ADPLL can be properly selected to meet phase noise, fractional spur and locking time requirements. For model validation, we collect ADPLL circuit designs published in the recent literature and perform model analysis. The analysis results and hardware measurements show good agreement. | [
"adpll",
"phase noise",
"spur",
"fractional-n pll",
"digitally controlled oscillator",
"frequency divider",
"phasefrequency detector"
] | [
"P",
"P",
"P",
"U",
"U",
"U",
"U"
] |
4s:LJCp | MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data | The present study provides a novel MATLAB-based parameter estimation procedure for the individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of Gauss-Newton's and Levenberg-Marquardt's algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application on different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73; no significant differences between corresponding mean parameter estimates and predictions of HID rate; and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst estimated by SAAM II, and in maintaining all model-parameter CV% <20%. In conclusion, our MATLAB-based procedure is suggested as a suitable tool for the individual assessment of the HID process. | [
"c-peptide and insulin minimal model",
"indexes of ?-cell responsiveness",
"optimization algorithm",
"glucoseinsulin regulatory system"
] | [
"M",
"M",
"R",
"U"
] |
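
A hedged sketch of the optimization core only, in Python rather than the paper's MATLAB: a Levenberg-Marquardt least-squares fit of a generic two-exponential decay. The model is a common tracer-kinetics shape chosen for illustration, not the paper's HID model, and SciPy's single-method "lm" solver stands in for the alternating Gauss-Newton/Levenberg-Marquardt scheme.

```python
# Levenberg-Marquardt fit of a two-exponential decay with SciPy.
import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 120, 60)
true = 5.0 * np.exp(-0.08 * t) + 2.0 * np.exp(-0.01 * t)
y = true + 0.05 * np.random.default_rng(3).normal(size=t.size)   # noisy "data"

def residuals(p):
    a1, k1, a2, k2 = p
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t) - y

fit = least_squares(residuals, x0=[1, 0.1, 1, 0.02], method="lm")
print(fit.x)   # recovers roughly (5, 0.08, 2, 0.01)
```
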
-fazrfa | Knowledge discovery in databases for sonar images classification | Sonar image classification is of great importance for various realistic applications such as submarine navigation or seabed mapping. Most approaches developed or used in the present work for seabed characterization are based on the use of texture analysis methods. Indeed, sonar images have different homogeneous areas of sediment that can be viewed as texture entities. Generally, texture features are numerous and not all are relevant; an extraction-reduction of these features seems necessary before the classification phase. We present in this work a complete chain for sonar image classification while optimizing the chain steps. We use the Knowledge Discovery in Databases (KDD) process for the chain development. The underwater environment is uncertain, which is reflected in the images obtained from the sensors used for their acquisition. Therefore, it is important to develop methods robust to these imperfections. We solve this problem in two different ways: a first solution is to make traditional classification methods, such as support vector machines or k-nearest neighbors, robust to these imperfections. A second solution is to model these imperfections so that they are taken into account by belief or fuzzy classification methods. We present the results obtained using different texture analysis approaches and classification approaches. We use other approaches based on uncertainty theories to overcome the sonar image imperfection problem. | [
"sonar images",
"classification",
"texture",
"knowledge discovery on database",
"svm",
"belief svm",
"fuzzy svm",
"extraction -reduction",
"evaluation"
] | [
"P",
"P",
"P",
"R",
"U",
"M",
"M",
"U",
"U"
] |
3tq14VJ | proximity of persistence modules and their diagrams | Topological persistence has proven to be a key concept for the study of real-valued functions defined over topological spaces. Its validity relies on the fundamental property that the persistence diagrams of nearby functions are close. However, existing stability results are restricted to the case of continuous functions defined over triangulable spaces. In this paper, we present new stability results that do not suffer from the above restrictions. Furthermore, by working at an algebraic level directly, we make it possible to compare the persistence diagrams of functions defined over different spaces, thus enabling a variety of new applications of the concept of persistence. Along the way, we extend the definition of persistence diagram to a larger setting, introduce the notions of discretization of a persistence module and associated pixelization map, define a proximity measure between persistence modules, and show how to interpolate between persistence modules, thereby lending a more analytic character to this otherwise algebraic setting. We believe these new theoretical concepts and tools shed new light on the theory of persistence, in addition to simplifying proofs and enabling new applications. | [
"topological persistence",
"persistence diagram",
"stability",
"discretization",
"topological data analysis"
] | [
"P",
"P",
"P",
"P",
"M"
] |
4JDSDFc | Harmony search algorithm for minimum cost design of steel frames with semi-rigid connections and column bases | A harmony search-based algorithm is developed to determine the minimum cost design of steel frames with semi-rigid connections and column bases under displacement, strength and size constraints. Harmony search (HS) is a recently developed metaheuristic search algorithm based on the analogy between the performance process of natural music and searching for solutions of optimum design problems. The geometric non-linearity of the frame members and the semi-rigid behaviour of the beam-to-column connections and column bases are taken into account in the design algorithm. The results obtained with semi-rigid connection and column base modelling are also compared to those obtained with rigid connection modelling. The efficiency of the HS algorithm, in comparison with genetic algorithms (GAs), is verified with three benchmark examples. The results indicate that HS can obtain lighter frames and lower cost values than those developed using GAs. | [
"harmony search",
"steel frames",
"semi-rigid connections",
"optimum design",
"semi-rigid column bases"
] | [
"P",
"P",
"P",
"P",
"R"
] |
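
A minimal continuous harmony search sketch showing the three mechanics the record above relies on (memory consideration, pitch adjustment, random selection) on a toy objective; the paper applies the same mechanics to discrete member sections under displacement, strength and size constraints. All parameter values below are illustrative.

```python
# Minimal harmony search on a toy cost standing in for frame weight.
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.sum(x**2)                 # toy objective
dim, hms, hmcr, par, bw, iters = 5, 10, 0.9, 0.3, 0.05, 2000

HM = rng.uniform(-5, 5, size=(hms, dim))   # harmony memory
cost = np.apply_along_axis(f, 1, HM)

for _ in range(iters):
    new = np.empty(dim)
    for j in range(dim):
        if rng.random() < hmcr:            # memory consideration
            new[j] = HM[rng.integers(hms), j]
            if rng.random() < par:         # pitch adjustment
                new[j] += bw * rng.uniform(-1, 1)
        else:                              # random selection
            new[j] = rng.uniform(-5, 5)
    c = f(new)
    worst = np.argmax(cost)
    if c < cost[worst]:                    # replace the worst harmony
        HM[worst], cost[worst] = new, c

print(cost.min())   # near 0
```
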
4sVnkRj | An extendable heuristic framework to solve the p-compact-regions problem for urban economic modeling | Propose an extendable heuristic framework for solving large, practical, and non-linear regionalization problems. Introduce NMI as a novel method and prove its effectiveness to compute compactness in a p-compact-regions problem. Demonstrate the good performance of MERGE heuristic in solving real-world large p-compact-regions problem. | [
"heuristic",
"compactness",
"regionalization",
"spatial optimization",
"greedy",
"clustering",
"moment of inertia",
"simulated annealing",
"tabu",
"grasp",
"zoning"
] | [
"P",
"P",
"P",
"U",
"U",
"U",
"M",
"U",
"U",
"U",
"U"
] |
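
A hedged sketch of a moment-of-inertia compactness score for a raster region, in the spirit of the NMI measure the record above introduces: the assumption here is a normalization against a circle of equal area (the most compact shape), giving 1 for a circle and smaller values for elongated regions. The paper's exact NMI definition may differ.

```python
# Normalized moment-of-inertia compactness of a set of unit raster cells.
import numpy as np

def moi_compactness(cells, cell_area=1.0):
    pts = np.asarray(cells, dtype=float)      # cell centroids of the region
    c = pts.mean(axis=0)
    moi = np.sum(np.sum((pts - c)**2, axis=1)) * cell_area
    area = len(pts) * cell_area
    return area**2 / (2 * np.pi * moi)        # circle of area A has moment A^2/(2*pi)

square = [(i, j) for i in range(10) for j in range(10)]
strip = [(i, 0) for i in range(100)]
print(moi_compactness(square))   # ~0.96, close to 1
print(moi_compactness(strip))    # much smaller: elongated region
```
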
vKTtZzn | Constraints for the design of variability-intensive service-oriented reference architectures - An industrial case study | Context: Service-oriented architecture has become a widely used concept in the software industry. However, we currently lack support for designing variability-intensive service-oriented systems. Such systems could be used in different environments, without the need to design them from scratch. To support the design of variability-intensive service-oriented systems, reference architectures that facilitate variability in instantiated service-oriented architectures can help. Objective: The design of variability-intensive service-oriented reference architectures is subject to specific constraints. Architects need to know these constraints when designing such reference architectures. Our objective is to identify these constraints. Method: An exploratory case study was performed in the context of local e-government in the Netherlands to study constraints from the perspective of (a) the users of a variability-intensive service-oriented system (municipalities that implement national laws), and (b) the implementing organizations (software vendors). We collected data through interviews with representatives from five organizations, document analyses and expert meetings. Results: We identified ten constraints (e.g., organizational constraints, integration-related constraints) which affect the process of designing reference architectures for variability-intensive service-oriented systems. Also, we identified how stakeholders are affected by these constraints, and how constraints are specific to the case study domain. Conclusions: Our results help design variability-intensive service-oriented reference architectures. Furthermore, our results can be used to define processes to design such reference architectures. (C) 2012 Elsevier B.V. All rights reserved. | [
"reference architectures",
"case study",
"service-oriented architecture",
"variability",
"e-government",
"soa"
] | [
"P",
"P",
"P",
"P",
"P",
"U"
] |
-2v-TjY | Positive solutions for a nonlinear differential equation on a measure chain | We are concerned with proving the existence of positive solutions of general two-point boundary value problems for the nonlinear equation Lx(t) := -[r(t)x^Δ(t)]^Δ = f(t, x(t)). We will use fixed point theorems concerning cones in a Banach space. Important results concerning Green's functions for general two-point boundary value problems for Lx(t) := -[r(t)x^Δ(t)]^Δ = 0 will also be given. (C) 2000 Elsevier Science Ltd. All rights reserved. | [
"measure chains"
] | [
"P"
] |
WRGvSH3 | Approximate solution of the fractional advection-dispersion equation | In this paper, we consider a practical numerical method to solve a space-time fractional advection-dispersion equation with variable coefficients on a finite domain. The equation is obtained from the standard advection-dispersion equation by replacing the first-order time derivative by the Caputo fractional derivative, and the first-order and second-order space derivatives by Riemann-Liouville fractional derivatives, respectively. Here, a new method for solving this equation is proposed in the reproducing kernel space. The representation of the solution is given in the form of a series, and the n-term approximate solution is obtained by truncating the series. The method is easy to implement and the numerical results show the accuracy of the method. (C) 2009 Elsevier B.V. All rights reserved. | [
"advection-dispersion equation",
"fractional derivative",
"reproducing kernel space"
] | [
"P",
"P",
"P"
] |
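As a hedged illustration of the equation class described above (the coefficient names and the ranges of the orders are our assumptions, not the paper's), a space-time fractional advection-dispersion equation with a Caputo time derivative and Riemann-Liouville space derivatives can be written as:

```latex
% Generic space-time fractional advection-dispersion equation; alpha is the
% Caputo time order, beta and gamma are Riemann-Liouville space orders.
{}^{C}\!D_t^{\alpha} u(x,t)
  = -a(x)\, {}^{RL}\!D_x^{\beta} u(x,t)
    + b(x)\, {}^{RL}\!D_x^{\gamma} u(x,t) + f(x,t),
\qquad 0 < \alpha \le 1, \; 0 < \beta \le 1, \; 1 < \gamma \le 2 .
```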
4D7g6Xy | Optimal search strategies using simultaneous generalized hill climbing algorithms | Optimal search strategies for conducting reconnaissance, surveillance or search and rescue operations with limited assets are of significant interest to military decision makers. Multiple search platforms with varying capabilities can be deployed individually or simultaneously for these operations (e.g., helicopters, fixed wing or satellite). Due to the timeliness required in these operations, efficient use of available search platforms is critical to the success of such missions. The design of optimal search strategies over multiple search platforms can be modeled and solved as a multiple traveling salesman problem (MTSP). This paper demonstrates how simultaneous generalized hill climbing (SGHC) algorithms can be used to determine optimal search strategies over multiple search platforms for the MTSP. Computational results with SGHC algorithms applied to the MTSP are reported. These results demonstrate that when limited computing budgets are available, optimal/near-optimal search strategies over multiple search platforms can be obtained more efficiently using SGHC algorithms compared to other generalized hill climbing algorithms. Applications and extensions of this research to other military applications are also discussed. (c) 2005 Elsevier Ltd. All rights reserved. | [
"search and rescue operations",
"traveling salesman problem",
"simulated annealing",
"local search algorithms"
] | [
"P",
"P",
"U",
"M"
] |
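A minimal sketch of the kind of generalized hill climbing loop the abstract refers to, specialized to a multiple TSP (the move-one-city neighborhood, the decaying random-acceptance rule and all parameter values are illustrative assumptions, not the authors' algorithm):

```python
import copy
import random

def tour_cost(tours, dist):
    """Total length of all platform tours (open paths, for simplicity)."""
    return sum(dist[a][b] for tour in tours for a, b in zip(tour, tour[1:]))

def ghc_mtsp(cities, n_platforms, dist, iters=20000, k0=1.0):
    """Toy generalized-hill-climbing search for a multiple TSP: a random
    'move one city between tours' neighborhood, with worse moves accepted
    at a probability that decays over time (one possible GHC acceptance rule)."""
    cities = cities[:]
    random.shuffle(cities)
    step = max(1, len(cities) // n_platforms)
    tours = [cities[i * step:(i + 1) * step] for i in range(n_platforms)]
    tours[0] += cities[n_platforms * step:]           # leftovers, if any
    cur = best = tour_cost(tours, dist)
    best_tours = copy.deepcopy(tours)
    for t in range(1, iters + 1):
        src, dst = random.sample(range(n_platforms), 2)  # needs >= 2 platforms
        if not tours[src]:
            continue
        i = random.randrange(len(tours[src]))
        j = random.randrange(len(tours[dst]) + 1)
        city = tours[src].pop(i)
        tours[dst].insert(j, city)
        new = tour_cost(tours, dist)
        if new <= cur or random.random() < k0 / t:    # hill climb + random acceptance
            cur = new
            if cur < best:
                best, best_tours = cur, copy.deepcopy(tours)
        else:                                         # reject: undo the move
            tours[dst].pop(j)
            tours[src].insert(i, city)
    return best, best_tours
```

Ordinary hill climbing and simulated annealing are recovered as special cases of the acceptance rule, which is the sense in which the framework is "generalized".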
2cYGEQB | Forecasting the Long-Term Electricity Demand in Taiwan with a Hybrid FLR and BPN Approach | Predicting the electricity demand precisely and accurately is an important task for the government of each country. In addition, establishing the lowest upper bound for the electricity demand also avoids unnecessary power plant investment. To this end, a hybrid fuzzy linear regression (FLR) and back propagation network (BPN) approach is proposed in this study. In the proposed methodology, multiple experts construct their own FLR equations to predict the future electricity demand from various viewpoints. Each FLR equation can be fitted by solving two equivalent nonlinear programming problems, based on the opinions of experts. In order to aggregate these fuzzy electricity demand forecasts, a two-step aggregation mechanism is used. First, fuzzy intersection is applied to aggregate the fuzzy electricity demand forecasts into a polygon-shaped fuzzy number, in order to improve the precision. Subsequently, a BPN is constructed to defuzzify the polygon-shaped fuzzy number and generate a representative/crisp value, in order to enhance the accuracy. The actual case of Taiwan is used to evaluate the effectiveness of the proposed methodology. According to the experimental results, the proposed methodology improved the precision and accuracy of the electricity demand forecasting by 33% and 99%, respectively. | [
"forecasting",
"electricity demand",
"flr",
"bpn",
"hybrid approach"
] | [
"P",
"P",
"P",
"P",
"R"
] |
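A minimal sketch of the fuzzy linear regression form referred to above, with symmetric triangular fuzzy coefficients in Tanaka's style (generic notation; the paper's own equations may differ):

```latex
% FLR with symmetric triangular coefficients \tilde{A}_i = (a_i, c_i):
% the fuzzy forecast has a crisp center and a spread that the two
% equivalent nonlinear programs trade off against fit.
\tilde{y} = \tilde{A}_0 + \tilde{A}_1 x_1 + \cdots + \tilde{A}_n x_n,
\qquad
\mathrm{center}(\tilde{y}) = \sum_i a_i x_i,
\quad
\mathrm{spread}(\tilde{y}) = \sum_i c_i \lvert x_i \rvert .
```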
1TYghDm | Queue layouts of graph products and powers | A \(k\)-queue layout of a graph \(G\) consists of a linear order \(\sigma\) of \(V(G)\), and a partition of \(E(G)\) into \(k\) sets, each of which contains no two edges that are nested in \(\sigma\). This paper studies queue layouts of graph products and powers. | [
"queue layout",
"graph",
"cartesian product",
"d-dimensional grid graph",
"d-dimensional toroidal grid graph",
"hamming graph"
] | [
"P",
"P",
"M",
"M",
"M",
"M"
] |
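For concreteness, the nesting condition that a queue forbids is the standard one (not quoted from the paper): distinct edges \(vw\) and \(xy\) are nested in \(\sigma\) when

```latex
v <_{\sigma} x <_{\sigma} y <_{\sigma} w .
```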
-5bjz3A | Detection wavelength tuning of InGaAs/GaAs quantum dot infrared photodetector with thermal treatment | Thermal treatment of an In0.5Ga0.5As/GaAs quantum dot infrared photodetector (QDIP) structure has been carried out at 700 °C for 1 min with a SiO2 capping layer. Thermal treatment of the In0.5Ga0.5As/GaAs QDIP structure induced a blue-shift in its photoluminescence (PL) spectrum by 50 meV with a reduction of its intensity. The blue-shift in the PL spectrum and the reduction in PL intensity are thought to be due to the interdiffusion of In and Ga at the interfaces of the quantum dots (QDs) and GaAs barriers. The fabricated QDIP with the thermally treated structure showed a red-shift in its photocurrent spectrum by 22 meV from the photocurrent peak at 200 meV of the as-grown QDIP, as a consequence of the blue-shift of the QD bandgap. | [
"quantum dot",
"infrared photodetector",
"thermal treatment"
] | [
"P",
"P",
"P"
] |
TCSVb35 | Self-associated concept mapping for representation, elicitation and inference of knowledge | Concept maps have been widely put to educational uses. They possess a number of appealing features which make them a promising tool for teaching, learning, evaluation, and curriculum planning. This paper presents self-associated concept mapping (SACM), which extends the use of concept mapping by proposing the idea of self-construction and automatic problem solving to traditional concept maps. The SACM can be automatically constructed and dynamically updated. A Constrained Fuzzy Spreading Activation (CFSA) model is proposed for the SACM to support rapid and automatic decisions. With the successful development of the SACM, the capability of knowledge-based systems (KBS) can be enhanced. The concept and operational feasibility of the SACM is realized through a case study in a consultancy business. The theoretical results are found to agree well with the experimental results. | [
"self-associated concept maps",
"concept mapping",
"knowledge-based systems",
"knowledge representation",
"knowledge management"
] | [
"P",
"P",
"P",
"R",
"M"
] |
3-W9GEL | Managing distributed shared arrays in a bulk-synchronous parallel programming environment | NestStep is a parallel programming language for the BSP (bulk-synchronous parallel) programming model. In this article we describe the concept of distributed shared arrays in NestStep and its implementation on top of MPI. In particular, we present a novel method for runtime scheduling of irregular, direct remote accesses to sections of distributed shared arrays. Our method, which is fully parallelized, uses conventional two-sided message passing and thus avoids the overhead of a standard implementation of direct remote memory access based on one-sided communication. The main prerequisite is that the given program is structured in a BSP-compliant way. Copyright (C) 2004 John Wiley & Sons, Ltd. | [
"distributed shared array",
"neststep",
"parallel programming language",
"bsp model",
"bulk",
"synchronous parallelism",
"runtime scheduling of communication"
] | [
"P",
"P",
"P",
"R",
"U",
"M",
"R"
] |
12MV98S | Recrystallization process controlled by staircase pulse in phase change memory | We proposed a novel staircase pulse programming method to control the recrystallization region. We predicted the change of the recrystallization region with increasing amplitude of the second subpulse by finite element analysis. A well-controlled gradual resistance was obtained. This technique is very useful for freely achievable multilevel storage. | [
"recrystallization",
"phase change memory",
"multilevel storage"
] | [
"P",
"P",
"P"
] |
-W-F6AD | Analysis and optimization of the effect of light and nutrient solution on wheat growth and development using an inverse system model strategy | RW and W LEDs are more conducive to wheat growth and development. An inverse system model strategy is used to analyze and optimize the planting regime. The positive and inverse systems possessed high accuracy with good dynamic performance. | [
"nutrient solution",
"bioregenerative life support systems",
"wheat cultivation",
"light quality",
"system identification"
] | [
"P",
"M",
"M",
"M",
"M"
] |
-Rx:RFT | Predict soil texture distributions using an artificial neural network model | High-resolution soil maps are important for planning agriculture crop production, forest management, hydrological analysis and environmental protection. However, high-resolution soil maps are generally only available for small areas because obtaining these maps through field survey is time consuming and expensive. The objective of this study was to develop an artificial neural network (ANN) model to predict soil texture (sand, clay and silt contents) based on soil attributes obtained from existing coarse resolution soil maps combined with hydrographic parameters derived from a digital elevation model (DEM). The calibrated ANN model can then be used to produce high-resolution soil maps in areas with similar conditions without additional field surveys. The hydrographic parameters derived from the DEM were soil terrain factor, sediment delivery ratio and vertical slope position. Field measured soil texture in the Black Brook Watershed (BBW) in northwestern New Brunswick, Canada, was used to train and test the ANN model. Results indicated that the Levenberg-Marquardt optimization algorithm was better than the commonly used training method based on the resilient back-propagation algorithm. The root mean square errors between model predictions and field determination were 4.0 for clay and 6.6 for sand contents. The relative overall accuracy (within 5% of field measurement) was 88% for clay content and 81% for sand content. The trained ANN model has been tested in an experimental farm located in southeastern NB about 180 km from the Black Brook Watershed where the model was first calibrated. Results indicated that with proper training, the ANN model can be used in the areas where the model was calibrated (for interpolations), or in other areas provided that the relative range of input parameters is similar to the region where the model was calibrated. | [
"soil texture",
"artificial neural network",
"high-resolution soil map",
"sand",
"clay",
"dem"
] | [
"P",
"P",
"P",
"P",
"P",
"P"
] |
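A minimal sketch of the regression pipeline described above, using scikit-learn (the library choice, the placeholder data, and the 'lbfgs' solver are our assumptions; scikit-learn has no Levenberg-Marquardt trainer, so a quasi-Newton solver stands in for it):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder predictors per sampled site: [coarse-map attribute,
# soil terrain factor, sediment delivery ratio, vertical slope position].
rng = np.random.default_rng(0)
X = rng.random((200, 4))
y = np.column_stack([40 + 20 * X[:, 0], 20 + 10 * X[:, 1]])  # [sand %, clay %]

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                 max_iter=2000, random_state=0),
)
model.fit(X[:150], y[:150])                     # train on surveyed sites
pred = model.predict(X[150:])
rmse = np.sqrt(((pred - y[150:]) ** 2).mean(axis=0))
print("RMSE [sand, clay]:", rmse)               # silt = 100 - sand - clay
```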
19s1GzN | Prior schemata transfer as an account for assessing the intuitive use of new technology | An experiment is conducted for assessing the intuitive use of an interface. Intuitive use relies on the transfer of prior knowledge schemata. Familiar and new features yield distinct patterns of prior schemata transfer and of new schemata induction, respectively. Transfer and induction patterns were moderated by participants' cognitive style. Assessment of these patterns is reported for the evaluation and redesign of interfaces. | [
"transfer",
"intuitive use",
"schemata theory"
] | [
"P",
"P",
"M"
] |
4Ngkf95 | Ten Years' Experience of Isolation of Rickettsia spp. from Blood Samples Using the Shell-Vial Cell Culture Assay | Two strategies to improve the efficacy of the shell-vial method for Rickettsia were analyzed. Blood samples from 59 patients with Mediterranean spotted fever (MSF) were examined using the shell-vial technique. (i) DNA from positive lenses was obtained when they were contaminated. (ii) Blood sample from one patient was cultured in 17 shell-vials. R. conorii was identified in four cases by polymerase chain reaction (PCR)-RFLP. Three of these were obtained from cells adherent to lenses and the fourth one by using total patient blood sample. Rickettsia isolation using all blood samples as well as DNA from shell-vial lenses could be useful in the study of rickettsial infections | [
"rickettsia",
"shell-vial assay",
"msf (mediterranean spotted fever)"
] | [
"P",
"R",
"R"
] |
4AzhzQK | Parallel computation of the correlation dimension from a time series | A parallel algorithm is presented for computing the Correlation Dimension (D2) from a time series generated by a dynamical system, using the method of correlation integrals. Three versions are discussed: the first computes all distances between points in the phase space, whereas the second and third compute only distances less than a given threshold; the third version in particular is very powerful since it employs a box-assisted approach and linked lists for a fast search of neighboring points. The parallelization is designed for coarse-grained multiprocessor systems with distributed memory and is accomplished using a message passing model and partitioning points evenly among processors. Uniform implementation and computational analysis allow a clear comparison of the three versions. The algorithms, tested on the Transtech PARAstation multiprocessor, are well balanced, give a linear speed-up and show a good scalability. The third version is particularly suitable for fast processing of very long time series and allows the estimation of D2 even for medium- and high-dimensional systems, where an extremely large number of points is needed. The algorithms can be adapted with few modifications to the computation of the generalized dimensions Dq, and they can also be useful in other applications involving the efficient computation of distances between points in a large set. More generally, the computational framework can be used in similar problems involving long-range interactions. | [
"correlation dimension",
"message passing",
"computation of distances",
"long-range interactions",
"nonlinear time series analysis",
"box-assisted parallel algorithms",
"distributed memory multiprocessors"
] | [
"P",
"P",
"P",
"P",
"M",
"R",
"R"
] |
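The correlation-integral computation that all three versions parallelize can be sketched serially in a few lines (a naive \(O(N^2)\) Grassberger-Procaccia estimate; the threshold-based and box-assisted variants optimize exactly this kernel):

```python
import numpy as np

def correlation_integral(points, rs):
    """C(r): fraction of point pairs at distance < r (naive O(N^2) version)."""
    n = len(points)
    diffs = points[:, None, :] - points[None, :, :]
    d = np.linalg.norm(diffs, axis=-1)[np.triu_indices(n, k=1)]
    return np.array([(d < r).mean() for r in rs])

def correlation_dimension(points, rs):
    """Estimate D2 as the slope of log C(r) against log r."""
    rs = np.asarray(rs, dtype=float)
    C = correlation_integral(points, rs)
    ok = C > 0                               # drop empty bins before the log
    slope, _ = np.polyfit(np.log(rs[ok]), np.log(C[ok]), 1)
    return slope
```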
2ek3EbV | Identification of structural damage in buildings using iterative procedure and regularisation method | Purpose - The paper aims to identify both the location and severity of damage in complex framed buildings using limited noisy vibration measurements. The study aims to directly adopt incomplete measured mode shapes in structural damage identification and effectively reduce the influence of measurement errors on predictions of structural damage. Design/methodology/approach - Damage indicators are properly chosen to reflect both the location and severity of damage in framed buildings at element level for braces and at critical point level for beams and columns. Basic equations for an iterative solution procedure are provided to be solved for the chosen damage indicators. The Tikhonov regularisation method incorporating the L-curve criterion for determining the regularisation parameter is employed to produce stable and robust solutions for damage indicators. Findings - The proposed method can correctly assess the quantification of structural damage at specific locations in complex framed buildings using only limited information on modal data measurements with errors, without requiring mode shape expansion techniques or model reduction processes. Research limitations/implications - Further work may be needed to improve the accuracy of inverse predictions for very small structural damage from noisy measurements. Practical implications - The paper includes implications for the development of reliable techniques for rapid and on-line damage assessment and health monitoring of framed buildings. Originality/value - The paper offers a practical approach and procedure for correctly detecting structural damage and assessing structural condition from limited noisy vibration measurements. | [
"structures",
"buildings",
"risk assessment",
"numerical analysis"
] | [
"P",
"P",
"M",
"U"
] |
2QoiHpk | Analysis of a simple Markovian re-entrant line with infinite supply of work under the LBFS policy | We consider a two-machine, three-step re-entrant line with an infinite supply of work. The service discipline is last buffer first served. Processing times are independent and exponentially distributed. We analyze this system, obtaining steady state behavior and sample path properties. | [
"sample path properties",
"queueing",
"manufacturing",
"priority scheduling",
"markovian multiclass queueing networks",
"last buffer first served discipline",
"infinite virtual buffers",
"steady state distributions",
"gi/m/1 queue"
] | [
"P",
"U",
"U",
"U",
"M",
"R",
"M",
"R",
"U"
] |
-nDG6hM | Residual-free bubbles for embedded Dirichlet problems | We examine a stabilized method employing residual-free bubbles for enforcing Dirichlet constraints on embedded finite element interfaces. In particular, we focus attention on problems where the underlying mesh is not explicitly fitted to the geometry of the interface. The residual-free bubble problem is derived for a simple case and extensions are discussed. We show that under certain conditions, stabilization only requires knowledge of the residual-free bubble on the interface. We then examine methods to approximate the residual-free bubble. A series of benchmark tests are used to demonstrate the accuracy and robustness of the method. Comparisons are made to Nitsche's method employing an eigenvalue estimate for the global stability parameter. Particular emphasis is placed on the accuracy of flux evaluations on the interface. Finally, we employ the method to simulate an evolving interface problem motivated by resin transfer molding. | [
"stabilization",
"finite element",
"embedded interface",
"residual free bubbles"
] | [
"P",
"P",
"R",
"M"
] |
22jsZF6 | Restoring images of ancient color postcards | This paper proposes an automatic system for the restoration of digital images of vintage colored postcards, employing the combined techniques of image equalization, background segmentation based on edge detection (using an extension of the standard difference of Gaussians filter), color enhancement, and noisy spots removal. Equalization and background segmentation are used to facilitate background spot removal. To enhance colors, the overall document degradation is regarded as an illumination problem, thereby allowing the use of color constancy algorithms. The methodology was applied to a set of postcards dating from the end of the nineteenth century, and satisfactory visual results were achieved. | [
"edge detection",
"difference of gaussians",
"color constancy",
"color image processing",
"color restoration",
"image restoration"
] | [
"P",
"P",
"P",
"M",
"R",
"R"
] |
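A minimal sketch of a difference-of-Gaussians edge detector of the kind used for background segmentation (the sigmas and threshold are illustrative; the paper's extension of the standard DoG filter is not reproduced here):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_edges(gray, sigma_small=1.0, sigma_large=2.0, thresh=0.02):
    """Difference-of-Gaussians edge map: subtract two blurred copies of the
    image and keep responses above a threshold (returns a boolean mask)."""
    g = np.asarray(gray, dtype=float) / 255.0
    dog = gaussian_filter(g, sigma_small) - gaussian_filter(g, sigma_large)
    return np.abs(dog) > thresh
```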
ient6uN | the development process of systems with vr from the viewpoint of hci | Owing to their characteristics, software systems with user interfaces based on Virtual Reality technology are more complex to develop, and their development process must therefore be supported by more specific methodologies. However, the development process of these systems remains an open line of research, since it still needs to be better analyzed. This work presents a comprehensive view of the development process of software systems with user interfaces based on Virtual Reality technology from the viewpoint of Human-Computer Interaction, analyzing some methodologies found in the literature for the development of these systems. | [
"virtual reality",
"human-computer interaction",
"human-computer interface",
"vr",
"systems development process",
"systems development methodology"
] | [
"P",
"P",
"R",
"U",
"R",
"R"
] |
4AAiftL | The antecedents of purchase and re-purchase intentions of online auction consumers | This research examines the factors influencing purchase and repurchase intentions. The study describes some of the complexities and uncertainties in online auctions. Pchome & eBay and Yahoo online auction websites were the basis for the experiment. Product, price, and seller information represented website-based factors. Mediation strategies, media richness, and trust represented seller behavior factors. | [
"online auctions",
"repurchase intentions",
"mediation strategy",
"media richness",
"trust",
"purchase intentions",
"information exchange",
"price setting"
] | [
"P",
"P",
"P",
"P",
"P",
"R",
"M",
"M"
] |
3yoNz5N | Developing robust vision modules for microsystems applications | In this work, several robust vision modules are developed and implemented for fully automated micromanipulation. These are autofocusing, object and end-effector detection, real-time tracking and optical system calibration modules. An image-based visual servoing architecture and a path planning algorithm are also proposed based on the developed vision modules. Experimental results are provided to assess the performance of the proposed visual servoing approach in positioning and trajectory tracking tasks. The proposed path planning algorithm, in conjunction with visual servoing, enables successful micromanipulation tasks. | [
"microsystems",
"micromanipulation",
"tracking",
"visual servoing",
"path planning",
"visual feedback",
"robust detection",
"normalized cross correlation"
] | [
"P",
"P",
"P",
"P",
"P",
"M",
"R",
"U"
] |
4Efmut: | Mining frequent trajectory patterns in spatial-temporal databases | In this paper, we propose an efficient graph-based mining (GBM) algorithm for mining the frequent trajectory patterns in a spatial-temporal database. The proposed method comprises two phases. First, we scan the database once to generate a mapping graph and trajectory information lists (TI-lists). Then, we traverse the mapping graph in a depth-first search manner to mine all frequent trajectory patterns in the database. By using the mapping graph and TI-lists, the GBM algorithm can localize support counting and pattern extension in a small number of TI-lists. Moreover, it utilizes the adjacency property to reduce the search space. Therefore, our proposed method can efficiently mine the frequent trajectory patterns in the database. The experimental results show that it outperforms the Apriori-based and PrefixSpan-based methods by more than one order of magnitude. (C) 2009 Elsevier Inc. All rights reserved. | [
"frequent trajectory pattern",
"spatial-temporal database",
"data mining",
"location-based service",
"spatial-temporal pattern"
] | [
"P",
"P",
"M",
"U",
"R"
] |
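A much-simplified sketch of the mining idea: record the occurrence positions of each edge (a stand-in for the paper's TI-lists) and grow frequent contiguous patterns depth-first; the data layout and names are ours, and the real GBM algorithm additionally exploits the mapping graph's adjacency to prune the search:

```python
from collections import defaultdict

def mine_trajectories(db, minsup):
    """DFS mining of frequent contiguous trajectory patterns.
    db: list of trajectories, each a list of location ids."""
    # positions[(a, b)] = list of (trajectory id, start index) of edge a->b
    positions = defaultdict(list)
    for tid, traj in enumerate(db):
        for i, (a, b) in enumerate(zip(traj, traj[1:])):
            positions[(a, b)].append((tid, i))

    frequent = {}

    def extend(pattern, occs):
        support = len({tid for tid, _ in occs})
        if support < minsup:
            return
        frequent[tuple(pattern)] = support
        nxt = defaultdict(list)              # grow pattern by one location
        for tid, i in occs:
            j = i + len(pattern) - 1         # index of pattern's last element
            if j + 1 < len(db[tid]):
                nxt[db[tid][j + 1]].append((tid, i))
        for loc, occ in nxt.items():
            extend(pattern + [loc], occ)

    for (a, b), occ in positions.items():
        extend([a, b], occ)
    return frequent
```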
1xd:843 | Development of a direct current power system for a multi-node cabled ocean observatory system | Due to the shortage of suitable research methods for real-time and long-term observation of oceans, an innovative approach that can provide abundant power and wide bandwidth is being developed worldwide for undersea instruments. In this paper, we develop a direct current (DC) power system which is applied to a multi-node cabled ocean observatory system named ZERO (Zhejiang University Experimental and Research Observatory). The system addresses significant issues ranging from terrestrial facility to subsea infrastructure, and focuses on using appropriate methods to deal with several key challenges, including delivery, conversion, distribution, and management of power, and heat dissipation in pressure vessels. A basic laboratory platform consisting of a shore station, a primary node in a water tank, and a secondary node in a deep-sea simulation chamber under 42 MPa pressure was built and fully tested. An improved secondary node was deployed in Monterey Bay in California for a deep-sea trial. An 11-day laboratory test and a half-year sea trial proved that the DC power system based on our proposed methods is viable for the underwater multi-node observatory system. | [
"cabled ocean observatory system",
"heat dissipation",
"dc power system",
"deep sea"
] | [
"P",
"P",
"P",
"M"
] |
2DSg9Nw | Authorization Algorithms for Permission-Role Assignments | Permission-role assignment (PRA) is one important process in Role-based access control (RBAC), which has been proven to be a flexible and useful access model for information sharing in distributed collaborative environments. However, problems may arise during the procedures of PRA. Conflicting permissions may be assigned to one role, and as a result, the role with the permissions can derive unexpected access capabilities. This paper aims to analyze the problems during the procedures of permission-role assignments in distributed collaborative environments and to develop authorization allocation algorithms to address the problems within permission-role assignments. The algorithms are extended to the case of PRA with the mobility of the permission-role relationship. Finally, comparisons with other related work are discussed to demonstrate the effectiveness of the work. | [
"authorization",
"access control",
"conflicts"
] | [
"P",
"P",
"P"
] |
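A toy sketch of a conflict-aware assignment check of the kind such algorithms perform (the conflict relation and data structures are illustrative assumptions, not the paper's algorithms):

```python
# Mutually exclusive permission pairs and current role assignments
# (illustrative data, not from the paper).
conflicts = {frozenset({"approve_payment", "issue_payment"})}
role_perms = {"clerk": {"create_invoice"}}

def assign_permission(role, perm):
    """Grant perm to role unless a conflicting pair would then coexist."""
    held = role_perms.setdefault(role, set())
    for pair in conflicts:
        if perm in pair and (pair - {perm}) <= held:
            raise ValueError(f"conflict: {sorted(pair)} cannot share role {role!r}")
    held.add(perm)

assign_permission("clerk", "approve_payment")      # succeeds
try:
    assign_permission("clerk", "issue_payment")    # conflicting pair -> rejected
except ValueError as err:
    print(err)
```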
1VPvNxs | A Multi-Agent architecture for intelligent gathering systems | This paper presents a model to define heterogeneous agents that solve problems by sharing the knowledge retrieved from the WEB and cooperating among them. The control structure of those agents is based on a general purpose Multi-Agent architecture (SKELETONAGENT) based on a deliberative approach. Any agent in the architecture is built by means of several interrelated modules: control module, language and communication module, skills modules, knowledge base, yellow pages, etc. The control module uses an agenda to activate and coordinate the agent skills. This agenda handles actions from both the internal goals of the agent and from other agents in the environment. In the paper, we show a high level agent model, which is later instantiated to build a set of heterogeneous specialized agents. The paper describes how SKELETONAGENT has been used to implement different kinds of agents and a specialized Multi-Agent System (MAS). The implemented MAS, MAPWEB-ETOURISM, is a specific implementation of a general WEB gathering architecture, named MAPWEB, which extends SKELETONAGENT. MAPWEB has been designed to solve problems in WEB domains through the integration of information gathering and planning techniques. The MAPWEB-ETOURISM system has been applied to a specific WEB domain (e-tourism), which uses information gathered directly from several WEB sources (plane, train, and hotel companies) to solve travel problems. This paper shows how the proposed architecture allows the integration of the different agents' tasks with AI techniques like planning to build a MAS which is able to gather and integrate information retrieved from the WEB to solve problems. | [
"multi-agent systems",
"information gathering",
"agent architectures",
"web-based systems"
] | [
"P",
"P",
"R",
"M"
] |
4iWy4Pq | Modeling and analyzing the impact of authorization on workflow executions | It has been a subject of a significant amount of research to automate the execution of workflows (or business processes) on computer resources. However, many workflow scenarios still require human involvement, which introduces additional security and authorization concerns. This paper presents a novel mechanism for modeling the execution of workflows with human involvement under Role-based Authorization Control. Our modeling approach applies Colored Timed Petri-Nets to allow various authorization constraints to be modeled, including role, temporal, cardinality, BoD (Binding of Duty), SOD (Separation of Duty), role hierarchy constraints etc. We also model the execution of tasks with different levels of human involvement and as such allow the interactions between workflow authorization and workflow execution to be captured. The modeling mechanism is developed in such a way that the construction of the authorization model for a workflow can be automated. This feature is very helpful for modeling large collections of authorization policies and/or complex workflows. A Petri-net toolkit, the CPN Tools, is utilized in the development of the modeling mechanism and to simulate the constructed models. This paper also presents the methods to analyze and calculate the authorization overhead as well as the performance data in terms of various metrics through the model simulations. Based on the simulation results, this paper further proposes the approaches to improving performance given the deployed authorization policies. This work can be used for investigating the impact of authorization, for capacity planning, for the design of workload management strategies, and also to estimate execution performance, when human resources and authorization policies are employed in tandem. (C) 2012 Elsevier B.V. All rights reserved. | [
"modeling",
"authorization",
"workflow",
"rbac"
] | [
"P",
"P",
"P",
"U"
] |
4njn3-& | Simulating botulinum neurotoxin with constant pH molecular dynamics in Generalized Born implicit solvent | A new method was proposed by Mongan et al. for constant pH molecular dynamics simulation and was implemented in the AMBER 8 package. Protonation states are modeled with different charge sets, and titrating residues are sampled from a Boltzmann distribution of protonation states. The simulation periodically adopts Monte Carlo sampling based on Generalized Born (GB) derived energies. However, when this approach was applied to a bio-toxin, Botulinum Neurotoxin Type A (BoNT/A), at pH 4.4, 4.7, 5.0, 6.8 and 7.2, the pKa predictions yielded by the method were inconsistent with the experimental values. The systems being simulated were divergent. Furthermore, the system behaviors in a very weak acidic solution (pH 6.8) and in a very weak basic solution (pH 7.2) were significantly different from the neutral case (pH 7.0). Hence, we speculate this method may require further study for modeling large biomolecules. | [
"botulinum neurotoxin",
"constant ph molecular dynamics",
"generalized born method"
] | [
"P",
"P",
"R"
] |
3tqC96H | interactive constraint-based search and replace | We describe enhancements to graphical search and replace that allow users to extend the capabilities of a graphical editor. Interactive constraint-based search and replace can search for objects that obey user-specified sets of constraints and automatically apply other constraints to modify these objects. We show how an interactive tool that employs this technique makes it possible for users to define sets of constraints graphically that modify existing illustrations or control the creation of new illustrations. The interface uses the same visual language as the editor and allows users to understand and create powerful rules without conventional programming. Rules can be saved and retrieved for use alone or in combination. Examples, generated with a working implementation, demonstrate applications to drawing beautification and transformation. | [
"interaction",
"constraint",
"search",
"graphics",
"use",
"user",
"user",
"capabilities",
"object",
"tool",
"control",
"visual language",
"rules",
"program",
"combinational",
"examples",
"implementation",
"demonstrate",
"applications",
"drawing",
"graphical editing",
"interactive techniques",
"demonstrational techniques",
"constraint specification",
"editor extensibility",
"users"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"M",
"R",
"R",
"M",
"M",
"P"
] |
4TytwSY | Smoothed analysis of integer programming | We present a probabilistic analysis of integer linear programs (ILPs). More specifically, we study ILPs in a so-called smoothed analysis in which it is assumed that first an adversary specifies the coefficients of an integer program and then (some of) these coefficients are randomly perturbed, e.g., using a Gaussian or a uniform distribution with small standard deviation. In this probabilistic model, we investigate structural properties of ILPs and apply them to the analysis of algorithms. For example, we prove a lower bound on the slack of the optimal solution. As a result of our analysis, we are able to specify the smoothed complexity of classes of ILPs in terms of their worst case complexity. This way, we obtain polynomial smoothed complexity for packing and covering problems with any fixed number of constraints. Previous results of this kind were restricted to the case of binary programs. | [
"integer programming",
"probabilistic analysis"
] | [
"P",
"P"
] |
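The input model behind smoothed analysis can be stated compactly (generic notation, not the paper's exact definitions): an adversary fixes coefficients \(\bar{c}\), nature perturbs them with noise of magnitude \(\sigma\), and complexity is the worst-case expected running time:

```latex
% Smoothed-analysis input model: adversarial coefficients plus random noise.
c = \bar{c} + \sigma g,
\qquad
T_{\mathrm{smoothed}}(n, \sigma)
  = \max_{\bar{c}} \; \mathbb{E}_{g}\!\left[\, T(\bar{c} + \sigma g) \,\right],
\qquad g \sim \mathcal{N}(0,1)^{m} \text{ (or uniform)} .
```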
LmAHb7V | Model-based verification of a security protocol for conditional access to services | We use the formal language LOTOS to specify and verify the robustness of the Equicrypt protocol under design in the European OKAPI project for conditional access to multimedia services. We state some desired security properties and formalize them. We describe a generic intruder process and its modelling, and show that some properties are falsified in the presence of this intruder. The diagnostic sequences can be used almost directly to exhibit the scenarios of possible attacks on the protocol. Finally, we propose an improvement of the protocol which satisfies our properties. | [
"security protocols",
"lotos",
"equicrypt",
"model-checking"
] | [
"P",
"P",
"P",
"U"
] |
3:BaczT | A multi-period TSP with stochastic regular and urgent demands | In this paper, we study the multi-period TSP problem with stochastic urgent and regular demands. Urgent demands have to be satisfied immediately while regular demands can be satisfied either immediately or the day after. Demands appear stochastically at nodes. The objective is to minimize the average long-run delivery costs, knowing the probabilities governing the demands at nodes. The problem is cast as a Markov Process with Costs and, at least in principle, can be solved using an approach originally proposed by Howard [R.A. Howard, Dynamic Programming and Markov Processes, MIT, Cambridge, USA, 1960] for Markov Processes with Rewards. However, the number of states of the Markov Process grows exponentially with the number of nodes in the network, which poses a limit on the dimension of the instances that are computationally tractable. We suggest a second Markov approach considering a system (Aggregate Model) whose number of states grows only polynomially with the number of nodes in the network. The important relations between the optimal solutions of the original and the aggregate models will be discussed. Finally, we also propose a hybrid procedure which combines both models. The viability of the proposed methodology is shown by applying the procedure to a numerical example. | [
"markov processes",
"routing",
"logistics"
] | [
"P",
"U",
"U"
] |
3CSdt8c | Timetable construction: the algorithms and complexity perspective | This paper advocates approaching timetable construction from the algorithms and complexity perspective, in which analysis of the specific problem under study is used to find efficient algorithms for some of its aspects, or to relate it to other problems. Examples are given of problem analyses leading to relaxations, phased approaches, very large-scale neighbourhood searches, bipartite matchings, ejection chains, and connections with standard NP-complete problems. Although a thorough treatment is not possible in a paper of this length, it is hoped that the examples will encourage timetabling researchers to explore further with a view to utilising some of the techniques in their own work. | [
"timetabling",
"algorithms",
"np-completeness"
] | [
"P",
"P",
"P"
] |
-2MyPEd | Women's autonomy and the evaluation of the information available on the Internet on hormone therapy after menopause | This study was designed to examine the information in Spanish, provided by different Web sites, related to hormone therapy and climacteric symptoms. Web sites evaluated included those belonging to government and scientific institutions and to a miscellaneous group. In Web sites in Spanish, there was more extensive information on the benefits of hormone therapy than on other items. The Web sites of governmental institutions provided significantly more information on the risks (P < .01) and benefits (P = .02) of hormone therapy than did the other sites. Governmental institutions from the United States, unlike those from Spain, did not make recommendations regarding when hormone therapy should be considered and instead emphasized the woman's decision. The variability of information in Spanish on hormone therapy and postmenopausal symptoms presented on the Internet is related to the organization responsible for the Web site. In addition, cultural differences in the concept of patient autonomy could partly explain the differences in the emphasis placed on women's role in the decision-making process. | [
"internet",
"hormone replacement therapy",
"information dissemination",
"personal autonomy"
] | [
"P",
"M",
"M",
"M"
] |
4jLfmTC | Algorithmic aspects of Steiner convexity and enumeration of Steiner trees | For a set \(W\) of vertices of a connected graph \(G=(V(G),E(G))\), a Steiner \(W\)-tree is a connected subgraph \(T\) of \(G\) such that \(W\subseteq V(T)\) and \(|E(T)|\) is minimum. Vertices in \(W\) are called terminals. In this work, we design an algorithm for the enumeration of all Steiner \(W\)-trees for a constant number of terminals, which is the usual scenario in many applications. We discuss algorithmic issues involving space requirements to compactly represent the optimal solutions and the time delay to generate them. After generating the first Steiner \(W\)-tree in polynomial time, our algorithm enumerates the remaining trees with \(O(n)\) delay (where \(n=|V(G)|\)). An algorithm to enumerate all Steiner trees was already known (Khachiyan et al., SIAM J Discret Math 19:966–984, 2005), but this is the first one achieving polynomial delay. A by-product of our algorithm is a representation of all (possibly exponentially many) optimal solutions using polynomially bounded space. We also deal with the following problem: given \(W\) and a vertex \(x\in V(G)\setminus W\), is \(x\) in a Steiner \(W'\)-tree for some \(\emptyset \ne W' \subseteq W\)? This problem is investigated from the complexity point of view. We prove that it is NP-hard when \(W\) has arbitrary size. In addition, we prove that deciding whether \(x\) is in some Steiner \(W\)-tree is NP-hard as well. We discuss how these problems can be used to define a notion of Steiner convexity in graphs. | [
"steiner convexity",
"steiner tree",
"complexity of algorithms",
"enumerative combinatorics"
] | [
"P",
"P",
"R",
"U"
] |
12du67g | Modeling customer satisfaction for new product development using a PSO-based ANFIS approach | When developing new products, it is important to understand customer perception towards consumer products. It is because the success of new products is heavily dependent on the associated customer satisfaction level. If customers are satisfied with a new product, the chance of the product being successful in marketplaces would be higher. Various approaches have been attempted to model the relationship between customer satisfaction and design attributes of products. In this paper, a particle swarm optimization (PSO) based ANFIS approach to modeling customer satisfaction is proposed for improving the modeling accuracy. In the approach, PSO is employed to determine the parameters of an ANFIS from which better customer satisfaction models in terms of modeling accuracy can be generated. A notebook computer design is used as an example to illustrate the approach. To evaluate the effectiveness of the proposed approach, modeling results based on the proposed approach are compared with those based on the fuzzy regression (FR), ANFIS and genetic algorithm (GA)-based ANFIS approaches. The comparisons indicate that the proposed approach can effectively generate customer satisfaction models and that their modeling results outperform those based on the other three methods in terms of mean absolute errors and variance of errors. | [
"new product development",
"anfis",
"particle swarm optimization",
"customer satisfaction models"
] | [
"P",
"P",
"P",
"P"
] |
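A compact sketch of a global-best PSO loop of the kind used to tune ANFIS parameters (the inertia and acceleration constants are conventional defaults we assume, and the quadratic fitness below is only a placeholder for the ANFIS training error):

```python
import numpy as np

def pso(fitness, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize fitness(x) over R^dim with a basic global-best PSO."""
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (n_particles, dim))     # particle positions
    v = np.zeros_like(x)                           # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(fitness, 1, x)
    g = pbest[pbest_f.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        f = np.apply_along_axis(fitness, 1, x)
        better = f < pbest_f                       # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# x could encode, e.g., ANFIS membership-function centers and widths, with
# the fitness being the model's error on training data.
best, err = pso(lambda x: ((x - 0.3) ** 2).sum(), dim=5)
```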
f9bbjEb | The influence of controllable task-lighting on productivity: a field study in a factory | This study examines whether or not a controllable task-lighting system that allows people to select high lighting levels will enhance productivity under real working conditions. For a period of 16 months a study was carried out in a luminaire factory in Finland in which such a task-lighting system was installed above 10 individual workstations. The illuminances selected by the users were recorded and productivity was monitored. Enhancing productivity can be relevant in industrial processes. The increase of productivity for the test group was +4.5% compared to a reference group, and statistically significant. The mechanism for this increase can be improved visual performance, biological effects of light, or psychological effects. Different dimming speeds were used to see whether the subjects' choices were based on illuminance or on the response of the control system. Decreasing the dimming speed of the system decreased the illuminance chosen by 13%. However, at slower dimming speeds the subjects took 55% longer to reach a given level, which suggests that they were aiming to set the lighting to their preferred level and not just pushing the button for a certain time. | [
"lighting",
"productivity",
"industrial environment"
] | [
"P",
"P",
"M"
] |
-LL6GDx | Self-directedness in nontraditional college students: a behavioral factor in computer anxiety? | This paper reports on the results of a study of nontraditional undergraduate students enrolled in a business communication course. The focus of the study was to determine whether or not self-directedness in learner profile was a predictor of computer anxiety. Analysis was carried out to assess the factor structure of the Oddi Continuing Learning Inventory and Oetting's Computer Anxiety Scale. Self-directedness was indicated to be a predictor of computer anxiety in nontraditional college students. | [
"self-directedness",
"computer anxiety",
"nontraditional students"
] | [
"P",
"P",
"R"
] |
-Ted4sK | Selective discount for supplier-buyer coordination using common replenishment epochs | A supplier may reduce its order costs by providing an incentive in the form of price discounts to buyers to restrict their replenishment intervals to multiples of a common replenishment epoch (CRE). This coordination mechanism was studied by Viswanathan and Piplani, who suggested that the supplier offer a discount that is the maximum of the discount required by all buyers. We generalize their model to allow for a selective discount policy that, if beneficial, excludes some buyers to minimize the supplier's total cost. Using a computational study, we observe that offering discounts to buyers selectively, if necessary, by segmenting them and offering multiple CRE, reduces the supplier's cost in many scenarios. | [
"coordination",
"replenishment",
"supply chain management",
"inventory"
] | [
"P",
"P",
"U",
"U"
] |
2rssGiH | Role of Exogenous and Endogenous Hormones in Endometrial Cancer | Endometrial carcinoma is the most common cancer of the female reproductive organs in the United States. International comparisons reveal that the incidence of endometrial cancer vary widely between different countries with the highest rates observed in North America and Northern Europe, intermediate rates in Eastern Europe and Latin America, and lowest rates in Asia and Africa. International variation in endometrial cancer rates may represent differences in the distribution of known risk factors, which include obesity, postmenopausal estrogen replacement, ovarian dysfunction, diabetes mellitus, infertility, nulliparity, and tamoxifen use. Most of the risk factors for endometrial cancer can be explained within the framework of the unopposed estrogen hypothesis, which proposes that exposure to estrogens unopposed by progesterone or synthetic progestins leads to increased mitotic activity of endometrial cells, increased number of DNA replication errors, and somatic mutations resulting in malignant phenotype. Although the impact of exogenous hormone replacement was intensively studied during the last two decades, less is known about the effects of endogenous hormones in endometrial cancer. A review of available experimental, clinical, and epidemiologic data suggests that in addition to estrogens, other endogenous hormones, including progesterone, androgens, gonadotropins, prolactin, insulin, and insulin-like growth factors, may play a role in the pathogenesis of different histopathologic types of endometrial cancer. | [
"endogenous hormones",
"hormones",
"endometrial cancer",
"cancer",
"progesterone",
"exogenous hormones",
"androgens"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
b87t3EU | Lamination of metal sheets | This paper describes a manufacturing process to produce injection molding tools by lamination of aluminum alloy sheets. The process involves design of the tool using 3D CAD modeling, slicing of the data into cross sections, laser beam cutting of the sheets into 2D profiles, lamination, bonding, milling and finishing. This technology allows the inclusion of complex cooling channels in order to achieve a significant reduction of the cycle time and to control the cooling process. Because of the low bonding temperature and force, large tools can be made in a time- and cost-effective manner. The manufacturing system is similar to the technology developed by Nakagawa et al. [T. Nakagawa, M. Kunieda, S. Liu, Laser cut sheet laminated forming dies by diffusion bonding, in: Proceedings of the 25th International MTDR Conference, Institute of Industrial Science, University of Tokyo, Japan, 1985]; however, the sheet material used is different and therefore so is the bonding method. In this article, the introduced bonding technique, laser cutting, milling and other important parameters such as bonding strength are discussed. (C) 1999 Published by Elsevier Science B.V. All rights reserved. | [
"injection molding tool",
"3d cad modeling",
"rapid prototyping",
"laminated object manufacturing"
] | [
"P",
"P",
"U",
"M"
] |
-EKQj49 | Optimization of Robust Loss Functions for Weakly-Labeled Image Taxonomies | The recently proposed ImageNet dataset consists of several million images, each annotated with a single object category. These annotations may be imperfect, in the sense that many images contain multiple objects belonging to the label vocabulary. In other words, we have a multi-label problem but the annotations include only a single label (which is not necessarily the most prominent). Such a setting motivates the use of a robust evaluation measure, which allows for a limited number of labels to be predicted and, so long as one of the predicted labels is correct, the overall prediction should be considered correct. This is indeed the type of evaluation measure used to assess algorithm performance in a recent competition on ImageNet data. Optimizing such types of performance measures presents several hurdles even with existing structured output learning methods. Indeed, many of the current state-of-the-art methods optimize the prediction of only a single output label, ignoring this structure altogether. In this paper, we show how to directly optimize continuous surrogates of such performance measures using structured output learning techniques with latent variables. We use the output of existing binary classifiers as input features in a new learning stage which optimizes the structured loss corresponding to the robust performance measure. We present empirical evidence that this allows us to boost the performance of binary classification on a variety of weakly-supervised labeling problems defined on image taxonomies. | [
"image taxonomies",
"image labeling",
"image tagging",
"structured learning"
] | [
"P",
"R",
"M",
"R"
] |
yUU4faa | Calculation of the Raman frequencies as a function of pressure in the solid phases II and III (III′) of benzene | We calculate here the Raman frequencies of the lattice modes \(A(A_{g})\), \(B(B_{2g})\) and \(C(B_{1g}\,B_{3g})\) as a function of pressure at room temperature for the solid phases (II, III and III′) of benzene. This calculation is performed using volume data through the mode Grüneisen parameter. It is found that our calculated frequencies of those lattice modes increase with increasing pressure, as expected. The calculated frequencies are in good agreement with the measurements of the three lattice modes for the solid phases studied in benzene. | [
"raman frequency",
"benzene",
"volume",
"solid phases (ii, iii, iii ')"
] | [
"P",
"P",
"P",
"R"
] |
5&vbD5A | Prediction of temperature distribution in human BEL exposed to 900 MHz mobile phone radiation using ANFIS | An experimental setup is designed to investigate the effect of a 900 MHz electromagnetic field. The EM field strength of the RF generator, the applied antenna distances, the exposure time and the measured depths were systematically changed while establishing the varying conditions. A soft computing method is used for the first time in the literature to determine the temperature distribution of a tissue equivalent liquid exposed to the electromagnetic field. | [
"900mhz mobile phone radiation",
"soft computing",
"temperature effect of electromagnetic fields",
"adaptive neuro-fuzzy inference system (anfis)",
"modeling the temperature effect of electromagnetic fields",
"thermal effects of electromagnetic fields"
] | [
"P",
"P",
"R",
"M",
"M",
"M"
] |
4qEp9kj | An experimental investigation of formality in UML-based development | The Object Constraint Language (OCL) was introduced as part of the Unified Modeling Language (UML). Its main purpose is to make UML models more precise and unambiguous by providing a constraint language describing constraints that the UML diagrams alone do not convey, including class invariants, operation contracts, and statechart guard conditions. There is an ongoing debate regarding the usefulness of using OCL in UML-based development, questioning whether the additional effort and formality is worth the benefit. It is argued that natural language may be sufficient, and using OCL may not bring any tangible benefits. This debate is in fact similar to the discussion about the effectiveness of formal methods in software engineering, but in a much more specific context. This paper presents the results of two controlled experiments that investigate the impact of using OCL on three software engineering activities using UML analysis models: detection of model defects through inspections, comprehension of the system logic and functionality, and impact analysis of changes. The results show that, once past an initial learning curve, significant benefits can be obtained by using OCL in combination with UML analysis diagrams to form a precise UML analysis model. This result is, however, conditioned on providing substantial, thorough training to the experiment participants. | [
"uml",
"ocl",
"comprehension of software models",
"software engineering experimentation"
] | [
"P",
"P",
"R",
"R"
] |
U91E2Sb | quantitative evaluation between rival planning systems with dosimetric indices | To evaluate and quantify the potential dosimetric gains of two intensity-modulated radiotherapy (IMRT) modalities for head-and-neck (H&N) tumor cases with PET/CT image-guided target volume delineation. Twenty patients curatively treated by PS1 were examined. Dose plans were compared using dose volume histograms (DVH), the conformity index (CI) and homogeneity index (HI) of the planned target volume (PTV), and improvement ratios of CI and HI. A dosimetric gain in conformity and homogeneity of the PTV and sparing of OARs was significantly obtained in PS1 versus PS2 plans in the study cohort. Whether such dosimetric superiority of PS1 could translate into clinical advantages needs further investigation. | [
"conformity index",
"homogeneity index",
"planned target volume"
] | [
"P",
"P",
"P"
] |
54wowFd | automatic data partitioning in software transactional memories | We investigate to what extent data partitioning can help improve the performance of software transactional memory (STM). Our main idea is that the access patterns of the various data structures of an application might be sufficiently different so that it would be beneficial to tune the behavior of the STM for individual data partitions. We evaluate our approach using standard transactional memory benchmarks. We show that these applications contain partitions with different characteristics and, despite the runtime overhead introduced by partition tracking and dynamic tuning, that partitioning provides significant performance improvements. | [
"data",
"partitioning",
"software transactional memories",
"transactional memory",
"help",
"performance",
"access",
"pattern",
"data structures",
"applications",
"tuning",
"behavior",
"evaluation",
"standardization",
"benchmark",
"runtime",
"tracking",
"dynamic"
] | [
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P",
"P"
] |
1rw4P7h | Quasi-synchronous checkpointing: Models, characterization, and classification | Checkpointing algorithms are classified as synchronous and asynchronous in the literature. In synchronous checkpointing, processes synchronize their checkpointing activities so that a globally consistent set of checkpoints is always maintained in the system. Synchronizing checkpointing activity involves message overhead and process execution may have to be suspended during the checkpointing coordination, resulting in performance degradation. In asynchronous checkpointing, processes take checkpoints without any coordination with others. Asynchronous checkpointing provides maximum autonomy for processes to take checkpoints; however, some of the checkpoints taken may not lie on any consistent global checkpoint, thus making the checkpointing efforts useless. Asynchronous checkpointing algorithms in the literature can reduce the number of useless checkpoints by making processes take communication induced checkpoints besides asynchronous checkpoints. We call such algorithms quasi-synchronous. In this paper, we present a theoretical framework for characterizing and classifying such algorithms. The theory not only helps to classify and characterize the quasi-synchronous checkpointing algorithms, but also helps to analyze the properties and limitations of the algorithms belonging to each class. It also provides guidelines for designing and evaluating such algorithms. | [
"consistent global checkpoint",
"causality",
"distributed checkpointing",
"failure recovery",
"fault tolerance",
"zigzag paths"
] | [
"P",
"U",
"M",
"U",
"U",
"U"
] |
77ULaXo | Consensus control of multi-agent systems with missing data in actuators and Markovian communication failure | This article investigates the consensus problem of multi-agent systems (MASs) with imperfect communication both in channels and in actuators. The data transmission among agents may fail due to limited communication capacity, and the actuators may fail to receive information owing to a noisy environment. We use a Markov chain approach to characterise the occurrence of the two types of missing data in a unified framework. A sufficient consensus condition is first obtained in terms of linear matrix inequalities. Then, based on this condition, a novel controller design method is further developed such that the MAS with imperfect communication reaches mean-square consensus. It is shown that the consensus problem for MASs with switching topology can be regarded as a special case of the problem considered in this article, and the related theoretical results are presented as well. Numerical examples are provided to illustrate the effectiveness of the proposed approach. | [
"consensus control",
"multi-agent systems",
"missing data",
"markov chain",
"mean-square stability"
] | [
"P",
"P",
"P",
"P",
"M"
] |
3UN&5LU | Remote sensing big data computing: Challenges and opportunities | This paper identifies the properties and features of remote sensing big data. This paper reviews the state of the art of remote sensing big data computing. This paper discusses the data-intensive computing issues in remote sensing big data processing. | [
"big data",
"data-intensive computing",
"remote sensing data processing"
] | [
"P",
"P",
"R"
] |
2LgvPtm | hmmer acceleration using systolic array based reconfigurable architecture | HMMer is a widely-used bioinformatics software package that uses profile Hidden Markov Models (HMMs) to model the primary structure consensus of a family of protein or nucleic acid sequences. However, with the rapid growth of both sequence and model databases, it is more and more time-consuming to run HMMer on traditional computer architectures. With the development of modern field programmable gate array (FPGA) technology, applications can be accelerated using a CPU-FPGA cooperative system by mapping computationally intensive work onto the FPGA. In this paper, the computation kernel of HMMer, P7Viterbi, is selected to be accelerated by FPGA. After careful data dependency analysis, we propose a systolic array based reconfigurable architecture to exploit both inter-module and intra-module parallelism. There is an infrequent feedback loop in P7Viterbi to update the value of the beginning state (B state), which limits further parallelization. Previous work either ignored the feedback loop or serialized the process, leading to loss of either precision or efficiency. Our proposed architecture can exploit maximum parallelism without loss of precision. The proposed architecture speculatively runs with full parallelism assuming that the feedback loop does not take place. If the rare feedback case actually occurs, a rollback mechanism is used to ensure correctness. Results show that by using a Xilinx Virtex-5 110T FPGA, the proposed architecture can achieve a speedup of about 56.8 times compared with that of an Intel Core2 Duo 2.33GHz CPU. | [
"hmmer",
"acceleration",
"systolic array",
"reconfigurable."
] | [
"P",
"P",
"P",
"R"
] |
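For readers unfamiliar with the kernel, the systolic-array parallelism comes straight from the data dependencies of the Viterbi recurrences. A stripped-down sketch (special states and the rare B-state feedback are omitted; the transition and emission scores are toy values, not Plan 7 parameters):

```python
import numpy as np

NEG_INF = float("-inf")

def viterbi_core(n, m, tr, em):
    """Core M/I/D recurrences of a profile-HMM Viterbi pass. Cell (i, j)
    reads only cells (i-1, j-1), (i-1, j) and (i, j-1), so every cell on
    an anti-diagonal is independent -- the parallelism a systolic array
    can exploit."""
    M = np.full((n + 1, m + 1), NEG_INF)
    I = np.full((n + 1, m + 1), NEG_INF)
    D = np.full((n + 1, m + 1), NEG_INF)
    M[0, 0] = 0.0
    for i in range(1, n + 1):          # sequence positions
        for j in range(1, m + 1):      # model columns
            M[i, j] = em[i, j] + max(M[i-1, j-1] + tr["MM"],
                                     I[i-1, j-1] + tr["IM"],
                                     D[i-1, j-1] + tr["DM"])
            I[i, j] = max(M[i-1, j] + tr["MI"], I[i-1, j] + tr["II"])
            D[i, j] = max(M[i, j-1] + tr["MD"], D[i, j-1] + tr["DD"])
    return M

tr = {k: -1.0 for k in ("MM", "IM", "DM", "MI", "II", "MD", "DD")}
em = np.random.default_rng(1).normal(size=(9, 6))
print(viterbi_core(8, 5, tr, em)[8, 5])
```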
1qTVnVS | On the consensus threshold for the opinion dynamics of Krause-Hegselmann | In the consensus model of Krause-Hegselmann, opinions are real numbers between 0 and 1, and two agents are compatible if the difference of their opinions is smaller than the confidence bound parameter epsilon. A randomly chosen agent takes the average of the opinions of all neighboring agents which are compatible with it. We propose a conjecture, based on numerical evidence, on the value of the consensus threshold epsilon_c of this model. We claim that epsilon_c can take only two possible values, depending on the behavior of the average degree d of the graph representing the social relationships when the population N approaches infinity: if d diverges as N -> infinity, epsilon_c equals the consensus threshold epsilon_i ~ 0.2 of the complete graph; if instead d stays finite as N -> infinity, epsilon_c = 1/2, as for the model of Deffuant et al. | [
"sociophysics",
"monte carlo simulations"
] | [
"U",
"U"
] |
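The update rule described above is simple enough to reproduce numerically. A minimal sketch on the complete graph (population size, epsilon, and step count are illustrative only):

```python
import random

def kh_step(opinions, eps):
    """One asynchronous Krause-Hegselmann update: a randomly chosen agent
    adopts the average opinion of all agents within distance eps of it
    (on the complete graph, every agent is a neighbor)."""
    i = random.randrange(len(opinions))
    compatible = [x for x in opinions if abs(x - opinions[i]) < eps]
    opinions[i] = sum(compatible) / len(compatible)  # never empty: includes i

def simulate(n=100, eps=0.25, steps=50_000, seed=0):
    random.seed(seed)
    opinions = [random.random() for _ in range(n)]   # opinions in [0, 1]
    for _ in range(steps):
        kh_step(opinions, eps)
    return opinions

final = simulate()
print(max(final) - min(final))  # ~0 when eps is above the threshold
```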
52fqaPT | Effect of intravenous administration of dipyridamole in a rat model of chronic cerebral ischemia | Pharmacological therapy able to improve the cognitive performance of patients with chronic vascular pathologies is currently unavailable. Many studies of chronic cerebral hypoperfusion in rodents have revealed alterations in reference memory and learning. Dipyridamole was introduced into clinical medicine in the early 1960s as a coronary vasodilator. It is a potent inhibitor of platelet activation and reduces the formation of thrombi in vivo. In addition, it is an antithrombotic agent used, in combination with aspirin, for secondary stroke prevention. Recent evidence indicates that dipyridamole also has anti-inflammatory properties. Bilateral common carotid artery occlusion (2VO) in the rat is recognized as a valid model of chronic cerebral hypoperfusion, also referred to as the vascular cognitive impairment rat model. Here, we report that dipyridamole reverses the impairment of spatial working memory 90 days after 2VO. This protective effect may be related to dipyridamole's anti-inflammatory properties. | [
"dipyridamole",
"chronic ischemia",
"two-vessel occlusion",
"cognition tests",
"neurological deficit"
] | [
"P",
"R",
"M",
"M",
"U"
] |
1BF74tc | A new image segmentation algorithm with applications to image inpainting | This article describes a new approach to image segmentation. First, an image is locally modeled using a spatial autoregressive model for the image intensity. Then the autoregressive residual image is computed. The resulting image possesses interesting texture features: borders and edges are highlighted, suggesting that our algorithm can also be used for border detection. Experimental results with real images are provided to show how the algorithm works in practice. A robust version of the algorithm is also discussed, to be used when the original image is contaminated with additive outliers. A novel application in the context of image inpainting is offered as well. | [
"image segmentation",
"image inpainting",
"border detection",
"spatial ar models",
"robust estimators"
] | [
"P",
"P",
"P",
"M",
"M"
] |
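A compressed sketch of the residual idea (the paper fits the autoregressive model locally and offers a robust variant; the single global least-squares fit below is a deliberate simplification for illustration):

```python
import numpy as np

def ar_residual_image(img):
    """Regress each interior pixel on its four neighbors (one global
    first-order spatial AR fit by least squares) and return the
    residual image; residuals are large where the model breaks down,
    i.e. near borders and edges."""
    f = img.astype(float)
    center = f[1:-1, 1:-1].ravel()
    neigh = np.stack([f[:-2, 1:-1].ravel(),    # up
                      f[2:, 1:-1].ravel(),     # down
                      f[1:-1, :-2].ravel(),    # left
                      f[1:-1, 2:].ravel()],    # right
                     axis=1)
    coef, *_ = np.linalg.lstsq(neigh, center, rcond=None)
    resid = center - neigh @ coef
    return resid.reshape(f.shape[0] - 2, f.shape[1] - 2)

# Toy image with a diagonal step edge: residuals concentrate along it.
r = ar_residual_image(np.triu(np.ones((32, 32))))
i, j = np.unravel_index(np.abs(r).argmax(), r.shape)
print(i, j)  # largest residual lies on the diagonal edge (i close to j)
```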
-At3tYr | Fuzzy machine vision based clip detection | This paper describes the use of an objective fuzzy approach for fast and accurate vision-based inspection. An inspection problem faced by a Canadian automotive parts manufacturer is used as a case study. The problem concerns a vision system operated to confirm the placement of metal fastening clips on a structural member that supports a truck dash panel; the manufacturer was interested in identifying the presence or absence of metal clips inserted by a robot arm. It took the manufacturer over 8 months to tune its commercial machine vision system to detect missing clips, and yet the accuracy and efficiency of that system remained in question. Five different universities across Canada worked in parallel on this problem over a span of 2 years. After trying various statistical approaches, we developed an efficient fuzzy model. The proposed model correctly identifies all the images in a database containing 1910 images; this strong performance on the entire database confirms the robustness of the fuzzy model. | [
"fuzzy machine vision",
"clip detection",
"feature selection",
"pattern recognition",
"subtractive clustering",
"sugeno model",
"automated inspection"
] | [
"P",
"P",
"U",
"U",
"U",
"M",
"M"
] |
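The abstract does not reveal the fuzzy model's internals; the keyphrases point to a Sugeno-type system built with subtractive clustering. Purely as background, a minimal zero-order Sugeno inference over two hypothetical image features (all centers, widths, and rules below are invented placeholders, not the paper's model) could look like:

```python
import math

def gauss(x, c, s):
    """Gaussian membership function with center c and width s."""
    return math.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_clip_score(brightness, edge_density):
    """Zero-order Sugeno system: each rule fires with strength
    min(memberships) and contributes a constant consequent; the output
    is the firing-strength-weighted average of the consequents."""
    rules = [
        # (membership of feature 1, membership of feature 2, consequent)
        (gauss(brightness, 0.8, 0.2), gauss(edge_density, 0.7, 0.2), 1.0),  # present
        (gauss(brightness, 0.2, 0.2), gauss(edge_density, 0.1, 0.2), 0.0),  # absent
    ]
    w = [min(m1, m2) for m1, m2, _ in rules]   # rule firing strengths
    z = [c for _, _, c in rules]
    return sum(wi * zi for wi, zi in zip(w, z)) / sum(w)

print(sugeno_clip_score(0.75, 0.65))  # close to 1 -> clip detected
```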
5-RLGxP | An efficient method for the computation of Legendre moments | Legendre moments are continuous moments; hence, when they are applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and that the images reconstructed from these moments have lower error than those of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time is much longer than that of the proposed method. | [
"moments",
"feature representation"
] | [
"P",
"U"
] |
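A sketch of the exact-moment computation described above (one common normalisation and pixel tessellation of [-1, 1] x [-1, 1] are assumed; the paper's exact conventions may differ in detail):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def legendre_integral(p, a, b):
    """Exact integral of P_p over [a, b], via the identity
    (2p + 1) P_p(x) = d/dx [P_{p+1}(x) - P_{p-1}(x)] for p >= 1."""
    if p == 0:
        return b - a
    P_next, P_prev = Legendre.basis(p + 1), Legendre.basis(p - 1)
    return ((P_next(b) - P_prev(b)) - (P_next(a) - P_prev(a))) / (2 * p + 1)

def exact_legendre_moment(img, p, q):
    """(p, q) Legendre moment of an image mapped onto [-1, 1] x [-1, 1],
    computed exactly by integrating the polynomials over each pixel's
    interval instead of sampling them at pixel centres."""
    n_rows, n_cols = img.shape
    x_edges = np.linspace(-1.0, 1.0, n_cols + 1)   # pixel edges, x axis
    y_edges = np.linspace(-1.0, 1.0, n_rows + 1)   # pixel edges, y axis
    ix = np.array([legendre_integral(p, x_edges[j], x_edges[j + 1])
                   for j in range(n_cols)])
    iy = np.array([legendre_integral(q, y_edges[i], y_edges[i + 1])
                   for i in range(n_rows)])
    return (2 * p + 1) * (2 * q + 1) / 4.0 * (iy @ img @ ix)

print(exact_legendre_moment(np.eye(8), 2, 2))
```

The separability of the double integral (the `iy @ img @ ix` contraction) is what keeps this direct computation cheap: each one-dimensional pixel integral is reused across an entire row or column.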
1MxDPbV | A methodology and experimental shell for formally addressing centralized/distributed decision-making choices | Managers seeking optimal organizational design require a means for choosing between centralized and distributed decision-making processes. We provide a methodology for modelling and subsequently comparing centralized and distributed decision-making processes. A detailed example of the implementation of all but the final stage of this methodology is provided in the context of a flexible manufacturing system (FMS) environment. An adaptable experimental shell and interface is set forth, and initial experimental results are used to illustrate our methodology. We also illustrate how experimental results can be used in the development of automated bidding systems ("expert" systems) for use in subsequent simulations comparing the performance of centralized and distributed decision-making alternatives. | [
"methodology",
"distributed decision making",
"fms",
"laboratory experiments"
] | [
"P",
"P",
"P",
"U"
] |