{"id": "kp20k_training_0", "title": "virtually enhancing the perception of user actions", "abstract": "This paper proposes using virtual reality to enhance the perception of actions by distant users on a shared application. Here, distance may refer either to space ( e.g. in a remote synchronous collaboration) or time ( e.g. during playback of recorded actions). Our approach consists in immersing the application in a virtual inhabited 3D space and mimicking user actions by animating avatars. We illustrate this approach with two applications, the one for remote collaboration on a shared application and the other to playback recorded sequences of user actions. We suggest this could be a low cost enhancement for telepresence", "keywords": ["telepresence", "animation", "avatars", "application sharing", "collaborative virtual environments"]} {"id": "kp20k_training_1", "title": "Dynamic range improvement of multistage multibit Sigma Delta modulator for low oversampling ratios", "abstract": "This paper presents an improved architecture of the multistage multibit sigma-delta modulators (EAMs) for wide-band applications. Our approach is based on two resonator topologies, high-Q cascade-of-resonator-with-feedforward (HQCRFF) and low-Q cascade-of-integrator-with-feedforward (LQCEFF). Because of in-band zeros introduced by internal loop filters, the proposed architecture enhances the suppression of the in-band quantization noise at a low OSR. The HQCRFF-based modulator with single-bit quantizer has two modes of operation, modulation and oscillation. When the HQCRFF-based modulator is operating in oscillation mode, the feedback path from the quantizer output to the input summing node is disabled and hence the modulator output is free of the quantization noise terms. Although operating in oscillation mode is not allowed for single-stage SigmaDeltaM, the oscillation of HQCRFF-based modulator can improve dynamic range (DR) of the multistage (MASH) SigmaDeltaM. The key to improving DR is to use HQCRFF-based modulator in the first stage and have the first stage oscillated. When the first stage oscillates, the coarse quantization noise vanishes and hence circuit nonidealities, such as finite op-amp gain and capacitor mismatching, do not cause leakage quantization noise problem. According to theoretical and numerical analysis, the proposed MASH architecture can inherently have wide DR without using additional calibration techniques", "keywords": ["sigma delta modulators", "analog-to-digital converters ", "multistage ", "multibit quantizer", "dynamic range improvement"]} {"id": "kp20k_training_2", "title": "An ontology modelling perspective on business reporting", "abstract": "In this paper, we discuss the motivation and the fundamentals of an ontology representation of business reporting data and metadata structures as defined in the eXtensible business reporting language (XBRL) standard. The core motivation for an ontology representation is the enhanced potential for integrated analytic applications that build on quantitative reporting data combined with structured and unstructured data from additional sources. Applications of this kind will enable significant enhancements in regulatory compliance management, as they enable business analytics combined with inference engines for statistical, but also for logical inferences. 
In order to define a suitable ontology representation of business reporting language structures, an analysis of the logical principles of the reporting metadata taxonomies and further classification systems is presented. Based on this analysis, a representation of the generally accepted accounting principles taxonomies in XBRL by an ontology provided in the web ontology language (OWL) is proposed. An additional advantage of this representation is its compliance with the recent ontology definition metamodel (ODM) standard issued by OMG", "keywords": ["enterprise information integration and interoperability", "languages for conceptual modelling", "ontological approaches to content and knowledge management", "ontology-based software engineering for enterprise solutions", "domain engineering"]} {"id": "kp20k_training_3", "title": "The self-organizing map", "abstract": "An overview of the self-organizing map algorithm, on which the papers in this issue are based, is presented in this article", "keywords": ["self-organizing map", "learning vector quantization"]} {"id": "kp20k_training_4", "title": "The Amygdala and Development of the Social Brain", "abstract": "The amygdala comprises part of an extended network of neural circuits that are critically involved in the processing of socially salient stimuli. Such stimuli may be explicitly social, such as facial expressions, or they may be only tangentially social, such as abstract shapes moving with apparent intention relative to one another. The coordinated interplay between neural activity in the amygdala and other brain regions, especially the medial prefrontal cortex, the occipitofrontal cortex, the fusiform gyrus, and the superior temporal sulcus, allows us to develop social responses and to engage in social behaviors appropriate to our species. The harmonious functioning of this integrated social cognitive network may be disrupted by congenital or acquired lesions, by genetic anomalies, and by exceptional early experiences. Each form of disruption is associated with a slightly different outcome, dependent on the timing of the experience, the location of the lesion, or the nature of the genetic anomaly. Studies in both humans and primates concur: the dysregulation of basic emotions, especially the processing of fear and anger, is an almost invariable consequence of such disruption. These, in turn, have direct or indirect consequences for social behavior", "keywords": ["social brain", "amygdala", "behavior", "facial expression"]} {"id": "kp20k_training_5", "title": "Modeling and Cost Analysis of an Improved Movement-Based Location Update Scheme in Wireless Communication Networks", "abstract": "A movement-based location update (MBLU) scheme is an LU scheme, under which a user equipment (UE) performs an LU when the number of cells crossed by the UE reaches a movement threshold. The MBLU scheme suffers from the ping-pong LU effect. The ping-pong LU effect arises when the UE that moves repetitively between two adjacent cells performs LUs in the same way as in the case of straight movement. To tackle the ping-pong LU effect encountered by the original MBLU (OMBLU) scheme, an improved MBLU (IMBLU) scheme was proposed in the literature. Under the IMBLU scheme, the UE performs an LU when the number of different cells, rather than the number of cells, visited by the UE reaches the movement threshold. In this paper we develop an embedded Markov chain model to calculate the signaling cost of the IMBLU scheme. 
We derive analytical formulas for the signaling cost, whose accuracy is tested through simulation. It is observed from a numerical study based on these formulas that 1) the signaling cost is a downward convex function of the movement threshold, implying the existence of an optimal movement threshold that can minimize the signaling cost, and that 2) the reduction in the signaling cost achieved by the IMBLU scheme relative to the OMBLU scheme is more prominent in the case of stronger repetitiveness in the UE movement. The model developed and the formulas derived in this paper can guide the implementation of the IMBLU scheme in wireless communication networks", "keywords": ["movement-based location update", "ping-pong lu effect", "modeling", "embedded markov chain"]} {"id": "kp20k_training_6", "title": "A modified offset roll printing for thin film transistor applications", "abstract": "In order to realize a high resolution and high throughput printing method for thin film transistor application, a modified offset roll printing was studied. This roll printing chiefly consists of a blanket with low surface energy and a printing plate (cliché) with high surface energy. In this study, a finite element analysis was done to predict the blanket deformation and to find the optimal angle of the cliché's sidewall. Various etching methods were investigated to obtain a high resolution cliché, and the surface energy of the blanket and cliché was analyzed for ink transfer. A high resolution cliché with a sidewall angle of 90° and an intaglio depth of 13 µm was fabricated by the deep reactive ion etching method. Based on the surface energy analysis, we extracted the most favorable condition to transfer inks from a blanket to a cliché, and thus thin films were deposited on a Si-cliché to increase the surface energy. Through controlling roll speed and pressure, two inks, etch-resist and silver paste, were printed on a rigid substrate, and the fine patterns of 10 µm width and 6 µm line spacing were achieved. By using this printing process, the top gate amorphous indium-gallium-zinc-oxide TFTs with channel width/length of 12/6 µm were successfully fabricated by printing etch-resists", "keywords": ["printing plate", "surface energy", "offset roll printing", "tft"]} {"id": "kp20k_training_8", "title": "Hyperspectral image segmentation through evolved cellular automata", "abstract": "Efficient segmentation of hyperspectral images through the use of cellular automata. The rule set for the CA is automatically obtained using an evolutionary algorithm. Synthetic images of much lower dimensionality are used to evolve the automata. The CA works with spectral information but does not project it onto a lower dimension. The CA-based classification outperforms reference techniques", "keywords": ["hyperspectral imaging", "evolution", "cellular automata", "segmentation"]} {"id": "kp20k_training_9", "title": "Analytical and empirical evaluation of the impact of Gaussian noise on the modulations employed by Bluetooth Enhanced Data Rates", "abstract": "Bluetooth (BT) is a leading technology for the deployment of wireless Personal Area Networks and Body Area Networks. Versions 2.0 and 2.1 of the standard, which are massively implemented in commercial devices, improve the throughput of the BT technology by enabling the so-called Enhanced Data Rates (EDR). EDRs are achieved by utilizing new modulation techniques (π/4-DQPSK and 8-DPSK), apart from the typical Gaussian Frequency Shift Keying modulation supported by previous versions of BT. 
This manuscript presents and validates a model to characterize the impact of white noise on the performance of these modulations. The validation is systematically accomplished in a testbed with actual BT interfaces and a calibrated white noise generator", "keywords": ["bluetooth", "bit error rate", "modulation", "white noise"]} {"id": "kp20k_training_10", "title": "Spectral analysis of irregularly-sampled data: Paralleling the regularly-sampled data approaches", "abstract": "The spectral analysis of regularly-sampled (RS) data is a well-established topic, and many useful methods are available for performing it under different sets of conditions. The same cannot be said about the spectral analysis of irregularly-sampled (IS) data: despite a plethora of published works on this topic, the choice of a spectral analysis method for IS data is essentially limited, on either technical or computational grounds, to the periodogram and its variations. In our opinion this situation is far from satisfactory, given the importance of the spectral analysis of IS data for a considerable number of applications in such diverse fields as engineering, biomedicine, economics, astronomy, seismology, and physics, to name a few. In this paper we introduce a number of IS data approaches that parallel the methods most commonly used for spectral analysis of RS data: the periodogram (PER), the Capon method (CAP), the multiple-signal characterization method (MUSIC), and the estimation of signal parameters via rotational invariance technique (ESPRIT). The proposed IS methods are as simple as their RS counterparts, both conceptually and computationally. In particular, the fast algorithms derived for the implementation of the RS data methods can be used mutatis mutandis to implement the proposed parallel IS methods. Moreover, the expected performance-based ranking of the IS methods is the same as that of the parallel RS methods: all of them perform similarly on data consisting of well-separated sinusoids in noise, MUSIC and ESPRIT outperform the other methods in the case of closely-spaced sinusoids in white noise, and CAP outperforms PER for data whose spectrum has a small-to-medium dynamic range (MUSIC and ESPRIT should not be used in the latter case)", "keywords": ["spectral analysis", "irregular sampling", "nonuniform sampling", "sinusoids in noise", "carma signals"]} {"id": "kp20k_training_11", "title": "Time-Series Data Mining", "abstract": "In almost every scientific field, measurements are performed over time. These observations lead to a collection of organized data called time series. The purpose of time-series data mining is to try to extract all meaningful knowledge from the shape of data. Even if humans have a natural capacity to perform these tasks, it remains a complex problem for computers. In this article we intend to provide a survey of the techniques applied for time-series data mining. The first part is devoted to an overview of the tasks that have captured most of the interest of researchers. Considering that in most cases, a time-series task relies on the same components for implementation, we divide the literature depending on these common aspects, namely representation techniques, distance measures, and indexing methods. The study of the relevant literature has been categorized for each individual aspect. Four types of robustness could then be formalized and any kind of distance could then be classified. 
Finally, the study presents various research trends and avenues that can be explored in the near future. We hope that this article can provide a broad and deep understanding of the time-series data mining research field", "keywords": ["algorithms", "performance", "distance measures", "data indexing", "data mining", "query by content", "sequence matching", "similarity measures", "stream analysis", "temporal analysis", "time series"]} {"id": "kp20k_training_12", "title": "A small hybrid JIT for embedded systems", "abstract": "Just-In-Time (JIT) Compilation is a technology used to improve the speed of virtual machines that support dynamic loading of applications. It is better known as a technique that accelerates Java programs. Current JIT compilers are either huge in size or compile complete methods of the application, requiring large amounts of memory for their functioning. This has made Java Virtual Machines for embedded systems devoid of JIT compilers. This paper explains a simple technique of combining interpretation with compilation to get a hybrid interpretation strategy. It also describes a new code generation technique that works using its self-code. The combination gives a JIT compiler that is very small (10K) and suitable for Java Virtual Machines for embedded systems", "keywords": ["java", "dynamic compilation", "jit", "embedded system", "code generation"]} {"id": "kp20k_training_13", "title": "Rationality of induced ordered weighted operators based on the reliability of the source of information in group decision-making", "abstract": "The aggregation of preference relations in group decision-making (GDM) problems can be carried out based on either the reliability of the preference values to be aggregated, as is the case with ordered weighted averaging operators, or on the reliability of the source of information that provided the preferences, as is the case with weighted mean operators. In this paper, we address the problem of aggregation based on the reliability of the source of information, with a double aim: a) to provide a general framework for induced ordered weighted operators based upon the source of information, and b) to provide a study of their rationality. We study the conditions which need to be verified by an aggregation operator in order to maintain the rationality assumptions on the individual preferences in the aggregation phase of the selection process of alternatives. In particular, we show that any aggregation operator based on the reliability of the source of information does verify these conditions", "keywords": ["aggregation operators", "induced aggregation", "group decision-making", "preference relations", "rationality", "consistency"]} {"id": "kp20k_training_14", "title": "Digital preservation of knowledge in the public sector: a pre-ingest tool", "abstract": "This paper describes the need for coordinating pre-ingest activities in digital preservation of archival records. As a result of the wide use of electronic records management systems (ERMS) in agencies, the focus is on several issues relating to the interaction of the agency's ERMS and public repositories. This paper indicates the importance of using digital recordkeeping metadata to meet the criteria set by memory institutions more precisely and, at the same time, semi-automatically. The paper provides an overview of one prospective solution and describes the Estonian National Archives' universal archiving module (UAM). 
A case study reports the use of the UAM in preserving the digital records of the Estonian Minister for Population and Ethnic Affairs. In this project, the preparation and transfer of archival records was divided into ten phases, starting from the description of the archival creator and ending with controlled transfer. The case study raises questions about how much recordkeeping metadata can be used in archival description and how the interaction of the agency's ERMS and ingest by the archives could be more automated. The main issues (e.g. classification, metadata element variations, mapping, and computer file conversions) encountered during that project are discussed. Findings show that the Open Archival Information System functional model's ingest part should be reconceptualised to take into account preparatory work. Adding detailed metadata about the structure, context and relationships in the right place at the right time could get one step closer to digital codified knowledge archiving by creating synergies with various other digital repositories", "keywords": ["digital preservation", "ingest", "pre-ingest", "universal archiving module", "estonia"]} {"id": "kp20k_training_15", "title": "The Regulation of SERCA-Type Pumps by Phospholamban and Sarcolipin", "abstract": "Both sarcolipin (SLN) and phospholamban (PLN) lower the apparent affinity of either SERCA1a or SERCA2a for Ca2+. Since SLN and PLN are coexpressed in the heart, interactions among these three proteins were investigated. When SERCA1a or SERCA2a were coexpressed in HEK-293 cells with both SLN and PLN, superinhibition resulted. The ability of SLN to elevate the content of PLN monomers accounts, at least in part, for the superinhibitory effects of SLN in the presence of PLN. To evaluate the role of SLN in skeletal muscle, SLN cDNA was injected directly into rat soleus muscle and force characteristics were analyzed. Overexpression of SLN resulted in significant reductions in both twitch and tetanic peak force amplitude and maximal rates of contraction and relaxation and increased fatigability with repeated electrical stimulation. Ca2+ uptake in muscle homogenates was impaired, suggesting that overexpression of SLN may reduce the sarcoplasmic reticulum Ca2+ store. SLN and PLN appear to bind to the same regulatory site in SERCA. However, in a ternary complex, PLN occupies the regulatory site and SLN binds to the exposed side of PLN and to SERCA", "keywords": ["ca2+-atpase", "sarcolipin", "phospholamban", "cardiomyopathy", "regulatory molecules"]} {"id": "kp20k_training_16", "title": "Bootstrap confidence intervals for principal response curves", "abstract": "The principal response curve (PRC) model is of use to analyse multivariate data resulting from experiments involving repeated sampling in time. The time-dependent treatment effects are represented by PRCs, which are functional in nature. The sample PRCs can be estimated using a raw approach, or the newly proposed smooth approach. The generalisability of the sample PRCs can be judged using confidence bands. The quality of various bootstrap strategies to estimate such confidence bands for PRCs is evaluated. The best coverage was obtained with BCa intervals using a non-parametric bootstrap. The coverage appeared to be generally good, except for the case of exactly zero population PRCs for all conditions. Then, the behaviour is irregular, which is caused by the sign indeterminacy of the PRCs. 
The insights obtained into the optimal bootstrap strategy are useful for the PRC model, and more generally for estimating confidence intervals in singular value decomposition based methods", "keywords": ["resampling", "singular value decomposition"]} {"id": "kp20k_training_17", "title": "A concurrent specification of Brzozowski's DFA construction algorithm", "abstract": "In this paper two concurrent versions of Brzozowski's deterministic finite automaton (DFA) construction algorithm are developed from first principles, the one being a slight refinement of the other. We rely on Hoare's CSP as our notation. The proposed specifications of the Brzozowski algorithm are in terms of the concurrent composition of a number of top-level processes, each participating process itself composed of several other concurrent processes. After considering a number of alternatives, this particular overall architectural structure seemed like a natural and elegant mapping from the sequential algorithm's structure. While we have carefully argued the reasons for constructing the concurrent versions as proposed in the paper, there is, of course, a large range of alternative design choices that could be made. There might also be scope for a more fine-grained approach to updating sets or checking for similarity of regular expressions. At this stage, we have chosen to abstract away from these considerations, and leave their exploration for a subsequent step in our research", "keywords": ["automaton construction", "concurrency", "csp", "regular expressions"]} {"id": "kp20k_training_18", "title": "collaborative multimedia learning environments", "abstract": "I use the term "collaborative" to identify a way that enables conversation to occur in, about, and around the digital medium, thereby making the "digital artifacts" contributed by all individuals a key element of a conversation, as opposed to the consecutive, linear presentations used by most faculty at the Design School. Installations of collaborative multimedia in classrooms at the Harvard University Graduate School of Design show an enhancement of the learning process via shared access to media resources and enhanced spatial conditions within which these resources are engaged. Through observation and controlled experiments I am investigating how the use of shared, collaborative interfaces for interaction with multiple displays in a co-local environment enhances the learning process. The multiple spatial configurations and formats of learning mandate that with more effective interfaces and spaces for sharing digital media with fellow participants, the classroom can be used much more effectively and thus learning and interaction with multimedia can be improved", "keywords": ["shared interfaces", "multiple displays", "digital artifacts", "rich control interfaces", "beneficial interruption"]} {"id": "kp20k_training_19", "title": "clustering multi-way data via adaptive subspace iteration", "abstract": "Clustering multi-way data is a very important research topic due to the intrinsic rich structures in real-world datasets. In this paper, we propose the subspace clustering algorithm on multi-way data, called ASI-T (Adaptive Subspace Iteration on Tensor). ASI-T is a special version of High Order SVD (HOSVD), and it simultaneously performs subspace identification using 2DSVD and data clustering using K-Means. 
The experimental results on synthetic data and real-world data demonstrate the effectiveness of ASI-T", "keywords": ["tensor", "multi-way data", "subspace", "clustering"]} {"id": "kp20k_training_20", "title": "SAT-based model-checking for security protocols analysis", "abstract": "We present a model checking technique for security protocols based on a reduction to propositional logic. At the core of our approach is a procedure that, given a description of the protocol in a multi-set rewriting formalism and a positive integer k, builds a propositional formula whose models (if any) correspond to attacks on the protocol. Thus, finding attacks on protocols boils down to checking a propositional formula for satisfiability, a problem that is usually solved very efficiently by modern SAT solvers. Experimental results indicate that the approach scales up to industrial strength security protocols with performance comparable with (and in some cases superior to) that of other state-of-the-art protocol analysers", "keywords": ["security protocols", "bounded model checking", "sat-based model checking", "multi-set rewriting"]} {"id": "kp20k_training_21", "title": "Existence of positive solutions for 2nth-order singular superlinear boundary value problems", "abstract": "This paper investigates the existence of positive solutions for 2nth-order (n > 1) singular superlinear boundary value problems. A necessary and sufficient condition for the existence of C^{2n-2}[0,1] as well as C^{2n-1}[0,1] positive solutions is given by constructing a special cone and with the e-Norm", "keywords": ["singular boundary value problem", "positive solution", "e-norm", "cone", "fixed-point theorem"]} {"id": "kp20k_training_22", "title": "Embedding the Internet in the lives of college students - Online and offline Behavior", "abstract": "The Internet is increasingly becoming embedded in the lives of most American citizens. College students constitute a group who have made particularly heavy use of the technology for everything from downloading music to distance education to instant messaging. Researchers know a lot about the uses made of the Internet by this group of people but less about the relationship between their offline activities and online behavior. This study reports the results of a web survey of a group of university undergraduates exploring the nature of both online and offline behavior in five areas: the use of news and information, the discussion of politics, the seeking of health information, the use of blogs, and the downloading of media and software", "keywords": ["internet", "college students", "information technology", "online behavior", "news", "blogs", "downloading"]} {"id": "kp20k_training_23", "title": "A capacitive tactile sensor array for surface texture discrimination", "abstract": "This paper presents a silicon MEMS based capacitive sensing array, which has the ability to resolve forces in the sub-mN range, provides a directional response to applied loading and is able to differentiate between surface textures. 
Texture recognition is achieved by scanning surfaces over the sensing array and assessing the frequency spectrum of the sensor outputs", "keywords": ["mems", "tactile sensor", "biomimetic", "capacitive sensor"]} {"id": "kp20k_training_24", "title": "Segmenting, modeling, and matching video clips containing multiple moving objects", "abstract": "This paper presents a novel representation for dynamic scenes composed of multiple rigid objects that may undergo different motions and are observed by a moving camera. Multiview constraints associated with groups of affine-covariant scene patches and a normalized description of their appearance are used to segment a scene into its rigid components, construct three-dimensional models of these components, and match instances of models recovered from different image sequences. The proposed approach has been applied to the detection and matching of moving objects in video sequences and to shot matching, i.e., the identification of shots that depict the same scene in a video clip", "keywords": ["affine-covariant patches", "structure from motion", "motion segmentation", "shot matching", "video retrieval"]} {"id": "kp20k_training_25", "title": "Weighted fuzzy interpolative reasoning systems based on interval type-2 fuzzy sets", "abstract": "In this paper, we present a weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on interval type-2 fuzzy sets. We also apply the proposed weighted fuzzy interpolative reasoning method to deal with the truck backer-upper control problem. The proposed method satisfies the seven evaluation indices for fuzzy interpolative reasoning. The experimental results show that the proposed method outperforms the existing methods. It provides us with a useful way for dealing with fuzzy interpolative reasoning in sparse fuzzy rule-based systems", "keywords": ["fuzzy interpolative reasoning", "interval type-2 fuzzy sets", "sparse fuzzy rule-based systems", "weighted fuzzy interpolative reasoning"]} {"id": "kp20k_training_26", "title": "Vestibular PREHAB", "abstract": "A sudden unilateral loss or impairment of vestibular function causes vertigo, dizziness, and impaired postural function. On most occasions, everyday activities supported or not by vestibular rehabilitation programs will promote a compensation and the symptoms subside. As the compensatory process requires sensory input matching performed motor activity, both motor learning of exercises and matching to sensory input are required. If there is a simultaneous cerebellar lesion caused by the tumor or the surgery of the posterior cranial fossa, there may be a risk of a combined vestibulocerebellar lesion, with reduced compensatory abilities and with prolonged or sometimes permanent disability. On the other hand, a slow gradual loss of unilateral function occurring as the subject continues well-learned everyday activities may go without any prominent symptoms. A pretreatment plan was therefore implemented before planned vestibular lesions, that is, PREHAB. This was first done in subjects undergoing gentamicin treatment for Morbus Ménière. Subjects would perform vestibular exercises for 14 days before the first gentamicin instillation, and then continue doing so until free of symptoms. Most subjects would only experience slight dizziness while losing vestibular function. 
The approach, which is reported here, was then expanded to patients with pontine-angle tumors requiring surgery but with remaining vestibular function, to ease postoperative symptoms and reduce the risk of combined cerebellovestibular lesions. Twelve patients were treated with PREHAB and had gentamicin instillations transtympanically. In all cases there was a caloric loss, loss of VOR in head impulse tests, and impaired subjective vertical and horizontal. Spontaneous and positional nystagmus, subjective symptoms, and postural function were normalized before surgery and postoperative recovery was swift. Pretreatment training with vestibular exercises continued during the successive loss of vestibular function during gentamicin treatment, and pre-op gentamicin ablation of vestibular function offers a possibility to reduce malaise and speed up recovery", "keywords": ["vestibular", "compensation", "prehab", "rehabilitation", "schwannoma", "recovery"]} {"id": "kp20k_training_27", "title": "SW1PerS: Sliding windows and 1-persistence scoring; discovering periodicity in gene expression time series data", "abstract": "Identifying periodically expressed genes across different processes (e.g. the cell and metabolic cycles, circadian rhythms, etc.) is a central problem in computational biology. Biological time series may contain (multiple) unknown signal shapes of systemic relevance, imperfections like noise, damping, and trending, or limited sampling density. While there exist methods for detecting periodicity, their design biases (e.g. toward a specific signal shape) can limit their applicability in one or more of these situations", "keywords": ["periodicity", "gene expression", "time series", "sliding windows", "persistent homology"]} {"id": "kp20k_training_28", "title": "Parallel generation of unstructured surface grids", "abstract": "In this paper, a new grid generation system is presented for the parallel generation of unstructured triangular surface grids. The object-oriented design and implementation of the system, the internal components and the parallel meshing process itself are described. Initially in a rasterisation stage, the geometry to be meshed is analysed and a smooth distribution of local element sizes in 3-D space is set up automatically and stored in a Cartesian mesh. This background mesh is used by the advancing front surface mesher as spacing definition for the triangle generation. Both the rasterisation and the meshing are MPI-parallelised. The underlying principles and strategies will be outlined together with the advantages and limitations of the approach. The paper will be concluded with examples demonstrating the capabilities of the presented approach", "keywords": ["unstructured surface mesh generation", "geometry rasterisation", "mpi-parallel", "automatic", "object-orientation"]} {"id": "kp20k_training_29", "title": "H∞ structured model reduction algorithms for linear discrete systems via LMI-based optimisation", "abstract": "In this article, H∞ structured model reduction is addressed for linear discrete systems. Two important classes of systems are considered for structured model reduction, i.e. Markov jump systems and uncertain systems. The problem we deal with is the development of algorithms with the flexibility to allow any structure in the reduced-order system design, such as the structure of an original system, decentralisation of a networked system, pole assignment of the reduced system, etc. 
The algorithms are derived such that an associated model reduction error is guaranteed to satisfy a prescribed H∞ norm-bound constraint. A new condition for the existence of desired reduced-order models preserving a certain structure is presented in a set of linear matrix inequalities (LMI) and non-convex equality constraints. Effective computational algorithms involving LMI are suggested to solve the matrix inequalities characterising a solution of the structured model reduction problem. Numerical examples demonstrate the advantages of the proposed model reduction method", "keywords": ["h∞ design", "structured model reduction", "markovian jump linear systems", "uncertain systems", "bilinear matrix inequality", "linear matrix inequality"]} {"id": "kp20k_training_30", "title": "A power-aware code-compression design for RISC/VLIW architecture", "abstract": "We studied the architecture of embedded computing systems from the viewpoint of power consumption in memory systems and used a selective-code-compression (SCC) approach to realize our design. Based on the LZW (Lempel-Ziv-Welch) compression algorithm, we propose a novel cost-effective compression and decompression method. The goal of our study was to develop a new SCC approach with an extended decision policy based on the prediction of power consumption. Our decompression method had to be easily implemented in hardware and to collaborate with the embedded processor. The hardware implementation of our decompression engine uses the TSMC 0.18 µm 2P6M model and its cell-based libraries. To calculate power consumption more accurately, we used a static analysis method to estimate the power overhead of the decompression engine. We also used variable sized branch blocks and considered several features of very long instruction word (VLIW) processors for our compression, including the instruction level parallelism (ILP) technique and the scheduling of instructions. Our code-compression methods are not limited to VLIW machines, and can be applied to other kinds of reduced instruction set computer (RISC) architecture", "keywords": ["lzw compression", "cell-based libraries", "instruction level parallelism", "vliw processors"]} {"id": "kp20k_training_31", "title": "Global/local negotiations for implementing configurable packages: The power of initial organizational decisions", "abstract": "The purpose of this paper is to draw attention to the critical influence that initial organizational decisions regarding power and knowledge balance between internal members and external consultants have on the global/local negotiation that characterizes configurable packages implementation. To do this, we conducted an intensive research study of a configurable information technology (IT) implementation project in a Canadian firm", "keywords": ["configurable technology", "erp implementation", "critical discourse analysis", "temporal bracketing analysis", "intensive research", "qualitative research methods", "global/local negotiation", "power/knowledge balance"]} {"id": "kp20k_training_32", "title": "Communication in Random Geometric Radio Networks with Positively Correlated Random Faults", "abstract": "We study the feasibility and time of communication in random geometric radio networks, where nodes fail randomly with positive correlation. We consider a set of radio stations with the same communication range, distributed in a random uniform way on a unit square region. 
In order to capture fault dependencies, we introduce the ranged spot model in which damaging events, called spots, occur randomly and independently on the region, causing faults in all nodes located within distance s from them. Node faults within distance 2s become dependent in this model and are positively correlated. We investigate the impact of the spot arrival rate on the feasibility and the time of communication in the fault-free part of the network. We provide an algorithm which broadcasts correctly with probability 1 - ε in faulty random geometric radio networks of diameter D in time O(D + log(1/ε))", "keywords": ["fault-tolerance", "dependent faults", "broadcast", "crash faults", "random", "geometric radio network"]} {"id": "kp20k_training_33", "title": "Consumer complaint behaviour in telecommunications: The case of mobile phone users in Spain", "abstract": "Consumer complaint behaviour theory is used to analyze Spanish telecommunications data. The main determinants are the type of problem, socio-demographic factors and the user's type of contract. Proper complaint handling leads to satisfied, loyal and profitable consumers", "keywords": ["consumer complaint behaviour", "mobile phones", "consumer retention", "consumer satisfaction", "consumer loyalty", "voice", "exit", "service failure", "complainers"]} {"id": "kp20k_training_34", "title": "Combining OWL ontologies using epsilon-Connections", "abstract": "The standardization of the Web Ontology Language (OWL) leaves (at least) two crucial issues for Web-based ontologies unsatisfactorily resolved, namely how to represent and reason with multiple distinct, but linked ontologies, and how to enable effective knowledge reuse and sharing on the Semantic Web. In this paper, we present a solution for these fundamental problems based on E-Connections. We aim to use E-Connections to provide modelers with suitable means for developing Web ontologies in a modular way and to provide an alternative to the owl:imports construct. With such motivation, we present in this paper a syntactic and semantic extension of the Web Ontology Language that covers E-Connections of OWL-DL ontologies. We show how to use such an extension as an alternative to the owl:imports construct in many modeling situations. We investigate different combinations of the logics SHIN(D), SHON(D) and SHIO(D) for which it is possible to design and implement reasoning algorithms, well-suited for optimization. Finally, we provide support for E-Connections in both an ontology editor, SWOOP, and an OWL reasoner, Pellet.", "keywords": ["web ontology language", "integration and combination of ontologies", "combination of knowledge representation formalisms", "description logics reasoning"]} {"id": "kp20k_training_35", "title": "Model with artificial neural network to predict the relationship between the soil resistivity and dry density of compacted soil", "abstract": "This paper presents a technique to obtain the outcomes of soil dry density and optimum moisture contents with an artificial neural network (ANN) for compacted soil monitoring through soil resistivity measurement in geotechnical engineering. Compacted soil monitoring through soil electrical resistivity plays an important role in the construction of highway embankments, earth dams and many other engineering structures. Generally, soil compaction is estimated through the determination of maximum dry density at optimum moisture contents in laboratory tests. 
Estimating soil compaction with the conventional soil monitoring technique is time-consuming and costly, as it requires laboratory testing of many compacted soil samples. In this work, an ANN model is developed for predicting the relationship between dry density of compacted soil and soil electrical resistivity based on experimental data in the soil profile. The regression analysis between the output and target values shows that the R^2 values are 0.99 and 0.93 for the training and testing sets, respectively, for the implementation of the ANN in the soil profile. The significance of our research is to obtain an intelligent model for getting faster, cost-effective and consistent outcomes in soil compaction monitoring through electrical resistivity for a wide range of applications in geotechnical investigation", "keywords": ["soil compaction", "ann modeling", "electrical resistivity", "dry density"]} {"id": "kp20k_training_36", "title": "A fuzzy bi-criteria transportation problem", "abstract": "In this paper, a fuzzy bi-criteria transportation problem is studied. Here, the model concentrates on two criteria: total delivery time and total profit of transportation. The delivery times on links are fuzzy intervals with increasing linear membership functions, whereas the total delivery time on the network is a fuzzy interval with a decreasing linear membership function. On the other hand, the transporting profits on links are fuzzy intervals with decreasing linear membership functions and the total profit of transportation is a fuzzy number with an increasing linear membership function. Supplies and demands are deterministic numbers. A nonlinear programming model considers the problem using the max-min criterion suggested by Bellman and Zadeh. We show that the problem can be simplified into two bi-level programming problems, which are solved very conveniently. A proposed efficient algorithm based on parametric linear programming solves the bi-level problems. To explain the algorithm, two illustrative examples are systematically provided.", "keywords": ["fuzzy interval", "membership function", "bi-criteria transportation", "fuzzy transportation", "bi-level programming", "parametric programming"]} {"id": "kp20k_training_37", "title": "Capacity Gain of Mixed Multicast/Unicast Transport Schemes in a TV Distribution Network", "abstract": "This paper presents three approaches to estimate the required resources in an infrastructure where digital TV channels can be delivered in unicast or multicast (broadcast) mode. Such situations arise for example in Cable TV, IPTV distribution networks or in (future) hybrid mobile TV networks. The three approaches presented are an exact calculation, a Gaussian approximation and a simulation tool. We investigate two scenarios that allow saving bandwidth resources. In a static scenario, the most popular channels are multicast and the less popular channels rely on unicast. In a dynamic scenario, the list of multicast channels is dynamic and governed by the users' behavior. We prove that the dynamic scenario always outperforms the static scenario. We demonstrate the robustness, versatility and the limits of our three approaches. The applicability of the exact calculation is limited because it is computationally expensive for cases with large numbers of users and channels, while the Gaussian approximation is good exactly for such systems. The simulation tool takes a long time to yield results for small blocking probabilities. We explore the capacity gain regions under varying model parameters. 
Finally, we illustrate our methods by discussing some realistic network scenarios using channel popularities based on measurement data as much as possible", "keywords": ["capacity planning", "digital tv/video", "iptv", "mobile tv", "multicast", "streaming", "switched broadcast", "unicast"]} {"id": "kp20k_training_38", "title": "quantitatively evaluating the influence of online social interactions in the community-assisted digital library", "abstract": "Online social interactions are useful in information seeking from digital libraries, but how to measure their influence on the user's information access actions has not yet been revealed. Studies on this problem give us interesting insights into the workings of human dynamics in the context of information access from digital libraries. On this basis, we wish to improve the technological supports to provide more intelligent services in the ongoing China-America Million Books Digital Library so that it can reach its potential in serving human needs. Our research aims at developing a common framework to model the online social interaction process in community-assisted digital libraries. The underlying philosophy of our work is that the online social interaction can be viewed as a dynamic process, and the next state of each participant in this process (e.g., personal information access competency) depends on the value of the previous states of all participants involved in interactions during the period. Hence, considering the dynamics of the interaction process, we model each participant with a Hidden Markov Model (HMM) chain and then employ the Influence Model, which was developed by C. Asavathiratham as a Dynamic Bayes Net (DBN) representing the influences a number of Markov chains have on each other, to analyze the effects of participants influencing each other. Therefore, one can think of the entire interaction process as a DBN framework having two levels of structure: the local level and the network level. Each participant i has a local HMM chain Γ(A) which characterizes the transition of his internal states in the interaction process with state-transition probability Σ_j d_ij P(S_i^t | S_j^{t-1}) (Here, states are his personal information access competence in different periods, while observations are his information access actions). Meanwhile, the network level, which is described by a network graph Γ(D^T), where D = {d_ij} is the influence factor matrix, represents the interacting relations between participants. The strength of each connection, d_ij, describes the influence factor of the participant j at its beginning on the one i at its end. Hence, this model describes the dynamic inter-influence process of the internal states of all participants involved in online interactions. To automatically build the model, we need firstly to extract observed features from the data of online social interactions and information access actions. Obviously, the effects of interactions are stronger if messages are exchanged more frequently, or the participants access more information in the online digital libraries during the period of time. Based on this consideration, we select the interaction measure IM_{i,j}^t and the amount of information IA_i^t as the estimation features of x_i^t. 
The interaction measure IM_{i,j}^t and the amount of information IA_i^t parameterize the features calculated automatically from the data of online social interactions between the participants i and j, and the features calculated from the data of information access actions, respectively. Secondly, we need to develop a mechanism for learning the parameters d_ij and P(S_i^t | S_j^{t-1}). Given sequences of observations {x_i^t} for each chain i, we may easily utilize the Expectation-Maximization algorithm or the gradient-based learning algorithm to get their estimation equations. We ran our experiments in the online digital library of the W3C Consortium (www.w3c.org), which contains a mass of news, electronic papers or other materials related to web technologies. Users may access and download any information and materials in this digital library, and also may freely discuss any related technological problems by means of its mailing lists. Six users were selected in our experiments to collaboratively perform paper-gathering tasks related to four given topics. Any user might call for help from the others through the mailing lists when they had difficulties in this process. All participants were required to record subjective evaluations of the effects the others had on their tasks. Each experiment was scheduled in ten phases, and in each phase we sampled IM_{i,j}^t and IA_i^t for each participant and then fed them into the learning algorithms to automatically build the influence model. By comparison with the subjective influence graphs, the experimental results show that the influence model can approximately estimate the influences of online social interactions", "keywords": ["community-assisted digital library", "statistical feature extraction", "online social interactions", "the influence model"]} {"id": "kp20k_training_39", "title": "THE CEO/CIO RELATIONSHIP REVISITED - AN EMPIRICAL-ASSESSMENT OF SATISFACTION WITH IS", "abstract": "The necessity of integrating information systems (IS) into corporate strategy has received widespread attention in recent years. Strategic planning has moved IS from serving primarily as a support function to a point where it may influence corporate strategy. The strength of this influence, however, usually is determined by the nature of the relationship between the chief information officer (CIO) and the CEO. Generally, the more satisfied CEOs are with CIOs, the greater the influence IS has on top-level decisions. Results of a nationwide survey of motor carrier CEOs and CIOs indicate that CEOs are generally satisfied with their CIOs' activities, and that CIOs perceive CEOs as placing a high priority on strategic IS plans. However, IS does not appear to be truly a part of corporate strategy formulation", "keywords": ["relationship between ceo and cio", "satisfaction with information systems", "strategic planning", "role of information technology in strategic planning", "strategic information systems planning", "information technology in the motor carrier industry", "ceo satisfaction with is", "cio perceptions of corporate use of is"]} {"id": "kp20k_training_40", "title": "Simulations of photosynthesis by a K-subset transforming system with membrane", "abstract": "By considering the inner regions of living cells' membranes, P systems with inner regions are introduced. Then, a new type of membrane computing system is considered, called a K-subset transforming system with membranes, which can treat nonintegral multiplicities of objects. 
As an application, a K-subset transforming system is proposed in order to model the light reactions of photosynthesis. The behaviour of such systems is simulated on a computer", "keywords": ["p system", "nonintegral multiplicity", "k-subset", "photosynthesis"]} {"id": "kp20k_training_41", "title": "Technological innovation and its impact on business model, organization and corporate culture: IBM's transformation into a globally integrated, service-oriented enterprise", "abstract": "This article examines the influence of innovations in information and communication technology (ICT) on the transformation of enterprises. First, the general ICT-driven trends of globalization and service orientation are described. The subsequent analysis of the transformation of the IBM Corporation over the last 50 years into a globally integrated, service-oriented enterprise makes clear that ICT innovations must be met with simultaneous adaptations of the business model, the organization and the corporate culture. The capability for such adaptation is becoming increasingly central for enterprises", "keywords": ["innovation", "information and communication technology", "business model", "organization", "corporate culture", "transformation", "change management", "ibm"]} {"id": "kp20k_training_42", "title": "Modular robotics as a tool for education and entertainment", "abstract": "We developed I-BLOCKS, a modular electronic building block system, and here we show how this system has proven useful, especially as an educational tool that allows hands-on learning in an easy manner. Through user studies we find limitations of the first I-BLOCKS system, and we show how the system can be improved by introducing a graphical user interface for authoring the contents of the individual I-BLOCK. This is done by developing a new cubic block shape with new physical and electrical connectors, and by including new embedded electronics. We developed and evaluated the I-BLOCKS as a manipulative technology through studies in both schools and hospitals, and in diverse cultures such as in Denmark, Finland, Italy and Tanzania", "keywords": ["constructionism", "developing countries", "educational robots", "educational technology", "entertainment robots"]} {"id": "kp20k_training_43", "title": "The selective use of redundancy for video streaming over Vehicular Ad Hoc Networks", "abstract": "Video streaming over Vehicular Ad Hoc Networks (VANETs) offers the opportunity to deploy many interesting services. These services, however, are strongly prone to packet loss due to the highly dynamic topology and shared wireless medium inherent in the VANETs. A possible strategy to enhance the delivery rate is to use redundancy for handling packet loss. This is a suitable technique for VANETs as it does not require any interaction between the source and receivers. In this work, we discuss novel approaches for the use of redundancy based on the particularities of video streaming over VANETs. A thorough study on the use of redundancy through Erasure Coding and Network Coding in both video unicast and video broadcast in VANETs is provided. We investigate each strategy, design novel solutions and compare their performance. 
We evaluated the proposed solutions not only from the perspective of cost, in terms of bandwidth utilization, but also of the offered receiving rate of unique video content at the application layer. This perspective is fundamental to understanding how redundancy can be used without limiting the video quality that can be displayed to end users. Furthermore, we propose the selective use of redundancy solely on data that is more relevant to the video quality. This approach offers increases in overall video quality without leading to excessive overhead or a substantial decrease in the receiving rate of unique video content", "keywords": ["video streaming", "vanets", "redundancy", "erasure coding", "network coding"]} {"id": "kp20k_training_44", "title": "Syntactic recognition of ECG signals by attributed finite automata", "abstract": "A syntactic pattern recognition method of electrocardiograms (ECG) is described in which attributed automata are used to execute the analysis of ECG signals. An ECG signal is first encoded into a string of primitives and then attributed automata are used to analyse the string. We have found that we can perform fast and reliable analysis of ECG signals by attributed automata", "keywords": ["attributed automata", "electrocardiograms", "filtering", "medical computation", "nondeterministic top-down parsing", "primitive extraction", "signal analysis", "syntactic pattern recognition"]} {"id": "kp20k_training_45", "title": "Lane-mark extraction for automobiles under complex conditions", "abstract": "We proposed a vision-based lane-mark extraction method. We used multi-adaptive thresholds for different blocks. Based on the results, our method was robust under complex conditions. The proposed system could operate in real-time", "keywords": ["line fitting", "local edge-orientation", "kalman filter"]} {"id": "kp20k_training_46", "title": "Cultural differences explaining the differences in results in GSS: implications for the next decade", "abstract": "For the next decade, the support that comes from Group Support Systems (GSS) will be increasingly directed towards culturally diversified groups. While there have been many GSS studies concerning culture and cultural differences, no dedicated review of GSS research exists for the identification of current gaps and opportunities of doing cross-cultural GSS research. For this purpose, this paper provides a comprehensive review utilizing a taxonomy of six categories: research type, GSS technology used, independent variables, dependent variables, use of culture, and findings. Additionally, this study also aims to illustrate how differences in experimental results arising from comparable studies, but from a different cultural setting, can be explained consistently using Hofstede's dimensions. To do so, we presented a comparative study on the use of GSS in Australia and Singapore and explain the differences in results using Hofstede's [G. Hofstede, Culture's Consequences: International Differences in Work-related Values, Sage, Beverly Hills, CA (1980).] cultural dimensions. Last, but not least, we present the implications of the impact of culture on GSS research for the next decade from the viewpoint of the three GSS stakeholders: the facilitators, GSS software designers, and the GSS researchers. 
With the above, this paper seeks (i) to prepare a comprehensive map of GSS research involving culture, and (ii) to prepare a picture of what all this means and where we should be heading in the next decade", "keywords": ["cultural differences", "gss", "implications"]} {"id": "kp20k_training_47", "title": "Building an IP-based community wireless mesh network: Assessment of PACMAN as an IP address autoconfiguration protocol", "abstract": "Wireless mesh networks are experiencing rapid progress and inspiring numerous applications in different scenarios, due to features such as autoconfiguration, self-healing, connectivity coverage extension and support for dynamic topologies. These particular characteristics make wireless mesh networks an appropriate architectural basis for the design of easy-to-deploy community or neighbourhood networks. One of the main challenges in building a community network using mesh networks is the minimisation of user intervention in the IP address configuration of the network nodes. In this paper we first consider the process of building an IP-based mesh network using typical residential routers, exploring the options for the configuration of their wireless interfaces. Then we focus on IP address autoconfiguration, identifying the specific requirements for community mesh networks and analysing the applicability of existing solutions. As a result of that analysis, we select PACMAN, an efficient distributed address autoconfiguration mechanism originally designed for ad-hoc networks, and we perform an experimental study - using off-the-shelf routers and assuming worst-case scenarios - analysing its behaviour as an IP address autoconfiguration mechanism for community wireless mesh networks. The results of the conducted assessment show that PACMAN meets all the identified requirements of the community scenario", "keywords": ["community networks", "wireless mesh networks", "experimental evaluation", "pacman"]} {"id": "kp20k_training_48", "title": "Approximating partition functions of the two-state spin system", "abstract": "The two-state spin system is a classical topic in statistical physics. We consider the problem of computing the partition function of the system on a bounded degree graph. Based on the self-avoiding tree, we prove the system exhibits strong correlation decay under the condition that the absolute value of the inverse temperature is small. Due to the strong correlation decay property, an FPTAS for the partition function is presented and uniqueness of the Gibbs measure of the two-state spin system on a bounded degree infinite graph is proved, under the same condition. This condition is sharp for the Ising model", "keywords": ["approximation algorithms", "two-state spin system", "ising model", "gibbs measure", "uniqueness of gibbs measure", "fptas", "strong correlation decay"]} {"id": "kp20k_training_49", "title": "Efficient memory utilization for high-speed FPGA-based hardware emulators with SDRAMs", "abstract": "FPGA-based hardware emulators are often used for the verification of LSI functions. They generally have dedicated external memories, such as SDRAMs, to compensate for the lack of memory capacity in FPGAs. In such a case, access between the FPGAs and the dedicated external memory may represent a major bottleneck with respect to emulation speed since the dedicated external memory may have to emulate a large number of memory blocks.
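A minimal brute-force sketch of the quantity in the kp20k_training_48 abstract above: the Ising-type partition function Z, summed over all spin assignments on a tiny graph. This exponential-time baseline is only for illustration; the abstract's contribution, an FPTAS based on strong correlation decay, is not reproduced here.

```python
import itertools
import math

def partition_function(edges, n, beta):
    """Brute-force partition function of a two-state (Ising) spin system:
    Z = sum over s in {-1,+1}^n of exp(beta * sum_{(i,j) in E} s_i * s_j).
    Exponential in n, hence usable only on tiny graphs."""
    z = 0.0
    for spins in itertools.product((-1, 1), repeat=n):
        interaction = sum(spins[i] * spins[j] for i, j in edges)
        z += math.exp(beta * interaction)
    return z

# 4-cycle at inverse temperature 0.2, i.e. a regime of small |beta|,
# which is where the abstract proves strong correlation decay
print(partition_function([(0, 1), (1, 2), (2, 3), (3, 0)], n=4, beta=0.2))
```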
In this paper, we propose three methods, \"Dynamic Clock Control (DCC),\" \"Memory Mapping Optimization (MMO),\" and \"Efficient Access Scheduling (EAS),\" to avoid this bottleneck. DCC controls the emulation clock dynamically in accord with the number of memory accesses within one emulation clock cycle. EAS optimizes the ordering of memory accesses to the dedicated external memory, and MMO optimizes the arrangement of the dedicated external memory addresses to which the respective memories will be emulated. With these methods, emulation speed can be made 29.0 times faster, as evaluated in actual LSI emulations", "keywords": ["fpga-based hardware emulators", "sdram", "memory controller clock generator"]} {"id": "kp20k_training_50", "title": "Minimum stress optimal design with the level set method", "abstract": "This paper is devoted to minimum stress design in structural optimization. We propose a simple and efficient numerical algorithm for shape and topology optimization based on the level set method coupled with the topological derivative. We compute a shape derivative, as well as a topological derivative, for a stress-based objective function. Using an adjoint equation, we implement a gradient algorithm for the minimization of the objective function. Several numerical examples in 2-d and 3-d are discussed", "keywords": ["level set method", "shape derivative", "topological derivative"]} {"id": "kp20k_training_51", "title": "Determination of wire recovery length in steel cables and its practical applications", "abstract": "In the presence of relatively significant states of radial pressures between the helical wires of a steel cable (spiral strand and/or wire rope), and significant levels of interwire friction, the individual broken wires tend to take up their appropriate share of the axial load within a certain length from the fractured end, which is called the recovery (or development) length. The paper presents full details of the formulations for determining the magnitude of recovery length in any layer of an axially loaded multi-layered spiral strand with any construction details. The formulations are developed for cases of fully bedded-in (old) spiral strands within which the pattern of interlayer contact forces and associated significant values of line-contact normal forces between adjacent wires in any layer are fully stabilised, and also for cases when (in the presence of gaps between adjacent wires) hoop line-contact forces do not exist and only radial forces are present. Based on a previously reported extensive series of theoretical parametric studies using a wide range of spiral strand constructions with widely different wire (and cable) diameters and lay angles, a very simple method (aimed at practising engineers) for determining the magnitude of recovery length in any layer of an axially loaded spiral strand with any type of construction details is presented. Using the final outcome of the theoretical parametric studies, the minimum length of test specimens for axial fatigue tests whose test data may safely be used for estimating the axial fatigue lives of the much longer cables under service conditions may now be determined in a straightforward fashion.
Moreover, the control length over which one should count the number of broken wires for cable discard purposes is suggested to be equal to one recovery length, whose upper bound value for both spiral strands and/or wire ropes with any construction details is theoretically shown to be equal to 2.5 lay lengths", "keywords": ["wire recovery length", "steel cables", "multi-layered spiral strand", "radial forces"]} {"id": "kp20k_training_52", "title": "Generalized PCM Coding of Images", "abstract": "Pulse-code modulation (PCM) with embedded quantization allows the rate of the PCM bitstream to be reduced by simply removing a fixed number of least significant bits from each codeword. Although this source coding technique is extremely simple, it has poor coding efficiency. In this paper, we present a generalized PCM (GPCM) algorithm for images that simply removes bits from each codeword. In contrast to PCM, however, the number and the specific bits that a GPCM encoder removes in each codeword depend on its position in the bitstream and the statistics of the image. Since GPCM allows the encoding to be performed with different degrees of computational complexity, it can adapt to the computational resources that are available in each application. Experimental results show that GPCM outperforms PCM with a gain that depends on the rate, the computational complexity of the encoding, and the degree of inter-pixel correlation of the image", "keywords": ["binning", "interpolative coding", "pulse-code modulation", "quantization"]} {"id": "kp20k_training_53", "title": "Influence of motor and converter non-linearities on dynamic properties of DC drive with field weakening range", "abstract": "Improvement of the dynamic properties of a DC drive in the field weakening range was the aim of this investigation. A non-linear model of the drive system was applied. In the paper, results of a comparative analysis of two emf control structures are presented. The classic emf control structure with a subordinated excitation current control loop was compared with one consisting of a non-linear compensation block. For both control structures, different approaches to parameter design for the emf and excitation controllers are considered. Verification of the theoretical assumptions and synthesis methods of the investigated control structures is made by simulation tests using the PSpice language", "keywords": ["dc drive", "dynamic properties", "field control", "field weakening", "non-linear control"]} {"id": "kp20k_training_54", "title": "Rental software valuation in IT investment decisions", "abstract": "The growth of application service providers (ASPs) is very rapid, offering a number of options to organizations interested in developing new information technology services. The advantages of an ASP include spreading out payments over a contract period and flexibility in terms of responding to changes in technology. Likewise, newer risks are associated with ASPs, including pricing variability. Some of the more common capital budgeting models may not be appropriate in this volatile marketplace. However, option models allow for many of the quirks to be considered. Modification of the option pricing model and an analytical solution method incorporated into a spreadsheet for decision support are described and illustrated.
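The embedded quantization that the GPCM abstract (kp20k_training_52) above builds on fits in a few lines: dropping the k least significant bits of each PCM codeword lowers the rate, and reconstruction restores the dropped bits at the quantization cell midpoint. The sketch below covers only this plain PCM baseline; GPCM's position- and statistics-dependent bit removal is not reproduced.

```python
def pcm_truncate(codewords, k):
    """Embedded quantization in plain PCM: drop the k least significant
    bits of every codeword, reducing the rate uniformly."""
    return [c >> k for c in codewords]

def pcm_reconstruct(truncated, k):
    """Restore the dropped bits at the quantization cell midpoint,
    which halves the worst-case reconstruction error versus zero-filling."""
    return [(c << k) + (1 << k) // 2 for c in truncated]

samples = [17, 130, 255, 64]        # 8-bit PCM codewords
coarse = pcm_truncate(samples, 3)   # keep the 5 most significant bits
print(coarse, pcm_reconstruct(coarse, 3))
```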
The analytical tool allows for better decisions compared to traditional value analysis methods, which do not fully account for the entry and exit options of the market", "keywords": ["options", "capital budgeting", "information technology investment", "application service providers", "stochastic processes", "risk analysis"]} {"id": "kp20k_training_55", "title": "An averaging scheme for macroscopic numerical simulation of nonconvex minimization problems", "abstract": "Averaging or gradient recovery techniques, which are a popular tool for improved convergence or superconvergence of finite element methods in elliptic partial differential equations, have not been recommended for nonconvex minimization problems, as the energy minimization process enforces finer and finer oscillations and hence, at first glance, a smoothing step appears even counterproductive. For macroscopic quantities such as the stress field, however, this counterargument is no longer true. In fact, this paper advertises an averaging technique for a surprisingly improved convergence behavior for nonconvex minimization problems. Similar to a finite volume scheme, numerical experiments on a double-well benchmark example provide empirical evidence of superconvergence phenomena in macroscopic numerical simulations of oscillating microstructures", "keywords": ["averaging scheme", "nonconvex minimization", "convexification", "macroscopic numerical simulation", "adaptive mesh refinement"]} {"id": "kp20k_training_56", "title": "Which App? A recommender system of applications in markets: Implementation of the service for monitoring users' interaction", "abstract": "Users face the information overload problem when downloading applications in markets. This is mainly due to (i) the increasingly unmanageable number of applications and (ii) the lack of an accurate and fine-grained categorization of the applications in the markets. To address this issue, we present an integrated solution which recommends applications to users by considering a large amount of information: their previously consumed applications, usage patterns, tags used to annotate resources, and history of ratings. We focus this paper on the service for monitoring users' interaction", "keywords": ["recommender system", "context-awareness", "mobile applications", "filtering"]} {"id": "kp20k_training_57", "title": "New statistical features for the design of fiber optic statistical mode sensors", "abstract": "Novel statistical features are proposed for the design of statistical mode sensors. The proposed statistical features are first- and second-order moments. The features are compared in terms of precision error, non-linearity, and hysteresis", "keywords": ["fiber optic sensor", "statistical mode sensor", "image processing", "statistical features"]} {"id": "kp20k_training_58", "title": "An efficient indexing method for content-based image retrieval", "abstract": "In this paper, we propose an efficient indexing method for content-based image retrieval. The proposed method introduces ordered quantization to increase the distinction among the quantized feature descriptors. Thus, the feature point correspondences can be determined by the quantized feature descriptors, and they are used to measure the similarity between the query image and database images.
To implement the above scheme efficiently, a multi-dimensional inverted index is proposed to compute the number of feature point correspondences, and then approximate RANSAC is investigated to estimate the spatial correspondences of feature points between the query image and candidate images returned from the multi-dimensional inverted index. The experimental results demonstrate that our indexing method improves retrieval efficiency while ensuring retrieval accuracy in content-based image retrieval", "keywords": ["content-based image retrieval", "feature point correspondences", "ordered quantization", "multi-dimensional inverted index", "approximate ransac"]} {"id": "kp20k_training_59", "title": "Prediction intervals in linear regression taking into account errors on both axes", "abstract": "This study reports the expressions for the variances in the prediction of the response and predictor variables calculated with the bivariate least squares (BLS) regression technique. This technique takes into account the errors on both axes. Our results are compared with those of a simulation process based on six different real data sets. The mean error in the results from the new expressions is between 4% and 5%. With weighted least squares, ordinary least squares, the constant variance ratio approach and orthogonal regression, on the other hand, mean errors can be as high as 85%, 277%, 637% and 1697% respectively. An important property of the prediction intervals calculated with BLS is that the results are not affected when the axes are switched", "keywords": ["prediction", "linear regression", "errors on both axes", "confidence intervals", "predictor intervals"]} {"id": "kp20k_training_60", "title": "The PIAM approach to modular integrated assessment modelling", "abstract": "The next generation of integrated assessment modelling is envisaged as being organised as a modular process, in which modules encapsulating knowledge from different scientific disciplines are independently developed at distributed institutions and coupled afterwards in accordance with the question raised by the decision maker. Such a modular approach needs to respect several stages of the model development process, approaching modularisation and integration on a conceptual, numerical, and technical level. The paper discusses the challenges at each level and presents partial solutions developed by the PIAM (Potsdam Integrated Assessment Modules) project at the Potsdam Institute for Climate Impact Research (PIK). The challenges at each level differ greatly in character and in the work done addressing them. At the conceptual level, the notion of conceptual consistency of modular integrated models is discussed. At the numerical level, it is shown how an adequate modularisation of a problem from climate-economy modelling leads to a modular configuration into which independently developed climate and economic modules can be plugged. At the technical level, a software tool is presented which provides a simple consistent interface for data transfer between modules running on distributed and heterogeneous computer platforms", "keywords": ["modularity", "modular modelling", "model integration", "integrated modelling", "integrated assessment modelling", "climate change"]} {"id": "kp20k_training_61", "title": "Finding multivariate outliers in fMRI time-series data", "abstract": "Multivariate outlier detection methods are applicable to fMRI time-series data. Removing outliers increases spatial specificity without hurting classification.
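A toy version of the correspondence counting behind the indexing abstract (kp20k_training_58) above: treat each image as a set of quantized descriptor IDs ("visual words") and use an inverted index to score candidates by matched features. The multi-dimensional index structure and the approximate RANSAC verification stage from the abstract are deliberately left out of this sketch.

```python
from collections import defaultdict

def build_index(database):
    """Map each quantized descriptor ID ('visual word') to the set of
    images containing it."""
    index = defaultdict(set)
    for image_id, words in database.items():
        for w in words:
            index[w].add(image_id)
    return index

def score_candidates(index, query_words):
    """Count feature correspondences per database image by walking only
    the posting lists of the query's words."""
    scores = defaultdict(int)
    for w in query_words:
        for image_id in index.get(w, ()):
            scores[image_id] += 1
    return sorted(scores.items(), key=lambda kv: -kv[1])

db = {"img1": {3, 17, 42}, "img2": {17, 99}, "img3": {3, 42, 99}}
print(score_candidates(build_index(db), {3, 42}))  # img1 and img3 lead
```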
Simulation shows PCOut is more sensitive to small outliers than HD BACON", "keywords": ["outlier detection", "fmri", "high dimensional data"]} {"id": "kp20k_training_62", "title": "Cardinal Consistency of Reciprocal Preference Relations: A Characterization of Multiplicative Transitivity", "abstract": "Consistency of preferences is related to rationality, which is associated with the transitivity property. Many properties suggested to model transitivity of preferences are inappropriate for reciprocal preference relations. In this paper, a functional equation is put forward to model the \"cardinal consistency in the strength of preferences\" of reciprocal preference relations. We show that under the assumptions of continuity and monotonicity properties, the set of representable uninorm operators is characterized as the solution to this functional equation. Cardinal consistency with the conjunctive representable cross ratio uninorm is equivalent to Tanino's multiplicative transitivity property. Because any two representable uninorms are order isomorphic, we conclude that multiplicative transitivity is the most appropriate property for modeling cardinal consistency of reciprocal preference relations. Results toward the characterization of this uninorm consistency property based on a restricted set of (n - 1) preference values, which can be used in practical cases to construct perfectly consistent preference relations, are also presented", "keywords": ["consistency", "fuzzy preference relation", "rationality", "reciprocity", "transitivity", "uninorm"]} {"id": "kp20k_training_63", "title": "Infomarker - A new Internet information service system", "abstract": "As the web grows, the massive increase in information is placing severe burdens on information retrieval and sharing. Automated search engines and directories with small editorial staffs are unable to keep up with the increasing submission of web sites. To address the problem, this paper presents Infomarker - an Internet information service system based on Open Directory and Zero-Keyword Inquiry. The Open Directory sets up a net-community in which the growing number of net-citizens can each organize a small portion of the web and present it to the others. By means of Zero-Keyword Inquiry, a user can get the information he is interested in without inputting any of the keywords that search engines often require. In Infomarker, a user can record the web addresses he likes and can put forward an information request based on his web records. The information matching engine checks the information in the Open Directory to find what fits the user's needs and adds it to the user's web address records. The key to the matching process is layered keyword mapping. Infomarker provides people with a whole new approach to getting information and shows wide prospects", "keywords": ["open directory", "zero-keyword inquiry", "information matching engine", "layered keyword mapping"]} {"id": "kp20k_training_64", "title": "Grain flow measurements with X-ray techniques", "abstract": "The use of low-energy (up to 30 keV) X-ray densitometry is demonstrated for grain flow rate measurements through laboratory experiments. Mass flow rates for corn were related to measured X-ray intensity in gray scale units with a 0.99 correlation coefficient for flow rates ranging from 2 to 6 kg/s. Larger flow rate values can be measured by using higher energy or a higher tube current. Measurements were done in real time at a 30 Hz sampling rate.
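For the multivariate fMRI outlier entry (kp20k_training_61) above, a classical baseline is Mahalanobis-distance screening, sketched below; robust detectors such as PCOut or BACON, which the abstract compares, replace the plain mean and covariance with outlier-resistant estimates. The threshold of 4.0 is an arbitrary illustrative choice, not a value from the paper.

```python
import numpy as np

def mahalanobis_outliers(X, threshold=4.0):
    """Flag rows whose Mahalanobis distance from the sample mean exceeds
    a threshold; the pseudo-inverse guards against singular covariance."""
    mu = X.mean(axis=0)
    inv_cov = np.linalg.pinv(np.cov(X, rowvar=False))
    diffs = X - mu
    d2 = np.einsum('ij,jk,ik->i', diffs, inv_cov, diffs)
    return np.sqrt(d2) > threshold

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[0] += 10.0                                   # plant one gross outlier
print(np.where(mahalanobis_outliers(X))[0])    # row 0 is flagged
```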
Flow rate measurements are relatively independent of grain moisture due to a negligible change in the X-ray attenuation coefficients at typical moisture content values from 15 to 25%. Grain flow profile changes did not affect measurement accuracy. X-rays easily capture variations in the corn thickness profile. Due to the low energy of the X-ray photons, biological shielding can be accomplished with 2-mm-thick lead foil or 5 mm of steel", "keywords": ["precision farming", "x-ray", "yield monitoring", "yield sensor"]} {"id": "kp20k_training_65", "title": "Dynamic performance enhancement of microgrids by advanced sliding mode controller", "abstract": "Dynamics are among the most important problems in microgrid operation. In the islanded microgrid, the mismatch of parallel operations of inverters during dynamics can result in instability. This paper considers severe dynamics which can occur in the microgrid. A microgrid can have different configurations, with different load and generation dynamics, while facing voltage disturbances. As a result, the microgrid has many uncertainties and is placed in a distribution network that is full of voltage disturbances. Moreover, the characteristics of the distribution network and of distributed energy resources in the islanded mode make the microgrid vulnerable and can easily lead to instability. The main aim of this paper is to discuss suitable mathematical modeling based on microgrid characteristics and to properly design inner controllers to enhance the dynamics of a microgrid with uncertain and changing parameters. This paper provides a method for the inner controllers of inverter-based distributed energy resources to respond suitably to different dynamics. Parallel inverters in distribution networks were considered to be controlled by nonlinear robust voltage and current controllers. Theoretical proofs, beyond the simulation results, clearly reveal the effectiveness of the proposed controller", "keywords": ["current controlled-voltage source inverter", "dynamic stability", "microgrid", "sliding mode control", "transients", "disturbances"]} {"id": "kp20k_training_66", "title": "NEW IDENTIFICATION PROCEDURE FOR CONTINUOUS-TIME RADIO FREQUENCY POWER AMPLIFIER MODEL", "abstract": "In this paper, we present a new method for the characterization of a radio frequency Power Amplifier (PA) in the presence of nonlinear distortions which affect the modulated signal in a radiocommunication transmission system. The proposed procedure uses a gray box model where PA dynamics are modeled with a MIMO continuous filter and the nonlinear characteristics are described as general polynomial functions, approximated by means of Taylor series. Using the baseband input and output data, model parameters are obtained by an iterative identification algorithm based on the Output Error method. Initialization and excitation problems are resolved by combining a new technique using initial value extraction with a multi-level binary sequence input that excites all PA dynamics. Finally, the proposed estimation method is tested and validated on experimental data", "keywords": ["rf power amplifier", "parameter estimation", "nonlinear distortions", "modeling", "continuous time domain"]} {"id": "kp20k_training_67", "title": "Manufacturing lead-time rules: Customer retention versus tardiness costs", "abstract": "Inaccurate production backlog information is a major cause of late deliveries, which can result in penalty fees and loss of reputation.
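The physics behind the X-ray densitometry entry (kp20k_training_64) above is Beer-Lambert attenuation, I = I0 * exp(-mu * x); inverting it turns a measured intensity ratio into a material thickness, and thickness integrated over the beam area feeds a mass flow estimate. The attenuation coefficient in this sketch is a hypothetical placeholder, not a measured value for corn.

```python
import math

MU = 0.05  # 1/mm, hypothetical effective attenuation coefficient (placeholder)

def thickness_from_intensity(i_ratio, mu=MU):
    """Invert Beer-Lambert I/I0 = exp(-mu * x) for the thickness x."""
    return -math.log(i_ratio) / mu

# An intensity ratio of 0.5 corresponds to roughly 13.9 mm of material here
print(thickness_from_intensity(0.5))
```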
We identify conditions under which it is particularly worthwhile to improve an information system to provide good lead-time information. We first analyze a sequential decision process model of lead-time decisions at a firm which manufactures standard products to order and has complete backlog information. There are Poisson arrivals, stochastic processing times, customers may balk in response to quoted delivery dates, and revenues are offset by tardiness penalties. We characterize an optimal policy and show how to accelerate computations. The second part of the paper is a computational comparison of this optimum (with full backlog information) with a lead-time quotation rule that is optimal with statistical shop-status information. This reveals when the partial-information method does well and when it is worth implementing measures to improve information transfer between operations and sales", "keywords": ["dynamic programming", "manufacturing", "markov decision processes", "due-date assignment", "marketing"]} {"id": "kp20k_training_68", "title": "On the Wiberg algorithm for matrix factorization in the presence of missing components", "abstract": "This paper considers the problem of factorizing a matrix with missing components into a product of two smaller matrices, also known as principal component analysis with missing data (PCAMD). The Wiberg algorithm is a numerical algorithm developed for the problem in the community of applied mathematics. We argue that the algorithm has not been correctly understood in the computer vision community. Although there are many studies in our community, almost every one of which refers to the Wiberg study, as far as we know there is no literature in which the performance of the Wiberg algorithm is investigated or the details of the algorithm are presented. In this paper, we present a derivation of the algorithm along with a problem in its implementation that needs to be carefully considered, and then examine its performance. The experimental results demonstrate that the Wiberg algorithm shows considerably good performance, which contradicts the conventional view in our community, namely that minimization-based algorithms tend to fail to converge to a global minimum relatively frequently. The performance of the Wiberg algorithm is such that even starting with random initial values, it converges in most cases to a correct solution, even when the matrix has many missing components and the data are contaminated with very strong noise. Our conclusion is that the Wiberg algorithm can also be used as a standard algorithm for the problems of computer vision", "keywords": ["matrix factorization", "singular value decomposition", "principal component analysis with missing data", "structure from motion", "numerical algorithm"]} {"id": "kp20k_training_69", "title": "Semi-supervised local Fisher discriminant analysis for dimensionality reduction", "abstract": "When only a small number of labeled samples are available, supervised dimensionality reduction methods tend to perform poorly because of overfitting. In such cases, unlabeled samples could be useful in improving the performance. In this paper, we propose a semi-supervised dimensionality reduction method which preserves the global structure of unlabeled samples in addition to separating labeled samples in different classes from each other.
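For the PCAMD problem in the Wiberg entry (kp20k_training_68) above, the simplest baseline is alternating least squares over the observed entries, sketched below. This is explicitly not the Wiberg algorithm itself, which per the abstract eliminates one factor analytically and tends to converge more reliably.

```python
import numpy as np

def als_factorize(Y, W, rank, iters=50, seed=0):
    """Alternating least squares for Y ~ U @ V.T with mask W (1 = observed):
    each factor is re-fit row by row against only the observed entries."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    U = rng.normal(size=(m, rank))
    V = rng.normal(size=(n, rank))
    for _ in range(iters):
        for i in range(m):
            obs = W[i] > 0
            U[i] = np.linalg.lstsq(V[obs], Y[i, obs], rcond=None)[0]
        for j in range(n):
            obs = W[:, j] > 0
            V[j] = np.linalg.lstsq(U[obs], Y[obs, j], rcond=None)[0]
    return U, V

rng = np.random.default_rng(1)
truth = rng.normal(size=(20, 2)) @ rng.normal(size=(2, 15))  # rank-2 matrix
W = (rng.random(truth.shape) > 0.3).astype(float)            # ~30% missing
U, V = als_factorize(truth * W, W, rank=2)
print(np.abs((U @ V.T - truth) * W).max())   # near-zero residual on observed
```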
The proposed method, which we call SEmi-supervised Local Fisher discriminant analysis (SELF), has an analytic form of the globally optimal solution, which can be computed based on eigen-decomposition. We show the usefulness of SELF through experiments with benchmark and real-world document classification datasets", "keywords": ["semi-supervised learning", "dimensionality reduction", "cluster assumption", "local fisher discriminant analysis", "principal component analysis"]} {"id": "kp20k_training_70", "title": "A stable fluid-structure-interaction solver for low-density rigid bodies using the immersed boundary projection method", "abstract": "Dispersion of low-density rigid particles with complex geometries is ubiquitous in both natural and industrial environments. We show that while explicit methods for coupling the incompressible Navier-Stokes equations and Newton's equations of motion are often sufficient to solve for the motion of cylindrical particles with low density ratios, for more complex particles such as a body with a protrusion they become unstable. We present an implicit formulation of the coupling between rigid body dynamics and fluid dynamics within the framework of the immersed boundary projection method. Similarly to previous work on this method, the resulting matrix equation in the present approach is solved using a block-LU decomposition. Each step of the block-LU decomposition is modified to incorporate the rigid body dynamics. We show that our method achieves second-order accuracy in space and first-order in time (third-order for practical settings), with only a small additional computational cost over the original method. Our implicit coupling yields stable solutions for density ratios as low as 10^-4. We also consider the influence of fictitious fluid located inside the rigid bodies on the accuracy and stability of our method", "keywords": ["immersed boundary method", "fictitious fluid", "newton's equations of motion", "implicit coupling", "low density ratios", "complex particles"]} {"id": "kp20k_training_71", "title": "Latent word context model for information retrieval", "abstract": "The application of word sense disambiguation (WSD) techniques to information retrieval (IR) has yet to provide convincing retrieval results. Major obstacles to effective WSD in IR include coverage and granularity problems of word sense inventories, sparsity of document context, and limited information provided by short queries. In this paper, to alleviate these issues, we propose the construction of latent context models for terms using latent Dirichlet allocation. We propose building one latent context per word, using a well-principled representation of local context based on word features. In particular, context words are weighted using a decaying function according to their distance to the target word, which is learnt from data in an unsupervised manner. The resulting latent features are used to discriminate word contexts, so as to constrict the query's semantic scope. Consistent and substantial improvements, including on difficult queries, are observed on TREC test collections, and the technique combines well with blind relevance feedback.
Compared to traditional topic modeling, WSD and positional indexing techniques, the proposed retrieval model is more effective and scales well on large-scale collections", "keywords": ["retrieval models", "word context discrimination", "word context", "topic models", "word sense disambiguation"]} {"id": "kp20k_training_72", "title": "Clusterization, frustration and collectivity in random networks", "abstract": "We consider the random Erdos-Renyi network with enhanced clusterization and Ising spins s = +/- 1 at the network nodes. Mutually linked spins interact with energy J. Magnetic properties of the system that are dependent on the clustering coefficient C are investigated with the Monte Carlo heat bath algorithm. For J > 0 the Curie temperature T(c) increases from 3.9 to 5.5 when C increases from almost zero to 0.18. These results deviate only slightly from the mean field theory. For J < 0 the spin-glass phase appears below T(SG); this temperature decreases with C, contrary to the mean field calculations. The results are interpreted in terms of social systems", "keywords": ["random networks", "phase transitions"]} {"id": "kp20k_training_73", "title": "GPS/INS integration utilizing dynamic neural networks for vehicular navigation", "abstract": "Recently, methods based on Artificial Intelligence (AI) have been suggested to provide reliable positioning information for different land vehicle navigation applications integrating the Global Positioning System (GPS) with the Inertial Navigation System (INS). All existing AI-based methods are based on relating the INS error to the corresponding INS output at certain time instants and do not consider the dependence of the error on the past values of the INS. This study, therefore, suggests the use of Input-Delayed Neural Networks (IDNN) to model both the INS position and velocity errors based on current and some past samples of INS position and velocity, respectively. This results in a more reliable positioning solution during long GPS outages. The proposed method is evaluated using road test data of different trajectories while both navigational and tactical grade INS are mounted inside land vehicles and integrated with GPS receivers. The performance of the IDNN-based model is also compared to both conventional (based mainly on Kalman filtering) and recently published AI-based techniques. The results showed significant improvement in positioning accuracy, especially for cases of tactical grade INS and long GPS outages", "keywords": ["gps", "inertial navigation system", "data fusion", "dynamic neural network", "ins/gps road tests"]} {"id": "kp20k_training_74", "title": "A unified probabilistic framework for automatic 3D facial expression analysis based on a Bayesian belief inference and statistical feature models", "abstract": "Textured 3D face models capture precise facial surfaces along with the associated textures, making an accurate description of facial activities possible. In this paper, we present a unified probabilistic framework based on a novel Bayesian Belief Network (BBN) for 3D facial expression and Action Unit (AU) recognition. The proposed BBN performs Bayesian inference based on Statistical Feature Models (SFM) and the Gibbs-Boltzmann distribution and features a hybrid approach in fusing both geometric and appearance features along with morphological ones.
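The distance-decayed context weighting described in the latent word context entry (kp20k_training_71) above can be illustrated with a fixed exponential decay, as sketched below; in the abstract the decay is learnt from data in an unsupervised manner, so the constant used here is a stand-in assumption.

```python
from collections import Counter

def context_vector(tokens, position, window=5, decay=0.8):
    """Weight the words around tokens[position] by decay**distance, so
    nearer words dominate the target word's context representation."""
    weights = Counter()
    lo = max(0, position - window)
    hi = min(len(tokens), position + window + 1)
    for i in range(lo, hi):
        if i != position:
            weights[tokens[i]] += decay ** abs(i - position)
    return weights

sent = "the bank approved the loan after the river bank flooded".split()
print(context_vector(sent, sent.index("approved")))
```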
When combined with our previously developed morphable partial face model (SFAM), the proposed BBN has the capacity to conduct fully automatic facial expression analysis. We conducted extensive experiments on two public databases, namely the BU-3DFE dataset and the Bosphorus dataset. When using manually labeled landmarks, the proposed framework achieved average recognition rates of 94.2% and 85.6% for the 7 and 16 AUs on face data from the Bosphorus dataset, respectively, and 89.2% for the six universal expressions on the BU-3DFE dataset. Using the landmarks automatically located by SFAM, the proposed BBN still achieved an average recognition rate of 84.9% for the six prototypical facial expressions. These experimental results demonstrate the effectiveness of the proposed approach and its robustness to landmark localization errors", "keywords": ["bayesian belief network", "statistical feature model", "3d face", "facial expression recognition", "action units recognition", "automatic landmarking"]} {"id": "kp20k_training_75", "title": "MPML3D: Scripting Agents for the 3D Internet", "abstract": "The aim of this paper is two-fold. First, it describes a scripting language for specifying communicative behavior and interaction of computer-controlled agents (\"bots\") in the popular three-dimensional (3D) multiuser online world of \"Second Life\" and the emerging \"OpenSimulator\" project. While tools for designing avatars and in-world objects in Second Life exist, technology for nonprogrammer content creators of scenarios involving scripted agents is currently missing. Therefore, we have implemented new client software that controls bots based on the Multimodal Presentation Markup Language 3D (MPML3D), a highly expressive XML-based scripting language for controlling the verbal and nonverbal behavior of interacting animated agents. Second, the paper compares the Second Life and OpenSimulator platforms and discusses the merits and limitations of each from the perspective of agent control. Here, we also conducted a small study that compares the network performance of both platforms", "keywords": ["artificial, augmented, and virtual realities", "graphical user interfaces", "synchronous interaction", "visualization", "markup languages", "scripting languages"]} {"id": "kp20k_training_76", "title": "tlb and snoop energy-reduction using virtual caches in low-power chip-multiprocessors", "abstract": "In our quest to bring down the power consumption in low-power chip-multiprocessors, we have found that TLB and snoop accesses account for about 40% of the energy wasted by all L1 data-cache accesses. We have investigated the prospects of using virtual caches to bring down the number of TLB accesses. A key observation is that while the energy wasted in the TLBs is cut, the energy associated with snoop accesses becomes higher. We then contribute two techniques to reduce the number of snoop accesses and their energy cost.
Virtual caches together with the proposed techniques are shown to reduce the energy wasted in the L1 caches and the TLBs by about 30%", "keywords": ["chip multiprocessors", "virtual caches", "snoop", "power consumption", "data cache", "association", "energy", "reduction", "cmp", "account", "virtualization", "caches", "cost", "low-power"]} {"id": "kp20k_training_77", "title": "extracting tennis statistics from wireless sensing environments", "abstract": "Creating statistics from sporting events is now widespread, with most efforts automating this process using various sensor devices. The problem with many of these statistical applications is that they require proprietary applications to process the sensed data and there is rarely an option to express a wide range of query types. Instead, applications tend to contain built-in queries with predefined outputs. In the research presented in this paper, data from a wireless network is converted to a structured and highly interoperable format to facilitate user queries, by expressing high-level queries in a standard database language and automatically generating the results required by coaches", "keywords": ["sensor", "ubisense", "query", "xml"]} {"id": "kp20k_training_78", "title": "Critical success factors of inter-organizational information systems - A case study of Cisco and Xiao Tong in China", "abstract": "This paper reports a case study of an inter-organizational information system (IOS) of Cisco and Xiao Tong in China. We interviewed their senior managers, heads of departments and employees who have been directly affected in their work. Other sources of information are company documents and publicly available background information. The study examines the benefits of the IOS for both corporations. The research also reveals seven critical success factors for the IOS, namely intensive stimulation, shared vision, a cross-organizational implementation team, high integration with internal information systems, inter-organizational business process re-engineering, an advanced legacy information system and infrastructure, and a shared industry standard", "keywords": ["inter-organizational information systems", "critical success factors", "china"]} {"id": "kp20k_training_79", "title": "Maize grain shape approaches for DEM modelling", "abstract": "The shape of a grain of maize was approached using the multi-sphere method. Models with single-spherical particles and with rolling friction were also used. Results from two DEM software codes were compared. Recommendations on the shape approach for DEM modelling were provided", "keywords": ["dem", "maize", "particle shape", "multi-spheres", "rolling friction", "flow"]} {"id": "kp20k_training_80", "title": "multi-sector antenna performance in dense wireless networks", "abstract": "Sectorized antennas provide an attractive solution to increase wireless network capacity through higher spatial reuse. Despite their increasing popularity, the real-world performance characteristics of such antennas in dense wireless mesh networks are not well understood. In this demo, we demonstrate our multi-sector antenna prototypes and their performance through video streaming over an indoor wireless network in the presence of interfering nodes.
We use our graphical tool to vary the sender, receiver, and interferer antenna configurations, and the resulting performance is directly visible in the video quality displayed at the receiver", "keywords": ["sectorized antenna", "sector selection", "dense wireless mesh networks", "directional hidden terminal problem"]} {"id": "kp20k_training_81", "title": "Improvement of 3P and 6R mechanical robots reliability and quality applying FMEA and QFD approaches", "abstract": "In the past few years, the expanding use of robotic systems has increased the importance of robot reliability and quality. It is therefore necessary to improve robot reliability and quality by applying standard approaches such as Failure Mode and Effect Analysis (FMEA) and Quality Function Deployment (QFD) during robot design. FMEA is a qualitative method which determines the critical failure modes in robot design. In this method, the Risk Priority Number is used to sort failures with respect to their criticality. Two examples of mechanical robots are analyzed using this method and critical failure modes are determined for each robot. Corrective actions are proposed for critical items to improve the robots' reliability and reduce their risks. Finally, by using QFD, the quality of these robots is improved according to the customers' requirements. In this method, by constructing four matrices, optimum values for all technical parameters are determined and the final product has the desired quality", "keywords": ["robot", "fmea", "qfd", "reliability", "quality", "performance"]} {"id": "kp20k_training_82", "title": "Informatics methodologies for evaluation research in the practice setting", "abstract": "A continuing challenge in health informatics and health evaluation is to enable access to the practice of health care so that the determinants of successful care and good health outcomes can be measured, evaluated and analysed. Furthermore, the results of the analysis should be available to the health care practitioner or to the patient as might be appropriate, so that he or she can use this information for continual improvement of practice and optimisation of outcomes. In this paper we review two experiences, one in primary care, the FAMUS project, and the other in hospital care, the Autocontrol project. Each project demonstrates an informatics approach for evaluation research in the clinical setting and indicates ways in which useful information can be obtained which, with appropriate feedback and education, can be used towards the achievement of better health. Emphasis is given to data collection methods compatible with practice and to high quality information feedback, particularly in the team context, to enable the formulation of strategies for practice improvement", "keywords": ["data collection", "evaluation research", "health informatics", "clinical strategies"]} {"id": "kp20k_training_83", "title": "An improved SOM algorithm and its application to color feature extraction", "abstract": "Reducing the redundancy of dominant color features in an image while preserving the diversity and quality of extracted colors is of importance in many applications such as image analysis and compression. This paper presents an improved self-organizing map (SOM) algorithm, namely MFD-SOM, and its application to color feature extraction from images.
Different from the winner-take-all competitive principle held by conventional SOM algorithms, MFD-SOM prevents, to a certain degree, features of non-principal components in the training data from being weakened or lost in the learning process, which is conducive to preserving the diversity of extracted features. Besides, MFD-SOM adopts a new way to update the weight vectors of neurons, which helps to reduce the redundancy in features extracted from the principal components. In addition, we apply a linear neighborhood function in the proposed algorithm, aiming to improve its performance on color feature extraction. Experimental results of feature extraction on artificial datasets and benchmark image datasets demonstrate the characteristics of the MFD-SOM algorithm", "keywords": ["self-organizing map", "color feature extraction", "non-principal component", "competitive mechanism"]} {"id": "kp20k_training_84", "title": "A Motion Planning System for Mobile Robots", "abstract": "In this paper, a motion planning system for a mobile robot is proposed. Path planning tries to find a feasible path for mobile robots to move from a starting node to a target node in an environment with obstacles. A genetic algorithm is used to generate an optimal path by taking advantage of its strong optimization ability. Mobile robot, obstacle and target localizations are realized by means of a camera and image processing. A graphical user interface (GUI) is designed for the motion planning system that allows the user to interact with the robot system and to observe the robot environment. All the software components of the system are written in MATLAB, which makes it possible to use accessories beyond those predefined in the robot firmware, to avoid the complexity of the C++ libraries of the robot's proprietary software, to control the robot in detail, and to avoid frequent recompilation of programs during real-time dynamic operations", "keywords": ["genetic algorithm", "mobile robot", "motion planning"]} {"id": "kp20k_training_85", "title": "A unified strategy for search and result representation for an online bibliographical catalogue", "abstract": "Purpose - One of the biggest concerns of modern information retrieval systems is reducing the user effort required for manual traversal and filtering of long matching document lists. Thus, the first goal of this research is to propose an improved scheme for representation of search results. Further, it aims to explore the impact of various user information needs on the searching process with the aim of finding a unified searching approach well suited for different query types and retrieval tasks. Design/methodology/approach - The BoW online bibliographical catalogue is based on a hierarchical concept index to which entries are linked. The key idea is that searching in the hierarchical catalogue should take advantage of the catalogue structure and return matching topics from the hierarchy, rather than just a long list of entries. Likewise, when new entries are inserted, a search for relevant topics to which they should be linked is required. Therefore, a similar hierarchical scheme for query-topic matching can be applied for both tasks. Findings - The experiments show that different query types used for the above tasks are best treated by different topic ranking functions. To further examine this phenomenon, a user study was conducted, where various statistical weighting factors were incorporated and their impact on the performance for different query types was measured.
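As a reference point for the MFD-SOM entry (kp20k_training_83) above, the sketch below implements the conventional winner-take-all SOM update that MFD-SOM departs from: every neuron moves toward the input, weighted by a Gaussian neighborhood around the best-matching unit. The MFD-SOM modifications themselves are not reproduced here.

```python
import numpy as np

def som_step(weights, x, bmu, lr=0.1, sigma=1.5):
    """One conventional SOM update on a 1-D neuron grid: pull every
    neuron toward x, scaled by a Gaussian neighborhood around the BMU."""
    grid = np.arange(len(weights))
    h = np.exp(-((grid - bmu) ** 2) / (2.0 * sigma ** 2))
    return weights + lr * h[:, None] * (x - weights)

rng = np.random.default_rng(0)
weights = rng.random((10, 3))            # 10 neurons, RGB color prototypes
for x in rng.random((500, 3)):           # stream of training colors
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    weights = som_step(weights, x, bmu)
print(weights.round(2))
```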
Finally, it is found that the mixed strategy, which applies the most suitable ranking function to each query type, yielded a significant increase in precision relative to the baseline and to employing any examined strategy in isolation on the entire set of user queries. Originality/value - The main contributions of this paper are: the alternative approach for compact and concise representation of search results, which was implemented in the BoW online bibliographical catalogue; and the unified or mixed strategy for search and result representation applying the most suitable ranking function to each query type, which produced superior results compared to different single-strategy-based approaches", "keywords": ["online catalogues", "information retrieval", "technology led strategy"]} {"id": "kp20k_training_86", "title": "robust multiple-phase switched-capacitor dc-dc converter with digital interleaving regulation scheme", "abstract": "An integrated switched-capacitor (SC) DC-DC converter with a digital interleaving regulation scheme is presented. By interleaving the newly-structured charge pump (CP) cells in multiple phases, the input current ripple and output voltage ripple are reduced significantly. The converter exhibits excellent robustness, even when one of the CP cells fails to operate. A fully digital controller is employed with a hysteretic control algorithm. It features dead-beat system stability and fast transient response. Hspice post-layout simulation shows that, with a 1.5 V input power supply, the SC converter accurately provides an adjustable regulated power output in a range of 1.6 to 2.7 V. The maximum output ripple is 40 mV when a full load of 0.54 W is supplied. A transient response of 1.8 ms is observed when the load current switches from half- to full-load (from 100 to 200 mA)", "keywords": ["switched-capacitor dc-dc converter", "interleaving regulation"]} {"id": "kp20k_training_87", "title": "Teeth recognition based on multiple attempts in mobile device", "abstract": "Most traditional biometric approaches utilize a single image for personal identification. However, these approaches sometimes fail to recognize users in practical environments due to false-detected or undetected subjects. Therefore, this paper proposes a novel recognition approach based on multiple frame images that is implemented in mobile devices. The aim of this paper is to improve the recognition accuracy and to reduce computational complexity through multiple attempts. Here, multiple attempts denote that multiple frame images are used during the recognition procedure. Among sequential frame images, an adequate subject, i.e., a teeth image, is chosen by a subject selection module which operates based on differential image entropy. The selected subject is then utilized as a biometric trait in traditional recognition algorithms including PCA, LDA, and EHMM. The performance evaluation of the proposed method is performed using two teeth databases constructed by a mobile device.
Through experimental results, we confirm that the proposed method exhibits improved recognition accuracy of about 3.6-4.8% and offers the advantage of lower computational complexity compared with traditional biometric approaches", "keywords": ["teeth recognition", "multiple attempts", "subject selection", "mobile device"]} {"id": "kp20k_training_88", "title": "A conceptual approach for the die structure design", "abstract": "A large number of decisions are made during the conceptual design stage, which is characterized by a lack of complete geometric information. However, existing CAD systems supporting the geometric aspects of design have had little impact at the conceptual design stage. To support conceptual die design and the top-down design process, a new concept called the conceptual assembly modeling framework (CAMF) is presented in this paper. Firstly, the framework employs zigzag function-symbol mapping to implement the function design of the die. From the easily understood analytical results of the function-symbol mapping matrix, the designer can evaluate the quality of a proposed die concept. Secondly, a new method, logic assembly modeling, is proposed using logic components in this framework to satisfy the characteristics of conceptual die design. Representing shapes and spatial relations in logic can provide a natural, intuitive method of developing complete computer systems for reasoning about die construction design at the conceptual stage. The logic assembly, which consists of logic components, is an innovative representation that provides a natural link between the function design of the die and the detailed geometric design", "keywords": ["cad", "conceptual design", "die structure design", "zigzag mapping", "logic component", "logic assembly"]} {"id": "kp20k_training_89", "title": "Approximation algorithm for coloring of dotted interval graphs", "abstract": "Dotted interval graphs were introduced by Aumann et al. [Y. Aumann, M. Lewenstein, O. Melamud, R. Pinter, Z. Yakhini, Dotted interval graphs and high throughput genotyping, in: ACM-SIAM Symposium on Discrete Algorithms, SODA 2005, pp. 339-348] as a generalization of interval graphs. The problem of coloring these graphs found application in high-throughput genotyping. Jiang [M. Jiang, Approximating minimum coloring and maximum independent set in dotted interval graphs, Information Processing Letters 98 (2006) 29-33] improves the approximation ratio of Aumann et al. In this work we improve the approximation ratios of both Jiang and Aumann et al. In the exposition we develop a generalization of the problem of finding the maximum number of non-attacking queens on a triangle.
", "keywords": ["approximation algorithms", "dotted interval graph", "intersection graph", "minimum coloring", "microsatellite genotyping"]} {"id": "kp20k_training_90", "title": "Scalable visibility color map construction in spatial databases", "abstract": "Recent advances in 3D modeling provide us with real 3D datasets to answer queries, such as What is the best position for a new billboard? and Which hotel room has the best view? in the presence of obstacles. These applications require measuring and differentiating the visibility of an object (target) from different viewpoints in a dataspace, e.g., a billboard may be seen from many points but is readable only from a few points closer to it. In this paper, we formulate the above problem of quantifying the visibility of (from) a target object from (of) the surrounding area with a visibility color map (VCM). A VCM is essentially defined as a surface color map of the space, where each viewpoint of the space is assigned a color value that denotes the visibility measure of the target from that viewpoint. Measuring the visibility of a target even from a single viewpoint is an expensive operation, as we need to consider factors such as distance, angle, and obstacles between the viewpoint and the target. Hence, a straightforward approach to construct the VCM that requires visibility computation for every viewpoint of the surrounding space of the target is prohibitively expensive in terms of both I/Os and computation, especially for a real dataset comprising thousands of obstacles. We propose an efficient approach to compute the VCM based on a key property of the human vision that eliminates the necessity for computing the visibility for a large number of viewpoints of the space. To further reduce the computational overhead, we propose two approximations; namely, minimum bounding rectangle and tangential approaches with guaranteed error bounds. Our extensive experiments demonstrate the effectiveness and efficiency of our solutions to construct the VCM for real 2D and 3D datasets", "keywords": ["spatial databases", "query processing", "three-dimensional objects", "visibility color map"]} {"id": "kp20k_training_91", "title": "Toward a Neurogenetic Theory of Neuroticism", "abstract": "Recent advances in neuroscience and molecular biology have begun to identify neural and genetic correlates of complex traits. Future theories of personality need to integrate these data across the behavioral, neural, and genetic level of analysis and further explain the underlying epigenetic processes by which genes and environmental variables interact to shape the structure and function of neural circuitry. In this chapter, I will review some of the work that has been conducted at the cognitive, neural, and molecular genetic level with respect to one specific personality traitneuroticism. I will focus particularly on individual differences with respect to memory, self-reference, perception, and attention during processing of emotional stimuli and the significance of gene-by-environment interactions. 
This chapter is intended to serve as a tutorial bridge for psychologists who may be intrigued by molecular genetics and for molecular biologists who may be curious about how to apply their research to the study of personality", "keywords": ["neuroticism", "personality", "complex traits"]} {"id": "kp20k_training_92", "title": "Technological means of communication and collaboration in archives and records management", "abstract": "This study explores the international collaboration efforts of archivists and records managers, starting with the hypothesis that Internet technologies have had a significant impact on both national and international communication for this previously conservative group. The use and importance of mailing lists for this purpose is studied in detail. A quantitative analysis looks globally at the numbers of lists in these fields and the numbers of subscribers. A qualitative analysis of list content is also described. The study finds that archivists and records managers have now created more than 140 mailing lists related to their profession and have been contributing to these lists actively. It also 'estimates' that about half of the profession follows a list relating to their work and that archivists seem to like lists more than records managers do. The study concludes that mailing lists can be seen as a virtual college binding these groups together to develop the field", "keywords": ["archives administration", "records management", "internet", "mailing lists", "forums"]} {"id": "kp20k_training_93", "title": "Privacy Preserving Decision Tree Learning Using Unrealized Data Sets", "abstract": "Privacy preservation is important for machine learning and data mining, but measures designed to protect private information often result in a trade-off: reduced utility of the training samples. This paper introduces a privacy preserving approach that can be applied to decision tree learning without concomitant loss of accuracy. It describes an approach to the preservation of the privacy of collected data samples in cases where information from the sample database has been partially lost. This approach converts the original sample data sets into a group of unreal data sets, from which the original samples cannot be reconstructed without the entire group of unreal data sets. Meanwhile, an accurate decision tree can be built directly from those unreal data sets. This novel approach can be applied directly to the data storage as soon as the first sample is collected. The approach is compatible with other privacy preserving approaches, such as cryptography, for extra protection", "keywords": ["classification", "data mining", "machine learning", "security and privacy protection"]} {"id": "kp20k_training_95", "title": "Fast parameter-free region growing segmentation with application to surgical planning", "abstract": "In this paper, we propose a self-assessed adaptive region growing segmentation algorithm. In the context of an experimental virtual-reality surgical planning software platform, our method successfully delineates the main tissues relevant for reconstructive surgery, such as fat, muscle, and bone. We rely on a self-tuning approach to deal with a great variety of imaging conditions, requiring limited user intervention (one seed).
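Before the kp20k_training_95 abstract continues below, a fixed-tolerance version of the region growing it describes can be sketched: starting from one seed, 4-neighbours are absorbed while their intensity stays within a tolerance of the running region mean. The abstract's self-tuning of that tolerance from contrast and noise statistics is precisely the part this sketch omits, keeping tol as a user parameter.

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from one seed, absorbing 4-neighbours whose intensity
    is within tol of the running region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx] \
                    and abs(image[ny, nx] - total / count) <= tol:
                mask[ny, nx] = True
                total += float(image[ny, nx])
                count += 1
                queue.append((ny, nx))
    return mask

img = np.full((32, 32), 10.0)
img[8:24, 8:24] = 100.0                # bright square of "tissue"
print(region_grow(img, seed=(16, 16), tol=15.0).sum())   # 256 pixels
```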
The detection of the optimal parameters is managed internally using a measure of the varying contrast of the growing region, and the stopping criterion is adapted to the noise level in the dataset thanks to the sampling strategy used for the assessment function. Sampling is tied to the statistics of a neighborhood around the seed(s), so that the sampling period becomes greater when images are noisier, resulting in the acquisition of a lower-frequency version of the contrast function. Validation is provided for synthetic images, as well as real CT datasets. For the CT test images, validation is performed against manual delineations for 10 cases and by subjective assessment for another 35. High values of sensitivity and specificity, as well as of Dice's coefficient and Jaccard's index, on the one hand, and satisfactory subjective evaluation on the other, demonstrate the robustness of our contrast-based measure, even suggesting suitability for calibration of other region-based segmentation algorithms", "keywords": ["ct", "segmentation", "region growing", "surgical planning", "virtual reality"]} {"id": "kp20k_training_96", "title": "Accuracy and efficiency in computing electrostatic potential for an ion channel model in layered dielectric/electrolyte media", "abstract": "This paper will investigate the numerical accuracy and efficiency in computing the electrostatic potential for a finite-height cylinder, used in an explicit/implicit hybrid solvation model for ion channels and embedded in a layered dielectric/electrolyte medium representing a biological membrane and ionic solvents. A charge located inside the cylinder cavity, where ion channel proteins and ions are given explicit atomistic representations, will be influenced by the polarization field of the surrounding implicit dielectric/electrolyte medium. Two numerical techniques, a specially designed boundary integral equation method and an image charge method, will be investigated and compared in terms of accuracy and efficiency for computing the electrostatic potential. The boundary integral equation method based on the three-dimensional layered Green's functions provides a highly accurate solution suitable for producing a benchmark reference solution, while the image charge method is found to give reasonable accuracy with high efficiency, making it viable to use the fast multipole method for interactions of a large number of charges in the atomistic region of the hybrid solvation model", "keywords": ["poisson-boltzmann equation", "layered electrolytes and dielectrics", "image charge method", "ion channels", "the explicit/implicit hybrid solvation model"]} {"id": "kp20k_training_97", "title": "The social sharing of emotion (SSE) in online social networks: A case study in Live Journal", "abstract": "Using content analysis, we gauge the occurrence of social sharing of emotion (SSE) in Live Journal. We present a theoretical model of a three-cycle process for online SSE. A large part of emotional blog posts showed full initiation of social sharing. Affective feedback provided empathy, emotional support and admiration. This study is the first one to empirically assess the occurrence and structure of online SSE", "keywords": ["social sharing of emotion", "online communication", "social networking sites", "blog", "social interaction", "emotion"]} {"id": "kp20k_training_98", "title": "Non-testing approaches under REACH - help or hindrance?
Perspectives from a practitioner within industry", "abstract": "Legislation such as REACH strongly advocates the use of alternative approaches including in vitro, (Q)SARs, and chemical categories as a means to satisfy the information requirements for risk assessment. One of the most promising alternative approaches is that of chemical categories, where the underlying hypothesis is that the compounds within the category are similar and therefore should have similar biological activities. The challenge lies in characterizing the chemicals, understanding the mode/mechanism of action for the activity of interest and deriving a way of relating these together to form inferences about the likely activity outcomes. (Q)SARs are underpinned by the same hypothesis but are packaged in a more formalized manner. Since the publication of the White Paper for REACH, there have been a number of efforts aimed at developing tools, approaches and techniques for (Q)SARs and read-across for regulatory purposes. While technical guidance is available, there still remains little practical guidance about how these approaches can or should be applied, either in the evaluation of existing (Q)SARs or in the formation of robust categories. Here we provide a perspective on how some of these approaches have been utilized to address our in-house REACH requirements", "keywords": ["reach", "sar", "chemical category", "qmrf", "qprf"]} {"id": "kp20k_training_99", "title": "Realtime performance analysis of different combinations of fuzzyPID and bias controllers for a two degree of freedom electrohydraulic parallel manipulator", "abstract": "Development of a 2 DOF electrohydraulic motion simulator as a parallel manipulator. Control of heave, pitch and combined heave and pitch motion of the parallel manipulator. Design of PID, fuzzyPID, self-tuning fuzzyPID and self-tuning fuzzyPID with bias controllers. Use of different combinations of fuzzyPID and bias controllers for study of real-time control performance. Best control response found for the self-tuning fuzzyPID with bias controller", "keywords": ["electrohydraulic systems", "real-time control", "parallel manipulator", "fuzzy control"]} {"id": "kp20k_training_100", "title": "On the depth distribution of linear codes", "abstract": "The depth distribution of a linear code was recently introduced by Etzion. In this correspondence, a number of basic and interesting properties for the depth of finite words and the depth distribution of linear codes are obtained. In addition, we study the enumeration problem of counting the number of linear subcodes with the prescribed depth constraints, and derive some explicit and interesting enumeration formulas. Furthermore, we determine the depth distribution of the Reed-Muller code RM(m, r). Finally, we show that there are exactly nine depth-equivalence classes for the ternary [11, 6, 5] Golay codes", "keywords": ["depth", "depth distribution", "depth-equivalence classes", "derivative", "linear codes", "reed-muller codes", "ternary golay code"]} {"id": "kp20k_training_101", "title": "Are we there yet", "abstract": "Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things.
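As background for the image charge method compared in the ion channel record kp20k_training_96 above, here is a sketch of the textbook planar construction: a point charge above a dielectric half-space is mirrored by an image charge q' = q(eps1 - eps2)/(eps1 + eps2). The paper's finite-cylinder, layered dielectric/electrolyte geometry is considerably more involved; all parameter values below are illustrative assumptions.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def potential(obs, q=1.0, d=1.0, eps1=80.0, eps2=2.0):
    """Electrostatic potential in the z > 0 half-space (relative permittivity
    eps1) of a point charge q at (0, 0, d), with a dielectric half-space eps2
    filling z < 0: the interface polarization is reproduced exactly by a single
    image charge at (0, 0, -d) (planar textbook case only)."""
    obs = np.asarray(obs, dtype=float)
    q_img = q * (eps1 - eps2) / (eps1 + eps2)
    r_direct = np.linalg.norm(obs - np.array([0.0, 0.0, d]))
    r_image = np.linalg.norm(obs - np.array([0.0, 0.0, -d]))
    return (q / r_direct + q_img / r_image) / (4.0 * np.pi * EPS0 * eps1)

print(potential([0.5, 0.0, 0.5]))  # potential at an observation point in eps1
```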
As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence", "keywords": ["artificial intelligence", "intelligent behaviour", "cybernetics", "statistical learning theory", "data driven ai", "intelligent systems", "pattern analysis", "viterbis algorithm", "history of artificial intelligence"]} {"id": "kp20k_training_102", "title": "The complexity of the matroid-greedoid partition problem", "abstract": "We show that the maximum matroid-greedoid partition problem is NP-hard to approximate to within 1/2 + epsilon for any epsilon > 0, which matches the trivial factor 1/2 approximation algorithm. The main tool in our hardness of approximation result is an extractor code with polynomial rate, alphabet size and list size, together with an efficient algorithm for list-decoding. We show that the recent extractor construction of Guruswami, Umans and Vadhan [V. Guruswami, C. Umans, S.P. Vadhan, Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes, in: IEEE Conference on Computational Complexity, IEEE Computer Society, 2007, pp. 96-108] can be used to obtain a code with these properties. We also show that the parameterized matroid-greedoid partition problem is fixed-parameter tractable. ", "keywords": ["matroid", "greedoid", "matroid partition problem", "extractor codes", "fixed-parameter complexity"]} {"id": "kp20k_training_103", "title": "Exploring the ncRNA-ncRNA patterns based on bridging rules", "abstract": "ncRNAs play an important role in the regulation of gene expression. However, many of their functions have not yet been fully discovered. There are complicated relationships between ncRNAs in different categories. Finding these relationships can contribute to identifying ncRNA functions and properties. We extend the association rule to represent the relationship between two ncRNAs. Based on this rule, we can speculate about an ncRNA's function when it interacts with other ncRNAs. We propose two measures to explore the relationships between ncRNAs in different categories. Entropy theory is used to calculate how close two ncRNAs are, and association rules are used to represent the interactions between ncRNAs. We use three datasets from miRBase and RNAdb. Two from miRBase are designed for finding relationships between miRNAs; the other from RNAdb is designed for relationships among miRNA, snoRNA and piRNA. We evaluate our measures from both biological significance and performance perspectives. All the cross-species patterns regarding miRNA that we found are proven correct using miRNAMap 2.0. In addition, we find novel cross-genome patterns such as (hsa-mir-190b → hsa-mir-153-2). According to the patterns we find, we can (1) infer one ncRNA's function from another with a known function and (2) speculate on the functions of both of them based on the relationship even when we do not understand either of them. Our method's merits also include: (1) it is suitable for any ncRNA dataset and (2) it is not sensitive to the parameters", "keywords": ["ncrnas", "bridging rules", "entropy", "mirna", "joint entropy", "mutual information"]} {"id": "kp20k_training_104", "title": "Gaussian mixture modelling to detect random walks in capital markets", "abstract": "In this paper, Gaussian mixture modelling is used to detect random walks in capital markets with the Kolmogorov-Smirnov test.
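A minimal sketch of the procedure just named: fit Gaussian mixtures of increasing order to returns and keep the smallest number of components whose fit passes a Kolmogorov-Smirnov test. The simulated fat-tailed returns and the implicit p-value reading below are assumptions of this sketch, not the paper's data or exact decision rule.

```python
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=2000) * 0.01  # fat-tailed stand-in for daily returns

for k in (1, 2, 3):
    gm = GaussianMixture(n_components=k, random_state=0).fit(returns.reshape(-1, 1))
    w, mu = gm.weights_, gm.means_.ravel()
    sd = np.sqrt(gm.covariances_.ravel())

    def mix_cdf(x, w=w, mu=mu, sd=sd):
        # CDF of the fitted Gaussian mixture, evaluated pointwise.
        return (w * stats.norm.cdf((np.asarray(x)[:, None] - mu) / sd)).sum(axis=1)

    p = stats.kstest(returns, mix_cdf).pvalue
    print(k, round(p, 3))  # retain the smallest k with an acceptably large p-value
```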
The main idea is to use Gaussian mixture modelling to fit asset return distributions and then use the Kolmogorov-Smirnov test to determine the number of components. Several quantities are used to characterize Gaussian mixture models and ascertain whether random walks exist in capital markets. Empirical studies on China securities markets and Forex markets are used to demonstrate the proposed procedure. ", "keywords": ["gaussian mixture modelling", "the random walks hypothesis", "asset return distributions", "em algorithm", "the kolmogorov-smirnov test"]} {"id": "kp20k_training_105", "title": "Scientific design rationale", "abstract": "Design rationale should be regarded both as a tool for the practice of design, and as a method to enable the science of design. Design rationale answers questions about why a given design takes the form that it does. Answers to these why questions represent a significant portion of the knowledge generated from design research. This knowledge, along with that from empirical studies of designs in use, contributes to what Simon called the sciences of the artificial. Most research on the nature and use of design rationale has been analytic or theoretical. In this article, we describe an empirical study of the roles that design rationale can play in the conduct of design research. We report results from an interview study with 16 design researchers investigating how they construe and carry out design as research. The results include an integrated framework of the affordances design rationale can contribute to design research. The framework and supporting qualitative data provide insight into how design rationale might be more effectively leveraged as a first-class methodology for research into the creation and use of artifacts", "keywords": ["affordances", "design rationale", "design research", "design research methodology"]} {"id": "kp20k_training_106", "title": "High flowability monomer resists for thermal nanoimprint lithography", "abstract": "In this paper, we use polymer and thermally curable monomer resists in a full 8 in. wafer thermal nanoimprint lithography process. Using exactly the same imprinting conditions, we observed that a monomer solution provides a much larger resist redistribution than a polymer resist. Imprinting Fresnel zone plates, composed of micro- and nanometer features, was possible only with the monomer resist. In order to reduce the shrinkage ratio of the monomer resists, acrylate-silsesquioxane materials were synthesised. With a simple diffusion-like model, we could extract a mean free path of 1.1 mm for the monomer resist, while a polymer flows only over distances below 10 µm under the same conditions", "keywords": ["nanoimprint lithography", "monomer resists", "flow properties", "polyhedral silsesquioxane"]} {"id": "kp20k_training_107", "title": "Binarized Support Vector Machines", "abstract": "The widely used support vector machine (SVM) method has been shown to yield very good results in supervised classification problems. Other methods such as classification trees have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in data mining. In this work, we propose an SVM-based method that automatically detects the most important predictor variables and the role they play in the classifier. In particular, the proposed method is able to detect those values and intervals that are critical for the classification.
The method involves the optimization of a linear programming problem in the spirit of the Lasso method with a large number of decision variables. The numerical experience reported shows that a rather direct use of the standard column generation strategy leads to a classification method that, in terms of classification ability, is competitive against the standard linear SVM and classification trees. Moreover, the proposed method is robust; i.e., it is stable in the presence of outliers and invariant to change of scale or measurement units of the predictor variables. When the complexity of the classifier is an important issue, a wrapper feature selection method is applied, yielding simpler but still competitive classifiers", "keywords": ["supervised classification", "binarization", "column generation", "support vector machines"]} {"id": "kp20k_training_108", "title": "Ambrosio-Tortorelli Segmentation of Stochastic Images: Model Extensions, Theoretical Investigations and Numerical Methods", "abstract": "We discuss an extension of the Ambrosio-Tortorelli approximation of the Mumford-Shah functional for the segmentation of images with uncertain gray values resulting from measurement errors and noise. Our approach yields a reliable precision estimate for the segmentation result, and it allows us to quantify the robustness of edges in noisy images and under gray value uncertainty. We develop an ansatz space for such images by identifying gray values with random variables. The use of these stochastic images in the minimization of energies of Ambrosio-Tortorelli type leads to stochastic partial differential equations for a stochastic smoothed version of the original image and a stochastic phase field for the edge set. For the discretization of these equations we utilize the generalized polynomial chaos expansion and the generalized spectral decomposition (GSD) method. In contrast to the simple classical sampling technique, this approach allows for an efficient determination of the stochastic properties of the output image and edge set by computations on an optimally small set of random variables. Also, we use an adaptive grid approach for the spatial dimensions to further improve the performance, and we extend an edge linking method for the classical Ambrosio-Tortorelli model for use with our stochastic model. The performance of the method is demonstrated on artificial data and a data set from a digital camera as well as real medical ultrasound data. A comparison of the intrusive GSD discretization with a stochastic collocation and a Monte Carlo sampling is shown", "keywords": ["image processing", "ambrosio-tortorelli model", "segmentation", "uncertainty", "stochastic images", "stochastic partial differential equations", "polynomial chaos", "generalized spectral decomposition", "adaptive grid", "edge linking"]} {"id": "kp20k_training_109", "title": "A provably convergent heuristic for stochastic bicriteria integer programming", "abstract": "We propose a general-purpose algorithm APS (Adaptive Pareto-Sampling) for determining the set of Pareto-optimal solutions of bicriteria combinatorial optimization (CO) problems under uncertainty, where the objective functions are expectations of random variables depending on a decision from a finite feasible set. APS is iterative and population-based and combines random sampling with the solution of corresponding deterministic bicriteria CO problem instances. 
Special attention is given to the case where the corresponding deterministic bicriteria CO problem can be formulated as a bicriteria integer linear program (ILP). In this case, well-known solution techniques such as the algorithm by Chalmet et al. can be applied for solving the deterministic subproblem. If the execution of APS is terminated after a given number of iterations, only an approximate solution is obtained in general, so that APS must be considered a metaheuristic. Nevertheless, a strict mathematical result is shown that ensures, under rather mild conditions, convergence of the current solution set to the set of Pareto-optimal solutions. A modification replacing or supporting the bicriteria ILP solver by some metaheuristic for multicriteria CO problems is discussed. As an illustration, we outline the application of the method to stochastic bicriteria knapsack problems by specializing the general framework to this particular case and by providing computational examples", "keywords": ["combinatorial optimization", "convergence proof", "integer programming", "metaheuristics", "stochastic optimization"]} {"id": "kp20k_training_110", "title": "The impact of a simulation game on operations management education", "abstract": "This study presents a new simulation game and analyzes its impact on operations management education. The proposed simulation was empirically tested by comparing the number of mistakes during the first and second halves of the game. Data were gathered from 100 teams of four or five undergraduate students in business administration, taking their first course in operations management. To assess learning, instead of relying solely on an overall performance measurement, as is usually done in the skill-based learning literature, we analyzed the evolution of different types of mistakes that were made by students in successive rounds of play. Our results show that although simple decision-making skills can be acquired with traditional teaching methods, simulation games are more effective when students have to develop decision-making abilities for managing complex and dynamic situations. ", "keywords": ["simulations", "interactive learning environment", "applications in operations management", "post-secondary education"]} {"id": "kp20k_training_111", "title": "Covering a set of points in a plane using two parallel rectangles", "abstract": "In this paper we consider the problem of finding two parallel rectangles in arbitrary orientation for covering a given set of n points in a plane, such that the area of the larger rectangle is minimized. We propose an algorithm that solves the problem in O(n^3) time using O(n^2) space. Without altering the complexity, our approach can be used to solve another optimization problem, namely, minimizing the sum of the areas of two arbitrarily oriented parallel rectangles covering a given set of points in a plane. ", "keywords": ["algorithms", "computational geometry", "covering", "optimization", "rectangles"]} {"id": "kp20k_training_112", "title": "Investigating the extreme programming system - An empirical study", "abstract": "In this paper we discuss our empirical study about the advantages and difficulties that 15 Greek software companies experienced in applying Extreme Programming (XP) as a holistic system in software development.
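A restricted sketch of the two-rectangle cover problem from kp20k_training_111 above: it only considers covers in which the two boxes are axis-parallel and separated by an axis-parallel line, whereas the paper's O(n^3) algorithm handles rectangles of arbitrary common orientation.

```python
import numpy as np

def two_box_cover(pts):
    """Smallest 'larger area' over covers by two axis-parallel boxes separated
    by an axis-parallel line (a restricted sketch; the paper treats rectangles
    of arbitrary common orientation in O(n^3) time and O(n^2) space)."""
    def area(a):
        return float((a[:, 0].max() - a[:, 0].min()) * (a[:, 1].max() - a[:, 1].min()))
    best = float("inf")
    for axis in (0, 1):                       # try vertical and horizontal splits
        p = pts[np.argsort(pts[:, axis])]
        for i in range(1, len(p)):            # each split yields two bounding boxes
            best = min(best, max(area(p[:i]), area(p[i:])))
    return best

pts = np.random.default_rng(1).random((60, 2))
print(two_box_cover(pts))
```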
Using as our research tool a generic XP system that includes feedback influences, together with a cause-effect model of socio-technical factors, the study statistically evaluates the application of XP practices in the software companies being studied. Data were collected from 30 managers and developers, using the sample survey technique with questionnaires and interviews, over a period of six months. Practices were analysed individually, using Descriptive Statistics (DS), and as a whole by building up different models using stepwise Discriminant Analysis (DA). The results have shown that companies, facing various problems with common code ownership, on-site customer, 40-hour week and metaphor, prefer to develop their own tailored XP method and way of working: practices that met their requirements. Pair programming and test-driven development were found to be the most significant success factors. Interactions and hidden dependencies for the majority of the practices as well as communication and synergy between skilled personnel were found to be other significant success factors. The contribution of this preliminary research work is to provide some evidence that may assist companies in evaluating whether the XP system as a holistic framework would suit their current situation", "keywords": ["agile methods", "extreme programming system", "cause-effect model", "feedback model", "developer perception", "manager perception", "empirical study", "stepwise discriminant analysis", "planning game", "pair programming", "test-driven development", "refactoring", "simple design", "common code ownership", "continuous integration", "on-site customer", "short release cycles", "40-hour-week", "coding standards", "metaphor"]} {"id": "kp20k_training_113", "title": "An optimal GTS scheduling algorithm for time-sensitive transactions in IEEE 802.15.4 networks", "abstract": "IEEE 802.15.4 is a new enabling standard for low-rate wireless personal area networks and has been widely accepted as a de facto standard for wireless sensor networking. While the primary motivations behind 802.15.4 are low power and low cost wireless communications, the standard also supports time- and rate-sensitive applications because of its ability to operate in TDMA access modes. The TDMA mode of operation is supported via the Guaranteed Time Slot (GTS) feature of the standard. In a beacon-enabled network topology, the Personal Area Network (PAN) coordinator reserves and assigns the GTS to applications on a first-come-first-served (FCFS) basis in response to requests from wireless sensor nodes. This fixed FCFS scheduling service offered by the standard may not satisfy the time constraints of time-sensitive transactions with delay deadlines. Such operating scenarios often arise in wireless video surveillance and target detection applications running on sensor networks. In this paper, we design an optimal work-conserving scheduling algorithm for meeting the delay constraints of time-sensitive transactions and show that the proposed algorithm outperforms the existing scheduling model specified in IEEE 802.15.4", "keywords": ["gts", "scheduling", "lr-wpan", "schedulability", "edf"]} {"id": "kp20k_training_114", "title": "CONTROLLED DENSE CODING WITH CLUSTER STATE", "abstract": "Two schemes for controlled dense coding with a one-dimensional four-particle cluster state are investigated.
In this protocol, the supervisor (Cliff) can control the channel and the average amount of information transmitted from the sender (Alice) to the receiver (Bob) by adjusting the local measurement angle theta. It is shown that the results for the average amounts of information differ between the two schemes", "keywords": ["controlled dense coding", "cluster state", "average amount of information", "povm"]} {"id": "kp20k_training_115", "title": "Slope stability analysis using the limit equilibrium method and two finite element methods", "abstract": "In this paper, the factors of safety and critical slip surfaces obtained by the limit equilibrium method (LEM) and two finite element methods (the enhanced limit strength method (ELSM) and strength reduction method (SRM)) are compared. Several representative two-dimensional slope examples are analysed. Under the associated flow rule, the results showed that the two finite element methods were generally in good agreement and that the LEM yielded a slightly lower factor of safety than the two finite element methods did. Moreover, a key condition regarding the stress field is shown to be necessary for ELSM analysis", "keywords": ["lem limit equilibrium method", "srm strength reduction method", "elsm enhanced limit strength method", "fos factor of safety", "srf strength reduction factor", "pso particle swarm optimisation"]} {"id": "kp20k_training_116", "title": "Deformation invariant attribute vector for deformable registration of longitudinal brain MR images", "abstract": "This paper presents a novel approach to define a deformation invariant attribute vector (DIAV) for each voxel in a 3D brain image for the purpose of anatomic correspondence detection. The DIAV method is validated by using synthesized deformation in 3D brain MRI images. Both theoretic analysis and experimental studies demonstrate that the proposed DIAV is invariant to general nonlinear deformation. Moreover, our experimental results show that the DIAV is able to capture rich anatomic information around the voxels and exhibits strong discriminative ability. The DIAV has been integrated into a deformable registration algorithm for longitudinal brain MR images, and the results on both simulated and real brain images are provided to demonstrate the good performance of the proposed registration algorithm based on matching of DIAVs", "keywords": ["deformable registration", "longitudinal imaging", "brain mri", "deformation invariant attribute vector"]} {"id": "kp20k_training_117", "title": "Carbapenem-resistant Enterobacteriaceae: biology, epidemiology, and management", "abstract": "Introduced in the 1980s, carbapenem antibiotics have served as the last line of defense against multidrug-resistant Gram-negative organisms. Over the last decade, carbapenem-resistant Enterobacteriaceae (CRE) have emerged as a significant public health threat. This review summarizes the molecular genetics, natural history, and epidemiology of CRE and discusses approaches to prevention and treatment", "keywords": ["carbapenem-resistant enterobacteriaceae", "antimicrobial resistance", "carbapenemases", "molecular genetics", "infection control", "treatment"]} {"id": "kp20k_training_118", "title": "hypergraph-based inductive learning for generating implicit key phrases", "abstract": "This paper presents a novel approach to generate implicit key phrases, which have been ignored in previous research.
Recent studies prefer to extract key phrases with semi-supervised transductive learning methods, which avoid the need for labeled training data. In this paper, based on a transductive learning method, we formulate the phrases in the document as a hypergraph and expand the hypergraph to include implicit phrases, which are ranked by an inductive learning approach. The highest ranked phrases are seen as implicit key phrases, and experimental results demonstrate the satisfactory performance of this approach", "keywords": ["hypergraph", "implicit key phrase", "inductive semi-supervised learning", "key phrase generation", "transductive learning"]} {"id": "kp20k_training_119", "title": "Strategic commitment to price to stimulate downstream innovation in a supply chain", "abstract": "It is generally in a firm's interest for its supply chain partners to invest in innovations. To the extent that these innovations either reduce the partners' variable costs or stimulate demand for the end product, they will tend to lead to higher levels of output for all of the firms in the chain. However, in response to the innovations of its partners, a firm may have an incentive to opportunistically increase its own prices. The possibility of such opportunistic behavior creates a hold-up problem that leads supply chain partners to underinvest in innovation. Clearly, this hold-up problem could be eliminated by a pre-commitment to price. However, by making an advance commitment to price, a firm sacrifices an important means of responding to demand uncertainty. In this paper we examine the trade-off that is faced when a firm's channel partner has opportunities to invest in either cost reduction or quality improvement, i.e. demand enhancement. Should it commit to a price in order to encourage innovation, or should it remain flexible in order to respond to demand uncertainty? We discuss several simple wholesale pricing mechanisms with respect to this trade-off", "keywords": ["channel coordination", "channels of distribution", "industrial organization", "cost reducing r&d"]} {"id": "kp20k_training_120", "title": "mutation-based software testing using program schemata", "abstract": "Mutation analysis is a powerful technique for assessing the quality of test data used in unit testing software. Unfortunately, current automated mutation analysis systems suffer from severe performance problems. In this paper the principles of mutation analysis are reviewed, current automation approaches are described, and a new method of performing mutation analysis is outlined. Performance improvements of over 300% are reported and other advantages of this new method are highlighted", "keywords": ["software testing", "software", "quality", "method", "systems", "fault-based testing", "test", "unit test", "analysis", "performance", "mutation", "mutation analysis", "program schemata", "data", "paper", "automation"]} {"id": "kp20k_training_121", "title": "Bamboo: A Data-Centric, Object-Oriented Approach to Many-core Software", "abstract": "Traditional data-oriented programming languages such as dataflow languages and stream languages provide a natural abstraction for parallel programming. In these languages, a developer focuses on the flow of data through the computation and these systems free the developer from the complexities of low-level, thread-oriented concurrency primitives.
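A toy illustration of the schemata idea behind kp20k_training_120 above: a single parameterized "metamutant" encodes the original program and all of its mutants, so mutation analysis needs no per-mutant recompilation. The operator table and tests below are invented for illustration.

```python
# One parameterized program encodes the original (mutant=0) and its mutants,
# in the spirit of program schemata: select the variant at run time instead
# of compiling a separate program per mutant.
OPS = {0: lambda a, b: a + b,   # 0 = original program
       1: lambda a, b: a - b,   # mutant: '+' replaced by '-'
       2: lambda a, b: a * b}   # mutant: '+' replaced by '*'

def add(a, b, mutant=0):
    return OPS[mutant](a, b)

TESTS = [((2, 3), 5), ((0, 0), 0), ((-1, 1), 0)]

# A mutant is "killed" when at least one test distinguishes it from the original.
killed = sum(
    1 for m in (1, 2)
    if any(add(a, b, mutant=m) != expected for (a, b), expected in TESTS)
)
print(f"mutation score: {killed / 2:.0%}")  # 100%: both mutants are killed
```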
This simplification comes at a cost: traditional data-oriented approaches restrict the mutation of state and, in practice, the types of data structures a program can effectively use. Bamboo borrows from work in typestate and software transactions to relax the traditional restrictions of data-oriented programming models to support mutation of arbitrary data structures. We have implemented a compiler for Bamboo which generates code for the TILEPro64 many-core processor. We have evaluated this implementation on six benchmarks: Tracking, a feature tracking algorithm from computer vision; KMeans, a K-means clustering algorithm; MonteCarlo, a Monte Carlo simulation; FilterBank, a multi-channel filter bank; Fractal, a Mandelbrot set computation; and Series, a Fourier series computation. We found that our compiler generated implementations that obtained speedups ranging from 26.2x to 61.6x when executed on 62 cores", "keywords": ["algorithms", "languages", "many-core programming", "data-centric languages"]} {"id": "kp20k_training_122", "title": "Performance optimization problem in speculative prefetching", "abstract": "Speculative prefetching has been proposed to improve the response time of network access. Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modeling. We analyze the performance of a prefetcher that has uncertain knowledge about future accesses. Our performance metric is the improvement in access time, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). We develop a prefetch algorithm to maximize the improvement in access time. The algorithm is based on finding the best solution to a stretch knapsack problem, using theoretically proven apparatus to reduce the search space. An integration between speculative prefetching and caching is also investigated", "keywords": ["speculative prefetching", "caching"]} {"id": "kp20k_training_123", "title": "inspiring collaboration through the use of videoconferencing technology", "abstract": "At the beginning of 2007 the University of Washington opened the Odegaard Videoconference Studio which allowed groups on campus to communicate with colleagues who were physically in different locations. The opening of this facility inspired all sorts of collaboration on a more frequent basis, as traveling, and more importantly the time and expense involved with traveling, was no longer as necessary in order to have a meeting. Many boundaries for collaboration were removed through the use of different types of technology that allowed for video and audio conferencing, and data and application sharing. This provided a way to share ideas in more detail, make decisions, and receive feedback more quickly, making the overall process more efficient, more personal, and more effective", "keywords": ["collaboration technologies", "videoconferencing"]} {"id": "kp20k_training_124", "title": "expanders, sorting in rounds and superconcentrators of limited depth", "abstract": "Expanding graphs and superconcentrators are relevant to theoretical computer science in several ways. Here we use finite geometries to construct explicitly highly expanding graphs with essentially the smallest possible number of edges.
Our graphs enable us to significantly improve previous results on a parallel sorting problem, by describing an explicit algorithm to sort n elements in k time units using O(n^(α_k)) processors, where, e.g., α_2 = 7/4. Using our graphs we can also construct efficient n-superconcentrators of limited depth. For example, we construct an n-superconcentrator of depth 3 with O(n^(4/3)) edges, better than the previously known results", "keywords": ["processor", "computer science", "graph", "sorting", "relevance", "timing", "use", "examples", "efficiency", "algorithm", "parallel"]} {"id": "kp20k_training_125", "title": "Synchrony and frequency regulation by synaptic delay in networks of self-inhibiting neurons", "abstract": "We show that a pair of mutually coupled self-inhibitory neurons can display stable synchronous oscillations provided only that the delay to the onset of inhibition is sufficiently long. The frequency of these oscillations is determined either entirely by the length of the synaptic delay, or by the synaptic delay and intrinsic time constants. We also show how cells can exhibit transient synchronous oscillations where the length of the transients is determined by the synaptic delay, but where the frequency is largely independent of the delay", "keywords": ["synchronous oscillations", "inhibition", "synaptic delay"]} {"id": "kp20k_training_126", "title": "minimizing power dissipation during write operation to register files", "abstract": "This paper presents a power reduction mechanism for the write operation in register files (RegFiles), which adds a conditional charge-sharing structure to the pair of complementary bit-lines in each column of the RegFile. Because the read and write ports for the RegFile are separately implemented, it is possible to avoid pre-charging the bit-line pair for consecutive writes. More precisely, when writing the same values to some cells in the same column of the RegFile, it is possible to eliminate energy consumption due to precharging of the bit-line pair. At the same time, when writing opposite values to some cells in the same column of the RegFile, it is possible to reduce energy consumed in charging the bit-line pair thanks to charge-sharing. Motivated by these observations, we modify the bit-line structure of the write ports in the RegFile such that i) we remove per-cycle bitline pre-charging and ii) we employ conditional data-dependent charge-sharing. Experimental results on a set of SPEC2000INT / MediaBench benchmarks show an average of 61.5% energy savings with 5.1% area overhead and 16.2% increase in write access delay", "keywords": ["power", "write operation", "register file"]} {"id": "kp20k_training_127", "title": "A decision support framework for metrics selection in goal-based measurement programs: GQM-DSFMS", "abstract": "Complex GQM-based measurement programs lead to the need for decision support in metric selection. We provide a decision support framework for choosing an optimal set of metrics to maximize measurement goal achievement for a given budget. The framework was evaluated by comparison with expert opinion in a CMMI Level 3 company.
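The budgeted-selection core of the metric-selection problem in kp20k_training_127 can be sketched as a 0/1 knapsack, assuming each candidate metric has a known integer cost and a scalar benefit toward the measurement goals; the actual GQM-DSFMS framework (goal structures, prioritization) is richer than this.

```python
def select_metrics(cost, value, budget):
    """0/1 knapsack DP: pick metrics maximizing total value under a cost budget.
    `cost` are integer costs, `value` are benefit scores (both invented inputs)."""
    n = len(cost)
    dp = [[0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for b in range(budget + 1):
            dp[i][b] = dp[i - 1][b]                       # skip metric i-1
            if cost[i - 1] <= b:                          # or take it if affordable
                dp[i][b] = max(dp[i][b], dp[i - 1][b - cost[i - 1]] + value[i - 1])
    chosen, b = [], budget                                # backtrack the choices
    for i in range(n, 0, -1):
        if dp[i][b] != dp[i - 1][b]:
            chosen.append(i - 1)
            b -= cost[i - 1]
    return dp[n][budget], chosen[::-1]

print(select_metrics(cost=[4, 3, 2, 5], value=[7, 5, 4, 8], budget=9))
# -> (16, [0, 1, 2]): metrics 0-2 fit the budget and maximize total benefit
```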
The extent to which information needs were addressed under a fixed budget was higher when selecting metrics using the framework", "keywords": ["software measurement program", "goal based measurement", "goal question metric", "gqm", "decision support", "optimization", "prioritization"]} {"id": "kp20k_training_128", "title": "energy/area/delay trade-offs in the physical design of on-chip segmented bus architecture", "abstract": "The increasing gap between design productivity and chip complexity, and the emerging Systems-On-Chip (SOC) architectural template have led to the wide utilization of reusable Intellectual Property (IP) cores. The physical design implementation of the macro cells (IP blocks or pre-designed blocks) in general needs to find a well-balanced solution among chip area, on-chip interconnect energy and critical path delay. We are especially interested in the entire trade-off curve among these three criteria at the floorplanning stage. We show this concept for a real communication scheme based on a segmented bus, rather than just an extreme solution. A fast exploration design flow from the memory organization to the final layout is introduced to explore the design space", "keywords": ["communication", "floorplanning", "physical design", "design space", "intellectual property", "design", "delay", "layout", "concept", "general", "reusability", "exploration", "trade-offs", "organization", "product", "macros", "segmented bus", "architecture", "energy", "implementation", "memorialized", "flow", "interconnect", "system on-chip", "complexity", "critic", "template", "scheme"]} {"id": "kp20k_training_129", "title": "System integration of a miniature rotorcraft for aerial tele-operation research", "abstract": "This paper describes the development and integration of the systems required for research into human interaction with a tele-operated miniature rotorcraft. Because of the focus on vehicles capable of operating indoors, the size of the vehicle was limited to 35 cm, and therefore the hardware had to be carefully chosen to meet the ensuing size and weight constraints, while providing sufficient flight endurance. The components described in this work include the flight hardware, electronics, sensors, and software necessary to conduct tele-operation experiments. The integration tasks fall into three main areas. First, the paper discusses the choice of rotorcraft platform best suited for indoor operation, addressing the issues of size, payload capabilities, and power consumption. The second task was to determine what electronics and sensing could be integrated into a rotorcraft with significant payload limitations. Finally, the third task involved characterizing the various components both individually and as a complete system. The paper concludes with an overview of ongoing tele-operation research performed with the embedded rotorcraft platform. ", "keywords": ["miniature rotorcraft", "embedded systems", "indoor navigation"]} {"id": "kp20k_training_130", "title": "Rank inclusion in criteria hierarchies", "abstract": "This paper presents a method called Rank Inclusion in Criteria Hierarchies (RICH) for the analysis of incomplete preference information in hierarchical weighting models. In RICH, the decision maker is allowed to specify subsets of attributes which contain the most important attribute or, more generally, to associate a set of rankings with a given set of attributes.
Such preference statements lead to possibly non-convex sets of feasible attribute weights, allowing decision recommendations to be obtained through the computation of dominance relations and decision rules. An illustrative example on the selection of a subcontractor is presented, and the computational properties of RICH are considered", "keywords": ["multiple criteria analysis", "decision analysis", "hierarchical weighting models", "incomplete preference information"]} {"id": "kp20k_training_131", "title": "Automatic relative orientation of large scale imagery over urban areas using Modified Iterated Hough Transform", "abstract": "The automation of relative orientation (RO) has been the major focus of the photogrammetric research community in the last decade. Despite the reported progress, there is no reliable (robust) approach that can perform automatic relative orientation (ARO) using large-scale imagery over urban areas. A reliable and general method for solving matching problems in various photogrammetric activities has been developed at The Ohio State University. This approach has been used to solve single photo resection using free-form linear features, surface matching and relative orientation. The approach estimates the parameters of a mathematical model relating the entities of two datasets when the correspondence of the involved entities is unknown. When applied to relative orientation, the coplanarity model is used to relate extracted edge pixels and/or feature points from a stereo-pair. In its execution, the relative orientation parameters are solved sequentially, using the coplanarity model to evaluate all possible pairings of the input primitives and choosing the most probable solution. As a result of this technique, the matched entities that correspond to the parameter solution are implicitly determined. Experiments using real data conclude that this is a robust method for relative orientation for both urban and rural scenes", "keywords": ["matching", "robust parameter estimation", "hough transform", "automatic relative orientation"]} {"id": "kp20k_training_132", "title": "Emergency railway wagon scheduling by hybrid biogeography-based optimization", "abstract": "Railway transportation plays an important role in many disaster relief and other emergency supply chains. Based on the analysis of several recent disaster rescue operations in China, the paper proposes a mathematical model for emergency railway wagon scheduling, which considers multiple target stations requiring relief supplies, source stations for providing supplies, and central stations for allocating railway wagons. Under the emergency environment, the aim of the problem is to minimize the weighted time for delivering all the required supplies to the targets. For efficiently solving the problem, we develop a new hybrid biogeography-based optimization (BBO) algorithm, which uses a local ring topology of population to avoid premature convergence, includes the differential evolution (DE) mutation operator to perform effective exploration, and takes some problem-specific mechanisms for fine-tuning the search process and handling the constraints. 
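A sketch of the differential evolution component mentioned in kp20k_training_132 above (classic DE/rand/1 mutation with binomial crossover, on a toy objective); the hybrid BBO migration, the ring topology, and the wagon-scheduling model itself are omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    # Toy objective standing in for the weighted-delivery-time cost.
    return float(np.sum(x ** 2))

NP, DIM, F, CR, GENS = 20, 5, 0.5, 0.9, 200
pop = rng.uniform(-5.0, 5.0, (NP, DIM))
fit = np.array([sphere(x) for x in pop])

for _ in range(GENS):
    for i in range(NP):
        r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
        v = pop[r1] + F * (pop[r2] - pop[r3])      # DE/rand/1 mutation
        cross = rng.random(DIM) < CR               # binomial crossover mask
        cross[rng.integers(DIM)] = True            # keep at least one mutated gene
        trial = np.where(cross, v, pop[i])
        f_trial = sphere(trial)
        if f_trial < fit[i]:                       # greedy one-to-one selection
            pop[i], fit[i] = trial, f_trial

print(fit.min())  # approaches 0 on this toy problem
```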
Computational experiments show that our algorithm is robust and scalable, and outperforms some state-of-the-art heuristic algorithms on a set of problem instances", "keywords": ["emergency relief supply", "railway wagon scheduling", "biogeography-based optimization ", "ring topology", "differential evolution "]} {"id": "kp20k_training_133", "title": "Bias-variance analysis in estimating true query model for information retrieval", "abstract": "We study the retrieval effectiveness-stability tradeoff in query model estimation. This tradeoff is investigated through a novel angle, i.e., the bias-variance tradeoff. We formulate the performance bias-variance and estimation bias-variance. We investigate various query estimation methods using bias-variance analysis. Experiments have been conducted to verify hypotheses on bias-variance analysis", "keywords": ["information retrieval", "query language model", "bias-variance"]} {"id": "kp20k_training_134", "title": "Qualitative constraint satisfaction problems: An extended framework with landmarks", "abstract": "Dealing with spatial and temporal knowledge is an indispensable part of almost all aspects of human activity. The qualitative approach to spatial and temporal reasoning, known as Qualitative Spatial and Temporal Reasoning (QSTR), typically represents spatial/temporal knowledge in terms of qualitative relations (e.g., to the east of, after), and reasons with spatial/temporal knowledge by solving qualitative constraints. When formulating qualitative constraint satisfaction problems (CSPs), it is usually assumed that each variable could be \"here, there and everywhere\". Practical applications such as urban planning, however, often require a variable to take its value from a certain finite domain, i.e. it is required to be 'here or there, but not everywhere'. Entities in such a finite domain often act as reference objects and are called \"landmarks\" in this paper. The paper extends the classical framework of qualitative CSPs by allowing variables to take values from finite domains. The computational complexity of the consistency problem in this extended framework is examined for the five most important qualitative calculi, viz. Point Algebra, Interval Algebra, Cardinal Relation Algebra, RCC5, and RCC8. We show that all these consistency problems remain in NP and provide, under practical assumptions, efficient algorithms for solving basic constraints involving landmarks for all these calculi. ", "keywords": ["qualitative spatial and temporal reasoning", "qualitative calculi", "constraint satisfaction", "landmarks"]} {"id": "kp20k_training_135", "title": "the interaction of software prefetching with ilp processors in shared-memory systems", "abstract": "Current microprocessors aggressively exploit instruction-level parallelism (ILP) through techniques such as multiple issue, dynamic scheduling, and non-blocking reads. Recent work has shown that memory latency remains a significant performance bottleneck for shared-memory multiprocessor systems built of such processors. This paper provides the first study of the effectiveness of software-controlled non-binding prefetching in shared memory multiprocessors built of state-of-the-art ILP-based processors. We find that software prefetching results in significant reductions in execution time (12% to 31%) for three out of five applications on an ILP system.
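A small Monte Carlo check of the decomposition underlying the bias-variance record kp20k_training_133 above, MSE = bias^2 + variance, on an invented shrinkage-estimator toy problem rather than on query models.

```python
import numpy as np

# Estimate a mean with a deliberately biased shrinkage estimator (0.8 * sample
# mean): lower variance, nonzero bias. The decomposition should balance.
rng = np.random.default_rng(0)
true_mu, n, reps = 1.0, 20, 100_000
estimates = 0.8 * rng.normal(true_mu, 1.0, size=(reps, n)).mean(axis=1)

bias_sq = (estimates.mean() - true_mu) ** 2   # ~ 0.04 = (0.2)^2
variance = estimates.var()                    # ~ 0.032 = 0.64 / 20
mse = ((estimates - true_mu) ** 2).mean()

print(bias_sq + variance, mse)  # the two numbers agree up to sampling noise
```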
However, compared to previous-generation systems, software prefetching is significantly less effective in reducing the memory stall component of execution time on an ILP system. Consequently, even after adding software prefetching, memory stall time accounts for over 30% of the total execution time in four out of five applications on our ILP system. This paper also investigates the interaction of software prefetching with memory consistency models on ILP-based multiprocessors. In particular, we seek to determine whether software prefetching can equalize the performance of sequential consistency (SC) and release consistency (RC). We find that even with software prefetching, for three out of five applications, RC provides a significant reduction in execution time (15% to 40%) compared to SC", "keywords": ["prefetching", "applications", "generation", "performance", "reduction", "art", "timing", "account", "instruction-level parallelism", "model", "paper", "sequential consistency", "component", "processor", "interaction", "shared memory", "software", "latency", "systems", "exploit", "shared memory multiprocessors", "memorialized", "binding", "consistency", "ilp", "effect", "dynamic scheduling"]} {"id": "kp20k_training_136", "title": "A scalable and extensible framework for query answering over RDF", "abstract": "The Semantic Web is gaining increasing interest as a way to fulfill the need for sharing, retrieving, and reusing information. In this context, the Resource Description Framework (RDF) has been conceived to provide an easy way to represent any kind of data and metadata, according to a lightweight model and syntaxes for serialization (RDF/XML, N3, etc.). Although RDF has the advantage of being general and simple, it cannot be used as a storage model as-is, since it can be easily shown that even simple management operations involve serious performance limitations. In this paper we present a framework which provides a flexible and persistent layer relying on a novel storage model that guarantees good scalability and performance of query evaluation. The approach is based on the notion of a construct, which represents a concept of the domain of interest. This makes the approach easily extensible and independent of the specific knowledge representation language. Based on this representation, reasoning capabilities are supported by a rule-based engine. Finally we present experimental results over real world scenarios to demonstrate the feasibility of the approach", "keywords": ["rdf", "rdfs", "query answering", "metamodel"]} {"id": "kp20k_training_137", "title": "Using interactive 3-D visualization for public consultation", "abstract": "3-D models are often developed to aid the design and development of indoor and outdoor environments. This study explores the use of interactive 3-D visualization for public consultation for outdoor environments. Two visualization techniques (interactive 3-D visualization and static visualization) were compared using the method of individual testing. Visualization technique had no effect on the perception of the represented outdoor environment, but there was a preference for using interactive 3-D. Previously established mechanisms for a preference for interactive 3-D visualization in other domains were confirmed in the perceived strengths and weaknesses of visualization techniques. In focus-group discussion, major preferences included provision of more information through interactive 3-D visualization and wider access to information for public consultation.
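For the RDF record kp20k_training_136 above, here is a minimal query-answering sketch using the third-party rdflib package (assumed installed); the paper's contribution is a persistent, construct-based storage layer and rule-based reasoning on top of this kind of plain RDF processing.

```python
from rdflib import Graph

# A tiny RDF graph in Turtle syntax (invented example data).
ttl = """
@prefix ex: <http://example.org/> .
ex:alice ex:knows ex:bob .
ex:bob   ex:knows ex:carol .
"""
g = Graph()
g.parse(data=ttl, format="turtle")

# SPARQL query: two-hop 'knows' paths.
q = """
SELECT ?a ?c WHERE {
  ?a <http://example.org/knows> ?b .
  ?b <http://example.org/knows> ?c .
}
"""
for row in g.query(q):
    print(row.a, row.c)  # -> ex:alice ex:carol
```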
From a users' perspective, the findings confirm the strong potential of interactive 3-D visualization for public consultation. ", "keywords": ["virtual reality", "visualization", "public consultation", "outdoor environment", "e-government"]} {"id": "kp20k_training_138", "title": "Polymorphic nodal elements and their application in discontinuous Galerkin methods", "abstract": "In this work, we discuss two different but related aspects of the development of efficient discontinuous Galerkin methods on hybrid element grids for the computational modeling of gas dynamics in complex geometries or with adapted grids. In the first part, a recursive construction of different nodal sets for hp finite elements is presented. They share the property that the nodes along the sides of the two-dimensional elements and along the edges of the three-dimensional elements are the Legendre-Gauss-Lobatto points. The different nodal elements are evaluated by computing the Lebesgue constants of the corresponding Vandermonde matrix. In the second part, these nodal elements are applied within the modal discontinuous Galerkin framework. We still use a modal-based formulation, but introduce a nodal-based integration technique to reduce computational cost in the spirit of pseudospectral methods. We illustrate the performance of the scheme on several large-scale applications and discuss its use in a recently developed space-time expansion discontinuous Galerkin scheme", "keywords": ["discontinuous galerkin", "nodal", "modal", "polynomial interpolation", "hp finite elements", "lebesgue constants", "quadrature free", "unstructured", "triangle", "quadrilateral", "polygonal", "tetrahedron", "hexahedron", "prism", "pentahedron", "pyramid"]} {"id": "kp20k_training_139", "title": "Deployment-Based Solution for Prolonging Lifetime in Sensor Networks with Multiple Mobile Sinks", "abstract": "Enhancing sensor network lifetime is an important research topic for wireless sensor networks. Solutions based on linear programming, clustering, controlled non-uniform node distributions and mobility are presented separately in the literature. Even so, the problem is still open and not fully solved. Drawbacks exist for all the above solutions when considered separately. Perhaps a solution that is able to provide composite benefits of some of them could better solve the problem. In this paper, we introduce a solution for prolonging the lifetime of sensor networks. The proposed solution is based on a deployment strategy of multiple mobile sinks. In our proposal, data traffic is directed away from the network center toward the network periphery where sinks would be initially deployed. Sinks stay stationary while collecting the data reports that travel over the network perimeter toward them. Eventually perimeter nodes would be exposed to a peeling phenomenon which results in partitioning one or more sinks from their one-hop neighbors. The partitioned sinks move in discrete steps following the direction of the progressive peeling towards the network center. The mechanism maintains the network connectivity and delays the occurrence of partition. Moreover, it balances the load among nodes and reduces the energy consumption. The performance of the proposed protocol is evaluated using intensive simulations.
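The Lebesgue-constant evaluation mentioned in the nodal-elements record kp20k_training_138 above can be sketched in one dimension: clustered Gauss-Lobatto-style nodes give a far smaller constant than equispaced nodes. The paper evaluates nodal sets on triangles and tetrahedra; this 1-D version is only illustrative.

```python
import numpy as np

def lebesgue_constant(nodes, ngrid=2000):
    """Max over [-1, 1] of the Lebesgue function, i.e. the sum of absolute
    Lagrange basis polynomials for the given interpolation nodes (1-D sketch)."""
    x = np.linspace(-1.0, 1.0, ngrid)
    lam = np.zeros_like(x)
    for j, xj in enumerate(nodes):
        lj = np.ones_like(x)
        for k, xk in enumerate(nodes):
            if k != j:
                lj *= (x - xk) / (xj - xk)
        lam += np.abs(lj)
    return lam.max()

n = 12
equispaced = np.linspace(-1.0, 1.0, n + 1)
clustered = -np.cos(np.pi * np.arange(n + 1) / n)  # Chebyshev-Gauss-Lobatto points
print(lebesgue_constant(equispaced), lebesgue_constant(clustered))
# The clustered node set has a dramatically smaller Lebesgue constant.
```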
The results show the efficiency (in terms of both reliability and connectivity) of our deployment strategy with the associated data collection protocol", "keywords": ["sensor networks", "data collection", "mobile sinks", "deployment"]} {"id": "kp20k_training_140", "title": "design and applications of an algorithm benchmark system in a computational problem solving environment", "abstract": "Benchmark tests are often used to evaluate the quality of products by a set of common criteria. In this paper we describe a computational problem solving environment based on open source codes and an algorithm benchmark system, which is embedded in the environment as a plug-in system. The algorithm benchmark system can be used to compare the performance of various algorithms or to evaluate the behavior of an algorithm with different input instances. The current implementation allows users to compare or evaluate algorithms written in C/C++. Some examples of the algorithm benchmark system that evaluate memory utilization, time complexity and the output of algorithms are also presented. Algorithm benchmarking reinforces the learning effect; students can not only comprehend the performance of the respective algorithms but also write their own programs to challenge the best known results", "keywords": ["problem-solving environment", "algorithm visualization", "benchmark", "knowledge portal"]} {"id": "kp20k_training_141", "title": "automated performance tuning", "abstract": "This tutorial presents automated techniques for implementing and optimizing numeric and symbolic libraries on modern computing platforms including SSE, multicore, and GPU. Obtaining high performance requires effective use of the memory hierarchy, short vector instructions, and multiple cores. Highly tuned implementations are difficult to obtain and are platform dependent. For example, Intel Core i7 980 XE has a peak floating point performance of over 100 GFLOPS and the NVIDIA Tesla C870 has a peak floating point performance of over 500 GFLOPS, however, achieving close to peak performance on such platforms is extremely difficult. Consequently, automated techniques are now being used to tune and adapt high performance libraries such as ATLAS (math-atlas.sourceforge.net), PLASMA (icl.cs.utk.edu/plasma) and MAGMA (icl.cs.utk.edu/magma) for dense linear algebra, OSKI (bebop.cs.berkeley.edu/oski) for sparse linear algebra, FFTW (www.fftw.org) for the fast Fourier transform (FFT), and SPIRAL (www.spiral.net) for a wide class of digital signal processing (DSP) algorithms. Intel currently uses SPIRAL to generate parts of their MKL and IPP libraries", "keywords": ["autotuning", "high-performance computing", "vectorization", "code generation and optimization", "parallelism"]} {"id": "kp20k_training_142", "title": "Explicit solutions for a class of indirect pharmacodynamic response models", "abstract": "Explicit solutions for four ordinary differential equation (ODE)-based types of indirect response models are presented. These response models were introduced by Dayneka et al. in 1993 [J. Pharmacokinet. Biopharm. 21 (1993) 457] to describe pharmacodynamic responses utilizing inhibitory or stimulatory Emax-type functions. The explicit solutions are expressed in terms of hypergeometric 2F1 functions and their analytical continuations. A practical application is demonstrated for modeling the kinetics of drug action for ibandronate, a potent bisphosphonate that suppresses bone turnover, resulting in a reduction in the markers of bone turnover.
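For readers unfamiliar with the models in kp20k_training_142, here is a sketch of the ODE form of a type-I (inhibition-of-production) indirect response model with invented parameters; the paper's point is that such models admit explicit solutions in terms of 2F1 (available as scipy.special.hyp2f1), avoiding repeated numerical integration like the one below.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_in, k_out = 10.0, 0.5   # zero-order production, first-order loss (invented)
imax, ic50 = 1.0, 2.0     # inhibitory Emax-function parameters (invented)
c0, kel = 20.0, 0.3       # mono-exponential drug concentration (invented)

def rhs(t, r):
    c = c0 * np.exp(-kel * t)                           # drug concentration C(t)
    return [k_in * (1.0 - imax * c / (ic50 + c)) - k_out * r[0]]

# Start at the baseline response R0 = k_in / k_out and integrate over 48 h.
sol = solve_ivp(rhs, (0.0, 48.0), [k_in / k_out], dense_output=True)
print(sol.y[0, -1])  # response recovers toward baseline as the drug washes out
```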
Model evaluation times roughly ten times shorter with the explicit solution than with the differential equation implementation may benefit situations where a large number of model evaluations is needed, such as clinical trial simulations and parameter estimation. ", "keywords": ["indirect response model", "explicit solution", "hypergeometric function 2f1", "nonmem"]} {"id": "kp20k_training_143", "title": "a web-based consumer-oriented intelligent decision support system for personalized e-services", "abstract": "Due to the rapid advancement of electronic commerce and web technologies in recent years, the concepts and applications of decision support systems have been significantly extended. One quickly emerging research topic is the consumer-oriented decision support system that provides functional support to consumers for efficiently and effectively making personalized decisions. In this paper we present an integrated framework for developing web-based consumer-oriented intelligent decision support systems to facilitate all phases of the consumer decision-making process in business-to-consumer e-services applications. Major application functional modules in the system framework include consumer and personalized management, navigation and search, evaluation and selection, planning and design, community and collaboration management, auction and negotiation, transactions and payments, quality and feedback control, as well as communications and information distribution. System design and implementation methods will be illustrated using an example. Also explored are various potential e-services application domains including e-tourism and e-investment", "keywords": ["personalization", "decision making process", "intelligent decision support system", "e-services"]} {"id": "kp20k_training_144", "title": "Efficient segment-based video transcoding proxy for mobile multimedia services", "abstract": "To support various bandwidth requirements for mobile multimedia services in future heterogeneous mobile environments, such as portable notebooks, personal digital assistants (PDAs), and 3G cellular phones, a transcoding video proxy is usually necessary to provide mobile clients with adapted video streams by not only transcoding videos to meet different needs on demand, but also caching them for later use. Traditional proxy technology is not applicable to a video proxy because it is less cost-effective to cache the complete videos to fit all kinds of clients in the proxy. Since transcoded video objects have inheritance dependency between different bit-rate versions, we can use this property to amortize the retransmission overhead from transcoding other objects cached in the proxy. In this paper, we propose the object relation graph (ORG) to manage the static relationships between video versions and an efficient replacement algorithm to dynamically manage video segments cached in the proxy. Specifically, we formulate a transcoding time constrained profit function to evaluate the profit from caching each version of an object. The profit function considers not only the sum of the costs of caching individual versions of an object, but also the transcoding relationship among these versions. In addition, an effective data structure, the cached object relation tree (CORT), is designed to facilitate the management of multiple versions of different objects cached in the transcoding proxy.
Experimental results show that the proposed algorithm outperforms companion schemes in terms of the byte-hit ratios and the startup latency", "keywords": ["transcoding", "segment caching", "multimedia", "mobile network"]} {"id": "kp20k_training_145", "title": "Automated process planning method to machine a B-Spline free-form feature on a mill-turn center", "abstract": "In this paper, we present a methodology for automating the process planning and NC code generation for a widely encountered class of free-form features that can be machined on a 3-axis mill-turn center. The free-form feature family that is considered is that of extruded protrusions whose cross-section is a closed, periodic B-Spline curve. In this methodology, for machining a part with a B-Spline protrusion located at the free end, the part is first rough-turned to the maximum profile diameter of the B-Spline, followed by rough profile cutting and finish profiling with axially mounted end mill tools. The identification and sequencing of machining volumes is completely automated, as is the generation of actual NC code. The approach supports both convex and non-convex profiles. In the case of non-convex profiles, the process planning algorithm ensures that there is no gouging of the work piece by the tool. The algorithm also identifies when sections of the tool path lie outside the work piece and utilizes rapid traverses in these regions to reduce cutting time. This methodology provides integrated turn-mill process planning in which the process is fully automated from design onward with no user intervention, making the overall process planning efficient. The algorithm was tested on several examples, and test parts were run on a Moriseiki mill-turn center using the unmodified NC code obtained from the implementation. The parts that were produced met the dimensional specifications of the desired part. ", "keywords": ["computer-aided process planning", "feature-based design", "computer-aided manufacturing"]} {"id": "kp20k_training_146", "title": "Stability results for two classes of linear time-delay and hybrid systems", "abstract": "The stability of linear time-delay systems with point internal delays is difficult to deal with in practice because their characteristic equation is usually of transcendental type rather than polynomial type. This feature usually causes the system to possess an infinite number of poles. In this paper, stability tests for this class of systems are obtained based either on extensions of classical tests applicable to delay-free systems or on approaches within the framework of two-dimensional digital filters. Some of those two-dimensional stability tests are also proved to be useful for stability testing of a common class of linear hybrid systems which involve coupled continuous and digital substates, after a slight \"ad-hoc\" adaptation of the tests for that situation", "keywords": ["stability ", "man-machine systems", "time series analysis"]} {"id": "kp20k_training_147", "title": "A pseudo-nearest-neighbor approach for missing data recovery on Gaussian random data sets", "abstract": "Missing data handling is an important preparation step for most data discrimination or mining tasks. Inappropriate treatment of missing data may cause large errors or false results.
In this paper, we study the effect of a missing data recovery method, namely the pseudo-nearest-neighbor substitution approach, on Gaussian distributed data sets that represent typical cases in data discrimination and data mining applications. The error rate of the proposed recovery method is evaluated by comparing the clustering results of the recovered data sets to the clustering results obtained on the originally complete data sets. The results are also compared with those obtained by applying two other missing data handling methods, the constant default value substitution and the missing data ignorance (non-substitution) methods. The experimental results provide valuable insight into improving the accuracy of data discrimination and knowledge discovery on large data sets containing missing values", "keywords": ["missing data", "missing data recovery", "data imputation", "data clustering", "gaussian data distribution", "data mining"]} {"id": "kp20k_training_148", "title": "A Lagrangian relaxation approach to the edge-weighted clique problem", "abstract": "The b-clique polytope CP_n^b is the convex hull of the node and edge incidence vectors of all subcliques of size at most b of a complete graph on n nodes. Including the Boolean quadric polytope QP_n = CP_n^n as a special case and being closely related to the quadratic knapsack polytope, it has received considerable attention in the literature. In particular, the max-cut problem is equivalent to optimizing a linear function over CP_n^n. The problem of optimizing linear functions over CP_n^b has so far been approached via heuristic combinatorial algorithms and cutting-plane methods. We study the structure of CP_n^b in further detail and present a new computational approach to the linear optimization problem based on the idea of integrating cutting planes into a Lagrangian relaxation of an integer programming problem that Balas and Christofides had suggested for the traveling salesman problem. In particular, we show that the separation problem for tree inequalities becomes polynomial in our Lagrangian framework. Finally, computational results are presented", "keywords": ["mathematical programming", "clique polytope", "cut polytope", "cutting plane", "boolean quadric polytope", "quadratic knapsack polytope", "lagrangian relaxation"]} {"id": "kp20k_training_149", "title": "resource aware programming in the pixie os", "abstract": "This paper presents Pixie, a new sensor node operating system designed to support the needs of data-intensive applications. These applications, which include high-resolution monitoring of acoustic, seismic, acceleration, and other signals, involve high data rates and extensive in-network processing. Given the fundamentally resource-limited nature of sensor networks, a pressing concern for such applications is their ability to receive feedback on, and adapt their behavior to, fluctuations in both resource availability and load. The Pixie OS is based on a dataflow programming model built around the concept of resource tickets, a core abstraction for representing resource availability and reservations. By giving the system visibility and fine-grained control over resource management, a broad range of policies can be implemented. To shield application programmers from the burden of managing these details, Pixie provides a suite of resource brokers, which mediate between low-level physical resources and higher-level application demands. Pixie is implemented in NesC and supports limited backwards compatibility with TinyOS.
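The abstract above does not give Pixie's actual NesC interfaces, so the following Python sketch only illustrates the resource-ticket idea in the abstract's own terms: a broker mediates between a fixed physical budget and application demands, and a denied request is the signal for the application to adapt. All class and method names are invented.

# Hypothetical sketch of the resource-ticket idea described above. Pixie itself
# is written in NesC; the names here are invented for illustration only.
import itertools, time

class Ticket:
    _ids = itertools.count()
    def __init__(self, resource, amount, expires_at):
        self.id = next(Ticket._ids)
        self.resource, self.amount, self.expires_at = resource, amount, expires_at
    def valid(self):
        return time.monotonic() < self.expires_at

class Broker:
    """Mediates between a physical resource budget and application demands."""
    def __init__(self, budgets):
        self.budgets = dict(budgets)          # e.g. {"energy_mj": 500, "radio_ms": 200}
    def request(self, resource, amount, ttl=1.0):
        if self.budgets.get(resource, 0) >= amount:
            self.budgets[resource] -= amount  # reserve: availability is made explicit
            return Ticket(resource, amount, time.monotonic() + ttl)
        return None                           # demand exceeds availability: caller must adapt

broker = Broker({"radio_ms": 100})
t = broker.request("radio_ms", 30)
print("granted" if t and t.valid() else "denied", "| remaining:", broker.budgets["radio_ms"])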
We describe Pixie in the context of two applications: limb motion analysis for patients undergoing treatment for motion disorders, and acoustic target detection using a network of microphones. We present a range of experiments demonstrating Pixie's ability to accurately account for resource availability at runtime and enable a range of both generic and application-specific adaptations", "keywords": ["network", "sensor", "motion analysis", "applications", "policy", "signaling", "context", "acceleration", "sensor networks", "concept", "experience", "account", "paper", "resource reservations", "resource-aware programming", "runtime", "motion", "control", "program modelling", "management", "visibility", "program", "wireless sensor networks", "availability", "detection", "systems", "abstraction", "behavior", "operating system", "data", "process", "support", "physical", "feedback", "core", "monitor", "compatibility", "resource management", "dataflow", "resource", "programmer", "generic", "tinyos"]} {"id": "kp20k_training_150", "title": "Highly Undersampled Magnetic Resonance Image Reconstruction via Homotopic l(0)-Minimization", "abstract": "In clinical magnetic resonance imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in compressive sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex l(1)-minimization problem. Although l(1)-based techniques are extremely powerful, they inherently require a degree of oversampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the CS paradigm based on homotopic approximation of the l(0) quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard CS methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented", "keywords": ["compressed sensing", "compressive sensing ", "image reconstruction", "magnetic resonance imaging ", "nonconvex optimization"]} {"id": "kp20k_training_151", "title": "An incremental verification algorithm for real-time systems", "abstract": "We present an incremental algorithm for model checking real-time systems against the requirements specified in the real-time extension of modal mu-calculus. Using this algorithm, we avoid the repeated construction and analysis of the whole state-space as the system evolves over time. We use a finite representation of the system, like most other algorithms on real-time systems. We construct and update a graph (called TSG) that is derived from the region graph and the formula. This allows us to halt the construction of this graph when enough nodes have been explored to determine the truth of the formula. The TSG is minimal in the sense of partitioning the infinite state space into regions, and it expresses a relation on the set of regions of the partition. We use the structure of the formula to derive this partition.
When a change is applied to the timed automaton of the system, we find a new partition from the current partition and the TSG with minimum cost", "keywords": ["model-checking", "timed mu-calculus", "timed automata", "requirements specification", "labeled transition systems"]} {"id": "kp20k_training_152", "title": "A Survey on Transport Protocols for Wireless Multimedia Sensor Networks", "abstract": "Wireless networks composed of multimedia-enabled resource-constrained sensor nodes have enriched a large set of monitoring sensing applications. In such a communication scenario, however, new challenges in data transmission and energy-efficiency have arisen due to the stringent requirements of those sensor networks. Generally, congested nodes may deplete the energy of the active congested paths toward the sink and incur undesired communication delay and packet dropping, while bit errors during transmission may negatively impact the end-to-end quality of the received data. Many approaches have been proposed to face congestion and provide reliable communications in wireless sensor networks, usually employing some transport protocol that addresses one or both of these issues. Nevertheless, due to the unique characteristics of multimedia-based wireless sensor networks, notably minimum bandwidth demand, bounded delay and reduced energy consumption requirements, communication protocols from traditional scalar wireless sensor networks are not suitable for multimedia sensor networks. In the last decade, such requirements have fostered research in adapting existing protocols or proposing new protocols from scratch. We survey the state of the art of transport protocols for wireless multimedia sensor networks, addressing the recent developments and proposed strategies for congestion control and loss recovery. Future research directions are also discussed, outlining the remaining challenges and promising investigation areas", "keywords": ["wireless multimedia sensor networks", "transport protocols", "congestion control", "loss recovery", "survey"]} {"id": "kp20k_training_153", "title": "Two integrable couplings of the Tu hierarchy and their Hamiltonian structures", "abstract": "The double integrable couplings of the Tu hierarchy are worked out by use of the vector loop algebras G6 and G9, respectively. Also the Hamiltonian structures of the obtained system are given by the quadratic-form identity", "keywords": ["tu hierarchy", "vector loop algebra", "integrable couplings", "quadratic-form identity", "hamiltonian structure"]} {"id": "kp20k_training_154", "title": "Dynamic simulation of bioreactor systems using orthogonal collocation on finite elements", "abstract": "The dynamics of continuous biological processes is addressed in this paper. Numerical simulation of a conventional activated sludge process shows that, despite the large differences in the dynamics of the species investigated, the orthogonal collocation on finite elements technique with three internal collocation points and four elements (OCFE-34) gives excellent numerical results for bioreactor models up to a Peclet number of 50. It is shown that there is little improvement in numerical accuracy when a much larger number of internal collocation points is introduced. Above a Peclet number of 50, considered to be large for this process, simulation with the global orthogonal collocation (GOC) technique is infeasible.
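To make the numerical setting above concrete, here is a minimal method-of-lines sketch for a 1-D advection-dispersion equation of the kind that underlies such axial-dispersion bioreactor models, dc/dt = (1/Pe) d2c/dx2 - dc/dx. The grid size, boundary handling and Peclet number are illustrative; this is not the paper's activated sludge model.

# Minimal method-of-lines (MOL) sketch for 1-D advection-dispersion,
#   dc/dt = (1/Pe) d2c/dx2 - dc/dx  on x in [0, 1],
# with illustrative boundary handling; not the paper's full bioreactor model.
import numpy as np
from scipy.integrate import solve_ivp

pe, n = 50.0, 101                 # Peclet number and grid size
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]

def rhs(t, c):
    dc = np.zeros_like(c)
    diff = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2    # central second difference
    adv = (c[1:-1] - c[:-2]) / dx                    # upwind first difference
    dc[1:-1] = diff / pe - adv
    dc[0] = 0.0                                      # fixed inlet concentration
    dc[-1] = dc[-2]                                  # crude zero-gradient outlet
    return dc                                        # Jacobian is banded (tridiagonal)

c0 = np.zeros(n); c0[0] = 1.0
sol = solve_ivp(rhs, (0.0, 2.0), c0, method="BDF", rtol=1e-6)
print("outlet concentration at t=2:", sol.y[-1, -1])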
Due to the banded nature of its structural matrix, the method of lines (MOL) technique requires the lowest computing time, typically four times less than that required by the OCFE-34. Validation of the hydraulics of an existing pilot-scale subsurface flow (SSF) constructed wetland process using the aforementioned numerical techniques suggested that the OCFE is superior to the MOL and GOC in terms of numerical stability. ", "keywords": ["activated sludge", "orthogonal collocation on finite element", "global orthogonal collocation", "method of lines", "peclet number", "ssf constructed wetland"]} {"id": "kp20k_training_155", "title": "Detecting regularities on grammar-compressed strings", "abstract": "We address the problems of detecting and counting various forms of regularities in a string represented as a straight-line program (SLP), which is essentially a context-free grammar in Chomsky normal form. Given an SLP of size n that represents a string s of length N, our algorithm computes all runs and squares in s in O(n^3 h) time and O(n^2) space, where h is the height of the derivation tree of the SLP. We also show an algorithm to compute all gapped-palindromes in O(n^3 h + g n h log N) time and O(n^2) space, where g is the length of the gap. As one of the main components of the above solution, we propose a new technique called approximate doubling which seems to be a useful tool for a wide range of algorithms on SLPs. Indeed, we show that the technique can be used to compute the periods and covers of the string in O(n^2 h) time and O(n h (n + log^2 N)) time, respectively", "keywords": ["straight-line programs ", "runs", "squares", "gapped palindromes", "compressed string processing algorithms"]} {"id": "kp20k_training_156", "title": "Achieving reusability and composability with a simulation conceptual model", "abstract": "Reusability and composability (R&C) are two important quality characteristics that have been very difficult to achieve in the Modelling and Simulation (M&S) discipline. Reuse provides many technical and economic benefits. Composability has been increasingly crucial for M&S of a system of systems, in which disparate systems are composed with each other. The purpose of this paper is to describe how R&C can be achieved by using a simulation conceptual model (CM) in a community of interest (COI). We address R&C in a multifaceted manner covering many M&S areas (types). M&S is commonly employed where R&C are very much needed by many COIs. We present how a CM developed for a COI can assist in R&C for the design of any type of large-scale complex M&S application in that COI. A CM becomes an asset for a COI and offers significant economic benefits through its broader applicability and more effective utilization", "keywords": ["composability", "conceptual model", "reusability", "simulation", "simulation model development"]} {"id": "kp20k_training_157", "title": "Wavelength decomposition approach for computing blocking probabilities in multicast WDM optical networks", "abstract": "We present an approximate analytical method to evaluate the blocking probabilities in multicast Wavelength Division Multiplexing (WDM) networks without wavelength converters. Our method is based on a wavelength decomposition approach in which the WDM network is divided into layers (colors) and the moment matching method is used to characterize the overflow traffic from one layer to another.
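The exact per-layer analysis referred to above is not given in the abstract. As a hedged illustration of a standard building block for such layer-by-layer blocking computations, the following sketch evaluates the classical Erlang-B recursion and the first moment of the overflow traffic that a moment-matching step would then characterize; loads and server counts are invented.

# Sketch: classical Erlang-B recursion for the blocking probability of one
# layer (color) with offered load a and k wavelengths, plus the mean of the
# overflow traffic that a decomposition approach passes to the next layer.
# Illustrative building block only; the paper's exact per-layer multicast
# analysis is more involved.
def erlang_b(a: float, k: int) -> float:
    b = 1.0
    for i in range(1, k + 1):
        b = a * b / (i + a * b)
    return b

a, k = 8.0, 10                    # offered load (Erlangs) and wavelengths per layer
blocking = erlang_b(a, k)
overflow_mean = a * blocking      # moment matching starts from this first moment
print(f"blocking={blocking:.4f}, overflow mean={overflow_mean:.3f} Erlangs")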
Blocking probabilities for unicast and multicast calls in each layer of the network are derived using an exact approach. We assume static routing with either the First-Fit or the random wavelength assignment algorithm. Results are presented which indicate the accuracy of our method", "keywords": ["blocking probability", "multicast routing", "wdm"]} {"id": "kp20k_training_158", "title": "A new local meshless method for steady-state heat conduction in heterogeneous materials", "abstract": "In this paper a truly meshless method based on the integral form of the energy equation is presented to study the steady-state heat conduction in anisotropic and heterogeneous materials. The presented meshless method is based on the satisfaction of the integral form of the energy balance equation for each sub-particle (sub-domain) inside the material. Moving least square (MLS) approximation is used for approximation of the field variable over the randomly located nodes inside the domain. In the absence of heat generation, the domain integration is eliminated from the formulation of the presented method and the computational efforts are reduced substantially with respect to the conventional MLPG method. A direct method is presented for the treatment of material discontinuity in heterogeneous materials within the presented meshless method. As a practical problem, the heat conduction in fibrous composite materials is studied, and the steady-state heat conduction in unidirectional fiber-matrix composites is investigated. The solution domain includes a small area of the composite system called the representative volume element (RVE). Comparison of numerical results shows that the presented meshless method is a simple, effective, accurate and less costly method for micromechanical analysis of heat conduction in heterogeneous materials", "keywords": ["truly meshless method", "heat conduction problem", "heterogeneous material", "micromechanical analysis", "fiber reinforced composite"]} {"id": "kp20k_training_159", "title": "A Territory Defining Multiobjective Evolutionary Algorithms and Preference Incorporation", "abstract": "We have developed a steady-state elitist evolutionary algorithm to approximate the Pareto-optimal frontiers of multiobjective decision making problems. The algorithm defines a territory around each individual to prevent crowding in any region. This maintains diversity while facilitating the fast execution of the algorithm. We conducted extensive experiments on a variety of test problems and demonstrated that our algorithm performs well against the leading multiobjective evolutionary algorithms. We also developed a mechanism to incorporate preference information in order to focus on the regions that are appealing to the decision maker. Our experiments show that the algorithm approximates the Pareto-optimal solutions in the desired region very well when we incorporate the preference information", "keywords": ["crowding prevention", "evolutionary algorithms", "guidance", "multiobjective optimization", "preference incorporation"]} {"id": "kp20k_training_160", "title": "A holistic frame-of-reference for modelling social systems", "abstract": "Purpose - To outline a philosophical system of inquiry that may be used as a frame-of-reference for modelling social systems. Design/methodology/approach - The paper draws on insights from cognitive science, autopoiesis, management cybernetics and non-linear dynamics.
Findings - The outcome of this paper is an outline of a frame-of-reference to be used as a starting point (or a frame of orientation) for any problem solving/modelling intent or act. The framework highlights the importance of epistemological reflection and the need to avoid any separation of the process of knowing from that of modelling. It also emphasises the importance of inquiry into the assumptions that underpin the methods, tools and techniques that we employ, and into the tacit beliefs of the human actors who use them. Research limitations/implications - The presented frame-of-reference should be regarded as an evolving system of inquiry, one that seeks to incorporate contemporary human insight. Practical implications - Exactly how the frame-of-reference presented in this paper should be exploited within an organisational or educational context is a question to which there is no single \"correct\" answer. What is primarily important, however, is that it should be used to raise the profile of, and disseminate the benefits that accrue from, inquiry which goes beyond the simple application of tools and methods. Originality/value - This paper proposes a new frame-of-reference for modelling social systems that draws on insights from cognitive science, autopoiesis, management cybernetics and non-linear dynamics", "keywords": ["cybernetics", "modelling", "social dynamics"]} {"id": "kp20k_training_161", "title": "A source-synchronous double-data-rate parallel optical transceiver IC", "abstract": "Source-synchronous double-data-rate (DDR) signaling is widely used in electrical interconnects to eliminate clock recovery and to double communication bandwidth. This paper describes the design of a parallel optical transceiver integrated circuit (IC) that uses source-synchronous DDR optical signaling. On the transmit side, two 8-b electrical inputs are multiplexed, encoded, and sent over two high-speed optical links. On the receive side, the procedure is reversed to produce two 8-b electrical outputs. The proposed IC integrates analog vertical-cavity surface-emitting laser (VCSEL) drivers and optical receivers with digital DDR multiplexing, serialization, and deserialization circuits. It was fabricated in a 0.5-μm silicon-on-sapphire (SOS) complementary metal-oxide-semiconductor (CMOS) process. Linear arrays of quad VCSELs and photodetectors were attached to the proposed transceiver IC using flip-chip bonding. A free-space optical link system was constructed to demonstrate correct IC functionality. The test results show successful transceiver operation at a data rate of 500 Mb/s with a 250-MHz DDR clock, achieving a gigabit of aggregate bandwidth. While the proposed DDR scheme is well suited for low-skew fiber-ribbon, free-space, and waveguide optical links, it can also be extended to links with higher skew with the addition of skew-compensation circuitry. To the authors' knowledge, this is the first demonstration of parallel optical transceivers that use source-synchronous DDR signaling", "keywords": ["flip-chip", "high-speed-interconnect", "optical interconnects", "optoelectronic-integrated circuits", "source-synchronous signaling"]} {"id": "kp20k_training_162", "title": "FPCODE: AN EFFICIENT APPROACH FOR MULTI-MODAL BIOMETRICS", "abstract": "Although face recognition technology has progressed substantially, its performance is still not satisfactory due to the challenges of great variations in illumination, expression and occlusion.
This paper aims to improve the accuracy of personal identification, when only a few samples are registered as templates, by integrating multiple biometric modalities, i.e. face and palmprint. In this paper we develop a feature code, namely FPCode, to represent the features of both face and palmprint. Though feature codes have been used for palmprint recognition in the literature, this paper is the first to apply one to face recognition and multi-modal biometrics. As the same feature is used, fusion is much easier. Experimental results show that both feature level and decision level fusion strategies achieve much better performance than single modal biometrics. The proposed approach uses a fixed-length binary (1/0) coding scheme that is very efficient in matching, and at the same time achieves higher accuracy than other fusion methods available in the literature", "keywords": ["face recognition", "palmprint recognition", "gabor feature", "fusion code", "feature fusion"]} {"id": "kp20k_training_163", "title": "Kinetics and energetics during uphill and downhill carrying of different weights", "abstract": "During physically heavy work tasks the musculoskeletal tissues are exposed to both mechanical and metabolic loading. The aim of the present study was to test a biomechanical model for prediction of whole-body energy turnover from kinematic and anthropometric data during load carrying. Total loads of 0, 10 and 20 kg were carried symmetrically or asymmetrically in the hands while walking on a treadmill (4.5 km/h) horizontally, uphill, or downhill, the slopes being 8%. Mean values for the directly measured oxygen uptake ranged for all trials from 0.5 to 2.1 l O2/min, and analysis of variance showed significant differences regarding slope, load carried, and symmetry. The calculated values of oxygen uptake based on the biomechanical model correlated significantly with the directly measured values, fitting the line Y = 0.990X + 0.144, where Y is the estimated and X the measured oxygen uptake in l/min. The close relationship between energy turnover rate measured directly and estimated based on a biomechanical model justifies the assessment of the metabolic load from kinematic data", "keywords": ["biomechanics", "manual material handling"]} {"id": "kp20k_training_164", "title": "Granular prototyping in fuzzy clustering", "abstract": "We introduce a logic-driven clustering in which prototypes are formed and evaluated in a sequential manner. The way of revealing a structure in data is realized by maximizing a certain performance index (objective function) that takes into consideration an overall level of matching (to be maximized) and a similarity level between the prototypes (the component to be minimized). The prototypes identified in the process come with the optimal weight vector that serves to indicate the significance of the individual features (coordinates) in the data grouping represented by the prototype. Since the topologies of these groupings are in general quite diverse, the optimal weight vectors reflect the anisotropy of the feature space, i.e., they show some local ranking of features in the data space.
Having found the prototypes, we consider an inverse similarity problem and show how the relevance of the prototypes translates into their granularity", "keywords": ["direct and inverse matching problem", "granular prototypes", "information granulation", "logic-based clustering", "similarity index", "t- and s-norms"]} {"id": "kp20k_training_165", "title": "empirical evaluation of latency-sensitive application performance in the cloud", "abstract": "Cloud computing platforms enable users to rent computing and storage resources on-demand to run their networked applications and employ virtualization to multiplex virtual servers belonging to different customers on a shared set of servers. In this paper, we empirically evaluate the efficacy of cloud platforms for running latency-sensitive multimedia applications. Since multiple virtual machines running disparate applications from independent users may share a physical server, our study focuses on whether dynamically varying background load from such applications can interfere with the performance seen by latency-sensitive tasks. We first conduct a series of experiments on Amazon's EC2 system to quantify the CPU, disk, and network jitter and throughput fluctuations seen over a period of several days. We then turn to a laboratory-based cloud and systematically introduce different levels of background load and study the ability to isolate applications under different settings of the underlying resource control mechanisms. We use a combination of micro-benchmarks and two real-world applications--the Doom 3 game server and Apple's Darwin Streaming Server--for our experimental evaluation. Our results reveal that the jitter and the throughput seen by a latency-sensitive application can indeed degrade due to background load from other virtual machines. The degree of interference varies from resource to resource and is the most pronounced for disk-bound latency-sensitive tasks, which can degrade by nearly 75% under sustained background load. We also find that careful configuration of the resource control mechanisms within the virtualization layer can mitigate, but not eliminate, this interference", "keywords": ["cloud computing", "virtualization", "multimedia", "resource isolation"]} {"id": "kp20k_training_166", "title": "The role of Chinese-American scientists in China-US scientific collaboration: a study in nanotechnology", "abstract": "In this paper, we use bibliometric methods and social network analysis to analyze the pattern of China-US scientific collaboration at the individual level in nanotechnology. Results show that Chinese-American scientists have been playing an important role in China-US scientific collaboration. We find that China-US collaboration in nanotechnology mainly occurs between Chinese and Chinese-American scientists. In the co-authorship network, Chinese-American scientists tend to have higher betweenness centrality. Moreover, the series of policies implemented by the Chinese government to recruit overseas experts seems to contribute substantially to China-US scientific collaboration", "keywords": ["scientific collaboration", "chinese-american", "nanotechnology", "collaboration network"]} {"id": "kp20k_training_167", "title": "Localization of spherical fruits for robotic harvesting", "abstract": "The orange picking robot (OPR) is a project for developing a robot that is able to harvest oranges automatically. One of the key tasks in this robotic application is to identify the fruit and to measure its location in three dimensions.
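The abstract above does not specify the OPR's localization method beyond stereo matching (see the record's keywords), so the following is only a generic sketch of depth-from-disparity triangulation for a calibrated, parallel-axis stereo pair, one common way to recover a fruit's 3-D position; all calibration values are invented.

# Sketch: 3-D localization of a detected fruit centroid from a calibrated,
# parallel-axis stereo pair via depth-from-disparity (Z = f*B/d). Values are
# illustrative; the OPR's actual calibration is not given in the abstract.
import numpy as np

f_px = 800.0        # focal length in pixels
baseline_m = 0.12   # camera separation in meters
cx, cy = 320.0, 240.0

def triangulate(u_left, v_left, u_right):
    d = u_left - u_right               # disparity in pixels (left minus right)
    z = f_px * baseline_m / d          # depth along the optical axis
    x = (u_left - cx) * z / f_px
    y = (v_left - cy) * z / f_px
    return np.array([x, y, z])

print("fruit position (m):", triangulate(352.0, 260.0, 322.0))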
This should be performed using image processing techniques which must be sufficiently robust to cope with variations in lighting conditions and a changing environment. This paper describes the image processing system developed so far to guide automatic harvesting of oranges, which here has been integrated into the first complete full-scale prototype OPR", "keywords": ["fruit harvesting", "color clustering", "stereo matching", "visual tracking"]} {"id": "kp20k_training_168", "title": "game based learning for computer science education", "abstract": "Today, learners increasingly demand innovative and motivating learning scenarios that respond strongly to their habits of media use. One of the many possible solutions to this demand is the use of computer games to support the acquisition of knowledge. This paper reports on the opportunities and challenges of applying a game-based learning scenario for the acquisition of IT knowledge as realized by the German BMBF project SpITKom. After briefly describing the learning potential of Multiplayer Browser Games as well as the educational objectives and target group of the SpITKom project, we will present the main results of a study that was carried out in the first phase of the project to guide the game design. In the course of the study, data were collected regarding (a) the computer game preferences of the target group and (b) the target group's competencies in playing computer games. We will then introduce recommendations that were deduced from the study's findings and that outline the concept and the prototype of the game", "keywords": ["game design", "game based learning", "it knowledge", "learners difficult to reach"]} {"id": "kp20k_training_169", "title": "Efficient evaluation functions for evolving coordination", "abstract": "This paper presents fitness evaluation functions that efficiently evolve coordination in large multi-component systems. In particular, we focus on evolving distributed control policies that are applicable to dynamic and stochastic environments. While it is appealing to evolve such policies directly for an entire system, in most cases the search space is too large for such an approach to provide satisfactory results. Instead, we present an approach based on evolving system components individually, where each component aims to maximize its own fitness function. Though this approach sidesteps the exploding state space concern, it introduces two new issues: (1) how to create component evaluation functions that are aligned with the global evaluation function; and (2) how to create component evaluation functions that are sensitive to the fitness changes of that component, while remaining relatively insensitive to the fitness changes of other components in the system. If the first issue is not addressed, the resulting system becomes uncoordinated; if the second issue is not addressed, the evolutionary process becomes either slow to converge or, worse, incapable of converging to good solutions. This paper shows how to construct evaluation functions that promote coordination by satisfying these two properties. We apply these evaluation functions to the distributed control problem of coordinating multiple rovers to maximize aggregate information collected. We focus on environments that are highly dynamic (changing points of interest), noisy (sensor and actuator faults), and communication-limited (both for observation of other rovers and of points of interest), forcing the rovers to evolve generalized solutions.
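The abstract above does not spell out the evaluation functions. One standard construction with exactly the two properties it names (alignment with the global function, sensitivity to the individual component) is the difference evaluation D_i = G(z) - G(z without component i), sketched here on a toy information-collection measure; the paper's actual functions are not reproduced.

# Sketch of one standard construction for aligned, component-sensitive fitness:
# the difference evaluation D_i = G(z) - G(z_{-i}), i.e. global utility with
# and without rover i's observations. This toy G counts points of interest
# covered by at least one rover; it is illustrative, not the paper's G.
def global_eval(observed_sets):
    return len(set().union(*observed_sets)) if observed_sets else 0

def difference_eval(observed_sets, i):
    without_i = observed_sets[:i] + observed_sets[i + 1:]
    return global_eval(observed_sets) - global_eval(without_i)

rovers = [{1, 2, 3}, {3, 4}, {4}]          # points of interest seen by each rover
g = global_eval(rovers)
for i in range(len(rovers)):
    # rover 2 gets D = 0: everything it sees is already covered by others
    print(f"rover {i}: D = {difference_eval(rovers, i)} (G = {g})")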
On this difficult coordination problem, the control policy evolved using aligned and component-sensitive evaluation functions outperforms policies evolved using global evaluation functions by up to 400%. More notably, the performance improvements increase when the problems become more difficult (larger, noisier, less communication). In addition we provide an analysis of the results by quantifying the two characteristics (alignment and sensitivity, discussed above), leading to a systematic study of the presented fitness functions", "keywords": ["evolution strategies", "distributed control", "fitness evaluation"]} {"id": "kp20k_training_170", "title": "Contention-free communication scheduling for array redistribution", "abstract": "Array redistribution is often required in programs on distributed memory parallel computers. It is essential to use efficient algorithms for redistribution; otherwise the performance of the programs may degrade considerably. The redistribution overheads consist of two parts: index computation and interprocessor communication. If there is no communication scheduling in a redistribution algorithm, communication contention may occur, which increases the communication waiting time. In order to solve this problem, in this paper, we propose a technique to schedule the communication so that it becomes contention-free. Our approach initially generates a communication table to represent the communication relations among sending nodes and receiving nodes. According to the communication table, we then generate another table named the communication scheduling table. Each column of the communication scheduling table is a permutation of the receiving node numbers in a communication step. Thus the communications in our redistribution algorithm are contention-free. Our approach can deal with multi-dimensional shape-changing redistribution", "keywords": ["parallelizing compilers", "hpf", "array redistribution", "communication scheduling", "distributed memory machines"]} {"id": "kp20k_training_171", "title": "Quadratic weighted median filters for edge enhancement of noisy images", "abstract": "Quadratic Volterra filters are effective in image sharpening applications. The linear combination of polynomial terms, however, yields poor performance in noisy environments. Weighted median (WM) filters, in contrast, are well known for their outlier suppression and detail preservation properties. The WM sample selection methodology is naturally extended to the quadratic sample case, yielding a filter structure referred to as quadratic weighted median (QWM) that exploits the higher order statistics of the observed samples while simultaneously being robust to outliers arising in the higher order statistics of environment noise. Through statistical analysis of higher order samples, it is shown that, although the parent Gaussian distribution is light-tailed, the higher order terms exhibit heavy-tailed distributions. The optimal combination of terms contributing to a quadratic system, i.e., cross and square, is approached from a maximum likelihood perspective which yields the WM processing of these terms. The proposed QWM filter structure is analyzed through determination of the output variance and breakdown probability. The studies show that the QWM exhibits lower variance and breakdown probability, indicating the robustness of the proposed structure. The performance of the QWM filter is tested on constant regions, edges and real images, and compared to its weighted-sum dual, the quadratic Volterra filter.
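As a concrete illustration of the core operation named above, a minimal weighted-median sketch follows: the output is the sample minimizing the weighted sum of absolute deviations. The full QWM filter additionally forms cross and square terms of the window samples and feeds them to such a WM stage; that part is not reproduced here.

# Sketch of the weighted median (WM) selection underlying the QWM structure.
# The full QWM filter also feeds cross and square terms of the window into
# such a WM stage; only the core WM operation is shown.
import numpy as np

def weighted_median(samples, weights):
    order = np.argsort(samples)
    s, w = np.asarray(samples, dtype=float)[order], np.asarray(weights, dtype=float)[order]
    csum = np.cumsum(w)
    idx = np.searchsorted(csum, 0.5 * csum[-1])   # first index reaching half the total weight
    return s[idx]

window = [10.0, 12.0, 11.0, 250.0, 13.0]          # 250 is an impulsive outlier
print(weighted_median(window, [1, 2, 3, 1, 2]))   # prints 12.0: the outlier is rejected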
The simulation results show that the proposed method simultaneously suppresses the noise and enhances image details. Compared with the quadratic Volterra sharpener, the QWM filter exhibits superior qualitative and quantitative performance in noisy image sharpening", "keywords": ["asymptotic tail mass", "maximum likelihood estimation", "robust image sharpening", "unsharp masking", "volterra filtering", "weighted median filtering"]} {"id": "kp20k_training_172", "title": "ELRA - European language resources association-background, recent developments and future perspectives", "abstract": "The European Language Resources Association (ELRA) was founded in 1995 with the mission of providing language resources (LR) to European research institutions and companies. In this paper we describe the background, the mission and the major activities since then", "keywords": ["evaluation", "language resources", "production", "standards", "validation"]} {"id": "kp20k_training_173", "title": "MULTIPLE CONCURRENCE OF MULTI-PARTITE QUANTUM SYSTEM", "abstract": "We propose a new way of describing the global entanglement property of a multi-partite pure-state quantum system. Based on the idea of bipartite concurrence, by dividing the multi-partite quantum system into two subsystems, a combination of all the bipartite concurrences of a multi-partite quantum system is used to describe the entanglement property of the multi-partite system. We derive the analytical results for the GHZ-state and the W-state with an arbitrary number of qubits, and for the cluster state with no more than 6 particles", "keywords": ["multiple concurrence of multi-partite quantum system", "entanglement", "ghz-state", "w-state", "cluster state"]} {"id": "kp20k_training_174", "title": "Tolerant information retrieval with backpropagation networks", "abstract": "Neural networks can learn from human decisions and preferences. Especially in human-computer interaction, adaptation to the behaviour and expectations of the user is necessary. In information retrieval, an important area within human-computer interaction, expectations are difficult to meet. The inherently vague nature of information retrieval has led to the application of vague processing techniques. Neural networks seem to have great potential to model the cognitive processes involved more appropriately. Current models based on neural networks and their implications for human-computer interaction are analysed. COSIMIR (Cognitive Similarity Learning in Information Retrieval), an innovative model integrating human knowledge into the core of the retrieval process, is presented. It applies backpropagation to information retrieval, integrating human-centred, soft and tolerant computing into the core of the retrieval process. A further backpropagation model, the transformation network for heterogeneous data sources, is discussed. Empirical evaluations have provided promising results", "keywords": ["backpropagation", "human-computer interaction", "information retrieval", "neural networks", "similarity", "spreading activation"]} {"id": "kp20k_training_175", "title": "Approximation of mean time between failure when a system has periodic maintenance", "abstract": "This paper describes a simple technique for estimating the mean time between failure (MTBF) of a system that has periodic maintenance at regular intervals.
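The paper's specific approximation is not given in the abstract. For orientation, the following sketch evaluates the standard renewal-theory expression MTBF = (integral of R(t) from 0 to T) / (1 - R(T)) for a system restored to as-new condition by maintenance every T hours, with an illustrative Weibull reliability; this shows the kind of quantity being approximated, not the paper's method.

# Sketch: standard renewal-theory MTBF under perfect periodic maintenance
# every T hours, MTBF = (integral of R(t), 0..T) / (1 - R(T)). Illustrative
# only; not the paper's specific approximation. Weibull parameters invented.
import numpy as np
from scipy.integrate import quad

beta, eta = 2.0, 5000.0                    # Weibull shape and scale (hours)
reliability = lambda t: np.exp(-(t / eta) ** beta)

def mtbf_with_pm(T: float) -> float:
    num, _ = quad(reliability, 0.0, T)     # expected up-time per maintenance cycle
    return num / (1.0 - reliability(T))    # divided by failure probability per cycle

for T in (500.0, 1000.0, 2000.0):
    print(f"maintenance every {T:>6.0f} h -> MTBF ~ {mtbf_with_pm(T):,.0f} h")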
This type of maintenance is typically found in high reliability, mission-oriented applications where it is convenient to perform maintenance after the completion of the mission. This approximation technique can greatly simplify the MTBF analysis for large systems. The motivation for this analysis was to understand the nature of the error in the approximation and to develop a means for quantifying that error. This paper provides the derivation of the equations that bound the error that can result when using this approximation method. It shows that, for most applications, the MTBF calculations can be greatly simplified with only a very small sacrifice in accuracy", "keywords": ["mean time between failure ", "periodic maintenance", "reliability modeling"]} {"id": "kp20k_training_176", "title": "Supplying Web 2.0: An empirical investigation of the drivers of consumer transmutation of culture-oriented digital information goods", "abstract": "This paper describes an empirical study of behaviors associated with consumers' creative modification of digital information goods found in Web 2.0 and elsewhere. They are products of culture such as digital images, music, video, news and computer games. We will refer to them as \"digital culture products\". How do consumers who transmute such products differ from those who do not, and from each other? This study develops and tests a theory of consumer behavior in transmuting digital culture products, separating consumers into different groups based on how and why they transmute. With our theory, we posit these groups as having differences of motivation, as measured by product involvement and innovativeness, and of ability as measured by computer skills. A survey instrument to collect data from Internet-capable computer users on the relevant constructs, and on their transmutation activities, is developed and distributed using a web-based survey hosting service. The data are used to test hypotheses that consumers' enduring involvement and innovativeness are positively related to transmutation behaviors, and that computer self-efficacy moderates those relationships. The empirical results support the hypotheses that enduring involvement and innovativeness do motivate transmutation behavior. The data analysis also supports the existence of a moderating relationship of computer self-efficacy with respect to enduring involvement, but not of computer self-efficacy with respect to innovativeness. The findings further indicate that transmutation activities should be expected to impact Web 2.0-oriented companies, both incumbents and start-ups, as they make decisions about how to incorporate consumers into their business models not only as recipients of content, but also as its producers", "keywords": ["information goods", "culture products", "digital entertainment", "creativity", "digital mashup", "remix", "media products", "consumer behavior"]} {"id": "kp20k_training_177", "title": "Polynomial cost for solving IVP for high-index DAE", "abstract": "We show that the cost of solving initial value problems for high-index differential algebraic equations is polynomial in the number of digits of accuracy requested. The algorithm analyzed is built on a Taylor series method developed by Pryce for solving a general class of differential algebraic equations. The problem may be fully implicit, of arbitrarily high fixed index and contain derivatives of any order. 
We give estimates of the residual which are needed to design practical error control algorithms for differential algebraic equations. We show that adaptive meshes are always more efficient than non-adaptive meshes. Finally, we construct sufficiently smooth interpolants of the discrete solution", "keywords": ["differential algebraic equations", "initial value problem", "adaptive step-size control", "taylor series", "structural analysis", "automatic differentiation", "holder mean"]} {"id": "kp20k_training_178", "title": "A Novel Wavelength Hopping Passive Optical Network (WH-PON) for Provision of Enhanced Physical Security", "abstract": "A novel secure wavelength hopping passive optical network (WH-PON) is presented in which physical layer security is introduced to the access network. The WH-PON design uses a pair of matched tunable lasers in the optical line terminal to create a time division multiplexed signal in which each data frame is transmitted at a unique wavelength. The transmission results for a 32-channel WH-PON operating at a data rate of 2.5 Gb/s are presented in this paper. The inherent security of the WH-PON design is verified through a cross-channel eavesdropping attempt at an optical network unit. The results presented verify that the WH-PON provides secure broadband service in the access network", "keywords": ["access network", "broadband", "fiber-to-the-x", "passive optical network", "tunable laser", "wavelength hopping"]} {"id": "kp20k_training_179", "title": "On the Information Flow Required for Tracking Control in Networks of Mobile Sensing Agents", "abstract": "We design controllers that permit mobile agents with distributed or networked sensing capabilities to track (follow) desired trajectories, identify what trajectory information must be distributed to each agent for tracking, and develop methods to minimize the communication needed for the trajectory information distribution", "keywords": ["cooperative control", "dynamical networks", "tracking"]} {"id": "kp20k_training_180", "title": "Analysis of timing-based mutual exclusion with random times", "abstract": "Various timing-based mutual exclusion algorithms have been proposed that guarantee mutual exclusion if certain timing assumptions hold. In this paper, we examine how these algorithms behave when the time for the basic operations is governed by probability distributions. In particular, we are concerned with how often such algorithms succeed in allowing a processor to obtain a critical region and how this success rate depends on the random variables involved. We explore this question in the case where operation times are governed by exponential and gamma distributions, using both theoretical analysis and simulations", "keywords": ["mutual exclusion", "timed mutual exclusion", "markov chains", "locks"]} {"id": "kp20k_training_181", "title": "Modeling virtual worlds in databases", "abstract": "A method of modeling virtual worlds in databases is presented. The virtual world model is conceptually divided into several distinct elements, which are separately represented in a database. The model permits dynamic generation of virtual scenes. ", "keywords": ["databases", "data structures", "modeling", "virtual reality"]} {"id": "kp20k_training_182", "title": "An efficient scheduling algorithm for scalable video streaming over P2P networks", "abstract": "During recent years, the Internet has witnessed rapid advancement in peer-to-peer (P2P) media streaming.
In these applications, an important issue has been the block scheduling problem, which deals with how each node requests the media data blocks from its neighbors. In most streaming systems, peers are likely to have heterogeneous upload/download bandwidths, so that different peers may perceive different streaming quality. Layered (or scalable) streaming in P2P networks has recently been proposed to address the heterogeneity of the network environment. In this paper, we propose a novel block scheduling scheme aimed at P2P layered video streaming. We define a soft priority function for each block to be requested by a node in accordance with the block's significance for video playback. The priority function is unique in that it strikes a good balance between different factors, allowing the priority of a block to properly represent its relative importance despite the wide variation of block sizes between layers. The block scheduling problem is then transformed into an optimization problem that maximizes the priority sum of the delivered video blocks. We develop both centralized and distributed scheduling algorithms for the problem. Simulation of two popular scalability types has been conducted to evaluate the performance of the algorithms. The simulation results show that the proposed algorithm is effective in terms of bandwidth utilization and video quality", "keywords": ["p2p streaming", "scalable video coding", "block scheduling algorithm"]} {"id": "kp20k_training_183", "title": "A Threshold for a Polynomial Solution of #2SAT", "abstract": "The #SAT problem is a classical #P-complete problem even for monotone, Horn and 2-conjunctive formulas (the last known as #2SAT). We present a novel branch and bound algorithm to solve the #2SAT problem exactly. Our procedure establishes a new threshold where #2SAT can be computed in polynomial time. We show that for any 2-CF formula F with n variables where #2SAT(F) <= p(n), for some polynomial p, #2SAT(F) can be computed in polynomial time. This is a new way to measure the degree of difficulty for solving #2SAT and, according to this measure, our algorithm determines a boundary between 'hard' and 'easy' instances of the #2SAT problem", "keywords": ["2sat problem", "branch-bound algorithm", "polynomial thresholds", "efficient counting"]} {"id": "kp20k_training_184", "title": "Bagging and Boosting statistical machine translation systems", "abstract": "In this article we address the issue of generating diversified translation systems from a single Statistical Machine Translation (SMT) engine for system combination. Unlike traditional approaches, we do not resort to multiple structurally different SMT systems, but instead directly learn a strong SMT system from a single translation engine in a principled way. Our approach is based on Bagging and Boosting, which are two instances of the general framework of ensemble learning. The basic idea is that we first generate an ensemble of weak translation systems using a base learning algorithm, and then learn a strong translation system from the ensemble. One of the advantages of our approach is that it can work with any current SMT system and make it stronger almost \"for free\". Beyond this, most system combination methods are directly applicable to the proposed framework for generating the final translation system from the ensemble of weak systems.
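As a toy illustration of the Bagging half of the approach described above: bootstrap-resampling a parallel corpus yields one training set per weak translation system. Actual SMT training and decoding are far too heavy to show, so train_smt below is a placeholder stub, not a real API.

# Toy sketch of the Bagging half of the approach: bootstrap-resample a parallel
# corpus to obtain N training sets, one per weak translation system. `train_smt`
# is an invented stub standing in for real SMT training.
import random

def bootstrap(corpus, rng):
    return [rng.choice(corpus) for _ in corpus]   # sample with replacement, same size

def train_smt(sample):
    return {"bitext_size": len(sample), "unique_pairs": len(set(sample))}  # stub "model"

corpus = [("zh sentence %d" % i, "en sentence %d" % i) for i in range(1000)]
rng = random.Random(0)
ensemble = [train_smt(bootstrap(corpus, rng)) for _ in range(4)]  # N = 4 weak systems
for k, m in enumerate(ensemble):
    print(f"weak system {k}: trained on {m['unique_pairs']} unique sentence pairs")
# A system combination step would then merge the N weak systems' outputs.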
We evaluate our approach on Chinese-English translation in three state-of-the-art SMT systems, including a phrase-based system, a hierarchical phrase-based system and a syntax-based system. Experimental results on the NIST MT evaluation corpora show that our approach leads to significant improvements in translation accuracy over the baselines. More interestingly, it is observed that our approach is able to improve existing system combination systems. The biggest improvements are obtained by generating weak systems using Bagging/Boosting, and learning the strong system using a state-of-the-art system combination method. ", "keywords": ["statistical machine translation", "ensemble learning", "system combination"]} {"id": "kp20k_training_185", "title": "Functional dimensioning and tolerancing software for concurrent engineering applications", "abstract": "This paper describes the development of a prototype software package for solving functional dimensioning and tolerancing (FD&T) problems in a Concurrent Engineering environment. It provides a systematic way of converting functional requirements of a product into dimensional specifications by means of the following steps: firstly, the relationships necessary for solving FD&T problems are represented in a matrix form, known as the functional requirements/dimensions (FR/D) matrix. Secondly, the values of dimensions and tolerances are determined by satisfying all the relationships represented in an FR/D matrix, applying a comprehensive strategy that includes tolerance allocation strategies for different types of FD&T problems and a method for determining an optimum solution order for coupled functional equations. The prototype software is evaluated by its potential users, and the results indicate that it can be an effective computer-based tool for solving FD&T problems in a CE environment. ", "keywords": ["functional dimensioning and tolerancing", "concurrent engineering", "tolerance allocation"]} {"id": "kp20k_training_186", "title": "Parametric Model-Checking of Stopwatch Petri Nets", "abstract": "At the border between control and verification, parametric verification can be used to synthesize constraints on the parameters to ensure that a system satisfies given specifications. In this paper we propose a new framework for the parametric verification of time Petri nets with stopwatches. We first introduce a parametric extension of time Petri nets with inhibitor arcs (ITPNs) with temporal parameters and we define a symbolic representation of the parametric state-space based on the classical state-class graph method. Then, we propose semi-algorithms for the parametric model-checking of a subset of parametric TCTL formulae on ITPNs. These results have been implemented in the tool ROMEO and we illustrate them in a case-study based on a scheduling problem", "keywords": ["time petri nets", "stopwatches", "model-checking", "parameters", "state-class graph"]} {"id": "kp20k_training_187", "title": "Modelling the interaction of catecholamines with the alpha(1A) Adrenoceptor towards a ligand-induced receptor structure", "abstract": "Adrenoceptors are members of the important G protein coupled receptor family for which the detailed mechanism of activation remains unclear. In this study, we have combined docking and molecular dynamics simulations to model the ligand-induced effect on a homology-derived human alpha(1A) adrenoceptor.
Analysis of agonist/alpha(1A) adrenoceptor complex interactions focused on the role of the charged amine group, the aromatic ring, the N-methyl group of adrenaline, the beta hydroxyl group and the catechol meta and para hydroxyl groups of the catecholamines. The most critical interactions for the binding of the agonists are consistent with many earlier reports, and our study suggests new residues possibly involved in the agonist-binding site, namely Thr-174 and Cys-176. We further observe a number of structural changes that occur upon agonist binding, including a movement of TM-V away from TM-III and a change in the interactions of Asp-123 of the conserved DRY motif. This may cause Arg-124 to move out of the TM helical bundle and change the orientation of residues in IC-II and IC-III, allowing for increased affinity of coupling to the G-protein", "keywords": ["alpha-adrenoceptor", "agonists", "molecular docking", "molecular dynamics", "receptor activation"]} {"id": "kp20k_training_188", "title": "probabilistic string similarity joins", "abstract": "Edit distance based string similarity join is a fundamental operator in string databases. Increasingly, many applications in data cleaning, data integration, and scientific computing have to deal with fuzzy information in string attributes. Despite the intensive efforts devoted to processing (deterministic) string joins and managing probabilistic data respectively, modeling and processing probabilistic strings remain largely unexplored territory. This work studies the string join problem in probabilistic string databases, using the expected edit distance (EED) as the similarity measure. We first discuss two probabilistic string models to capture the fuzziness in string values in real-world applications. The string-level model is complete, but may be expensive to represent and process. The character-level model has a much more succinct representation when uncertainty in strings only exists at certain positions. Since computing the EED between two probabilistic strings is prohibitively expensive, we have designed efficient and effective pruning techniques that can be easily implemented in existing relational database engines for both models. Extensive experiments on real data have demonstrated order-of-magnitude improvements of our approaches over the baseline", "keywords": ["string joins", "probabilistic strings", "approximate string queries"]} {"id": "kp20k_training_189", "title": "A high performance simulator of the immune response", "abstract": "The application of concepts and methods of statistical mechanics to biological problems is one of the most promising frontiers of computational physics. For instance, Cellular Automata (CA), i.e. fully discrete dynamical systems evolving according to Boolean laws, appear to be extremely well suited to the simulation of immune system dynamics. A prominent example of immunological CA is the Celada-Seiden automaton, which has proven capable of providing several new insights into the dynamics of the immune system response. In the present paper, we describe a parallel version of the Celada-Seiden automaton. 
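For a flavor of the kind of update such immunological CA build on, a generic two-state Boolean cellular automaton step in Python with numpy is sketched below; the majority-style rule is an arbitrary illustration, not the Celada-Seiden rule set.

```python
import numpy as np

def ca_step(grid):
    """One synchronous update of a 2D Boolean cellular automaton.

    Each cell looks at its four von Neumann neighbours (periodic
    boundaries) and becomes active when at least two are active.
    The rule is an arbitrary illustration, not Celada-Seiden's.
    """
    neighbours = (
        np.roll(grid, 1, axis=0) + np.roll(grid, -1, axis=0) +
        np.roll(grid, 1, axis=1) + np.roll(grid, -1, axis=1)
    )
    return (neighbours >= 2).astype(grid.dtype)

rng = np.random.default_rng(0)
state = rng.integers(0, 2, size=(64, 64))
for _ in range(10):
    state = ca_step(state)
```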
Details on the parallel implementation as well as performance data on the IBM SP2 parallel platform are presented and commented on", "keywords": ["immune response", "cellular automata ", "parallel virtual machine ", "memory management"]} {"id": "kp20k_training_190", "title": "Speaker adaptation of language and prosodic models for automatic dialog act segmentation of speech", "abstract": "Speaker-dependent modeling has a long history in speech recognition, but has received less attention in speech understanding. This study explores speaker-specific modeling for the task of automatic segmentation of speech into dialog acts (DAs), using a linear combination of speaker-dependent and speaker-independent language and prosodic models. Data come from 20 frequent speakers in the ICSI meeting corpus; adaptation data per speaker ranges from 5k to 115k words. We compare performance for both reference transcripts and automatic speech recognition output. We find that: (1) speaker adaptation in this domain results both in a significant overall improvement and in improvements for many individual speakers, (2) the magnitude of improvement for individual speakers does not depend on the amount of adaptation data, and (3) language and prosodic models differ both in degree of improvement, and in relative benefit for specific DA classes. These results suggest important future directions for speaker-specific modeling in spoken language understanding tasks", "keywords": ["spoken language understanding", "dialog act segmentation", "speaker adaptation", "prosody modeling", "language modeling"]} {"id": "kp20k_training_191", "title": "An efficient method for electromagnetic scattering analysis", "abstract": "We present a novel method to solve the magnetic field integral equation (MFIE) using the method of moments (MoM) efficiently. This method employs a linear combination of the divergence-conforming Rao-Wilton-Glisson (RWG) function and the curl-conforming nRWG function to test the MFIE in MoM. The discretization process and the relationship of this new testing function with the previously employed RWG and nRWG testing functions are presented. Numerical results of radar cross section (RCS) data for objects with sharp edges and corners show that the accuracy of the MFIE can be improved significantly through the use of the new testing functions. At the same time, only the commonly used RWG basis functions are needed for this method", "keywords": ["combined rao-wilton-glisson function ", "electromagnetic scattering", "magnetic field integral equation ", "method of moments "]} {"id": "kp20k_training_192", "title": "Empirical mode decomposition synthesis of fractional processes in 1D-and 2D-space", "abstract": "We report here on image texture analysis and on numerical simulation of fractional Brownian textures based on the newly emerged Empirical Mode Decomposition (EMD). EMD, introduced by N.E. Huang et al., is a promising tool for non-stationary signal representation as a sum of zero-mean AM-FM components called Intrinsic Mode Functions (IMF). Recent works published by P. Flandrin et al. report that, in the case of fractional Gaussian noise (fGn), EMD acts essentially as a dyadic filter bank that can be compared to wavelet decompositions. Moreover, in the context of fGn identification, P. Flandrin et al. show that variance progression across IMFs is related to the Hurst exponent H through a scaling law. 
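A minimal numpy sketch of that scaling-law idea: estimate H from the log-variance progression across IMF indices via a linear fit. The EMD step is assumed to come from an external implementation (e.g. the PyEMD package), and the slope-to-H mapping below is a simplified placeholder for the model-dependent relation discussed in Flandrin et al.'s work.

```python
import numpy as np

def hurst_from_imfs(imfs):
    """Estimate the Hurst exponent from IMF variance progression.

    Per the scaling law reported by Flandrin et al., the log2-variance
    of successive IMFs of an fGn decays roughly linearly with IMF
    index; the slope is tied to H.  The mapping H = 1 + slope/2 is a
    simplified placeholder for the model-dependent constant.
    """
    # Skip the first IMF, which mostly carries the finest-scale noise.
    log_var = np.log2([np.var(imf) for imf in imfs[1:]])
    k = np.arange(1, len(log_var) + 1)
    slope = np.polyfit(k, log_var, 1)[0]
    return 1.0 + slope / 2.0

# Usage sketch (assumes an EMD implementation such as PyEMD):
#   from PyEMD import EMD
#   imfs = EMD().emd(white_noise_signal)
#   H_est = hurst_from_imfs(imfs)
```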
Starting with these recent results, we propose a new algorithm to generate fGn and fractional Brownian motion (fBm) of Hurst exponent H from IMFs obtained from the EMD of white noise, i.e. ordinary Gaussian noise (fGn with H = 1/2). ", "keywords": ["empirical mode decomposition", "fractional processes synthesis", "gaussian and brownian texture images"]} {"id": "kp20k_training_193", "title": "Flow topology in a steady three-dimensional lid-driven cavity", "abstract": "We present in this paper a thorough investigation of three-dimensional flow in a cubical cavity, subject to a constant velocity lid on its roof. In this steady-state analysis, we adopt the mixed formulation on tri-quadratic elements to preserve mass conservation. To resolve difficulties in the asymmetric and indefinite large-size matrix equations, we apply the BiCGSTAB solver. To achieve stability, weighting functions are designed in favor of variables on the upstream side. To achieve accuracy, the weighting functions are properly chosen so that false diffusion errors can be largely suppressed by the built-in streamline operator. Our aim is to gain some physical insight into the vortical flow using a theoretically rigorous topological theory. To broaden our understanding of the vortex dynamics in the cavity, we also study in detail the longitudinal spiralling motion in the flow interior. ", "keywords": ["three-dimensional", "bicgstab solution solver", "topological theory"]} {"id": "kp20k_training_194", "title": "Image object classification using saccadic search, spatio-temporal pattern encoding and self-organisation", "abstract": "A method for extracting features from photographic images is investigated. The input image is divided, through a saccadic search algorithm, into a set of sub-images, which are segmented and coded by a spatio-temporal encoding engine. The input image is thus represented by a set of characteristic pattern signatures, well suited for classification by an unsupervised neural network. A strategy using multiple self-organising feature maps (SOMs) in a hierarchical manner is adopted. With this approach, using a certain degree of user selection, a database of sub-images is grouped according to similarities in signature space", "keywords": ["saccadic eye movement", "foveation", "segmentation", "pcnn time-series", "signatures", "hierarchical som"]} {"id": "kp20k_training_195", "title": "Theoretical properties of LFSRs for built-in self test", "abstract": "Linear Feedback Shift-Registers have long been studied as interesting solutions for error detection and correction in data transmission. In the test domain, and principally in Built-In Self Test applications, they are often used as generators of pseudo-random test sequences. Conversely, their potential to generate prescribed deterministic test sequences is dealt with in more recent works and nowadays allows the investigation of efficient testing with a pseudo-deterministic BIST technique. Pseudo-deterministic test sequences are composed of both deterministic and pseudo-random test patterns and offer high fault coverage with a tradeoff between test length and hardware cost. 
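For concreteness, a minimal Fibonacci-style LFSR in Python, of the kind used for pseudo-random BIST pattern generation; the 16-bit tap set corresponds to a standard maximal-length polynomial, and the sketch makes no attempt at the synthesis-for-embedded-sequences techniques the abstract announces.

```python
def lfsr_patterns(seed=0xACE1, taps=(16, 14, 13, 11), n=5, width=16):
    """Generate pseudo-random test patterns with a Fibonacci LFSR.

    The taps correspond to the primitive polynomial
    x^16 + x^14 + x^13 + x^11 + 1, a standard maximal-length choice.
    Each step XORs the tapped bits into the new most significant bit.
    """
    state = seed
    for _ in range(n):
        yield state
        fb = 0
        for t in taps:
            fb ^= (state >> (width - t)) & 1
        state = ((state >> 1) | (fb << (width - 1))) & ((1 << width) - 1)

for pattern in lfsr_patterns():
    print(f"{pattern:016b}")
```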
In this paper, synthesis techniques for LFSRs that embed such kinds of sequences are described", "keywords": ["built-in self test", "linear feedback shift register", "hardware test pattern generator"]} {"id": "kp20k_training_196", "title": "Two fixed-parameter algorithms for vertex covering by paths on trees", "abstract": "VERTEX COVERING BY PATHS ON TREES with applications in machine translation is the task of covering all vertices of a tree T = (V, E) by choosing a minimum-weight subset of given paths in the tree. The problem is NP-hard and has recently been solved by an exact algorithm running in O(4 ", "keywords": ["graph algorithms", "combinatorial problems", "fixed-parameter tractability", "exact algorithms"]} {"id": "kp20k_training_197", "title": "Graphical dynamic linear models: specification, use and graphical transformations", "abstract": "In this work, we propose a dynamic graphical model as a tool for Bayesian inference and forecasting in dynamic systems described by a series which is dependent on a state vector evolving according to a Markovian law. We build sequential algorithms for probability propagation. This sequentiality turns out to be represented by the dynamic graphical structure after carrying out several goal-oriented sequential graphical transformations. ", "keywords": ["graphical models", "dynamic models", "markovian dynamic systems", "learning and forecasting algorithms", "graphical transformations"]} {"id": "kp20k_training_198", "title": "Alignment with non-overlapping inversions and translocations on two strings", "abstract": "Inversions and translocations are important in biological sequence analysis and motivate researchers to consider the sequence alignment problem using these operations. Based on inversion and translocation, we introduce a new alignment problem with non-overlapping inversions and translocations: given two strings x and y, find an alignment with non-overlapping inversions and translocations for x and y. This problem has an interesting application in finding a common sequence from two mutated sequences. We, in particular, consider the alignment problem when non-overlapping inversions and translocations are allowed for both x and y. We design an efficient algorithm that determines the existence of such an alignment and retrieves an alignment, if one exists", "keywords": ["sequence alignment", "non-overlapping inversion", "translocation"]} {"id": "kp20k_training_199", "title": "A twist to partial least squares regression", "abstract": "A modification of the PLS1 algorithm is presented. Stepwise optimization over a set of candidate loading weights obtained by taking powers of the y-X correlations and X standard deviations generalizes the classical PLS1 based on y-X covariances and hence adds flexibility to the modelling. When good linear predictions can be obtained, the suggested approach often finds models with fewer and more interpretable components. Good performance is demonstrated when compared with the classical PLS1 on calibration benchmark data sets. An important part of the comparisons is managed by a novel model selection strategy. The selection is based on choosing the simplest model among those with a cross-validation error smaller than the pre-specified significance limit of a chi-squared statistic. 
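A rough numpy sketch of the single-component version of that PLS1 idea: candidate loading weights are built from powers of the y-X correlations and X standard deviations, and the power pair is picked by cross-validated error. The power grid, the plain min-CV rule (standing in for the paper's chi-squared significance limit), and the one-component restriction are all simplifications.

```python
import numpy as np

def power_weights(X, y, gamma, delta):
    """Candidate loading weights: signed powers of correlations times powers of std devs."""
    r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    s = X.std(axis=0)
    w = np.sign(r) * np.abs(r) ** gamma * s ** delta
    return w / np.linalg.norm(w)

def one_component_cv_error(X, y, w, k=5):
    """K-fold CV error of a one-component regression on the score t = Xw."""
    n = len(y)
    err = 0.0
    for test in np.array_split(np.arange(n), k):
        train = np.setdiff1d(np.arange(n), test)
        t_tr = X[train] @ w
        b = (t_tr @ y[train]) / (t_tr @ t_tr)      # least-squares slope on the score
        err += np.sum((y[test] - b * (X[test] @ w)) ** 2)
    return err / n

def select_powers(X, y, grid=(0.5, 1.0, 2.0, 4.0)):
    """Pick the (gamma, delta) power pair with the lowest CV error.

    Note gamma = delta = 1 recovers classical covariance-based PLS1 weights.
    """
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    scored = {(g, d): one_component_cv_error(Xc, yc, power_weights(Xc, yc, g, d))
              for g in grid for d in grid}
    return min(scored, key=scored.get)

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 8)); y = X[:, 0] + 0.1 * rng.standard_normal(40)
print(select_powers(X, y))
```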
", "keywords": ["pls1", "powers of correlations and standard deviations", "cross-validation", "model selection", "model interpretation"]} {"id": "kp20k_training_200", "title": "A robust and efficient finite volume scheme for the discretization of diffusive flux on extremely skewed meshes in complex geometries", "abstract": "In this paper an improved finite volume scheme to discretize diffusive flux on a non-orthogonal mesh is proposed. This approach, based on an iterative technique initially suggested by Khosla [P.K. Khosla, S.G. Rubin, A diagonally dominant second-order accurate implicit scheme, Computers and Fluids 2 (1974) 207209] and known as deferred correction, has been intensively utilized by Muzaferija [S. Muzaferija, Adaptative finite volume method for flow prediction using unstructured meshes and multigrid approach, Ph.D. Thesis, Imperial College, 1994] and later Fergizer and Peric [J.H. Fergizer, M. Peric, Computational Methods for Fluid Dynamics, Springer, 2002] to deal with the non-orthogonality of the control volumes. Using a more suitable decomposition of the normal gradient, our scheme gives accurate solutions in geometries where the basic idea of Muzaferija fails. First the performances of both schemes are compared for a Poisson problem solved in quadrangular domains where control volumes are increasingly skewed in order to test their robustness and efficiency. It is shown that convergence properties and the accuracy order of the solution are not degraded even on extremely skewed mesh. Next, the very stable behavior of the method is successfully demonstrated on a randomly distorted grid as well as on an anisotropically distorted one. Finally we compare the solution obtained for quadrilateral control volumes to the ones obtained with a finite element code and with an unstructured version of our finite volume code for triangular control volumes. No differences can be observed between the different solutions, which demonstrates the effectiveness of our approach", "keywords": ["finite volume", "diffusive flux discretization", "poisson equation", "deferred correction", "skewed meshes", "distorted grid"]} {"id": "kp20k_training_201", "title": "Evaluation of Trend Localization with Multi-Variate Visualizations", "abstract": "Multi-valued data sets are increasingly common, with the number of dimensions growing. A number of multi-variate visualization techniques have been presented to display such data. However, evaluating the utility of such techniques for general data sets remains difficult. Thus most techniques are studied on only one data set. Another criticism that could be levied against previous evaluations of multi-variate visualizations is that the task doesn't require the presence of multiple variables. At the same time, the taxonomy of tasks that users may perform visually is extensive. We designed a task, trend localization, that required comparison of multiple data values in a multi-variate visualization. We then conducted a user study with this task, evaluating five multi-variate visualization techniques from the literature (Brush Strokes, Data-Driven Spots, Oriented Slivers, Color Blending, Dimensional Stacking) and juxtaposed grayscale maps. 
We report the results and discuss the implications for both the techniques and the task", "keywords": ["user study", "multi-variate visualization", "visual task design", "visual analytics"]} {"id": "kp20k_training_202", "title": "An efficient reconfigurable multiplier architecture for Galois field GF(2^m)", "abstract": "This paper describes an efficient architecture of a reconfigurable bit-serial polynomial basis multiplier for Galois field GF(2^m), where 1 [...] θ > 0, such that CDC-paths increase in cost by at most a factor t = (1 - 2 sin(θ/2))^(-2). We propose a novel distributed algorithm to compute the spanner using an expected number of O(n log n) fixed-size messages. In the second part, we present a distributed algorithm to find minimum-cost CDC-paths between two nodes using O(n^2) fixed-size messages, by developing an extension of Edmonds' algorithm for minimum-cost perfect matching. In a centralized implementation, our algorithm runs in O(n^2) time, improving the previous best algorithm, which requires O(n^3) running time. Moreover, this running time improves to O(n/θ) when used in conjunction with the spanner developed. ", "keywords": ["algorithms", "spanners", "routing", "directional antennas"]} {"id": "kp20k_training_239", "title": "Composition of aspects based on a relation model: Synergy of multiple paradigms", "abstract": "Software composition for timely and affordable software development and evolution is one of the oldest pursuits of software engineering. In current software composition techniques, Component-Based Software Development (CBSD) and Aspect-Oriented Software Development (AOSD) have attracted academic and industrial attention. Black-box composition used in CBSD provides simple and safe modularization through its strong information hiding, which is, however, the main obstacle for a black-box composite to evolve later. This implies that an application developed through black-box composition cannot take advantage of Aspect-Oriented Programming (AOP) used in AOSD. On the contrary, AOP enhances maintainability and comprehensibility by modularizing concerns crosscutting multiple components but lacks support for the hierarchical and external composition of aspects themselves and compromises important software engineering principles such as encapsulation, which is almost perfectly supported in black-box composition. The role and role model have been recognized to have many similarities with CBSD and AOP but have significant differences from those composition techniques as well. Although each composition paradigm has its own advantages and disadvantages, there is no substantial support to realize the synergy of these composition paradigms: black-box composition, AOP, and the role model. In this paper, a new composition technique based on representational abstraction of the relationship between component instances is introduced. 
The model supports the simple, elegant, and dynamic composition of components with its declarative form and provides the hooks through which an aspect can evolve and an aspect developed in parallel can be merged at the instance level", "keywords": ["software composition", "aspect-oriented programming", "black box composition", "component-based software development", "role", "relation model", "logic"]} {"id": "kp20k_training_240", "title": "Generalization performance of magnitude-preserving semi-supervised ranking with graph-based regularization", "abstract": "Semi-supervised ranking is a relatively new and important learning problem inspired by many applications. We propose a novel graph-based regularized algorithm which learns the ranking function in the semi-supervised learning framework. It can exploit the geometry of the data while preserving the magnitude of the preferences. The least squares ranking loss is adopted and the optimal solution of our model has an explicit form. We establish an error analysis of the proposed algorithm and demonstrate the relationship between predictive performance and intrinsic properties of the graph. The experiments on three datasets for the recommendation task and two quantitative structure-activity relationship datasets show that our method is effective and comparable to some other state-of-the-art algorithms for ranking", "keywords": ["ranking", "semi-supervised learning", "generalization performance", "graph laplacian", "reproducing kernel hilbert space"]} {"id": "kp20k_training_241", "title": "Accessible haptic user interface design approach for users with visual impairments", "abstract": "With the number of people with visual impairments (e.g., low vision and blindness) continuing to increase, vision loss has become one of the most challenging disabilities. Today, haptic technology, using an alternative sense to vision, is deemed an important component for effectively accessing information systems. Appropriately designed assistive technology is critical if those with visual impairments are to adopt it and access information that facilitates their personal and professional tasks. However, most of the existing design approaches are inapplicable or inappropriate in design contexts such as users with visual impairments interacting with non-graphical user interfaces (i.e., haptic technology). To resolve such design challenges, the present study modified a participatory design approach (i.e., PICTIVE, Plastic Interface for Collaborative Technology Initiatives Video Exploration) to be applicable to haptic technologies, by considering brain plasticity theory. The sense of touch is integrated into the design activity of PICTIVE. 
Participants with visual impairments were able to effectively engage in designing non-visual interfaces (e.g., haptic interfaces) through non-visual communication methods (e.g., touch modality)", "keywords": ["human factors", "design method", "visual impairments", "non-visual interfaces", "accessibility", "usability"]} {"id": "kp20k_training_242", "title": "effect of probabilistic task allocation based on statistical analysis of bid values", "abstract": "This paper presents the effect of adaptively introducing appropriate strategies into the award phase of the contract net protocol (CNP) in a massively multi-agent system (MMAS)", "keywords": ["contract net protocol", "task allocation", "massively multi-agent systems", "coordination"]} {"id": "kp20k_training_243", "title": "higher-order concurrent programs with finite communication topology (extended abstract)", "abstract": "Concurrent ML (CML) is an extension of the functional language Standard ML (SML) with primitives for the dynamic creation of processes and channels and for the communication of values over channels. Because of the powerful abstraction mechanisms, the communication topology of a given program may be very complex and therefore an efficient implementation may be facilitated by knowledge of the topology. This paper presents an analysis for determining when a bounded number of processes and channels will be generated. The analysis proceeds in two stages. First, we extend a polymorphic type system for SML to deduce not only the type of CML programs but also their communication behaviour expressed as terms in a new process algebra. Next, we develop an analysis that, given the communication behaviour, predicts the number of processes and channels required during the execution of the CML program. The correctness of the analysis is proved using a subject reduction property for the type system", "keywords": ["type system", "communication", " ml ", "program", "order", "concurrent program", "efficiency", "abstraction", "values", "analysis", "dynamic", "correctness", "process algebra", "polymorphic", "reduction", "implementation", "standardization", "topologies", "process", "extensibility", "complexity", "paper", "knowledge", "functional languages"]} {"id": "kp20k_training_244", "title": "On the polyhedral structure of a multi-item production planning model with setup times", "abstract": "We present and study a mixed integer programming model that arises as a substructure in many industrial applications. This model generalizes a number of structured MIP models previously studied, and it provides a relaxation of various capacitated production planning problems and other fixed charge network flow problems. We analyze the polyhedral structure of the convex hull of this model, as well as of a strengthened LP relaxation. Among other results, we present valid inequalities that induce facets of the convex hull under certain conditions. 
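A minimal PuLP sketch of the fixed-charge lot-sizing substructure that this production planning abstract (which continues after the sketch) revolves around; the three-period data, the single item, and the omission of setup times are illustrative simplifications.

```python
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, PULP_CBC_CMD

# Illustrative data: demand, unit production cost, setup cost, capacity.
T = range(3)
demand   = [40, 60, 30]
unit_c   = [2, 2, 2]
setup_c  = [50, 50, 50]
capacity = 80

prob = LpProblem("single_item_lot_sizing", LpMinimize)
x = [LpVariable(f"x{t}", lowBound=0) for t in T]     # production
s = [LpVariable(f"s{t}", lowBound=0) for t in T]     # end-of-period inventory
y = [LpVariable(f"y{t}", cat="Binary") for t in T]   # setup (the fixed charge)

prob += lpSum(unit_c[t] * x[t] + setup_c[t] * y[t] for t in T)
for t in T:
    prev = s[t - 1] if t > 0 else 0
    prob += prev + x[t] == demand[t] + s[t]          # flow balance
    prob += x[t] <= capacity * y[t]                  # fixed-charge linking constraint
prob.solve(PULP_CBC_CMD(msg=False))
print([v.value() for v in x])
```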
We also discuss how to strengthen these inequalities by using known results for lifting valid inequalities for 0-1 continuous knapsack problems", "keywords": ["mixed integer programming", "production planning", "polyhedral combinatorics", "capacitated lot-sizing", "fixed charge network flow"]} {"id": "kp20k_training_246", "title": "Coherence between one random and one periodic signal for measuring the strength of responses in the electro-encephalogram during sensory stimulation", "abstract": "Coherence between a pulse train representing periodic stimuli and the EEG has been used in the objective detection of steady-state evoked potentials. This work aimed to quantify the strength of the stimulus responses based on the statistics of the coherence estimate between one random and one periodic signal, focusing on the confidence limits and power of significance tests in detecting responses. To detect the responses in 95% of cases, a signal-to-noise ratio of about -7.9 dB was required when M = 48 windows were used in the coherence estimation. The ratio, however, increased to -1.2 dB when M was 12. The results were tested in Monte Carlo simulations and applied to EEGs obtained from 14 subjects during visual stimulation. The method showed differences in the strength of responses at the stimulus frequency and its harmonics, as well as variations between individuals and over cortical regions. In contrast to those from the parietal and temporal regions, results for the occipital region gave confidence limits (with M = 12) that were above zero for all subjects, indicating statistically significant responses. The proposed technique extends the usefulness of coherence as a measure of stimulus responses and allows statistical analysis that could also be applied usefully in a range of other biological signals", "keywords": ["coherence", "eeg", "statistics", "rhythmic stimulation", "synchrony measure"]} {"id": "kp20k_training_247", "title": "Exploring the dynamics of adaptation with evolutionary activity plots", "abstract": "Evolutionary activity statistics and their visualization are introduced, and their motivation is explained. Examples of their use are described, and their strengths and limitations are discussed. References to more extensive or general accounts of these techniques are provided", "keywords": ["evolutionary activity", "evolutionary adaptation", "visualization"]} {"id": "kp20k_training_248", "title": "Repeated Exposure to the Abused Inhalant Toluene Alters Levels of Neurotransmitters and Generates Peroxynitrite in Nigrostriatal and Mesolimbic Nuclei in Rat", "abstract": "Toluene, a volatile hydrocarbon found in a variety of chemical compounds, is misused and abused by inhalation for its euphorigenic effects. Toluene's reinforcing properties may share a common characteristic with other drugs of abuse, namely, activation of the mesolimbic dopamine system. Prior studies in our laboratory found that acutely inhaled toluene activated midbrain dopamine neurons in the rat. Moreover, single systemic injections of toluene in rats produced a dose-dependent increase in locomotor activity which was blocked by depletion of nucleus accumbens dopamine or by pretreatment with a D2 dopamine receptor antagonist. Here we examined the effects of seven daily intraperitoneal injections of 600 mg/kg toluene on the content of serotonin and dopamine in the caudate nucleus (CN) and nucleus accumbens (NAC), substantia nigra, and ventral tegmental area at 2, 4, and 24 h after the last injection. 
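A small scipy sketch in the spirit of the coherence-detection abstract above (kp20k_training_246): estimate magnitude-squared coherence between a periodic pulse train and a noisy response, and compare it against the usual significance threshold for M independent segments, 1 - alpha^(1/(M-1)). The signal parameters are arbitrary.

```python
import numpy as np
from scipy.signal import coherence

fs, duration, f0 = 250.0, 60.0, 8.0          # sampling rate, seconds, stimulus rate (Hz)
t = np.arange(0, duration, 1 / fs)
pulse = (np.sin(2 * np.pi * f0 * t) > 0.99).astype(float)  # periodic pulse train
rng = np.random.default_rng(1)
eeg = 0.3 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(t.size)  # response + noise

M = 12                                        # number of non-overlapping windows
nperseg = t.size // M
f, Cxy = coherence(pulse, eeg, fs=fs, nperseg=nperseg, noverlap=0)

alpha = 0.05
threshold = 1 - alpha ** (1 / (M - 1))        # critical coherence for M segments
k = np.argmin(np.abs(f - f0))
print(f"coherence at {f[k]:.1f} Hz: {Cxy[k]:.3f}, threshold: {threshold:.3f}")
```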
Also, the roles of nitric oxide and peroxynitrite, and the production of 3-nitrosotyrosine (3-NT), in the CN and NAC were assessed at the same time points. Toluene treatments increased dopamine levels in the CN and NAC, and serotonin levels in the CN, NAC, and ventral tegmental area. Measurements of the dopamine metabolite dihydroxyphenylacetic acid (DOPAC) further suggested a change in transmitter utilization in the CN and NAC. Lastly, 3-NT levels also showed a differential change between the CN and NAC, but at different time points post-toluene injection. These results point out the complexity of toluene's action on neurotransmitter function following a course of chronic exposure. Changes in the production of 3-NT also suggest that toluene-induced neurotoxicity may be mediated via the generation of peroxynitrite", "keywords": ["inhalant", "toluene", "neurotransmitter", "dopamine", "serotonin", "oxidative stress", "peroxynitrite", "3-nitrosotyrosine", "nigrostriatal and mesolimbic nuclei", "neurotoxicity"]} {"id": "kp20k_training_249", "title": "supporting ad-hoc ranking aggregates", "abstract": "This paper presents a principled framework for efficient processing of ad-hoc top-k (ranking) aggregate queries, which provide the k groups with the highest aggregates as results. Essential support for such queries is lacking in current systems, which process the queries in a naive materialize-group-sort scheme that can be prohibitively inefficient. Our framework is based on three fundamental principles. The Upper-Bound Principle dictates the requirements of early pruning, and the Group-Ranking and Tuple-Ranking Principles dictate group-ordering and tuple-ordering requirements. Together they guide the query processor toward a provably optimal tuple schedule for aggregate query processing. We propose a new execution framework to apply the principles and requirements. We address the challenges in realizing the framework and implementing new query operators, enabling efficient group-aware and rank-aware query plans. The experimental study validates our framework by demonstrating orders of magnitude performance improvement in the new query plans, compared with the traditional plans", "keywords": ["top-k query processing", "ranking", "decision support", "aggregate query", "olap"]} {"id": "kp20k_training_250", "title": "Meaningful and meaningless solutions for cooperative n-person games", "abstract": "Game values often represent data that can be measured in more than one acceptable way (e.g., monetary amounts). We point out that in such a case a statement about cooperative n-person game models might be meaningless in the sense that its truth or falsity depends on the choice of an acceptable way to measure game values. In particular, we analyze statements about solution concepts such as the core, stable sets, the nucleolus, and the Shapley value (and some of its generalizations)", "keywords": ["robustness and sensitivity analysis", "game theory"]} {"id": "kp20k_training_251", "title": "Numerical optimization algorithm for rotationally invariant multi-orbital slave-boson method", "abstract": "We develop a generalized numerical optimization algorithm for the rotationally invariant multi-orbital slave-boson approach, which is applicable to arbitrary boundary constraints on a high-dimensional objective function by combining several classical optimization techniques. 
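As a generic skeleton for this kind of constrained high-dimensional minimization (not the paper's algorithm, which combines several classical techniques around the slave-boson structure), scipy's SLSQP handles bound and equality constraints of the sort that slave-boson normalization conditions impose; the toy objective and constraint below are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Placeholder for a slave-boson total-energy functional.
    return np.sum(x ** 2) + 0.1 * np.sum(np.cos(5 * x))

n = 10
constraints = [
    # Placeholder normalization constraint (e.g. boson occupancies summing to 1).
    {"type": "eq", "fun": lambda x: np.sum(x ** 2) - 1.0},
]
bounds = [(0.0, 1.0)] * n                     # amplitudes kept in [0, 1]

x0 = np.full(n, 1.0 / np.sqrt(n))             # feasible starting point
res = minimize(objective, x0, method="SLSQP", bounds=bounds,
               constraints=constraints)
print(res.fun, res.x.round(3))
```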
After constructing the calculation architecture of the rotationally invariant multi-orbital slave-boson model, we apply this optimization algorithm to find the stable ground state and magnetic configuration of two-orbital Hubbard models. The numerical results are consistent with available solutions, confirming the correctness and accuracy of our present algorithm. Furthermore, we utilize it to explore the effects of the transverse Hund's coupling terms on the metal-insulator transition, the orbital-selective Mott phase and magnetism. These results show the quick convergence and robust stability of our algorithm in searching for optimized solutions of strongly correlated electron systems", "keywords": ["slave boson", "numerical optimization algorithm", "hubbard model", "metal-insulator transition"]} {"id": "kp20k_training_252", "title": "molecular dynamics simulation of large-scale carbon nanotubes on a shared-memory architecture", "abstract": "Carbon nanotubes are expected to play a significant role in the design and manufacture of many nano-mechanical and nano-electronic devices of the future. It is important, therefore, that atomic-level elastomechanical response properties of both single and multiwall nanotubes be investigated in detail. Classical molecular dynamics simulations employing Brenner's reactive potential with long-range van der Waals interactions have been used in mechanistic response studies of carbon nanotubes to external strains. The studies of single and multiwalled carbon nanotubes under compressive strains show the instabilities beyond elastic response. Due to the inclusion of non-bonded long-range interactions, the simulations also show the redistribution of strain and strain energy from sideways buckling to the formation of highly localized strained kink sites. Bond rearrangements occur at the kink sites, leading to the formation of topological defects, preventing the tube from relaxing fully back to its original configuration. Elastomechanical response behavior of single and multiwall carbon nanotubes to externally applied compressive strains is simulated and studied in detail. We will describe the results and discuss their implication towards the stability of any molecular mechanical structure made of carbon nanotubes", "keywords": ["interaction", "role", "stability", "shared memory", "mechanical properties", "play", "structure", "atom", "simulation", "inclusion", "carbon nanotubes", "large-scale", "behavior", "device", "origin2000", "architecture", "manufacturability", "design", "energy", "configurability", "response", "parallel", "future", "molecular dynamics"]} {"id": "kp20k_training_253", "title": "Nonprimitive recursive complexity and undecidability for Petri net equivalences", "abstract": "The aim of this note is twofold. Firstly, it shows that the undecidability result for bisimilarity in [Theor. Comput. Sci. 148 (1995) 281-301] can be immediately extended to the whole range of equivalences (and preorders) on labelled Petri nets. Secondly, it shows that, restricting our attention to nets with a finite reachable space, the respective (decidable) problems are nonprimitive recursive; this approach also applies to Mayr and Meyer's result [J. ACM 28 (1981) 561-576] for reachability set equality, yielding a more direct proof. ", "keywords": ["petri-nets", "decidability", "complexity"]} {"id": "kp20k_training_254", "title": "time-decaying aggregates in out-of-order streams", "abstract": "Processing large data streams is now a major topic in data management. 
The data involved can be truly massive, and the required analyses complex. In a stream of sequential events such as stock feeds, sensor readings, or IP traffic measurements, data tuples pertaining to recent events are typically more important than older ones. This can be formalized via time-decay functions, which assign weights to data based on their age. Decay functions such as sliding windows and exponential decay have been studied under the assumption of well-ordered arrivals, i.e., data arrives in non-decreasing order of time stamps. However, data quality issues are prevalent in massive streams (due to network asynchrony, delays, etc.), and correct arrival order is not guaranteed. We focus on the computation of decayed aggregates such as range queries, quantiles, and heavy hitters on out-of-order streams, where elements do not necessarily arrive in increasing order of timestamps. Existing techniques such as Exponential Histograms and Waves are unable to handle out-of-order streams. We give the first deterministic algorithms for approximating these aggregates under popular decay functions such as sliding window and polynomial decay. We study the overhead of allowing out-of-order arrivals when compared to well-ordered arrivals, both analytically and experimentally. Our experiments confirm that these algorithms can be applied in practice, and compare the relative performance of different approaches for handling out-of-order arrivals", "keywords": ["network", "sensor", "histogram", "order", "out-of-order arrivals", "streams", "asynchronous data streams", "data streaming", "range queries", "polynomial", "event", "approximation", "performance", "computation", "delay", "experience", "timing", "data quality", "traffic measurement", "data management", "practical", "timestamp", "sliding window", "aggregate", "data", "relation", "complexity", "algorithm", "age"]} {"id": "kp20k_training_255", "title": "The Village Telco project: a reliable and practical wireless mesh telephony infrastructure", "abstract": "VoIP (Voice over IP) over mesh networks could be a potential solution to the high cost of making phone calls in most parts of Africa. The Village Telco (VT) is an easy-to-use and scalable VoIP over meshed WLAN (Wireless Local Area Network) telephone infrastructure. It uses a mesh network of mesh potatoes to form a peer-to-peer network to relay telephone calls without landlines or cell phone towers. This paper discusses the Village Telco infrastructure, how it addresses the numerous difficulties associated with wireless mesh networks, and its efficient deployment for VoIP services in some communities around the globe. The paper also presents the architecture and functions of a mesh potato and a novel combined analog telephone adapter (ATA) and WiFi access point that routes calls. Lastly, the paper presents the results of preliminary tests that have been conducted on a mesh potato. The preliminary results indicate very good performance and user acceptance of the mesh potatoes. The results show that the infrastructure is deployable in severe and under-resourced environments as a means to make cheap phone calls and render Internet and IP-based services. 
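Tying back to the decayed-aggregates abstract above (kp20k_training_254): under exponential decay the weight of an element depends only on its timestamp, so a decayed sum tolerates out-of-order arrivals directly, as the minimal Python sketch below shows. Handling sliding-window or polynomial decay in small space is exactly the harder problem that paper addresses and is not attempted here.

```python
import math

class ExpDecayedSum:
    """Exponentially time-decayed sum that tolerates out-of-order arrivals.

    Each element's weight is exp(-lam * (t_query - t_item)), which depends
    only on timestamps, not on arrival order.  Internally we accumulate
    sum(v * exp(lam * t_item)) and discount once at query time.  For very
    long streams one would rebase the reference time to avoid overflow.
    """

    def __init__(self, lam):
        self.lam = lam
        self.acc = 0.0

    def insert(self, value, timestamp):
        self.acc += value * math.exp(self.lam * timestamp)

    def query(self, t_now):
        return self.acc * math.exp(-self.lam * t_now)

s = ExpDecayedSum(lam=0.1)
for v, t in [(3.0, 5.0), (1.0, 9.0), (2.0, 2.0)]:   # timestamps arrive out of order
    s.insert(v, t)
print(round(s.query(10.0), 4))
```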
As a result, the VT project contributes to bridging the digital divide in developing areas", "keywords": ["wlan", "wireless mesh networks", "voip", "mesh potato", "village telco", "rural telephony"]} {"id": "kp20k_training_256", "title": "General Subspace Learning With Corrupted Training Data Via Graph Embedding", "abstract": "We address the following subspace learning problem: supposing we are given a set of labeled, corrupted training data points, how to learn the underlying subspace, which contains three components: an intrinsic subspace that captures certain desired properties of a data set, a penalty subspace that fits the undesired properties of the data, and an error container that models the gross corruptions possibly existing in the data. Given a set of data points, these three components can be learned by solving a nuclear norm regularized optimization problem, which is convex and can be efficiently solved in polynomial time. Using the method as a tool, we propose a new discriminant analysis (i.e., supervised subspace learning) algorithm called Corruptions Tolerant Discriminant Analysis (CTDA), in which the intrinsic subspace is used to capture the features with high within-class similarity, the penalty subspace takes the role of modeling the undesired features with high between-class similarity, and the error container takes charge of fitting the possible corruptions in the data. We show that CTDA can well handle the gross corruptions possibly existing in the training data, whereas previous linear discriminant analysis algorithms arguably fail in such a setting. Extensive experiments conducted on two benchmark human face data sets and one object recognition data set show that CTDA outperforms the related algorithms", "keywords": ["subspace learning", "corrupted training data", "discriminant analysis", "graph embedding"]} {"id": "kp20k_training_257", "title": "CLOSURE PROPERTIES OF HYPER-MINIMIZED AUTOMATA", "abstract": "Two deterministic finite automata are almost equivalent if they disagree in acceptance only for finitely many inputs. An automaton A is hyper-minimized if no automaton with fewer states is almost equivalent to A. A regular language L is canonical if the minimal automaton accepting L is hyper-minimized. The asymptotic state complexity s*(L) of a regular language L is the number of states of a hyper-minimized automaton for a language finitely different from L. In this paper we show that: (1) the class of canonical regular languages is not closed under intersection, union, concatenation, Kleene closure, difference, symmetric difference, reversal, homomorphism, and inverse homomorphism; (2) for any regular languages L(1) and L(2) the asymptotic state complexity of their union L(1) ∪ L(2), intersection L(1) ∩ L(2), difference L(1) - L(2), and symmetric difference L(1) ⊕ L(2) can be bounded by s*(L(1)) · s*(L(2)). This bound is tight in the binary case and, in the unary case, can be met in infinitely many cases. (3) For any regular language L the asymptotic state complexity of its reversal L(R) can be bounded by 2^(s*(L)). This bound is tight in the binary case. (4) The asymptotic state complexity of Kleene closure and concatenation cannot be bounded. Namely, for every k >= 3, there exist languages K, L, and M such that s*(K) = s*(L) = s*(M) = 1 and s*(K*) = s*(L · M) = k. These are answers to open problems formulated by Badr et al. [RAIRO-Theor. Inf. Appl. 
43 (2009) 69-94]", "keywords": ["finite state automata", "regular languages", "hyper-minimized automata"]} {"id": "kp20k_training_258", "title": "On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis", "abstract": "Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as required in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique seen in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition under which a feature map yields an asymptotically equivalent convergence point of the estimated parameters; such a map is referred to as a vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data, and derive a necessary length for parameter learning in hidden Markov models", "keywords": ["sequential data model", "feature selection", "bayes learning", "algebraic geometry"]} {"id": "kp20k_training_259", "title": "A dual-scale lattice gas automata model for gas-solid two-phase flow in bubbling fluidized beds", "abstract": "Modelling the hydrodynamics of gas/solid flow is important for the design and scale-up of fluidized bed reactors. A novel gas/solid dual-scale model based on lattice gas cellular automata (LGCA) is proposed to describe the macroscopic behaviour through microscopic gas-solid interactions. Solid particles and gas pseudo-particles are aligned in lattices with different scales for solid and gas. In addition to basic LGCA rules, additional rules for collision and propagation are specifically designed for gas-solid systems. The solid's evolution is then driven by the temporal and spatial average momentum gained through solid-solid and gas-solid interactions. A statistical method, based on the similarity principle, is derived for the conversion between model parameters and hydrodynamic properties. Simulations for bubbles generated from a vertical jet in a bubbling fluidized bed based on this model agree well with experimental results, as well as with the results of two-fluid approaches and discrete particle simulations. ", "keywords": ["lattice gas cellular automata", "gas/solid flow", "bubbling fluidized beds", "model"]} {"id": "kp20k_training_260", "title": "scalable algorithms for global snapshots in distributed systems", "abstract": "Existing algorithms for global snapshots in distributed systems are not scalable when the underlying topology is complete. In a network with N processors, these algorithms require O(N) space and O(N) messages per processor. As a result, these algorithms are not efficient in large systems when the logical topology of the communication layer, such as MPI, is complete. In this paper, we propose three algorithms for global snapshots: a grid-based, a tree-based and a centralized algorithm. The grid-based algorithm uses O(N) space but only O(√N) messages per processor. 
The tree-based algorithm requires only O(1) space and O(log N log w) messages per processor, where w is the average number of messages in transit per processor. The centralized algorithm requires only O(1) space and O(log w) messages per processor. We also have a matching lower bound for this problem. Our algorithms have applications in checkpointing, detecting stable predicates and implementing synchronizers. We have implemented our algorithms on top of the MPI library on the Blue Gene/L supercomputer. Our experiments confirm that the proposed algorithms significantly reduce the message and space complexity of a global snapshot", "keywords": ["fault tolerance", "stable predicates", "global snapshot algorithms", "blue gene/l", "checkpointing"]} {"id": "kp20k_training_261", "title": "A smart TCP acknowledgment approach for multihop wireless networks", "abstract": "Reliable data transfer is one of the most difficult tasks to be accomplished in multihop wireless networks. Traditional transport protocols like TCP face severe performance degradation over multihop networks given the noisy nature of wireless media as well as unstable connectivity conditions in place. The success of TCP in wired networks motivates its extension to wireless networks. A crucial challenge faced by TCP over these networks is how to operate smoothly with the 802.11 wireless MAC protocol, which also implements a retransmission mechanism at link level in addition to short RTS/CTS control frames for avoiding collisions. These features render TCP acknowledgment (ACK) transmission quite costly. Data and ACK packets cause similar medium access overheads despite the much smaller size of the ACKs. In this paper, we further evaluate our dynamic adaptive strategy for reducing ACK-induced overhead and consequent collisions. Our approach resembles the sender side's congestion control. The receiver is self-adaptive, delaying more ACKs under unconstrained channels and fewer otherwise. This improves not only throughput but also power consumption. Simulation evaluations exhibit significant improvement in several scenarios", "keywords": ["wireless multihop networks", "transport control protocol", "delayed acknowledgments"]} {"id": "kp20k_training_262", "title": "Vasopressin and social odor processing in the olfactory bulb and anterior olfactory nucleus", "abstract": "Central vasopressin facilitates social recognition and modulates numerous complex social behaviors in mammals, including parental behavior, aggression, affiliation, and pair-bonding. In rodents, social interactions are primarily mediated by the exchange of olfactory information, and there is evidence that vasopressin signaling is important in brain areas where olfactory information is processed. We recently discovered populations of vasopressin neurons in the main and accessory olfactory bulbs and anterior olfactory nucleus that are involved in the processing of social odor cues. In this review, we propose a model of how vasopressin release in these regions, potentially from the dendrites, may act to filter social odor information to facilitate odor-based social recognition. 
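In the spirit of the adaptive delayed-ACK strategy described above (kp20k_training_261), a toy receiver-side policy in Python: the ACK coalescing window grows while the channel looks healthy and shrinks on signs of loss. The thresholds and the loss signal are invented for illustration; the paper's actual adaptation rules are not reproduced here.

```python
class AdaptiveAckPolicy:
    """Toy receiver-side delayed-ACK policy for multihop wireless paths.

    Coalesce up to `window` data packets per ACK; grow the window while
    deliveries arrive in order (healthy channel), shrink it on gaps.
    All constants are illustrative, not taken from the paper.
    """

    def __init__(self, max_window=4):
        self.window = 1
        self.max_window = max_window
        self.pending = 0
        self.expected_seq = 0

    def on_packet(self, seq):
        in_order = (seq == self.expected_seq)
        self.expected_seq = max(self.expected_seq, seq + 1)
        if in_order:
            self.window = min(self.window + 1, self.max_window)
        else:
            self.window = 1          # gap seen: ACK eagerly to trigger recovery
        self.pending += 1
        if self.pending >= self.window or not in_order:
            self.pending = 0
            return True              # emit a (cumulative) ACK now
        return False                 # keep delaying

policy = AdaptiveAckPolicy()
print([policy.on_packet(s) for s in [0, 1, 2, 3, 5, 6]])
```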
Finally, we discuss recent human research linked to vasopressin signaling and suggest that our model of priming-facilitated vasopressin signaling would be a rewarding target for further studies, as a failure of priming may underlie pathological changes in complex behaviors", "keywords": ["olfaction", "social memory", "social recognition"]} {"id": "kp20k_training_263", "title": "Ticks, Tick-Borne Rickettsiae, and Coxiella burnetii in the Greek Island of Cephalonia", "abstract": "Domestic animals are the hosts of several tick species and the reservoirs of some tick-borne pathogens; hence, they play an important role in the circulation of these arthropods and their pathogens in nature. They may act as vectors, but also as reservoirs of spotted fever group (SFG) rickettsiae, which are the causative agents of SFG rickettsioses. Q fever is a worldwide zoonosis caused by Coxiella burnetii (C. burnetii), which can be isolated from ticks. A total of 1,848 ticks (954 female, 853 male, and 41 nymph) were collected from dogs, goats, sheep, cattle, and horses in 32 different localities of the Greek island of Cephalonia. Rhipicephalus (Rh.) bursa, Rh. turanicus, Rh. sanguineus, Dermacentor marginatus (D. marginatus), Ixodes gibbosus (I. gibbosus), Haemaphysalis (Ha.) punctata, Ha. sulcata, Hyalomma (Hy.) anatolicum excavatum and Hy. marginatum marginatum were the species identified. C. burnetii and four different SFG rickettsiae, including Rickettsia (R.) conorii, R. massiliae, R. rhipicephali, and R. aeschlimannii were detected using molecular methods. Double infection with R. massiliae and C. burnetii was found in one of the positive ticks", "keywords": ["ticks", "rickettsia conorii", "rickettsia massiliae", "rickettsia rhipicephali", "rickettsia aeschlimannii", "coxiella burnetii", "greece"]} {"id": "kp20k_training_264", "title": "Quiver polynomials in iterated residue form", "abstract": "Degeneracy loci polynomials for quiver representations generalize several important polynomials in algebraic combinatorics. In this paper we give a nonconventional generating sequence description of these polynomials when the quiver is of Dynkin type", "keywords": ["quiver", "degeneracy loci", "equivariant cohomology", "iterated residues"]} {"id": "kp20k_training_265", "title": "The relationship among soft sets, soft rough sets and topologies", "abstract": "Molodtsov's soft set theory is a newly emerging tool for dealing with uncertain problems. Based on the novel granulation structures called soft approximation spaces, Feng et al. initiated soft rough approximations and soft rough sets. Feng's soft rough sets can be seen as a generalized rough set model based on soft sets, which could provide better approximations than Pawlak's rough sets in some cases. This paper is devoted to establishing the relationship among soft sets, soft rough sets and topologies. We introduce the concept of topological soft sets by combining soft sets with topologies and give their properties. New types of soft sets such as keeping intersection soft sets and keeping union soft sets are defined and supported by some illustrative examples. We describe the relationship between rough sets and soft rough sets. 
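A small Python sketch of Feng-style soft rough approximations as recalled in this abstract: given a soft set (a parameter-to-subset map F over a universe), the lower approximation collects points covered by some F(a) contained in X, the upper those covered by some F(a) meeting X. The example universe and soft set are invented.

```python
def soft_rough_approximations(F, X):
    """Soft lower/upper approximations of X w.r.t. a soft set F.

    F : dict mapping each parameter a to F(a), a subset of the universe.
    Lower = union of all F(a) with F(a) a subset of X;
    Upper = union of all F(a) that intersect X.
    """
    lower, upper = set(), set()
    for subset in F.values():
        if subset <= X:
            lower |= subset
        if subset & X:
            upper |= subset
    return lower, upper

# Invented example over U = {1..6}:
F = {"a1": {1, 2}, "a2": {2, 3, 4}, "a3": {5}}
X = {1, 2, 5}
print(soft_rough_approximations(F, X))   # ({1, 2, 5}, {1, 2, 3, 4, 5})
```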
We obtain the structure of soft rough sets and the topological structure of soft sets, and reveal that every topological space on the initial universe is a soft approximating space", "keywords": ["soft sets", "topological soft sets", "soft rough approximations", "soft rough sets", "rough sets", "topologies"]} {"id": "kp20k_training_266", "title": "A highly efficient VLSI architecture for H.264/AVC CAVLC decoder", "abstract": "In this paper, an efficient algorithm is proposed to improve the decoding efficiency of the context-based adaptive variable length coding (CAVLC) procedure. Due to the data dependency among symbols in the decoding flow, the CAVLC decoder requires large computation time, which dominates the overall decoder system performance. To expedite its decoding speed, the critical path in the CAVLC decoder is first analyzed and then reduced by forwarding the adaptive detection for succeeding symbols. With a shortened critical path, the CAVLC architecture is further divided into two segments, which can be easily implemented by a pipeline structure. Consequently, the overall performance is effectively improved. In the hardware implementation, a low-power combined LUT and a single output buffer have been adopted to reduce the area as well as power consumption without affecting the decoding performance. Experimental results show that the proposed architecture, surpassing other recent designs, can reduce power consumption by approximately 40% and achieve three times the decoding speed of the original decoding procedure suggested in the H.264 standard. The maximum frequency can be larger than 210 MHz, which can easily support the real-time requirement for resolutions higher than the HD1080 format", "keywords": ["context-based adaptive variable length coding ", "h.264/avc", "variable length coding"]} {"id": "kp20k_training_267", "title": "building database applications of virtual reality with x-vrml", "abstract": "A new method of building active database-driven virtual reality applications is presented. The term \"active\" is used to describe applications that allow server-side user interaction, dynamic composition of virtual scenes, access to on-line data, continuous visualization, and implementation of persistency. The use of the X-VRML language for building active applications of virtual reality is proposed. X-VRML is a high-level XML-based language that overcomes the main limitations of current virtual reality systems by providing convenient access to databases, object-orientation, parameterization, and imperative programming techniques. Applications of X-VRML include on-line data visualization, geographical information systems, scientific visualization, virtual games, and e-commerce applications such as virtual shops. 
In this paper, methods of accessing databases from X-VRML are described, architectures of X-VRML systems for different application domains are discussed, and examples of database applications of virtual reality implemented in X-VRML are presented", "keywords": ["data visualization", "activation", "applications", "use", "examples", "databases", "domain", "parameterization", "games", "java", " virtual reality ", "web3d", "paper", "access", "information system", "object-oriented", "compositing", "program", "mpeg-4", "method", "multimedia", "visualization", "systems", "architecture", "user interaction", "dynamic", "language", "xml", "implementation", "data", "vrml", "virtualization", "scientific visualization", "continuation", "server"]} {"id": "kp20k_training_268", "title": "Distributed H infinity filtering for sensor networks with switching topology", "abstract": "In this article, the distributed H-infinity filtering problem is investigated for a class of sensor networks under topology switching. The main purpose is to design a distributed H-infinity filter that allows one to regulate the sensors' working modes. Firstly, a switched system model is proposed to reflect the working mode change of the sensors. Then, a stochastic sequence is adopted to model the packet dropout phenomenon occurring in the channels from the plant to the networked sensors. By utilising the Lyapunov functional method and stochastic analysis, some sufficient conditions are established to ensure that the filtering error system is mean-square exponentially stable with a prescribed H-infinity performance level. Furthermore, the filter parameters are determined by solving a set of linear matrix inequalities (LMIs). Our results relate the decay rate of the filtering error system directly to the switching frequency of the topology and show the existence of such a distributed filter when the topology is not varying very frequently, which is helpful for sensor state regulation. Finally, the effectiveness of the proposed design method is demonstrated by two numerical examples", "keywords": ["distributed filtering", "sensor networks", "energy efficient", "switching topology", "exponentially stable", "lmis"]} {"id": "kp20k_training_269", "title": "The evolution of goal-based information modelling: literature review", "abstract": "Purpose - The first in a series on goal-based information modelling, this paper presents a literature review of two goal-based measurement methods. The second article in the series will build on this background to present an overview of some recent case-based research that shows the applicability of the goal-based methods for information modelling (as opposed to measurement). The third and concluding article in the series will present a new goal-based information model - the goal-based information framework (GbIF) - that is well suited to the task of documenting and evaluating organisational information flow. Design/methodology/approach - Following a literature review of the goal-question-metric (GQM) and goal-question-indicator-measure (GQIM) methods, the paper presents the strengths and weaknesses of goal-based approaches. Findings - The literature indicates that the goal-based methods are both rigorous and adaptable. With over 20 years of use, goal-based methods have achieved demonstrable and quantifiable results in both practitioner and academic studies. The downside of the methods is the potential expense and the \"expansiveness\" of goal-based models. 
The overheads of managing the goal-based process, from early negotiations on objectives and goals to maintaining the model (adding new goals, questions and indicators), could make the method unwieldy and expensive for organisations with limited resources. An additional challenge identified in the literature is the narrow focus of \"top-down\" (i.e. goal-based) methods. Since the methods limit the focus to a pre-defined set of goals and questions, the opportunity for discovery of new information is limited. Research limitations/implications - Much of the previous work on goal-based methodologies has been confined to software measurement contexts in larger organisations with well-established information gathering processes. Although the next part of the series presents goal-based methods outside of this native context, and within low maturity organisations, further work needs to be done to understand the applicability of these methods in the information science discipline. Originality/value - This paper presents an overview of goal-based methods. The next article in the series will present the method outside the native context of software measurement. With the universality of the method established, information scientists will have a new tool to evaluate and document organisational information flow", "keywords": ["information", "modelling"]} {"id": "kp20k_training_270", "title": "A communication reduction approach to iteratively solve large sparse linear systems on a GPGPU cluster", "abstract": "Finite Element Methods (FEM) are widely used in academia and industry, especially in the fields of mechanical engineering, civil engineering, aerospace, and electrical engineering. These methods usually convert partial differential equations into large sparse linear systems. For complex problems, solving these large sparse linear systems is a time consuming process. This paper presents a parallelized iterative solver for large sparse linear systems implemented on a GPGPU cluster. Traditionally, these problems do not scale well on GPGPU clusters. This paper presents an approach to reduce the communications between cluster compute nodes for these solvers. Additionally, computation and communication are overlapped to reduce the impact of data exchange. The parallelized system achieved a speedup of up to 15.3 times on 16 NVIDIA Tesla GPUs, compared to a single GPU. An analytical evaluation of the algorithm is conducted in this paper, and the analytical equations for predicting the performance are presented and validated", "keywords": ["iterative solver", "gpgpu cluster", "communication reduction", "sparse linear systems"]} {"id": "kp20k_training_271", "title": "transductive inference using multiple experts for brushwork annotation in paintings domain", "abstract": "Many recent studies perform annotation of paintings based on brushwork. In these studies the brushwork is modeled indirectly as part of the annotation of high-level artistic concepts such as the artist name using low-level texture. In this paper, we develop a serial multi-expert framework for explicit annotation of paintings with brushwork classes. In the proposed framework, each individual expert implements transductive inference by exploiting both labeled and unlabelled data. To minimize the problem of noise in the feature space, the experts select appropriate features based on their relevance to the brushwork classes. The selected features are utilized to generate several models to annotate the unlabelled patterns. 
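The single-node kernel that kp20k_training_270 distributes across a GPGPU cluster is a sparse Krylov iteration; a minimal reference sketch (conjugate gradient on a 1-D Poisson stencil, with the cluster-level communication reduction and overlap omitted):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 1000
# A sparse SPD system of the kind FEM/FDM discretizations produce.
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

x, info = cg(A, b)                       # conjugate gradient iteration
print(info, np.linalg.norm(A @ x - b))   # info == 0 means it converged
```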
The experts select the best performing model based on the Vapnik combined bound. The transductive annotation using multiple experts outperforms the conventional baseline method in annotating patterns with brushwork classes", "keywords": ["select", " framework ", "method", "brushwork", "inference", "space", "transductive inference", "painting", "annotation", "concept", "data", "relevance", "model", "paper", "feature", "noise", "feature selection", "class", "pattern"]} {"id": "kp20k_training_272", "title": "On the integration of equations of motion for particle-in-cell codes", "abstract": "An area-preserving implementation of the 2nd order Runge-Kutta integration method for equations of motion is presented. For forces independent of velocity the scheme possesses the same numerical simplicity and stability as the leapfrog method, and is not implicit for forces which do depend on velocity. It can therefore be easily applied where the leapfrog method in general cannot. We discuss the stability of the new scheme and test its performance in calculations of particle motion in three cases of interest. First, in the ubiquitous and numerically demanding example of nonlinear interaction of particles with a propagating plane wave, second, in the case of particle motion in a static magnetic field and, third, in a nonlinear dissipative case leading to a limit cycle. We compare computed orbits with exact orbits and with results from the leapfrog and other low-order integration schemes. Of special interest is the role of intrinsic stochasticity introduced by time differencing, which can destroy orbits of an otherwise exactly integrable system and therefore constitutes a restriction on the applicability of an integration scheme in such a context [A. Friedman, S.P. Auerbach, J. Comput. Phys. 93 (1991) 171]. In particular, we show that for a plane wave the new scheme proposed herein can be reduced to a symmetric standard map. This leads to the nonlinear stability condition Delta t omega(B) <= 1, where Delta t is the time step and omega(B) the particle bounce frequency. ", "keywords": ["equations of motion", "2nd order integration methods", "nonlinear oscillations"]} {"id": "kp20k_training_273", "title": "system support for mobile augmented reality services", "abstract": "Developing and deploying augmented reality (AR) services in pervasive computing environments is quite difficult because almost all current systems require heavy and bulky head-mounted displays (HMDs) and are based on inflexible centralized architectures for detecting service locations and superimposing AR images. We propose a light-weight mobile AR service framework that combines personal mobile devices that most people own nowadays, visual tags as inexpensive AR techniques, and mobile code that enables easy-to-deploy environments. Our framework enables developers to easily deploy mobile AR services in pervasive computing environments and users to interact with them in a way that is both practical and intuitive", "keywords": ["vidgets framework", "mobile augmented reality"]} {"id": "kp20k_training_274", "title": "Fabrication of the wireless systems for controlling movements of the electrical stimulus capsule in the small intestines", "abstract": "Diseases of the gastro-intestinal tract are becoming more prevalent. 
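The setting of kp20k_training_272 is easy to reproduce numerically. A plain-vanilla 2nd-order Runge-Kutta (midpoint) step for a particle in a propagating plane wave, a sketch of the test problem rather than the authors' area-preserving variant; the force parameters are illustrative:

```python
import numpy as np

# Generic RK2 (midpoint) step for dx/dt = v, dv/dt = F(x, t).
# Force: particle in a propagating plane wave, F = E0*sin(k*x - w*t).
E0, k, w = 1.0, 1.0, 0.5

def force(x, t):
    return E0 * np.sin(k * x - w * t)

def rk2_step(x, v, t, dt):
    k2v = force(x + 0.5 * dt * v, t + 0.5 * dt)   # midpoint force
    k2x = v + 0.5 * dt * force(x, t)              # midpoint velocity
    return x + dt * k2x, v + dt * k2v

# Bounce frequency w_B = sqrt(E0*k); the abstract's nonlinear stability
# condition Delta t * omega_B <= 1 bounds the usable time step.
dt = 1.0 / np.sqrt(E0 * k)
x, v, t = 0.1, 0.0, 0.0
for _ in range(1000):
    x, v = rk2_step(x, v, t, dt)
    t += dt
print(x, v)
```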
New techniques and devices, such as the wireless capsule endoscope and the telemetry capsule, which are able to measure the various signals of the digestive organs (temperature, pH, and pressure), have been developed for the observation of the digestive organs. These capsule devices, however, offer no means of moving or holding them in place. In order to make a swift diagnosis and to give proper medication, it is necessary to control the moving speed of the capsule. This paper presents a wireless system for the control of movements of an electrical stimulus capsule. This includes an electrical stimulus capsule which can be swallowed and an external transmitting control system. A receiver, a receiving antenna (small multi-loop), a transmitter, and a transmitting antenna (monopole) were designed and fabricated taking into consideration the MPE, power consumption, system size, signal-to-noise ratio and the modulation method. The wireless system, which was designed and implemented for the control of movements of the electrical stimulus capsule, was verified by in-vitro experiments which were performed on the small intestines of a pig. As a result, we found that when the small intestines are contracted by electrical stimuli, the capsule can move in the opposite direction, which means that the capsule can go up or down in the small intestines", "keywords": ["wireless capsule endoscope", "electrical stimulus capsule", "moving speed", "wireless system", "receiver", "transmitter", "small multi-loop", "in-vitro experiments"]} {"id": "kp20k_training_275", "title": "A CONTINUOUS WAVELET-BASED APPROACH TO DETECT ANISOTROPIC PROPERTIES IN SPATIAL POINT PROCESSES", "abstract": "A two-dimensional stochastic point process can be regarded as a random measure and thus represented as a (countable) sum of Dirac delta measures concentrated at some points. Integration with respect to the point process itself leads to the concept of the continuous wavelet transform of a point process. Applying then suitable translation, rotation and dilation operations through a non-unitary operator, we obtain a transformed point process which highlights main properties of the original point process. The choice of the mother wavelet is relevant and we thus conduct a detailed analysis proposing three two-dimensional mother wavelets. We use this approach to detect main directions present in the point process, and to test for anisotropy", "keywords": ["anisotropic point processes", "continuous wavelet transform", "curvature", "end-stopped mother wavelet", "mexican hat mother wavelet", "morlet mother wavelet", "energy density position representation", "random measure", "transformed point processes"]} {"id": "kp20k_training_276", "title": "ROBUST OBJECT TRACKING USING JOINT COLOR-TEXTURE HISTOGRAM", "abstract": "A novel object tracking algorithm is presented in this paper by using the joint color-texture histogram to represent a target and then applying it to the mean shift framework. Apart from the conventional color histogram features, the texture features of the object are also extracted by using the local binary pattern (LBP) technique to represent the object. The major uniform LBP patterns are exploited to form a mask for joint color-texture feature selection. Compared with the traditional color histogram based algorithms that use the whole target region for tracking, the proposed algorithm effectively extracts the edge and corner features in the target region, which characterize the target better and represent it more robustly. 
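Because a point process is a sum of Dirac measures, its continuous wavelet transform (kp20k_training_275) is just a sum of translated and dilated mother wavelets over the observed points. A minimal sketch with the 2-D Mexican hat, one common normalization, and rotation omitted:

```python
import numpy as np

def mexican_hat_2d(u):                       # isotropic 2-D Mexican hat
    r2 = np.sum(u**2, axis=-1)
    return (2.0 - r2) * np.exp(-r2 / 2.0)

def cwt_point_process(points, b, a):
    """W(b, a) = a^-1 * sum_i psi((p_i - b) / a) for point pattern {p_i}."""
    return mexican_hat_2d((points - b) / a).sum() / a

rng = np.random.default_rng(0)
points = rng.uniform(0, 10, (200, 2))        # a roughly uniform toy pattern
print(cwt_point_process(points, b=np.array([5.0, 5.0]), a=1.0))
```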
The experimental results validate that the proposed method greatly improves the tracking accuracy and efficiency with fewer mean shift iterations than standard mean shift tracking. It can robustly track the target in complex scenes, such as those with similar target and background appearance, in which traditional color-based schemes may fail", "keywords": ["object tracking", "mean shift", "local binary pattern", "color histogram"]} {"id": "kp20k_training_277", "title": "Quasi-Resonant Interconnects: A Low Power, Low Latency Design Methodology", "abstract": "Design and analysis guidelines for quasi-resonant interconnect networks (QRN) are presented in this paper. The methodology focuses on developing an accurate analytic distributed model of the on-chip interconnect and inductor to obtain both low power and low latency. Excellent agreement is shown between the proposed model and SpectraS simulations. The analysis and design of the inductor, insertion point, and driver resistance for minimum power-delay product is described. A case study demonstrates the design of a quasi-resonant interconnect, transmitting a 5 Gb/s data signal along a 5 mm line in a TSMC 0.18-mu m CMOS technology. As compared to classical repeater insertion, an average reduction of 91.1% and 37.8% is obtained in power consumption and delay, respectively. As compared to optical links, a reduction of 97.1% and 35.6% is observed in power consumption and delay, respectively", "keywords": ["latency", "on-chip inductors", "on-chip interconnects", "power dissipation", "resonance"]} {"id": "kp20k_training_278", "title": "Combining Hashing and Enciphering Algorithms for Epidemiological Analysis of Gathered Data", "abstract": "Objectives: Compiling individual records coming from different sources is necessary for multi-center studies. Legal aspects can be satisfied by implementing anonymization procedures. When using these procedures with a different key for each study it becomes almost impossible to link records from separate data collections. Methods: The originality of the method relies on the way the combination of hashing and enciphering techniques is performed: as in asymmetric encryption, two keys are used but the private key depends on the patient's identity. Results: The combination of hashing and enciphering techniques provides a great improvement in the overall security of the proposed scheme. Conclusion: This methodology makes stored data available for use in the field of public health, while respecting legal security requirements", "keywords": ["security", "patient identification", "encryption", "hashing"]} {"id": "kp20k_training_279", "title": "A personalized English learning recommender system for ESL students", "abstract": "This paper has developed an online personalized English learning recommender system capable of providing ESL students with reading lessons that suit their different interests and therefore increase the motivation to learn. The system, using content-based analysis, collaborative filtering, and data mining techniques, analyzes real students' reading data and generates recommender scores, based on which appropriate lessons are selected for the respective students. 
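The per-study keyed-hash anonymization described in kp20k_training_278 can be sketched in a few lines: records stay linkable within one study (same key) but not across studies. Key handling and field layout below are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib, hmac

STUDY_KEY = b"per-study-secret"        # a different key for each study

def anonymize(identity: str) -> str:
    """One-way pseudonym: HMAC of the identity under the study key."""
    return hmac.new(STUDY_KEY, identity.encode("utf-8"),
                    hashlib.sha256).hexdigest()

rec_a = anonymize("DOE|JOHN|1970-01-01")
rec_b = anonymize("DOE|JOHN|1970-01-01")
print(rec_a == rec_b)   # True: same study, same patient -> records can be compiled
```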
Its performance having been tracked over a period of one year, this recommender system has proved to be very useful in heightening ESL learners' motivation and interest in reading", "keywords": ["online learning", "learning system", "esl", "data mining", "association rules", "clustering", "recommender system"]} {"id": "kp20k_training_280", "title": "Graph-based hierarchical conceptual clustering", "abstract": "Hierarchical conceptual clustering has proven to be a useful, although under-explored, data mining technique. A graph-based representation of structural information combined with a substructure discovery technique has been shown to be successful in knowledge discovery. The SUBDUE substructure discovery system provides one such combination of approaches. This work presents SUBDUE and the development of its clustering functionalities. Several examples are used to illustrate the validity of the approach both in structured and unstructured domains, as well as to compare SUBDUE to the Cobweb clustering algorithm. We also develop a new metric for comparing structurally-defined clusterings. Results show that SUBDUE successfully discovers hierarchical clusterings in both structured and unstructured data", "keywords": ["clustering", "cluster analysis", "concept formation", "structural data", "graph match"]} {"id": "kp20k_training_281", "title": "An intelligent system employing an enhanced fuzzy c-means clustering model: Application in the case of forest fires", "abstract": "Fuzzy c-means is a well-established clustering algorithm. According to this approach, instead of having each data point Dpi=(X,Y) belonging only to a specific cluster in a crisp manner, each Dpi belongs to all of the determined clusters with a different degree of membership. In this way cluster overlapping is allowed. This research effort enhances the fuzzy c-means model in an intelligent manner, employing a flexible fuzzy termination criterion. The enhanced fuzzy c-means clustering algorithm performs several iterations before the proper centers of the clusters more or less stabilize, which means that their coordinates remain almost equal to the previous ones. In this way the algorithm is expanded to perform in a more flexible, human-like intelligent way, avoiding the chance of infinite loops and the performance of unnecessary iterations. A corresponding software system has been developed in the C++ programming language applying the extended model. The system has been applied for the clustering of the Greek forest departments according to their forest fire risk. Two risk factors were taken into consideration, namely the number of forest fires and the annual burned forested areas. The design and the development of the innovative model-system and the results of its application are presented and discussed in this research paper", "keywords": ["extended fuzzy c-means clustering", "innovative fuzzy termination criterion", "forest fires", "forest fire risk clustering"]} {"id": "kp20k_training_282", "title": "Miniaturization of UWB Antennas and its Influence on Antenna-Transceiver Performance in Impulse-UWB Communication", "abstract": "In this paper, a co-design methodology and the effect of antenna miniaturization in an impulse UWB system/transceiver is presented. Modified small-size printed tapered monopole antennas (PTMA) are designed in different scaling sizes. In order to evaluate the performance and functionality of these antennas, the effect of each antenna is studied in a given impulse UWB system. 
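The termination idea in kp20k_training_281, iterating until the cluster centers "more or less stabilize", drops straight into a compact fuzzy c-means loop. A minimal numpy sketch with illustrative data and parameters:

```python
import numpy as np

def fcm(X, c=3, m=2.0, eps=1e-4, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    centers = np.full((c, X.shape[1]), np.inf)
    for _ in range(max_iter):
        Um = U ** m
        new_centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        if np.linalg.norm(new_centers - centers) < eps:   # centers stabilized
            break                               # flexible termination criterion
        centers = new_centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U

X = np.vstack([np.random.default_rng(s).normal(3 * s, 0.3, (50, 2)) for s in range(3)])
print(fcm(X)[0])    # three centers near (0,0), (3,3), (6,6)
```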
The UWB system includes an impulse UWB transmitter, and two kinds of UWB receivers are considered, one based on a correlation detection scheme and one on an energy detection scheme. A tunable low-power impulse UWB transmitter is designed and the benefit of co-designing it with the PTMA antenna is investigated for the 3.1-10.6 GHz band. A comparison is given between a 50 Ohm design and a co-designed version. Our antenna/transceiver co-design methodology shows improvement in both transmitter efficiency and whole system performance. The simulation results show that the PTMA antenna and its miniaturized geometries are suitable for UWB applications", "keywords": ["uwb antennas", "design methodology", "impulse radio", "transceiver", "ultra-wideband"]} {"id": "kp20k_training_283", "title": "INDUCED QUASI-ARITHMETIC UNCERTAIN LINGUISTIC AGGREGATION OPERATOR", "abstract": "Induced quasi-arithmetic aggregation operators are considered to aggregate uncertain linguistic information by using order inducing variables. We introduce the induced correlative uncertain linguistic aggregation operator with Choquet integral and we also present the induced uncertain linguistic aggregation operator by using the Dempster-Shafer theory of evidence. The special cases of the new proposed operators are investigated. Many existing linguistic aggregation operators are special cases of our new operators and more new uncertain linguistic aggregation operators can be derived from them. Decision making methods based on the new aggregation operators are proposed and architecture material supplier selection problems are presented to illustrate the feasibility and efficiency of the new methods", "keywords": ["choquet integral", "dempster-shafer theory", "uncertain linguistic variable", "aggregation operator", "decision making"]} {"id": "kp20k_training_284", "title": "On fuzzy congruence of a near-ring module", "abstract": "The aim of this paper is to introduce fuzzy submodule and fuzzy congruence of an R-module (Near-ring module), to obtain the correspondence between fuzzy congruences and fuzzy submodules of an R-module, to define quotient R-module of an R-module over a fuzzy submodule and to obtain correspondence between fuzzy congruences of an R-module and fuzzy congruences of quotient R-module over a fuzzy submodule of an R-module. ", "keywords": ["algebra", "r-module", "fuzzy submodule", "quotient module", "fuzzy congruence"]} {"id": "kp20k_training_285", "title": "Self-bounded controlled invariant subspaces in measurable signal decoupling with stability: Minimal-order feedforward solution", "abstract": "The structural properties of self-bounded controlled invariant subspaces are fundamental to the synthesis of a dynamic feedforward compensator achieving insensitivity of the controlled output to a disturbance input accessible for measurement, on the assumption that the system is stable or pre-stabilized by an inner feedback. The control system herein devised has several important features: i) minimum order of the feedforward compensator; ii) minimum number of unassignable dynamics internal to the feedforward compensator; iii) maximum number of dynamics, external to the feedforward compensator, arbitrarily assignable by a possible inner feedback. From the numerical point of view, the design method herein detailed does not involve any computation of eigenspaces, which may be critical for systems of high order. The procedure is first presented for left-invertible systems. 
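The crisp kernel of the Choquet-integral operators in kp20k_training_283 is the discrete Choquet integral with respect to a capacity mu; the induced, uncertain-linguistic layers sit on top of it. A minimal sketch with a made-up capacity:

```python
# Discrete Choquet integral of a value vector x w.r.t. a capacity mu
# (a monotone set function with mu(full set) = 1).
def choquet(x, mu):
    idx = sorted(range(len(x)), key=lambda i: x[i])   # sort values ascending
    total, prev = 0.0, 0.0
    for j, i in enumerate(idx):
        coalition = frozenset(idx[j:])     # criteria whose value >= x[i]
        total += (x[i] - prev) * mu[coalition]
        prev = x[i]
    return total

mu = {frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
print(choquet([0.6, 0.2], mu))   # 0.2*1.0 + (0.6-0.2)*0.3 = 0.32
```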
Then, it is extended to non-left-invertible systems by means of a simple, original, squaring-down technique", "keywords": ["geometric approach", "linear systems", "self-bounded controlled invariant subspaces", "measurable signal decoupling", "non-left-invertible systems"]} {"id": "kp20k_training_286", "title": "hypergraph-based multilevel matrix approximation for text information retrieval", "abstract": "In Latent Semantic Indexing (LSI), a collection of documents is often pre-processed to form a sparse term-document matrix, followed by a computation of a low-rank approximation to the data matrix. A multilevel framework based on hypergraph coarsening is presented which exploits the hypergraph that is canonically associated with the sparse term-document matrix representing the data. The main goal is to reduce the cost of the matrix approximation without sacrificing accuracy. Because coarsening by multilevel hypergraph techniques is a form of clustering, the proposed approach can be regarded as a hybrid of factorization-based LSI and clustering-based LSI. Experimental results indicate that our method achieves good improvement of the retrieval performance at a reduced cost", "keywords": ["multilevel hypergraph partitioning", "text information retrieval", "latent semantic indexing", "low-rank matrix approximation"]} {"id": "kp20k_training_287", "title": "Balanced paths in acyclic networks: Tractable cases and related approaches", "abstract": "Given a weighted acyclic network G and two nodes s and t in G, we consider the problem of computing k balanced paths from s to t, that is, k paths such that the difference in cost between the longest and the shortest path is minimized. The problem has several variants. We show that, whereas the general problem is solvable in pseudopolynomial time, both the arc-disjoint and the node-disjoint variants (i.e., the variants where the k paths are required to be arc-disjoint and node-disjoint, respectively) are strongly NP-Hard. We then address some significant special cases of such variants, and propose exact as well as approximate algorithms for their solution. The proposed approaches are also able to solve versions of the problem in which k origin-destination pairs are provided, and a set of k paths linking the origin-destination pairs has to be computed in such a way to minimize the difference in cost between the longest and the shortest path in the set. ", "keywords": ["layered networks", "balanced paths", "cost difference", "pseudopolynomial approaches"]} {"id": "kp20k_training_288", "title": "The ?-connected assignment problem", "abstract": "Given a graph and costs of assigning to each vertex one of k different colors, we want to find a minimum cost assignment such that no color q induces a subgraph with more than a given number (?q) of connected components. This problem arose in the context of contiguity-constrained clustering, but also has a number of other possible applications. We show the problem to be NP-hard. Nevertheless, we derive a dynamic programming algorithm that proves the case where the underlying graph is a tree to be solvable in polynomial time. Next, we propose mixed-integer programming formulations for this problem that lead to branch-and-cut and branch-and-price algorithms. Finally, we introduce a new class of valid inequalities to obtain an enhanced branch-and-cut. 
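The factorization step that the multilevel hypergraph coarsening of kp20k_training_286 accelerates is an ordinary truncated SVD of the term-document matrix. A toy reference sketch of that step alone:

```python
import numpy as np

A = np.array([[2, 1, 0, 0],      # rows = terms, columns = documents
              [1, 2, 0, 0],
              [0, 0, 1, 2],
              [0, 0, 2, 1]], dtype=float)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = 2                             # keep the k leading singular triplets
A_k = U[:, :k] * s[:k] @ Vt[:k]   # rank-k approximation of the data matrix
print(np.linalg.norm(A - A_k))    # approximation error
```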
Extensive computational experiments are reported", "keywords": ["assignment", "clustering", "cutting", "pricing", "integer programming"]} {"id": "kp20k_training_289", "title": "Stable Spaces for Real-time Clothing", "abstract": "We present a technique for learning clothing models that enables the simultaneous animation of thousands of detailed garments in real-time. This surprisingly simple conditional model learns and preserves the key dynamic properties of a cloth motion along with folding details. Our approach requires no a priori physical model, but rather treats training data as a \"black box.\" We show that the models learned with our method are stable over large time-steps and can approximately resolve cloth-body collisions. We also show that within a class of methods, no simpler model covers the full range of cloth dynamics captured by ours. Our method bridges the current gap between skinning and physical simulation, combining benefits of speed from the former with dynamic effects from the latter. We demonstrate our approach on a variety of apparel worn by male and female human characters performing a varied set of motions typically used in video games (e.g., walking, running, jumping, etc", "keywords": ["cloth animation", "character animation", "virtual reality", "cloth simulation", "video games"]} {"id": "kp20k_training_290", "title": "using topes to validate and reformat data in end-user programming tools", "abstract": "End-user programming tools offer no data types except \"string\" for many categories of data, such as person names and street addresses. Consequently, these tools cannot automatically validate or reformat these data. To address this problem, we have developed a user-extensible model for string-like data. Each \"tope\" in this model is a user-defined abstraction that guides the interpretation of strings as a particular kind of data. Specifically, each tope implementation contains software functions for recognizing and reformatting instances of that tope's kind of data. This makes it possible at runtime to distinguish between invalid data, valid data, and questionable data that could be valid or invalid. Once identified, questionable and/or invalid data can be double-checked and possibly corrected, thereby increasing the overall reliability of the data. Valid data can be automatically reformatted to any of the formats appropriate for that kind of data. To show the general applicability of topes, we describe new features that topes have enabled us to provide in four tools", "keywords": ["web macros", "data", "abstraction", "web applications", "spreadsheets", "end-user programming", "validation", "end-user software engineering"]} {"id": "kp20k_training_291", "title": "Rough Sets and the role of the monetary policy in financial stability (macroeconomic problem) and the prediction of insolvency in insurance sector (microeconomic problem", "abstract": "This paper addresses two questions related to financial stability. The first one is a macroeconomic problem in which we try to further investigate the role of monetary policy in explaining banking sector fragility and, ultimately, systemic banking crisis. It analyses a large sample of countries in the period 1981-1999. We find that the degree of central bank independence is one of the key variables to explain financial crisis. However, the effects of the degree of independence are not linear. 
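A tope (kp20k_training_290) pairs a recognizer that grades strings as valid, questionable, or invalid with reformatting functions. A toy sketch for one hypothetical kind of data (US-style phone numbers, chosen only as an example):

```python
import re

DIGITS = re.compile(r"\D")

def recognize(s: str) -> float:
    """Return 1.0 (valid), 0.5 (questionable), or 0.0 (invalid)."""
    d = DIGITS.sub("", s)
    if len(d) == 10:
        return 1.0
    if len(d) == 11 and d.startswith("1"):
        return 0.5        # might be valid with a country code: double-check
    return 0.0

def reformat(s: str, fmt="({0}) {1}-{2}") -> str:
    d = DIGITS.sub("", s)[-10:]
    return fmt.format(d[:3], d[3:6], d[6:])

print(recognize("412-268-3000"), reformat("412.268.3000"))
```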
Surprisingly, either a high degree of independence or a high degree of dependence is compatible with a situation of financial stability, while intermediate levels of independence are more likely associated with financial crisis. It seems that it is the uncertainty arising from an unclear allocation of monetary policy responsibilities that contributes to financial crisis episodes. The second one is a microeconomic problem: the prediction of insolvency in insurance companies. This question has been a concern of several parties, stemming from the perceived need to protect the general public and to minimize the associated costs, such as the effects on state insurance guaranty funds or the responsibilities for management and auditors. We have developed a bankruptcy prediction model for Spanish non-life insurance companies and the results obtained are very encouraging in comparison with previous analyses. This model could be used as an early warning system for supervisors in charge of the soundness of these entities and/or in charge of the financial system stability. Most methods applied in the past to tackle these two problems are techniques of a statistical nature, and the variables employed in these models do not usually satisfy statistical assumptions, which complicates the analysis. We propose an approach to undertake these questions based on Rough Set Theory", "keywords": ["rough sets", "financial stability", "central bank independence", "insolvency", "insurance companies"]} {"id": "kp20k_training_292", "title": "CLASSIFICATION OF SELF-DUAL CODES OF LENGTH 36", "abstract": "A complete classification of binary self-dual codes of length 36 is given", "keywords": ["self-dual code", "weight enumerator", "mass formula"]} {"id": "kp20k_training_293", "title": "Supporting pervasive computing applications with active context fusion and semantic context delivery", "abstract": "Future pervasive computing applications are envisioned to adapt their behaviors by utilizing various contexts of an environment and its users. Such context information may often be ambiguous and also heterogeneous, which makes the delivery of unambiguous context information to real applications extremely challenging. Thus, a significant challenge facing the development of realistic and deployable context-aware services for pervasive computing applications is the ability to deal with these ambiguous contexts. In this paper, we propose a resource optimized quality assured context mediation framework based on efficient context-aware data fusion and semantic-based context delivery. In this framework, contexts are first fused by an active fusion technique based on Dynamic Bayesian Networks and ontology, and further mediated using a composable ontological rule-based model with the involvement of users or application developers. The fused context data are then organized into an ontology-based semantic network together with the associated ontologies in order to facilitate efficient context delivery. Experimental results using SunSPOT and other sensors demonstrate the promise of this approach", "keywords": ["pervasive computing", "context awareness", "context fusion", "bayesian networks", "ontology", "sunspot"]} {"id": "kp20k_training_294", "title": "On computing the minimum 3-path vertex cover and dissociation number of graphs", "abstract": "The dissociation number of a graph G is the number of vertices in a maximum size induced subgraph of G with vertex degree at most 1. 
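The fusion step in kp20k_training_293 uses Dynamic Bayesian Networks; collapsed to a single time slice and two conditionally independent sensors, it is a naive-Bayes update over one context variable. A heavily simplified sketch with made-up likelihoods, not the paper's framework:

```python
prior = {"occupied": 0.5, "empty": 0.5}
# P(sensor reading | context); the two ambiguous sensors disagree here.
lik_motion = {"occupied": 0.7, "empty": 0.2}    # motion sensor fired
lik_sound  = {"occupied": 0.4, "empty": 0.6}    # sound sensor stayed silent

post = {c: prior[c] * lik_motion[c] * lik_sound[c] for c in prior}
z = sum(post.values())
print({c: round(p / z, 3) for c, p in post.items()})   # fused context belief
```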
A k-path vertex cover of a graph G is a subset S of vertices of G such that every path of order k in G contains at least one vertex from S. The minimum 3-path vertex cover is a dual problem to the dissociation number. For this problem, we present an exact algorithm with a running time of O*(1.5171^n) on a graph with n vertices. We also provide a polynomial time randomized approximation algorithm with an expected approximation ratio of 23/11 for the minimum 3-path vertex cover. ", "keywords": ["path vertex cover", "dissociation number", "approximation"]} {"id": "kp20k_training_295", "title": "Interval multiplicative transitivity for consistency, missing values and priority weights of interval fuzzy preference relations", "abstract": "In this paper, the concept of multiplicative transitivity of a fuzzy preference relation, as defined by Tanino [T. Tanino, Fuzzy preference orderings in group decision-making, Fuzzy Sets and Systems 12 (1984) 117-131], is extended to discover whether an interval fuzzy preference relation is consistent or not, and to derive the priority vector of a consistent interval fuzzy preference relation. We achieve this by introducing the concept of interval multiplicative transitivity of an interval fuzzy preference relation and show that, by solving numerical examples, the test of consistency and the weights derived by the simple formulas based on the interval multiplicative transitivity produce the same results as those of linear programming models proposed by Xu and Chen [Z.S. Xu, J. Chen, Some models for deriving the priority weights from interval fuzzy preference relations, European Journal of Operational Research 184 (2008) 266-280]. In addition, by taking advantage of interval multiplicative transitivity of an interval fuzzy preference relation, we put forward two approaches to estimate missing value(s) of an incomplete interval fuzzy preference relation, and present numerical examples to illustrate these two approaches", "keywords": ["interval multiplicative transitivity", "interval fuzzy preference relation", "consistency", "missing values", "priority vector"]} {"id": "kp20k_training_296", "title": "An O(n log n) algorithm for finding a shortest central link segment", "abstract": "A central link segment of a simple n-vertex polygon P is a segment s inside P that minimizes the quantity max_{x in P} min_{y in s} d_L(x, y), where d_L(x, y) is the link distance between points x and y of P. In this paper we present an O(n log n) algorithm for finding a central link segment of P. This generalizes previous results for finding an edge or a segment of P from which P is visible. Moreover, in the same time bound, our algorithm finds a central link segment of minimum length. Constructing a central link segment has applications to the problems of finding an optimal robot placement in a simply connected polygonal region and determining the minimum value k for which a given polygon is k-visible from some segment", "keywords": ["algorithm design and analysis", "computational geometry", "link distance", "simple polygon", "shortest segment"]} {"id": "kp20k_training_297", "title": "Deconstructing switch-reference", "abstract": "This paper develops a new view on switch-reference, a phenomenon commonly taken to involve a morphological marker on a verb indicating whether the subject of this verb is coreferent with or disjoint from the subject of another verb. 
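The duality noted in kp20k_training_294 gives a direct correctness check: after deleting a 3-path vertex cover S, no path on 3 vertices remains, i.e. every remaining vertex has degree at most 1. A brute-force sketch for tiny graphs (the paper's O*(1.5171^n) algorithm is far smarter):

```python
from itertools import combinations

def min_p3_cover(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    for size in range(len(vertices) + 1):        # smallest covers first
        for S in combinations(vertices, size):
            S = set(S)
            rest = [v for v in vertices if v not in S]
            # P3-free <=> max degree <= 1 in the remaining graph
            if all(len(adj[v] - S) <= 1 for v in rest):
                return S
    return set(vertices)

# A 5-cycle: its minimum 3-path vertex cover has size 2.
print(min_p3_cover(range(5), [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))
```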
I propose a new structural source of switch-reference marking, which centers around coordination at different heights of the clausal structure, coupled with distinct morphological realizations of the syntactic coordination head. Conjunction of two VPs has two independent consequences: First, only a single external argument is projected; second, the coordinator head is realized by some marker A (the same subject marker). Conjunction of two vPs, by contrast, leads to projection of two independent external arguments and a different realization of the coordination by a marker B (the different subject marker). The hallmark properties of this analysis are that (i) subject identity or disjointness is only indirectly tied to the switch-reference markers, furnishing a straightforward account of cases where this correlation breaks down; (ii) switch-reference does not operate across fully developed clauses, which accounts for the widely observed featural defectiveness of switch-reference clauses; (iii) same subject and different subject constructions differ in their syntactic structure, thus accommodating cases where the choice of the switch-reference markers has an impact on event structure. The analysis is mainly developed on the basis of evidence from the Mexican language Seri, the Papuan language Amele, and the North-American language Kiowa", "keywords": ["coordination", "clause linkage", "reference tracking", "distributed morphology", "event semantics", "verbal projections"]} {"id": "kp20k_training_298", "title": "An optimized parallel LSQR algorithm for seismic tomography", "abstract": "The LSQR algorithm developed by Paige and Saunders (1982) is considered one of the most efficient and stable methods for solving large, sparse, and ill-posed linear (or linearized) systems. In seismic tomography, the LSQR method has been widely used in solving linearized inversion problems. As the amount of seismic observations increase and tomographic techniques advance, the size of inversion problems can grow accordingly. Currently, a few parallel LSQR solvers are presented or available for solving large problems on supercomputers, but the scalabilities are generally weak because of the significant communication cost among processors. In this paper, we present the details of our optimizations on the LSQR code for, but not limited to, seismic tomographic inversions. The optimizations we have implemented to our LSQR code include: reordering the damping matrix to reduce its band-width for simplifying the communication pattern and reducing the amount of communication during calculations; adopting sparse matrix storage formats for efficiently storing and partitioning matrices; using the MPI I/O functions to parallelize the data reading and result writing processes; providing different data partition strategies for efficiently using computational resources. A large seismic tomographic inversion problem, the full-3D waveform tomography for Southern California, is used to explain the details of our optimizations and examine the performance on the Yellowstone supercomputer at the NCAR-Wyoming Supercomputing Center (NWSC). 
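(As a single-process reference for the damped LSQR solve that kp20k_training_298 distributes with MPI, SciPy's implementation of Paige and Saunders' algorithm can be called directly; the problem below is a random sparse stand-in, not seismic data.)

```python
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sprandom(500, 200, density=0.01, format="csr", random_state=0)
b = rng.normal(size=500)

result = lsqr(A, b, damp=0.1)      # damped LSQR, Paige & Saunders (1982)
x, istop, itn = result[0], result[1], result[2]
print(istop, itn, np.linalg.norm(A @ x - b))
```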
The results showed that the required wall time of our code for the same inversion problem is much less than that of the LSQR solver from the PETSc library (Balay et al., 1997", "keywords": ["lsqr algorithm", "tomographic inversion", "mpi", "computational seismology", "inverse problems", "parallel scientific computing"]} {"id": "kp20k_training_299", "title": "on computer-assisted classification of coupled integrable equations", "abstract": "We show how the triangularization method of the second author can be successfully applied to the problem of classification of homogeneous coupled integrable equations. The classifications rely on the recent algorithm developed by the first author that requires solving 17 systems of polynomial equations. We show that these systems can be completely resolved in the case of coupled Korteweg-de Vries, Sawada-Kotera and Kaup-Kupershmidt-type equations", "keywords": ["generalized symmetries", "integrable pdes", "polynomial systems", "triangular decompositions", "mathematical physics"]} {"id": "kp20k_training_300", "title": "A novel method for fingerprint verification that approaches the problem as a two-class pattern recognition problem", "abstract": "We present a system for fingerprint verification that approaches the problem as a two-class pattern recognition problem. The distances of the test fingerprint to the reference fingerprints are normalized by the corresponding mean values obtained from the reference set, to form a five-dimensional feature vector. This feature vector is then projected onto a one-dimensional Karhunen-Loeve space and then classified into one of the two classes (genuine or impostor", "keywords": ["fingerprint verification", "support vector machine"]} {"id": "kp20k_training_301", "title": "The uncovering of hidden structures by Latent Semantic Analysis", "abstract": "Latent Semantic Analysis (LSA) is a well-known method for information retrieval. It has also been applied as a model of cognitive processing and word-meaning acquisition. This dual importance of LSA derives from its capacity to modulate the meaning of words by contexts, dealing successfully with polysemy and synonymy. The underlying reasons that make the method work are not clear enough. We propose that the method works because it detects an underlying block structure (the blocks corresponding to topics) in the term-by-document matrix. In real cases this block structure is hidden because of perturbations. We propose that the correct explanation for LSA must be sought in the structure of singular vectors rather than in the profile of singular values. Using the Perron-Frobenius theory we show that the presence of disjoint blocks of documents is marked by sign-homogeneous entries in the vectors corresponding to the documents of one block and zeros elsewhere. In the case of nearly disjoint blocks, perturbation theory shows that if the perturbations are small, the zeros in the leading vectors are replaced by small numbers (pseudo-zeros). Since the singular values of each block might be very different in magnitude, their order does not mirror the order of blocks. 
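The verification pipeline of kp20k_training_300 is short enough to sketch end to end: a 5-dimensional vector of mean-normalized distances, projected onto the leading Karhunen-Loeve (principal) axis and thresholded. The data below are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
genuine  = rng.normal(0.8, 0.1, (100, 5))   # normalized distances, genuine tries
impostor = rng.normal(1.5, 0.2, (100, 5))
train = np.vstack([genuine, impostor])

mean = train.mean(axis=0)
w = np.linalg.eigh(np.cov((train - mean).T))[1][:, -1]  # leading KL axis

score = lambda v: float((v - mean) @ w)                 # 1-D projection
thr = 0.5 * (score(genuine.mean(0)) + score(impostor.mean(0)))
g_side = score(genuine.mean(0)) - thr                   # which side is genuine
probe = rng.normal(0.8, 0.1, 5)
print("genuine" if (score(probe) - thr) * g_side > 0 else "impostor")
```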
When the norms of the blocks are similar, LSA works fine, but we propose that when the topics have different sizes, the usual procedure of selecting the first k singular triplets (k being the number of blocks) should be replaced by a method that selects the perturbed Perron vectors for each block", "keywords": ["perron-frobenius theory", "perturbation theory", "lsa", "information search and retrieval"]} {"id": "kp20k_training_302", "title": "computing monodromy groups defined by plane algebraic curves", "abstract": "We present a symbolic-numeric method to compute the monodromy group of a plane algebraic curve viewed as a ramified covering space of the complex plane. Following the definition, our algorithm is based on analytic continuation of algebraic functions above paths in the complex plane. Our contribution is threefold: first, we show how to use a minimum spanning tree to minimize the length of paths; then, we propose a strategy that gives a good compromise between the number of steps and the truncation orders of Puiseux expansions, obtaining for the first time a complexity result about the number of steps; finally, we present an efficient numerical-modular algorithm to compute Puiseux expansions above critical points, which is a non-trivial task", "keywords": ["algebraic curves", "riemann surfaces", "symbolic-numeric computation", "monodromy"]} {"id": "kp20k_training_303", "title": "Stone-like representation theorems and three-valued filters in R-0- algebras (nilpotent minimum algebras", "abstract": "Nilpotent minimum algebras (NM-algebras) are the algebraic counterpart of a formal deductive system where conjunction is modeled by the nilpotent minimum t-norm, a logic also independently introduced by Guo-Jun Wang in the mid 1990s. Such algebras are to this logic just what Boolean algebras are to the classical propositional logic. In this paper, by introducing respectively the Stone topology and a three-valued fuzzy Stone topology on the set of all maximal filters in an NM-algebra, we first establish two analogues for an NM-algebra of the well-known Stone representation theorem for a Boolean algebra, which state that the Boolean skeleton of an NM-algebra is isomorphic to the algebra of all clopen subsets of its Stone space and the three-valued skeleton is isomorphic to the algebra of all clopen fuzzy subsets of its three-valued fuzzy Stone space, respectively. Then we introduce the notions of Boolean filter and of three-valued filter in an NM-algebra, and finally we prove that three-valued filters and closed subsets of the Stone space of an NM-algebra are in one-to-one correspondence and Boolean filters uniquely correspond to closed subsets of the subspace consisting of all ultrafilters. ", "keywords": ["non-classical logics", "nilpotent minimum", "finite square intersection property", "prime ideal theorem", "maximal filter", "stone representation theorem"]} {"id": "kp20k_training_304", "title": "An adaptive learning scheme for load balancing with zone partition in multi-sink wireless sensor network", "abstract": "In much research on load balancing in multi-sink WSNs, sensors usually choose the nearest sink as the destination for sending data. However, in WSN, events often occur in a specific area. If all sensors in this area follow the nearest-sink strategy, the sensors around the nearest sink, called the hotspot, will exhaust their energy early. This means that the sink is isolated from the network early and a number of routing paths are broken. 
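The block-structure claim of kp20k_training_301 is easy to verify numerically: for a term-by-document matrix with two disjoint topic blocks plus a small perturbation, each leading right singular vector is sign-homogeneous on one block and near zero (pseudo-zeros) elsewhere. A toy demonstration:

```python
import numpy as np

A = np.zeros((6, 6))
A[:3, :4] = 2.0          # topic 1: terms 0-2, documents 0-3 (larger norm)
A[3:, 4:] = 1.0          # topic 2: terms 3-5, documents 4-5
A += 0.01 * np.random.default_rng(0).random((6, 6))   # small perturbation

U, s, Vt = np.linalg.svd(A)
np.set_printoptions(precision=2, suppress=True)
print(Vt[0])   # supported on documents 0-3, pseudo-zeros on 4-5
print(Vt[1])   # supported on documents 4-5
```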
In this paper, we propose an adaptive learning scheme for load balancing in multi-sink WSN. The agent in a centralized mobile anchor with a directional antenna is introduced to adaptively partition the network into several zones according to the residual energy of hotspots around sink nodes. In addition, machine learning is applied to the mobile anchor to make it adaptable to any traffic pattern. Through interactions with the environment, the agent can discover a near-optimal control policy for the movement of the mobile anchor. The policy can achieve minimization of the residual energy's variance among the sinks, which prevents the early isolation of sinks and prolongs the network lifetime", "keywords": ["adaptive learning", "reinforcement learning problem", "load balancing", "multi-sink wireless sensor network", "q-learning based adaptive zone partition scheme"]} {"id": "kp20k_training_305", "title": "interactive visual tools to explore spatio-temporal variation", "abstract": "CommonGIS is a developing software system for exploratory analysis of spatial data. It includes a multitude of tools applicable to different data types and helping an analyst to find answers to a variety of questions. CommonGIS has been recently extended to support exploration of spatio-temporal data, i.e. temporally variant data referring to spatial locations. The set of new tools includes animated thematic maps, map series, value flow maps, time graphs, and dynamic transformations of the data. We demonstrate the use of the new tools by considering different analytical questions arising in the course of analysis of thematic spatio-temporal data", "keywords": ["animated maps", "temporal variation", "time-series spatial data", "information visualisation", "time-series analysis", "exploratory data analysis"]} {"id": "kp20k_training_306", "title": "Multiprocessor system-on-chip (MPSoC) technology", "abstract": "The multiprocessor system-on-chip (MPSoC) uses multiple CPUs along with other hardware subsystems to implement a system. A wide range of MPSoC architectures have been developed over the past decade. This paper surveys the history of MPSoCs to argue that they represent an important and distinct category of computer architecture. We consider some of the technological trends that have driven the design of MPSoCs. We also survey computer-aided design problems relevant to the design of MPSoCs", "keywords": ["configurable processors", "encoding", "hardware/software codesign", "multiprocessor", "multiprocessor system-on-chip "]} {"id": "kp20k_training_307", "title": "Statistical behavior of joint least-square estimation in the phase diversity context", "abstract": "The images recorded by optical telescopes are often degraded by aberrations that induce phase variations in the pupil plane. Several wavefront sensing techniques have been proposed to estimate aberrated phases. One of them is phase diversity, for which the joint least-square approach introduced by Gonsalves et al. is a reference method to estimate phase coefficients from the recorded images. In this paper, we rely on the asymptotic theory of Toeplitz matrices to show that Gonsalves' technique provides a consistent phase estimator as the size of the images grows. No comparable result is yielded by the classical joint maximum likelihood interpretation (e.g., as found in the work by Paxman et al.). 
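The learning machinery behind the zone-partition scheme of kp20k_training_304 is tabular Q-learning. A generic sketch of the update rule on a toy chain environment, which stands in for the (unspecified) network-partition state/action space:

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1
rng = np.random.default_rng(0)

def step(s, a):                      # reward only at the right end of the chain
    s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == n_states - 1 else 0.0)

s = 0
for _ in range(5000):
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])   # Q-learning update
    s = s2 if s2 != n_states - 1 else 0
print(Q)      # greedy policy: move right in every state
```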
Finally, our theoretical analysis is illustrated through simulated problems", "keywords": ["error analysis", "least-squares methods", "optical image processing", "parameter estimation", "phase diversity", "statistics", "toeplitz matrices"]} {"id": "kp20k_training_308", "title": "Integrated in silico approaches for the prediction of Ames test mutagenicity", "abstract": "The bacterial reverse mutation assay (Ames test) is a biological assay used to assess the mutagenic potential of chemical compounds. In this paper approaches for the development of an in silico mutagenicity screening tool are described. Three individual in silico models, which cover both structure activity relationship methods (SARs) and quantitative structure activity relationship methods (QSARs), were built using three different modelling techniques: (1) an in-house alert model, which uses a SAR approach where alerts are generated based on experts' judgements; (2) a kNN approach (k-Nearest Neighbours), which is a QSAR model where a prediction is given based on outcomes of its k chemical neighbours; (3) a naive Bayesian model (NB), which is another QSAR model, where a prediction is derived using a Bayesian formula through preselected identified informative chemical features (e.g., physico-chemical, structural descriptors). These in silico models were compared against two well-known alert models (DEREK and ToxTree) and also against three different consensus approaches (Categorical Bayesian Integration Approach (CBI), Partial Least Squares Discriminant Analysis (PLS-DA) and simple majority vote approach). By applying these integration methods on the validation sets it was shown that both integration models (PLS-DA and CBI) achieved better performance than any of the individual models or consensus obtained by simple majority rule. In conclusion, the recommendation of this paper is that when obtaining consensus predictions for Ames mutagenicity, approaches like PLS-DA or CBI should be the first choice for the integration as compared to a simple majority vote approach", "keywords": ["ames", "qsar", "sar", "admet", "in silico models"]} {"id": "kp20k_training_309", "title": "Visualization and clustering of categorical data with probabilistic self-organizing map", "abstract": "This paper introduces a self-organizing map dedicated to clustering, analysis and visualization of categorical data. Usually, when dealing with categorical data, topological maps use an encoding stage: categorical data are changed into numerical vectors and traditional numerical algorithms (SOM) are run. In the present paper, we propose a novel probabilistic formalism of the Kohonen map dedicated to categorical data where neurons are represented by probability tables. We do not need to use any coding to encode variables. We evaluate the effectiveness of our model in four examples using real data. Our experiments show that our model provides a good quality of results when dealing with categorical data", "keywords": ["probabilistic self-organizing map", "categorical variables", "visualization", "em algorithm"]} {"id": "kp20k_training_310", "title": "Stiffness analysis of parallelogram-type parallel manipulators using a strain energy method", "abstract": "Stiffness analysis of a general PTPM using an algebraic method. Result comparison between the proposed method and a finite element analysis method. 
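The simple-majority baseline that the consensus methods of kp20k_training_308 are reported to beat is one line of logic; the predictor outputs below are made up for illustration:

```python
def majority_vote(predictions):
    """predictions: list of 0/1 calls from the individual in silico models."""
    return int(sum(predictions) >= (len(predictions) + 1) // 2)

alert_model, knn_model, nb_model = 1, 0, 1     # hypothetical calls per compound
print(majority_vote([alert_model, knn_model, nb_model]))   # -> 1 (mutagenic)
```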
A new stiffness index relating the stiffness property to the wrench experienced in a task", "keywords": ["stiffness analysis", "parallelogram-type parallel manipulator", "strain energy method", "algebraic method", "stiffness index"]} {"id": "kp20k_training_311", "title": "Simulation of natural and social process interactions - An example from Bronze Age Mesopotamia", "abstract": "New multimodel simulations of Bronze Age Mesopotamian settlement system dynamics, using advanced object-based simulation frameworks, are addressing fine-scale interaction of natural processes (crop growth, hydrology, etc.) and social processes (kinship-driven behaviors, farming and herding practices, etc.) on a daily basis across multi-generational model runs. Key components of these simulations are representations of initial settlement populations that are demographically and socially plausible, and detailed models of social mechanisms that can produce and maintain realistic textures of social structure and dynamics over time. The simulation engine has broad applicability and is also being used to address modern problems such as agroeconomic sustainability in Southeast Asia. This article describes the simulation framework and presents results of initial studies, highlighting some social system representations", "keywords": ["multimodel", "simulations", "agent-based", "holistic", "environment", "social", "interaction"]} {"id": "kp20k_training_312", "title": "Newton-Like Dynamics and Forward-Backward Methods for Structured Monotone Inclusions in Hilbert Spaces", "abstract": "In a Hilbert space setting we introduce dynamical systems, which are linked to Newton and Levenberg-Marquardt methods. They are intended to solve, by splitting methods, inclusions governed by structured monotone operators M=A+B, where A is a general maximal monotone operator, and B is monotone and locally Lipschitz continuous. Based on the Minty representation of A as a Lipschitz manifold, we show that these dynamics can be formulated as differential systems, which are relevant to the Cauchy-Lipschitz theorem, and involve separately B and the resolvents of A. In the convex subdifferential case, by using Lyapunov asymptotic analysis, we prove a descent minimizing property and weak convergence to equilibria of the trajectories. Time discretization of these dynamics gives algorithms combining Newton's method and forward-backward methods for solving structured monotone inclusions", "keywords": ["monotone inclusions", "newton method", "levenberg-marquardt regularization", "dissipative dynamical systems", "lyapunov analysis", "weak asymptotic convergence", "forward-backward algorithms", "gradient-projection methods"]} {"id": "kp20k_training_313", "title": "Damage identification of a target substructure with moving load excitation", "abstract": "This paper presents a substructural damage identification approach under moving vehicular loads based on a dynamic response reconstruction technique. The relationship between two sets of time response vectors from the substructure subject to moving loads is formulated with the transmissibility matrix based on the impulse response function in the wavelet domain. Only the finite element model of the intact target substructure and the measured dynamic acceleration responses from the target substructure in the damaged state are required. The time-histories of moving loads and interface forces on the substructure are not required in the proposed algorithm. 
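The best-known convex special case of the forward-backward iteration in kp20k_training_312 takes B as the gradient of a smooth quadratic and A as the subdifferential of the l1 norm, whose resolvent is soft thresholding. A scalar toy sketch of that iteration, not the paper's Newton-like dynamics:

```python
import numpy as np

def soft_threshold(x, t):            # resolvent (prox) of t * ||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

M = np.array([[3.0, 1.0], [1.0, 2.0]])   # SPD, so B(x) = Mx - q is monotone
q = np.array([1.0, 1.0])
lam, step = 0.1, 1.0 / np.linalg.norm(M, 2)

x = np.zeros(2)
for _ in range(200):
    # backward (resolvent of A) composed with forward (explicit step on B)
    x = soft_threshold(x - step * (M @ x - q), step * lam)
print(x)   # minimizer of 0.5 x^T M x - q^T x + lam * ||x||_1
```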
The dynamic response sensitivity-based method is adopted for the substructural damage identification with the local damage modeled as a reduction in the elemental stiffness factor. The adaptive Tikhonov regularization technique is employed to have an improved identification result when noise effect is included in the measurements. Numerical studies on a three-dimensional box-section girder bridge deck subject to a single moving force or a two-axle three-dimensional moving vehicle are conducted to investigate the performance of the proposed substructural damage identification approach. The simulated local damage can be identified with 5% noise in the measured data", "keywords": ["substructure", "damage identification", "response reconstruction", "transmissibility", "wavelet", "moving loads"]} {"id": "kp20k_training_314", "title": "randomized parallel communication (preliminary version", "abstract": "Using a simple finite degree interconnection network among n processors and a straightforward randomized algorithm for packet delivery, it is possible to deliver a set of n packets travelling to unique targets from unique sources in O(log n) expected time. The expected delivery time is in other words the depth of the interconnection graph. The b-way shuffle networks are examples of such. This represents a crude analysis of the transient response to a sudden but very uniform request load on the network. Variations in the uniformity of the load are also considered. Consider s_i packets with randomly chosen targets beginning at a source labelled i. The expected overall delay is then [equation] where the labelling is chosen so that s_1 >= s_2 >= ... These ideas can be used to gauge the asymptotic efficiency of various synchronous parallel algorithms which use such a randomized communications system. The only important assumption is that variations in the physical transmission time along any connection link are negligible in comparison to the amount of work done at a processor", "keywords": ["communication", "network", "use", "examples", "efficiency", "synchronization", "analysis", "parallel algorithm", "delay", "timing", "response", "worst case", "randomization", "comparisons", "linking", "randomized algorithm", "parallel communication", "average response time", "processor", "label", "systems", "parallel", "variation", "interconnect", "graph", "physical", "version", "connection", "interconnection network"]} {"id": "kp20k_training_315", "title": "feature selection for fast speech emotion recognition", "abstract": "In speech based emotion recognition, both acoustic features extraction and features classification are usually time consuming, which prevents the system from being real-time. In this paper, we propose a novel feature selection (FS) algorithm to filter out the low efficiency features towards fast speech emotion recognition. Firstly, each acoustic feature's discriminative ability, time consumption and redundancy are calculated. Then, we map the original feature space into a nonlinear one to select nonlinear features, which can exploit the underlying relationship among the original features. Thirdly, highly discriminative nonlinear features with low time consumption are initially preserved. Finally, a further selection is followed to obtain low redundant features based on these preserved features. The final selected nonlinear features are used in features' extraction and features' classification in our approach; we call them qualified features. 
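Returning to the Tikhonov step in the damage-identification abstract (kp20k_training_313): its regularized core is a standard penalized least-squares solve, sketched below in isolation. The adaptive choice of the regularization parameter (e.g. by L-curve or GCV) and the FE sensitivity matrix itself are outside this sketch:

```python
import numpy as np

def tikhonov(S, r, lam):
    """Solve S*d = r in the least-squares sense with penalty lam*||d||^2."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ r)

rng = np.random.default_rng(0)
S = rng.normal(size=(100, 10))                 # stand-in sensitivity matrix
d_true = rng.normal(size=10)                   # "true" damage parameters
r = S @ d_true + 0.05 * rng.normal(size=100)   # responses with noise added
print(np.linalg.norm(tikhonov(S, r, 1e-2) - d_true))   # small recovery error
```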
The experimental results demonstrate that recognition time consumption can be dramatically reduced in not only the extraction phase but also the classification phase. Moreover, competitive recognition accuracy has been observed in speech emotion recognition", "keywords": ["emotion recognition", "time consumption", "qualified features", "feature selection", "nonlinear space"]} {"id": "kp20k_training_316", "title": "Automated inspection planning of free-form shape parts by laser scanning", "abstract": "The inspection operation accounts for a large portion of manufacturing lead time, and its importance in quality control cannot be overemphasized. In recent years, due to the development of laser technology, the accuracy of laser scanners has been improved significantly so that they can be used in a production environment. They are noncontact-type measuring devices and usually have a scanning speed that is 50-100 times faster than that of coordinate measuring machines. This laser-scanning technology provides us with a platform that enables us to perform a 100% inspection of complicated shape parts. This research proposes algorithms that lead to the automation of laser scanner-based inspection operations. The proposed algorithms consist of three steps: firstly, all possible accessible directions at each sampled point on a part surface are generated considering constraints existing in a laser scanning operation. The constraints include satisfying the view angle, the depth of view, checking interference with a part, and avoiding collision with the probe. Secondly, the number of scans and the most desired direction for each scan are calculated. Finally, the scan path that gives the minimum scan time is generated. The proposed algorithms are applied to sample parts and the results are discussed", "keywords": ["automated inspection", "reverse engineering", "laser scanner"]} {"id": "kp20k_training_317", "title": "GBF: a grammar based filter for Internet applications", "abstract": "Observing network traffic is necessary for achieving different purposes such as system performance, network debugging and/or information security. Observations, as such, are obtained from low-level monitors that may record a large volume of relevant and irrelevant events. Thus adequate filters are needed to pass interesting information only. This work presents a multilayer system, GBF, that integrates both packet (low-level) and document (high-level) filters. Actually, the design of GBF is grammar-based so that it relies upon a set of context-free grammars to carry out various processes, especially the document reconstruction process. GBF consists of three layers: an acquisition layer, a packet filter layer, and a reconstruction layer. The performance of the reconstruction process is evaluated in terms of the time consumed during service separation and session separation tasks", "keywords": ["packet monitoring", "event filtering", "sniffing", "context free grammar", "document reconstruction"]} {"id": "kp20k_training_318", "title": "Enhanced particle swarm optimizer incorporating a weighted particle", "abstract": "This study proposes an enhanced particle swarm optimizer incorporating a weighted particle (EPSOWP) to improve the evolutionary performance for a set of benchmark functions. In the conventional particle swarm optimizer (PSO), there are two principal forces to guide the moving direction of each particle.
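A minimal sketch of the staged filtering logic in kp20k_training_315 above (keep discriminative, cheap features; then drop redundant ones). The correlation-based redundancy measure, the threshold values, and the greedy pass are illustrative assumptions; the paper's actual scoring functions and nonlinear mapping are not reproduced here.

import numpy as np

def qualified_features(X, disc_score, time_cost, disc_min=0.3, time_max=0.8, red_max=0.9):
    # Stage 1: keep features that are discriminative enough and cheap enough.
    kept = [j for j in range(X.shape[1])
            if disc_score[j] >= disc_min and time_cost[j] <= time_max]
    # Stage 2: greedily drop features highly correlated with ones already kept.
    qualified = []
    for j in kept:
        if all(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) <= red_max for k in qualified):
            qualified.append(j)
    return qualified

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 8))
X[:, 3] = X[:, 0] + 0.01 * rng.standard_normal(100)   # near-duplicate feature
disc = rng.uniform(0.2, 1.0, 8)
cost = rng.uniform(0.0, 1.0, 8)
print(qualified_features(X, disc, cost))   # feature 3 is dropped if 0 survives stage 1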
However, if the current particle lies too close to either the personal best particle or the global best particle, the velocity is mainly updated by only one term. As a result, the search step becomes smaller and the optimization of the swarm is likely to be trapped in a local optimum. To address this problem, we define a weighted particle for incorporation into the particle swarm optimization. Because the weighted particle has a better opportunity of getting closer to the optimal solution than the global best particle during the evolution, the EPSOWP is capable of guiding the swarm in a better direction to search for the optimal solution. Simulation results show the effectiveness of the EPSOWP, which outperforms various evolutionary algorithms on a selected set of benchmark functions. Furthermore, the proposed EPSOWP is applied to controller design and parameter identification for an inverted pendulum system as well as parameter learning of a neural network for function approximation to show its viability in solving practical design problems", "keywords": ["particle swarm optimization", "weighted particle", "convergence", "pid controller design", "inverted pendulum system", "neural network"]} {"id": "kp20k_training_319", "title": "Media access protocol for a coexisting cognitive femtocell network", "abstract": "Femtocell networks are widely deployed to extend cellular network coverage into indoor environments such as large office spaces and homes. Cognitive radio functionality can be implemented in femtocell networks based on an overlay mechanism under the assumption of a hierarchical access scenario. This study introduces a novel femtocell network architecture that is characterized by completely autonomous femtocell bandwidth access and a distributed media access control protocol for supporting data and real-time traffic. A detailed description of the architecture and media access protocol is presented. Furthermore, an in-depth theoretical analysis is performed on the proposed media access protocol using discrete-time Markov chain modeling to validate the effectiveness of the proposed protocol and architecture", "keywords": ["cognitive radio network", "dynamic spectrum access", "femtocell network", "media access control"]} {"id": "kp20k_training_320", "title": "Integrating computer animation and multimedia", "abstract": "Multimedia provides an immensely powerful tool for the dissemination of both information and entertainment. Current multimedia presentations consist of synchronised excerpts of media (such as sound, video and text) which are coordinated by an author to ensure a clear narrative is presented to the audience. However, each of the segments of the presentation consists of previously recorded footage; only the timing and synchronisation are dynamically constructed. The next logical advance for such systems is therefore to include the capability of generating material 'on-the-fly' in response to the actions of the audience. This paper describes a mechanism for using computer animation to generate this interactive material. Unlike previous animation techniques, the approach presented here is suitable for use in constructing a storyline which the author can control, but the user can influence.
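For orientation on kp20k_training_318 above, here is one way a third "weighted particle" attractor can enter the standard PSO velocity update. The abstract does not define the weighted particle precisely, so the fitness-weighted mean of personal bests used below, and all coefficients, are assumptions for illustration.

import numpy as np

def epsowp_step(pos, vel, pbest, pbest_fit, gbest, w=0.7, c1=1.4, c2=1.4, c3=1.4):
    # One velocity/position update with an extra weighted-particle term.
    # Assumed definition: fitness-weighted mean of the personal bests
    # (lower fitness = better = larger weight).
    n, d = pos.shape
    weights = 1.0 / (pbest_fit - pbest_fit.min() + 1e-9)
    weighted_particle = (weights[:, None] * pbest).sum(axis=0) / weights.sum()
    r1, r2, r3 = np.random.rand(3, n, d)
    vel = (w * vel
           + c1 * r1 * (pbest - pos)               # cognitive pull
           + c2 * r2 * (gbest - pos)               # social pull
           + c3 * r3 * (weighted_particle - pos))  # weighted-particle pull
    return pos + vel, vel

A full optimizer would wrap this step in a loop that re-evaluates fitness and updates pbest, pbest_fit, and gbest after every move.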
In order to allow such techniques to be used, we also present a multimedia authoring and playback system which incorporates interactive animation with existing media", "keywords": ["multimedia", "computer animation", "keyframing"]} {"id": "kp20k_training_321", "title": "an ontology for supporting communities of practice", "abstract": "In the context of the Palette project, aimed at enhancing all individual and organizational learning in Communities of Practice (CoPs), we are developing Knowledge Management (KM) services. Our approach is based on an ontology dedicated to CoPs and built from analysis of information sources about eleven CoPs available in the Palette project. This ontology aims both at modeling the members of the CoP and at annotating the CoP knowledge resources. The paper describes our method for building this ontology, its structure and contents, and it analyses our experience feedback from the cooperative building of this ontology", "keywords": ["community of practice", "knowledge management", "ontology"]} {"id": "kp20k_training_322", "title": "A set of neural tools for human-computer interactions: Application to the handwritten character recognition, and visual speech recognition problems", "abstract": "This paper presents a new technique of data coding and an associated set of homogenous processing tools for the development of Human Computer Interactions (HCI). The proposed technique facilitates the fusion of different sensorial modalities and simplifies the implementations. The coding takes into account the spatio-temporal nature of the signals to be processed in the framework of a sparse representation of data. Neural networks adapted to such a representation of data are proposed to perform the recognition tasks. Their development is illustrated by two examples: one of on-line handwritten character recognition; and the other of visual speech recognition", "keywords": ["human-machine interaction", "lipreading", "on-line handwritten character recognition", "spatio-temporal coding", "spatio-temporal neural networks", "spatio-temporal patterns", "spiking neurons", "visual speech recognition"]} {"id": "kp20k_training_323", "title": "impact of sub-optimal checkpoint intervals on application efficiency in computational clusters", "abstract": "As computational clusters rapidly grow in both size and complexity, system reliability and, in particular, application resilience have become increasingly important factors to consider in maintaining efficiency and providing improved computational performance over predecessor systems. One commonly used mechanism for providing application fault tolerance in parallel systems is the use of checkpointing. By making use of a multi-cluster simulator, we study the impact of sub-optimal checkpoint intervals on overall application efficiency. By using a model of a 1926-node cluster and workload statistics from Los Alamos National Laboratory to parameterize the simulator, we find that dramatically overestimating the AMTTI has a fairly minor impact on application efficiency while potentially having a much more severe impact on user-centric performance metrics such as queueing delay.
We compare and contrast these results with the trends predicted by an analytical model", "keywords": ["prediction", "simulation", "checkpointing", "resilience"]} {"id": "kp20k_training_324", "title": "An approach to a content-based retrieval of multimedia data", "abstract": "This paper presents a data model tailored for multimedia data representation, along with the main characteristics of a Multimedia Query Language that exploits the features of the proposed model. The model addresses data presentation, manipulation and content-based retrieval. It consists of three parts: a Multimedia Description Model, which provides a structural view of raw multimedia data, a Multimedia Presentation Model, and a Multimedia Interpretation Model which allows semantic information to be associated with multimedia data. The paper focuses on the structuring of a multimedia data model which provides support for content-based retrieval of multimedia data. The Query Language is an extension of a traditional query language which allows restrictions to be expressed on features, concepts, and the structural aspects of the objects of multimedia data and the formulation of queries with imprecise conditions. The result of a query is an approximate set of database objects which partially match such a query", "keywords": ["multimedia information systems", "information storage and retrieval", "data modeling"]} {"id": "kp20k_training_325", "title": "Monte Carlo EM with importance reweighting and its applications in random effects models", "abstract": "In this paper we propose a new Monte Carlo EM algorithm to compute maximum likelihood estimates in the context of random effects models. The algorithm involves the construction of efficient sampling distributions for the Monte Carlo implementation of the E-step, together with a reweighting procedure that allows repeatedly using the same sample of random effects. In addition, we explore the use of stochastic approximations to speed up convergence once stability has been reached. Our algorithm is compared with that of McCulloch (1997). Extensions to more general problems are discussed", "keywords": ["importance sampling", "metropolis-hastings algorithm", "stochastic approximations"]} {"id": "kp20k_training_326", "title": "A perceptual approach for stereoscopic rendering optimization", "abstract": "The traditional way of stereoscopic rendering requires rendering the scene for the left and right eyes separately, which doubles the rendering complexity. In this study, we propose a perceptually-based approach for accelerating stereoscopic rendering. This optimization approach is based on the Binocular Suppression Theory, which claims that the overall percept of a stereo pair in a region is determined by the dominant image in the corresponding region. We investigate how the binocular suppression mechanism of the human visual system can be utilized for rendering optimization. Our aim is to identify the graphics rendering and modeling features that do not affect the overall quality of a stereo pair when simplified in one view. By combining the results of this investigation with the principles of visual attention, we infer that this optimization approach is feasible if the high-quality view has more intensity contrast. For this reason, we performed a subjective experiment, in which various representative graphical methods were analyzed.
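The trade-off studied in kp20k_training_323 above can be made tangible with the classic first-order checkpoint model (Young's approximation), which is standard in this literature though not necessarily the paper's simulator; the cost numbers below are invented for illustration.

import math

def young_optimal_interval(C, mtbf):
    # Young's first-order approximation: T_opt ~ sqrt(2 * C * MTBF).
    return math.sqrt(2.0 * C * mtbf)

def efficiency(T, C, mtbf):
    # Useful fraction of time: lose C/T to checkpoints, ~T/(2*MTBF) to rework.
    return max(0.0, 1.0 - C / T - T / (2.0 * mtbf))

C, MTBF = 5 * 60.0, 24 * 3600.0           # 5-minute checkpoints, 24 h MTBF
t_opt = young_optimal_interval(C, MTBF)   # ~2 h
for T in (t_opt / 4, t_opt, 4 * t_opt):   # under-, well-, over-estimated interval
    print(f"T = {T / 3600:5.2f} h  efficiency = {efficiency(T, C, MTBF):.3f}")

In this toy model a 4x error in the interval costs under ten points of efficiency, consistent with the qualitative finding quoted above that overestimates are relatively benign.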
The experimental results verified our hypothesis that a modification, applied on a single view, is not perceptible if it decreases the intensity contrast, and thus can be used for stereoscopic rendering", "keywords": ["stereoscopic rendering", "binocular vision", "binocular suppression", "perception"]} {"id": "kp20k_training_327", "title": "using traditional loop unrolling to fit an application on a new hybrid reconfigurable architecture", "abstract": "This paper presents a strategy to modify a sequential implementation of H.264/AVC motion estimation to run on a new reconfigurable architecture called RoSA. The modifications aim to provide more parallelism that will be exploited by the architecture. In the strategy presented in this paper we used traditional loop unrolling and profile information as techniques to modify the application and to generate a best-fit solution for the RoSA architecture", "keywords": ["stream-based", "reconfigurable architecture", "optimization", "performance"]} {"id": "kp20k_training_328", "title": "Evaluating fluid semantics for passive stochastic process algebra cooperation", "abstract": "Fluid modelling is a next-generation technique for analysing massive performance models. Passive cooperation is a popular cooperation mechanism frequently used by performance engineers. Therefore, having an accurate translation of passive cooperation into a fluid model is of direct practical application. We compare different existing styles of fluid model translations of passive cooperation in a stochastic process algebra and show how the previous model can be improved upon significantly. We evaluate the new passive cooperation fluid semantics and show that the first-order fluid model is a good approximation to the dynamics of the underlying continuous-time Markov chain. We show that in a family of possible translations to the fluid model, there is an optimal translation which can be expected to introduce the least error. Finally, we use these new techniques to show how the scalability of a passively-cooperating distributed software architecture could be assessed", "keywords": ["stochastic process algebra", "fluid approximation", "passive cooperation"]} {"id": "kp20k_training_329", "title": "using new media to improve self-help for clients and staff", "abstract": "One of the most common frustrations for any person looking for technical support is actually finding effective technical support. Even if a solution seems clear, it can be misunderstood if the vernacular is not just right. A large part of a successful support call involves being able to determine the actual problem based on the information the client provides. Help desk analysts must have the ability to translate \"non-tech\" descriptions to identify a problem in technical terms and then communicate a solution using vernacular the client can understand. This process is always a little different. If we aim to be successful analysts, we must speak different \"languages\" in order to help our clients. Based on this logic, it stands to reason that our self-help documentation must do the same. Providing a variety of methods to get self-help ensures a message will be received by a wider audience. In the world of modern media, audiences are presented with many ways to consume information. This ensures the message is heard by the most people in a manner that is the most appealing and the most clear.
New methods of consuming information have become possible as the face of mainstream media has become democratized over the last few years. This is thanks largely to the fact that the tools needed to create and distribute content have become affordable and readily available to anyone with a bit of technical skill. Anyone with a laptop, a webcam and a little imagination can and does create content. Considering all of this, we asked ourselves, \"Why shouldn't we?\" We have found that creating content in new media is relatively easy and fun. Finding and creating new methods to deliver content positively engages and challenges our help desk team. Thinking about how to best use new media requires help desk analysts to rethink otherwise standardized and mundane processes and create fresh perspectives. The creation and production of new media establishes stronger ownership of procedures and process. We would like to share the following from our ongoing experiences with new media at our help desk: general issues we see with clients finding help; how creating new media creates stronger ownership and morale with staff; expanding the technical skills of help desk staff; how using new media improves our client experience; casting a wider net (ensuring a message gets to the most people); how we use new media and what we have done with it; and how to make your own video podcast in 1,345 easy steps", "keywords": ["self-help", "video podcast", "team building", "client support", "new media"]} {"id": "kp20k_training_330", "title": "A framework for preservation of cloud users data privacy using dynamic reconstruction of metadata", "abstract": "In the rising paradigm of cloud computing, attainment of sustainable levels of cloud users' trust in using cloud services is directly dependent on effective mitigation of its associated impending risks and resultant security threats. Among the various indispensable security services required to ensure effective cloud functionality leading to enhancement of users' confidence in using cloud offerings, those related to the preservation of cloud users' data privacy are significantly important and must be matured enough to withstand the imminent security threats, as emphasized in this research paper. This paper highlights the possibility of exploiting the metadata stored in the cloud's database in order to compromise the privacy of users' data items stored using a cloud provider's simple storage service. It then proposes a framework based on database schema redesign and dynamic reconstruction of metadata for the preservation of cloud users' data privacy. Using the sensitivity parameterization parent class membership of cloud database attributes, the database schema is modified using cryptographic as well as relational privacy preservation operations. At the same time, unaltered access to database files is ensured for the cloud provider using dynamic reconstruction of metadata for the restoration of the original database schema, when required.
The suitability of the proposed technique with respect to private cloud environments is ensured by keeping the formulation of its constituent steps well aligned with the recommendations proposed by various Standards Development Organizations working in this domain", "keywords": ["cloud computing", "private cloud", "ubuntu enterprise cloud eucalyptus", "privacy", "metadata"]} {"id": "kp20k_training_331", "title": "A systematic literature review on SOA migration", "abstract": "When Service Orientation was introduced as the solution for retaining and rehabilitating legacy assets, both researchers and practitioners proposed techniques, methods, and guidelines for SOA migration. With so much hype surrounding SOA, it is not surprising that the concept was interpreted in many different ways, and consequently, different approaches to SOA migration were proposed. Accordingly, soon there was an abundance of methods that were hard to compare and eventually adopt. Against this backdrop, this paper reports on a systematic literature review that was conducted to extract the categories of SOA migration proposed by the research community. We provide the state-of-the-art in SOA migration approaches, and discuss categories of activities carried out and knowledge elements used or produced in those approaches. From such categorization, we derive a reference model, called SOA migration frame of reference, that can be used for selecting and defining SOA migration approaches. As a co-product of the analysis, we shed light on how SOA migration is perceived in the field, which further points to promising future research directions. ", "keywords": ["migration", "service orientation", "systematic literature review", "knowledge management"]} {"id": "kp20k_training_332", "title": "Application of projection pursuit learning to boundary detection and deblurring in images", "abstract": "Projection pursuit learning networks (PPLNs) have been used in many fields of research but have not been widely used in image processing. In this paper we demonstrate how this highly promising technique may be used to connect edges and produce continuous boundaries. We also propose the application of PPLN to deblurring a degraded image when little or no a priori information about the blur is available. The PPLN was successful at developing an inverse blur filter to enhance blurry images. Theory and background information on projection pursuit regression (PPR) and PPLN are also presented", "keywords": ["boundary detection", "image deblurring", "projection pursuit regression", "projection pursuit learning networks"]} {"id": "kp20k_training_333", "title": "Learning to transform time series with a few examples", "abstract": "We describe a semisupervised regression algorithm that learns to transform one time series into another time series given examples of the transformation. This algorithm is applied to tracking, where a time series of observations from sensors is transformed to a time series describing the pose of a target. Instead of defining and implementing such transformations for each tracking task separately, our algorithm learns a memoryless transformation of time series from a few example input-output mappings. The algorithm searches for a smooth function that fits the training examples and, when applied to the input time series, produces a time series that evolves according to assumed dynamics. The learning procedure is fast and lends itself to a closed-form solution. 
It is closely related to nonlinear system identification and manifold learning techniques. We demonstrate our algorithm on the tasks of tracking RFID tags from signal strength measurements, recovering the pose of rigid objects, deformable bodies, and articulated bodies from video sequences. For these tasks, this algorithm requires significantly fewer examples compared to fully supervised regression algorithms or semisupervised learning algorithms that do not take the dynamics of the output time series into account", "keywords": ["semisupervised learning", "example-based tracking", "manifold learning", "nonlinear system identification"]} {"id": "kp20k_training_334", "title": "Almost periodic solutions to abstract semilinear evolution equations with Stepanov almost periodic coefficients", "abstract": "In this paper, almost periodicity of the abstract semilinear evolution equation u'(t) = A(t)u(t) + f(t, u(t)) with Stepanov almost periodic coefficients is discussed. We establish a new composition theorem for Stepanov almost periodic functions and, with its help, we study the existence and uniqueness of almost periodic solutions to the above semilinear evolution equation. Our results are new even for the case of A(t) ≡ A", "keywords": ["almost periodic", "stepanov almost periodic", "semilinear evolution equations", "banach space"]} {"id": "kp20k_training_335", "title": "phoenix-based clone detection using suffix trees", "abstract": "A code clone represents a sequence of statements that are duplicated in multiple locations of a program. Clones often arise in source code as a result of multiple cut/paste operations on the source, or due to the emergence of crosscutting concerns. Programs containing code clones can manifest problems during the maintenance phase. When a fault is found or an update is needed on the original copy of a code section, all similar clones must also be found so that they can be fixed or updated accordingly. The ability to detect clones becomes a necessity when performing maintenance tasks. However, if done manually, clone detection can be a slow and tedious activity that is also error prone. A tool that can automatically detect clones offers a significant advantage during software evolution. With such an automated detection tool, clones can be found and updated in less time. Moreover, restructuring or refactoring of these clones can yield better performance and modularity in the program. This paper describes an investigation into an automatic clone detection technique developed as a plug-in for Microsoft's new Phoenix framework. Our investigation finds function-level clones in a program using abstract syntax trees (ASTs) and suffix trees. An AST provides the structural representation of the code after the lexical analysis process. The AST nodes are used to generate a suffix tree, which allows analysis on the nodes to be performed rapidly. We use the same methods that have been successfully applied to find duplicate sections in biological sequences to search for matches on the suffix tree that is generated, which in turn reveal matches in the code", "keywords": ["software analysis", "suffix trees", "clone detection", "code clones"]} {"id": "kp20k_training_336", "title": "Slimeware: Engineering Devices with Slime Mold", "abstract": "The plasmodium of the acellular slime mold Physarum polycephalum is a gigantic single cell visible to the unaided eye. The cell shows a rich spectrum of behavioral patterns in response to environmental conditions.
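As a toy companion to kp20k_training_335 above: the suffix-tree search for duplicated stretches can be imitated at small scale with sorted suffixes (a suffix-array view), where adjacent sorted suffixes sharing a long common prefix mark clone candidates. Working over raw tokens rather than AST nodes, and the minimum clone length, are simplifying assumptions.

def find_clones(tokens, min_len=4):
    # Sort suffixes; adjacent sorted suffixes with a long common prefix
    # correspond to sequences occurring at least twice (clone candidates).
    order = sorted(range(len(tokens)), key=lambda i: tokens[i:])
    clones = set()
    for a, b in zip(order, order[1:]):
        k = 0
        while (a + k < len(tokens) and b + k < len(tokens)
               and tokens[a + k] == tokens[b + k]):
            k += 1
        if k >= min_len:
            clones.add(tuple(tokens[a:a + k]))
    return clones

code = "x = a + b ; y = a + b ; print ( x , y )".split()
print(find_clones(code))   # detects the duplicated '= a + b ;' stretch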
In a series of simple experiments we demonstrate how to make computing, sensing, and actuating devices from the slime mold. We show how to program living slime mold machines by configurations of repelling and attracting gradients and demonstrate the workability of the living machines on tasks of computational geometry, logic, and arithmetic", "keywords": ["parallel biological computers", "amorphous computers", "living technology", "slime mold"]} {"id": "kp20k_training_337", "title": "Automated aspect-oriented decomposition of process-control systems for ultra-high dependability assurance", "abstract": "This paper presents a method for decomposing process-control systems. This decomposition method is automated, meaning that a series of principles that can be evolved to support automated tools are given to help a designer decompose complex systems into a collection of simpler components. Each component resulting from the decomposition process can be designed and implemented independently of the other components. Also, these components can be tested or verified by the end-user independently of each other. Moreover, the system properties, such as safety, stability, and reliability, can be mathematically inferred from the properties of the individual components. These components are referred to as IDEAL (Independently Developable End-user Assessable Logical) components. This decomposition method is applied to a case study specified by the High-Integrity Systems group at Sandia National Labs, which involves the control of a future version of the Bay Area Rapid Transit (BART) system", "keywords": ["software decomposition", "dependability assurance", "process-control systems", "aspect-oriented modeling"]} {"id": "kp20k_training_338", "title": "Diffusion-Confusion Based Light-Weight Security for Item-RFID Tag-Reader Communication", "abstract": "In this paper we propose a challenge-response protocol called DCSTaR, which takes a novel approach to solving security issues that are specific to low-cost item-RFID tags. Our DCSTaR protocol is built upon light-weight primitives such as a 16-bit Random Number Generator, Exclusive-OR, and Cyclic Redundancy Check; utilizing these primitives, it also provides a simple Diffusion-Confusion cipher to encrypt the challenge and response from the tag to the RFID reader. As a result, our protocol achieves RFID tag-reader-server mutual authentication, communicating-data confidentiality and integrity, secure key-distribution and key-protection. It also provides an efficient way for consumers to verify whether tagged items are genuine or fake and to protect consumers' privacy while carrying tagged items", "keywords": ["rfid", "tag-reader communication security", "light-weight cryptography", "customer privacy", "diffusion-confusion cipher", "epcglobal class-1 gen-2"]} {"id": "kp20k_training_339", "title": "On solutions of functional-integral equations of Urysohn type on an unbounded interval", "abstract": "In this paper we establish the existence of solutions of functional-integral and quadratic Urysohn integral equations on the interval R_+ = [0, infinity). The proof technique applied in this paper is based on the concept of measure of noncompactness and the fixed point theorem. Some new results are given.
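kp20k_training_338 above names the tag-side primitives (16-bit RNG, XOR, CRC) but not the exact message flow, so the following challenge-response sketch is hypothetical: it only demonstrates how those primitives compose into a verifiable exchange. The key handling and response format are inventions, not the DCSTaR protocol.

import binascii
import secrets

def crc16(data: bytes) -> int:
    # CRC-CCITT via the standard library; a common 16-bit redundancy check.
    return binascii.crc_hqx(data, 0xFFFF)

def tag_respond(challenge: int, key: int) -> int:
    # Hypothetical tag side: diffuse the challenge with a CRC, confuse with XOR.
    diffused = crc16(challenge.to_bytes(2, "big"))
    return diffused ^ key

def reader_verify(challenge: int, response: int, key: int) -> bool:
    # Reader recomputes the expected response using the shared 16-bit key.
    return tag_respond(challenge, key) == response

key = secrets.randbelow(1 << 16)         # shared secret (illustrative)
challenge = secrets.randbelow(1 << 16)   # reader's fresh random challenge
print(reader_verify(challenge, tag_respond(challenge, key), key))   # True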
", "keywords": ["nonlinear integral equation", "measure of noncompactness", "fixed point theorem"]} {"id": "kp20k_training_340", "title": "a cultural probes study on video sharing and social communication on the internet", "abstract": "The focus of this article is the link between video sharing and interpersonal communication on the internet. Previous works on social television systems belong to two categories: 1) studies on how collocated groups of viewers socialize while watching TV, and 2) studies on novel Social TV applications (e.g. experimental set-ups) and devices (e.g. ambient displays) that provide technological support for TV sociability over a distance. The main shortcoming of those studies is that they have not considered the dominant contemporary method of Social TV. Early adopters of technology have been watching and sharing video online. We employed cultural probes in order to gain in-depth information about the social aspect of video sharing on the internet. Our sample consisted of six heavy users of internet video, watching an average of at least one hour of internet video a day. In particular, we explored how they are integrating video into their daily social communication practices. We found that internet video is shared and discussed with distant friends. Moreover, the results of the study indicate several opportunities and threats for the development of integrated mass and interpersonal communication applications and services", "keywords": ["cultural probes", "online communication", "user study", "internet video"]} {"id": "kp20k_training_341", "title": "Phenotypic Modulation of Vascular Smooth Muscle Cells", "abstract": "The smooth muscle myosin heavy chain (MHC) gene and its isoforms are excellent molecular markers that reflect smooth muscle phenotypes. The SMemb/Nonmuscle Myosin Heavy Chain B (NMHC-B) is a distinct MHC gene expressed predominantly in phenotypically modulated SMCs (synthetic-type SMC). To dissect the molecular mechanisms governing phenotypic modulation of SMCs, we analyzed the transcriptional regulatory mechanisms underlying expression of the SMemb gene. We previously reported two transcription factors, BTEB2/IKLF and Hex, which transactivate the SMemb gene promoter based on the transient reporter transfection assays. BTEB2/IKLF is a zinc finger transcription factor, whereas Hex is a homeobox protein. BTEB2/IKLF expression in SMCs is downregulated with vascular development in vivo but upregulated in cultured SMCs and in neointima in response to vascular injury after balloon angioplasty. BTEB2/IKLF and Hex activate not only the SMemb gene but also other genes activated in synthetic SMCs including plasminogen activator inhibitor-1 (PAI-1), iNOS, PDGF-A, Egr-1, and VEGF receptors. Mitogenic stimulation activates BTEB2/IKLF gene expression through MEK1 and Egr-1. Elevation of intracellular cAMP is also important in phenotypic modulation of SMCs, because the SMemb promoter is activated under cooperatively by cAMP-response element binding protein (CREB) and Hex", "keywords": ["vascular smooth muscle cells", "phenotypic modulation"]} {"id": "kp20k_training_342", "title": "Intent specifications: An approach to building human-centered specifications", "abstract": "This paper examines and proposes an approach to writing software specifications, based on research in systems theory, cognitive psychology, and human-machine interaction. 
The goal is to provide specifications that support human problem solving and the tasks that humans must perform in software development and evolution. A type of specification, called intent specifications, is constructed upon this underlying foundation", "keywords": ["requirements", "requirements specification", "safety-critical software", "software evolution", "human-centered specifications", "means-ends hierarchy", "cognitive engineering"]} {"id": "kp20k_training_344", "title": "An improved evaluation of ladder logic diagrams and Petri nets for the sequence controller design in manufacturing systems", "abstract": "Sequence controller designs play a key role in advanced manufacturing systems. Traditionally, the ladder logic diagram (LLD) has been widely applied to programmable logic controllers (PLC), while recently the Petri net (PN) has emerged as an alternative tool for the sequence control of complex systems. The evaluation of both approaches has become crucial and has thus attracted attention", "keywords": ["ladder logic diagrams", "petri nets", "plc", "sequence controllers", "manufacturing systems"]} {"id": "kp20k_training_345", "title": "Lightweight detection of node presence in MANETs", "abstract": "While mobility in the sense of node movement has been an intensively studied aspect of mobile ad hoc networks (MANETs), another aspect of mobility has not yet been subjected to systematic research: nodes may not only move around but also enter and leave the network. In fact, many proposed protocols for MANETs exhibit worst-case behavior when an intended communication partner is currently not present. Therefore, knowing whether a given node is currently present in the network can often help to avoid unnecessary overhead. In this paper, we present a solution to the presence detection problem. It uses a Bloom filter-based beaconing mechanism to aggregate and distribute information about the presence of network nodes. We describe the algorithm and discuss design alternatives. We assess the algorithm's properties both analytically and through simulation, and thereby underline the effectiveness and applicability of our approach", "keywords": ["presence detection", "mobile ad hoc networks", "manets", "soft state bloom filter"]} {"id": "kp20k_training_346", "title": "An integrated toolchain for model based functional safety analysis", "abstract": "We design a complete toolchain for integrating fault tolerance analysis into modeling. The goal of this work is to bridge the gap between the different specialized tools available. Having an integrated environment will reduce errors, ensure coherence and simplify analysis", "keywords": ["bayesian networks", "safety analysis", "model-based design", "functional testing"]} {"id": "kp20k_training_347", "title": "SCALE INVARIANT FEATURE MATCHING USING ROTATION-INVARIANT DISTANCE FOR REMOTE SENSING IMAGE REGISTRATION", "abstract": "Scale invariant feature transform (SIFT) has been widely used in image matching. But when SIFT is introduced in the registration of remote sensing images, the keypoint pairs which are expected to be matched are often assigned two different values of main orientation owing to the significant difference in image intensity between remote sensing image pairs, and therefore a lot of incorrect matches of keypoints will appear. This paper presents a method using rotation-invariant distance instead of Euclidean distance to match the scale invariant feature vectors associated with the keypoints.
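A minimal Bloom filter sketch of the aggregation idea in kp20k_training_345 above: each node inserts the IDs it knows to be present, and beacons from different nodes merge with a bitwise OR. The filter size, hash construction, and the absence of soft-state aging are simplifying assumptions.

import hashlib

class BloomFilter:
    # Tiny Bloom filter: k hashed bit positions per item, one shared bit array.
    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def __contains__(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

    def merge(self, other):
        # Aggregate another node's beacon: set union = bitwise OR.
        self.bits |= other.bits

a, b = BloomFilter(), BloomFilter()
a.add("node-17"); b.add("node-42")
a.merge(b)
print("node-42" in a, "node-99" in a)   # True, (almost certainly) False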
In the proposed method, the feature vectors are reorganized into feature matrices, and the fast Fourier transform (FFT) is introduced to compute the rotation-invariant distance between the matrices. Many more correct matches are obtained by the proposed method since the rotation-invariant distance is independent of the main orientation of the keypoints. Experimental results indicate that the proposed method improves the match performance compared to other state-of-the-art methods in terms of correct match rate and aligning accuracy", "keywords": ["remote sensing image", "image registration", "sift", "main orientation", "feature matching", "rotation-invariance distance"]} {"id": "kp20k_training_348", "title": "computer-related gender differences", "abstract": "Computer-related gender differences are examined using survey responses from 651 college students. Issues studied include gender differences regarding interest and enjoyment of both using a computer and computer programming. Interesting gender differences with implications for teaching are examined for the groups (family, teachers, friends, others) that have the most influence on students' interest in computers. Traditional areas such as confidence, career understanding and social bias are also discussed. Preliminary results for a small sample of technology majors indicate that computer majors have unique interests and attitudes compared to other science majors", "keywords": ["gender issues"]} {"id": "kp20k_training_349", "title": "Analysis of EEG signals by combining eigenvector methods and multiclass support vector machines", "abstract": "A new approach based on the implementation of a multiclass support vector machine (SVM) with error correcting output codes (ECOC) is presented for classification of electroencephalogram (EEG) signals. In practical applications of pattern recognition, there are often diverse features extracted from raw data which need recognizing. Decision making was performed in two stages: feature extraction by eigenvector methods and classification using the classifiers trained on the extracted features. The aim of the study is classification of the EEG signals by the combination of eigenvector methods and multiclass SVM. The purpose is to determine an optimum classification scheme for this problem and also to infer clues about the extracted features. The present research demonstrated that the eigenvector methods are the features which well represent the EEG signals and the multiclass SVM trained on these features achieved high classification accuracies", "keywords": ["multiclass support vector machine", "eigenvector methods", "electroencephalogram signals"]} {"id": "kp20k_training_350", "title": "lambda-RBAC: PROGRAMMING WITH ROLE-BASED ACCESS CONTROL", "abstract": "We study mechanisms that permit program components to express role constraints on clients, focusing on programmatic security mechanisms, which permit access controls to be expressed, in situ, as part of the code realizing basic functionality. In this setting, two questions immediately arise. (1) The user of a component faces the issue of safety: is a particular role sufficient to use the component? (2) The component designer faces the dual issue of protection: is a particular role demanded in all execution paths of the component?
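To unpack the FFT step that opens this record's discussion (kp20k_training_347 above): if rotating a keypoint's orientation corresponds to cyclically shifting the rows of its feature matrix, the minimum distance over all shifts can be computed in one pass via the correlation theorem. The matrix layout below is an assumed example, not the paper's exact reorganization.

import numpy as np

def rotation_invariant_distance(A, B):
    # min_r ||A - roll(B, r, axis=0)||^2, with every inner product
    # <A, roll(B, r)> obtained at once from an FFT (correlation theorem).
    FA = np.fft.fft(A, axis=0)
    FB = np.fft.fft(B, axis=0)
    corr = np.fft.ifft(FA * np.conj(FB), axis=0).real.sum(axis=1)
    return (A ** 2).sum() + (B ** 2).sum() - 2.0 * corr.max()

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))   # e.g. 8 orientation bins x 16 spatial cells
B = np.roll(A, 3, axis=0)          # same descriptor under a rotation
C = rng.standard_normal((8, 16))   # unrelated descriptor
print(rotation_invariant_distance(A, B))   # ~0: the shift is factored out
print(rotation_invariant_distance(A, C))   # large by comparison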
We provide a formal calculus and static analysis to answer both questions", "keywords": ["role-based access control", "lambda-calculus", "static analysis"]} {"id": "kp20k_training_351", "title": "Output-only Modal Analysis using Continuous-Scan Laser Doppler Vibrometry and application to a 20kW wind turbine", "abstract": "Continuous-Scan Laser Doppler Vibrometry (CSLDV) is a technique where the measurement point continuously sweeps over a structure while measuring, capturing both spatial and temporal information. The continuous-scan approach can greatly accelerate measurements, allowing one to capture spatially detailed mode shapes in the same amount of time that conventional methods require to measure the response at a single point. The method is especially beneficial when testing large structures, such as wind turbines, that have low natural frequencies and hence may require very long time records at each measurement point. Several CSLDV methods have been presented that use sinusoidal excitation or impulse excitation, but CSLDV has not previously been employed with an unmeasured, broadband random input. This work extends CSLDV to that class of input, developing an Output-only Modal Analysis method (OMA-CSLDV). A recently developed algorithm for linear time-periodic system identification, which makes use of harmonic power spectra and the harmonic transfer function concept developed by Wereley [17], is used in conjunction with CSLDV measurements. One key consideration, the choice of the scan frequency, is explored. The proposed method is validated on a randomly excited free-free beam, where one-dimensional mode shapes are captured by scanning the laser along the length of the beam. The first seven natural frequencies and mode shapes are extracted from the harmonic power spectrum of the vibrometer signal and show good agreement with the analytically-derived modes of the beam. The method is then applied to identify the mode shapes of a parked 20kW wind turbine using a ground-based laser and with only a light breeze providing excitation", "keywords": ["modal identification", "output-only modal analysis", "operational modal analysis", "laser doppler vibrometry", "periodically time varying"]} {"id": "kp20k_training_352", "title": "Preferences in Wikipedia abstracts: Empirical findings and implications for automatic entity summarization", "abstract": "We empirically study how Wikipedians summarize entity descriptions in practice. We compare entity descriptions in DBpedia with their Wikipedia abstracts. We analyze the length of a summary and the priorities of property values. We analyze the priorities of, diversity of, and correlation between properties. Implications for automatic entity summarization are drawn from the findings", "keywords": ["dbpedia", "entity summarization", "feature selection", "property ranking", "wikipedia"]} {"id": "kp20k_training_353", "title": "Multi-organ localization with cascaded global-to-local regression and shape prior", "abstract": "We propose a fast and robust method for multiple organ localization. Our method provides organ-dedicated confidence maps for each organ. It extends the cascade of random forests with an additional shape prior. The values of the testing and learning parameters can be explained physically.
We evaluate our method on 130 CT volumes and show its good accuracy", "keywords": ["multi-organ localization", "regression", "random forest", "3d ct", "abdominal organs"]} {"id": "kp20k_training_354", "title": "Ordered interval routing schemes", "abstract": "An Interval Routing Scheme (IRS) represents the routing tables in a network in a space-efficient way by labeling each vertex with a unique integer address, and the outgoing edges at each vertex with disjoint subintervals of these addresses. An IRS that has at most k intervals per edge label is called a k-IRS. In this paper, we propose a new type of interval routing scheme, called an Ordered Interval Routing Scheme (OIRS), that uses an ordering of the outgoing edges at each vertex and allows non-disjoint intervals in the labels of those edges. We show for a number of graph classes that using an OIRS instead of an IRS reduces the size of the routing tables in the case of optimal routing, i.e., routing along shortest paths. We show that optimal routing in any k-tree is possible using an OIRS with at most 2k-1 intervals per edge label, although the best known result for an IRS is 2k+1 intervals per edge label. Any torus has an optimal 1-OIRS, although it may not have an optimal 1-IRS. We present similar results for the Petersen graph, k-garland graphs and a few other graphs", "keywords": ["oirs", "interval routing", "routing table"]} {"id": "kp20k_training_355", "title": "Performance analysis in non-Rayleigh and non-Rician communications channels", "abstract": "This paper investigates the probability of erasure for mobile communication channels containing a limited number of scatterers. Two kinds of channels, with and without line of sight, are examined. The resultant data is depicted by graphs to express the differences in existing theoretical models more clearly. The results indicate that the probability of erasure is different from that predicted by both Rayleigh and Rician models for a small number of scatterers", "keywords": ["fading", "mobile communications", "non-rayleigh and non-rician channel"]} {"id": "kp20k_training_356", "title": "Computational geometry column 41", "abstract": "The recent result that n congruent balls in R^d have at most 4 distinct geometric permutations is described", "keywords": ["line transversal", "geometric permutation", "stabbing"]} {"id": "kp20k_training_357", "title": "Trends of environmental information systems in the context of the European Water Framework Directive", "abstract": "In Europe, the development of Environmental Information Systems for the water domain is heavily influenced by the need to support the processes of the European Water Framework Directive (WFD). The aim of the WFD is to ensure that all European waters, these being groundwater, surface or coastal waters, are protected according to a common standard. While the WFD itself only includes concrete information technology (IT) recommendations on a very high level of data exchange, regional and/or national environmental agencies build or adapt their information systems according to their specific requirements in order to deliver the results for the first WFD reporting phase on time. Moreover, as the WFD requires a water management policy centered on natural river basin districts instead of administrative and political regions, the agencies have to co-ordinate their work, possibly across national borders.
Against this background, the present article analyses existing IT recommendations for the WFD implementation strategy and motivates the need to develop an IT Framework Architecture that comprises different views such as an organisational, a process, a data and a functional view. After having presented representative functions of operational water body information systems for the thematic and the co-operation layer, the article concludes with a summary of future IT developments that are required to efficiently support the WFD implementation", "keywords": ["environmental information systems", "water framework directive", "eis", "wfd", "gml", "inspire", "gmes", "java", "ogc"]} {"id": "kp20k_training_358", "title": "A finite volume method for viscous incompressible flows using a consistent flux reconstruction scheme", "abstract": "An incompressible Navier-Stokes solver using a curvilinear body-fitted collocated grid has been developed to solve unconfined flow past arbitrary two-dimensional body geometries. In this solver, the full Navier-Stokes equations have been solved numerically in the physical plane itself without using any transformation to the computational plane. For the proper coupling of the pressure and velocity fields on a collocated grid, a new scheme, designated the 'consistent flux reconstruction' (CFR) scheme, has been developed. In this scheme, the cell face centre velocities are obtained explicitly by solving the momentum equations at the centre of the cell faces. The velocities at the cell centres are also updated explicitly by solving the momentum equations at the cell centres. By resorting to such a fully explicit treatment, considerable simplification has been achieved compared to earlier approaches. In the present investigation the solver has been applied to unconfined flow past a square cylinder at zero and non-zero incidence at low and moderate Reynolds numbers, and reasonably good agreement has been obtained with results available from the literature", "keywords": ["curvilinear collocated grid", "incompressible navier-stokes solver", "finite volume method", "physical plane", "explicit-explicit scheme", "consistent flux reconstruction"]} {"id": "kp20k_training_359", "title": "practical online retrieval evaluation", "abstract": "Online evaluation is amongst the few evaluation techniques available to the information retrieval community that are guaranteed to reflect how users actually respond to improvements developed by the community. Broadly speaking, online evaluation refers to any evaluation of retrieval quality conducted while observing user behavior in a natural context. However, it is rarely employed outside of large commercial search engines, due primarily to a perception that it is impractical at small scales. The goal of this tutorial is to familiarize information retrieval researchers with state-of-the-art techniques in evaluating information retrieval systems based on natural user clicking behavior, as well as to show how such methods can be practically deployed. In particular, our focus will be on demonstrating how the Interleaving approach and other click-based techniques contrast with traditional offline evaluation, and how these online methods can be effectively used in academic-scale research.
In addition to lecture notes, we will also provide sample software and code walk-throughs to showcase the ease with which Interleaving and other click-based methods can be employed by students, academics and other researchers", "keywords": ["interleaving", "preference judgments", "web search", "clickthrough data", "online evaluation"]} {"id": "kp20k_training_360", "title": "A privacy-preserving clustering approach toward secure and effective data analysis for business collaboration", "abstract": "The sharing of data has been proven beneficial in data mining applications. However, privacy regulations and other privacy concerns may prevent data owners from sharing information for data analysis. To resolve this challenging problem, data owners must design a solution that meets privacy requirements and guarantees valid data clustering results. To achieve this dual goal, we introduce a new method for privacy-preserving clustering called Dimensionality Reduction-Based Transformation (DRBT). This method relies on the intuition behind random projection to protect the underlying attribute values subjected to cluster analysis. The major features of this method are: (a) it is independent of distance-based clustering algorithms; (b) it has a sound mathematical foundation; and (c) it does not require CPU-intensive operations. We show analytically and empirically that, by transforming a data set using DRBT, a data owner can achieve privacy preservation and get accurate clustering with little overhead in communication cost", "keywords": ["privacy-preserving data mining", "privacy-preserving clustering", "dimensionality reduction", "random projection", "privacy-preserving clustering over centralized data", "privacy-preserving clustering over vertically partitioned data"]} {"id": "kp20k_training_361", "title": "Unified read requests", "abstract": "Most work on multimedia storage systems has assumed that clients will be serviced using a round-robin strategy. The server services the clients in rounds and each client is allocated a time slice within that round. Furthermore, most such algorithms are evaluated on the basis of a tightly specified cost function. This is the basis for well-known algorithms such as FCFS, SCAN, SCAN-EDF, etc. In this paper, we describe a Request Merging (RM) module that takes as input a set of client requests, a set of constraints on the desired performance such as client waiting time or maximum disk bandwidth, and a cost function. It produces as output a Unified Read Request (URR), telling the storage server which data items to read and when the clients would like these data items to be delivered to them. Given a cost function cf, a URR is optimal if there is no other URR satisfying the constraints with a lower cost. We present three algorithms in this paper, each of which accomplishes this kind of request merging. The first algorithm, OptURR, is guaranteed to produce minimal cost URRs with respect to arbitrary cost functions. In general, the problem of computing an optimal URR is NP-complete, even when only two data objects are considered. To alleviate this problem, we develop two other algorithms, called GreedyURR and FastURR, that may produce sub-optimal URRs but which have some nicer computational properties.
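To make the random-projection intuition behind DRBT (kp20k_training_360 above) concrete: projecting data through a random Gaussian matrix hides the original attribute values while roughly preserving pairwise distances, which is all a distance-based clustering algorithm needs. The target dimension and Gaussian construction below are generic choices, not the paper's exact transformation.

import numpy as np

def random_projection(X, k, seed=0):
    # Johnson-Lindenstrauss-style projection: distances are approximately
    # preserved, while the original attribute values become unrecoverable.
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X.shape[1], k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 20)), rng.normal(5, 1, (50, 20))])
Y = random_projection(X, 8)

# Relative distances survive, so the two clusters remain separable after
# projection even though individual attribute values are masked.
d_orig = np.linalg.norm(X[0] - X[60]) / np.linalg.norm(X[0] - X[1])
d_proj = np.linalg.norm(Y[0] - Y[60]) / np.linalg.norm(Y[0] - Y[1])
print(round(d_orig, 2), round(d_proj, 2))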
We will report on the pros and cons of these algorithms through an experimental evaluation", "keywords": ["multimedia storage server", "request merging", "optimality", "cost function"]} {"id": "kp20k_training_362", "title": "Brain-Computer Evolutionary Multiobjective Optimization: A Genetic Algorithm Adapting to the Decision Maker", "abstract": "The centrality of the decision maker (DM) is widely recognized in the multiple criteria decision-making community. This translates into emphasis on seamless human-computer interaction, and adaptation of the solution technique to the knowledge which is progressively acquired from the DM. This paper adopts the methodology of reactive search optimization (RSO) for evolutionary interactive multiobjective optimization. RSO follows the paradigm of \"learning while optimizing,\" through the use of online machine learning techniques as an integral part of a self-tuning optimization scheme. User judgments of couples of solutions are used to build robust incremental models of the user utility function, with the objective of reducing the cognitive burden required from the DM to identify a satisficing solution. The technique of support vector ranking is used together with a k-fold cross-validation procedure to select the best kernel for the problem at hand, during the utility function training procedure. Experimental results are presented for a series of benchmark problems", "keywords": ["interactive decision making", "machine learning", "reactive search optimization", "support vector ranking"]} {"id": "kp20k_training_363", "title": "Linear Separability of Gene Expression Data Sets", "abstract": "We study simple geometric properties of gene expression data sets, where samples are taken from two distinct classes (e.g., two types of cancer). Specifically, the problem of linear separability for pairs of genes is investigated. If a pair of genes exhibits linear separation with respect to the two classes, then the joint expression level of the two genes is strongly correlated to the phenomenon of the sample being taken from one class or the other. This may indicate an underlying molecular mechanism relating the two genes and the phenomenon (e.g., a specific cancer). We developed and implemented novel efficient algorithmic tools for finding all pairs of genes that induce a linear separation of the two sample classes. These tools are based on computational geometric properties and were applied to 10 publicly available cancer data sets. For each data set, we computed the number of actual separating pairs and compared it to an upper bound on the number expected by chance and to the numbers resulting from shuffling the labels of the data at random empirically. Seven out of these 10 data sets are highly separable. Statistically, this phenomenon is highly significant, very unlikely to occur at random. It is therefore reasonable to expect that it manifests a functional association between separating genes and the underlying phenotypic classes", "keywords": ["gene expression analysis", "dna microarrays", "diagnosis", "linear separation"]} {"id": "kp20k_training_364", "title": "A language for representing and extracting 3D geometry semantics from paper-based sketches", "abstract": "The key contribution is a visual language to formally represent form geometry semantics on paper. Parsing the language allows for the automatic generation of 3D virtual models. A proof-of-concept prototype tool was implemented.
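The core test in kp20k_training_363 above, deciding whether a gene pair linearly separates the two sample classes, can be phrased as a small linear-programming feasibility problem. A sketch follows, assuming SciPy is available; the paper's own computational-geometry algorithms are more efficient than this generic LP.

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, y):
    # Feasibility LP: find (w, b) with y_i * (w . x_i + b) >= 1 for all i.
    y = np.where(y > 0, 1.0, -1.0)
    A_ub = -y[:, None] * np.hstack([X, np.ones((len(X), 1))])
    b_ub = -np.ones(len(X))
    res = linprog(c=np.zeros(X.shape[1] + 1), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (X.shape[1] + 1))
    return res.success

# Toy "gene pair": expression levels of two genes for four samples.
X = np.array([[0.1, 0.2], [0.3, 0.1], [0.9, 0.8], [0.7, 0.9]])
y = np.array([0, 0, 1, 1])
print(linearly_separable(X, y))   # True: a separating line exists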
The language is capable of roughly modelling forms with linear topological ordering. Evaluation results show that practising designers would use the language", "keywords": ["human-computer interaction", "computer-aided sketching", "3d modelling"]} {"id": "kp20k_training_365", "title": "Decentralized list scheduling", "abstract": "Classical list scheduling is a very popular and efficient technique for scheduling jobs for parallel and distributed platforms. It is inherently centralized. However, with the increasing number of processors, the cost of managing a single centralized list becomes prohibitive. A suitable approach to reduce the contention is to distribute the list among the computational units: each processor only has a local view of the work to execute. Thus, the scheduler is no longer greedy and standard performance guarantees are lost", "keywords": ["scheduling", "list algorithms", "work stealing"]} {"id": "kp20k_training_366", "title": "Antenna impedance matching with neural networks", "abstract": "Impedance matching between transmission lines and antennas is an important and fundamental concept in electromagnetic theory. One definition of antenna impedance is the resistance and reactance seen at the antenna terminals or the ratio of electric to magnetic fields at the input. The primary intent of this paper is real-time compensation for changes in the driving point impedance of an antenna due to frequency deviations. In general, the driving point impedance of an antenna or antenna array is computed by numerical methods such as the method of moments or similar techniques. Some configurations do lend themselves to analytical solutions, which will be the primary focus of this work. This paper employs a neural control system to match antenna feed lines to two common antennas during frequency sweeps. In practice, impedance matching is performed off-line with Smith charts or relatively complex formulas, but these rarely perform optimally over a large bandwidth. There have been very few attempts to compensate for matching errors while the transmission system is in operation and most techniques have been targeted to a relatively small range of frequencies. The approach proposed here employs three small neural networks to perform real-time impedance matching over a broad range of frequencies during transmitter operation. Double stub tuners are being explored in this paper, but the approach can certainly be applied to other methodologies. The ultimate purpose of this work is the development of an inexpensive microcontroller-based system", "keywords": ["impedance matching", "control system", "vswr"]} {"id": "kp20k_training_367", "title": "Cellular Automata over Group Alphabets: Undergraduate Education and the PascGalois Project", "abstract": "The purpose of this note is to report efforts underway in the PascGalois Project (www.pascgalois.org) to provide connections between standard courses in the undergraduate mathematics curriculum (e.g. abstract algebra, number theory, discrete mathematics) and cellular automata. The value of these connections to the mathematical education of undergraduates will be described.
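The quantities such a matching controller drives toward a perfect match can be illustrated with the standard transmission-line formulas; the sketch below uses textbook relations (reflection coefficient, VSWR) and a toy stub model, with all function names being ours.

import numpy as np

def reflection_and_vswr(z_load, z0=50.0):
    """Standard quantities the matcher targets: |Gamma| -> 0, VSWR -> 1."""
    gamma = (z_load - z0) / (z_load + z0)
    vswr = (1 + abs(gamma)) / (1 - abs(gamma))
    return gamma, vswr

# A dipole slightly off resonance: driving-point impedance ~73 + 42.5j ohms.
gamma, vswr = reflection_and_vswr(73 + 42.5j)
print(f"|Gamma| = {abs(gamma):.3f}, VSWR = {vswr:.2f}")

def shunt_stub(z_load, b_stub, z0=50.0):
    """A stub adds a parallel susceptance jB; a neural controller would
    learn B (and the stub position) as the frequency sweeps."""
    y = 1 / z_load + 1j * b_stub   # admittances add in parallel
    return reflection_and_vswr(1 / y, z0)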
Project course supplements, supporting software, and areas of student research will also be summarized", "keywords": ["pascgalois project", "pascgalois je", "group alphabets", "fractal dimensions", "growth rate dimensions", "abstract algebra", "undergraduate research"]} {"id": "kp20k_training_368", "title": "from structured documents to novel query facilities", "abstract": "Structured documents (e.g., SGML) can benefit greatly from database support and more specifically from object-oriented database (OODB) management systems. This paper describes a natural mapping from SGML documents into OODB's and a formal extension of two OODB query languages (one SQL-like and the other calculus-based) in order to deal with SGML document retrieval. Although motivated by structured documents, the extensions of query languages that we present are general and useful for a variety of other OODB applications. A key element is the introduction of paths as first-class citizens. The new features allow querying data (and, to some extent, schema) without exact knowledge of the schema in a simple and homogeneous fashion", "keywords": ["applications", "order", "document retrieval", "structure", "systems", "formalism", "object-oriented database", "sql", "query languages", "data", "support", "map", "schema", "extensibility", "general", "knowledge", "paper", "feature", "documentation", "database", "management", "class", "query"]} {"id": "kp20k_training_369", "title": "Determination of Oxidized Low-Density Lipoproteins (ox-LDL) versus ox-LDL/β2GPI Complexes for the Assessment of Autoimmune-Mediated Atherosclerosis", "abstract": "The immunolocalization of oxidized low-density lipoproteins (ox-LDL), β2-glycoprotein I (β2GPI), CD4+/CD8+ immunoreactive lymphocytes, and immunoglobulins in atherosclerotic lesions strongly suggested an active participation of the immune system in atherogenesis. Oxidative stress leading to ox-LDL production is thought to play a central role in both the initiation and progression of atherosclerosis. ox-LDL is highly proinflammatory and chemotactic for macrophage/monocyte and immune cells. Enzyme-linked immunosorbent assays (ELISAs) to measure circulating ox-LDL have been developed and are currently being used to assess oxidative stress as a risk factor or marker of atherosclerotic disease. ox-LDL interacts with β2GPI, and circulating ox-LDL/β2GPI complexes have been demonstrated in patients with systemic lupus erythematosus (SLE) and antiphospholipid syndrome (APS). It has been postulated that β2GPI binds ox-LDL to neutralize its proinflammatory and proatherosclerotic effects. Because β2GPI is ubiquitous in plasma, its interaction with ox-LDL may mask oxidized epitopes recognized by capture antibodies, potentially interfering with immunoassay results. The measurement of ox-LDL/β2GPI complexes may circumvent this interference, representing a more physiological and accurate way of measuring ox-LDL", "keywords": ["oxidized low-density lipoprotein", "β2-glycoprotein i (β2gpi)", "oxidative stress", "elisa", "atherosclerosis", "autoimmunity"]} {"id": "kp20k_training_370", "title": "Text document clustering based on frequent word meaning sequences", "abstract": "Most existing text clustering algorithms use the vector space model, which treats documents as bags of words. Thus, word sequences in the documents are ignored, while the meaning of natural languages strongly depends on them.
In this paper, we propose two new text clustering algorithms, named Clustering based on Frequent Word Sequences (CFWS) and Clustering based on Frequent Word Meaning Sequences (CFWMS). A word is the word form appearing in the document, and a word meaning is the concept expressed by synonymous word forms. A word (meaning) sequence is frequent if it occurs in more than a certain percentage of the documents in the text database. The frequent word (meaning) sequences can provide compact and valuable information about those text documents. For experiments, we used the Reuters-21578 text collection, CISI documents of the Classic data set [Classic data set, ftp://ftp.cs.cornell.edu/pub/smart/], and a corpus of the Text Retrieval Conference (TREC) [High Accuracy Retrieval from Documents (HARD) Track of Text Retrieval Conference, 2004]. Our experimental results show that CFWS and CFWMS have much better clustering accuracy than Bisecting k-means (BKM) [M. Steinbach, G. Karypis, V. Kumar, A Comparison of Document Clustering Techniques, KDD-2000 Workshop on Text Mining, 2000], a modified bisecting k-means using background knowledge (BBK) [A. Hotho, S. Staab, G. Stumme, Ontologies improve text document clustering, in: Proceedings of the 3rd IEEE International Conference on Data Mining, 2003, pp. 541-544] and Frequent Itemset-based Hierarchical Clustering (FIHC) [B.C.M. Fung, K. Wang, M. Ester, Hierarchical document clustering using frequent itemsets, in: Proceedings of SIAM International Conference on Data Mining, 2003] algorithms", "keywords": ["text documents", "clustering", "frequent word sequences", "frequent word meaning sequences", "web search", "wordnet"]} {"id": "kp20k_training_371", "title": "an online approach based on locally weighted learning for short-term traffic flow prediction", "abstract": "Traffic flow prediction is a basic function of Intelligent Transportation Systems. Due to the complexity of the traffic phenomenon, most existing methods build complex models such as neural networks for traffic flow prediction. As a model may lose effectiveness as time passes, it is important to update the model online. However, the high computational cost of maintaining a complex model poses a great challenge for model updating. The high computation cost lies in two aspects: the computation of complex model coefficients and the huge amount of training data required. In this paper, we propose to use a nonparametric approach based on locally weighted learning to predict traffic flow. Our approach incrementally incorporates new data into the model and is computationally efficient, which makes it suitable for online model updating and predicting. In addition, we adopt wavelet analysis to extract the periodic characteristic of the traffic data, which is then used as the input of the prediction model instead of the raw traffic flow data. Preliminary experiments on real data demonstrate the effectiveness and efficiency of our approach", "keywords": ["prediction", "online locally weighted learning", "traffic", "real time"]} {"id": "kp20k_training_372", "title": "usable computing on open distributed systems", "abstract": "An open distributed system provides a best-effort guarantee on the quality of service provided to applications. This has worked well for throughput-based applications of the kind typically executed in Condor or BOINC-style environments. For other applications, the absence of timeliness or correctness guarantees limits the utility or appeal of this environment.
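The "frequent word sequence" notion used by CFWS can be sketched as document-frequency counting of word n-grams; this is an illustrative reading of the definition above (CFWMS would first map word forms to word meanings, e.g. via WordNet synonym sets), and the helper name and thresholds are ours.

from collections import Counter

def frequent_sequences(docs, min_support=0.3, max_len=3):
    """Keep word sequences (length <= max_len) occurring in at least
    min_support of the documents: one count per document, not per use."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        grams = {tuple(words[i:i + n]) for n in range(1, max_len + 1)
                 for i in range(len(words) - n + 1)}
        counts.update(grams)
    threshold = min_support * len(docs)
    return {g for g, c in counts.items() if c >= threshold}

docs = ["data mining of text data", "text data clustering", "mining text"]
print(frequent_sequences(docs))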
Computational results that are too late or erroneous are not usable to the application. We present techniques designed to efficiently promote usable computing in open distributed systems", "keywords": ["autonomic computing", "grid computing", "computing paradigm"]} {"id": "kp20k_training_373", "title": "A note on the not 3-choosability of some families of planar graphs", "abstract": "A graph G is L-list colorable if for a given list assignment L = {L(v): v ∈ V}, there exists a proper coloring c of G such that c(v) ∈ L(v) for all v ∈ V. If G is L-list colorable for any list assignment with |L(v)| >= k for all v ∈ V, then G is said to be k-choosable. In [M. Voigt, A not 3-choosable planar graph without 3-cycles, Discrete Math. 146 (1995) 325-328] and [M. Voigt, A non-3-choosable planar graph without cycles of length 4 and 5, 2003, Manuscript], Voigt gave a planar graph without 3-cycles and a planar graph without 4-cycles and 5-cycles which are not 3-choosable. In this note, we give smaller and easier graphs than those proposed by Voigt and suggest an extension of Erdos' relaxation of Steinberg's conjecture to 3-choosability", "keywords": ["combinatorial problems", "coloring", "list-coloring", "choosability"]} {"id": "kp20k_training_374", "title": "Impedance spectroscopy studies of moisture uptake in low-k dielectrics and its relation to reliability", "abstract": "Water incursion into low-k BEOL capacitors was monitored via impedance spectroscopy. It is a non-destructive, zero DC field, low AC field probe (<0.5V). Samples are tested at device operation conditions and are re-testable. Thermal activation energies related to water bonding with the dielectric are measured. The increase in AC loss is correlated with poorer reliability, i.e. early failure", "keywords": ["low-k", "impedance spectroscopy", "dielectric relaxation", "ac losses", "time dependent dielectric breakdown", "reliability"]} {"id": "kp20k_training_375", "title": "A multi-level depiction method for painterly rendering based on visual perception cue", "abstract": "Increasing the level of detail (LOD) in brushstrokes within areas of interest improved the realism of painterly rendering. Using a modified quad-tree, we segmented an image into areas with similar levels of saliency; each of these segments was then used to control the brush strokes during rendering. We could also simulate real oil painting steps based on saliency information. Our method runs in reasonable time and produces results that are visually appealing and competitive with previous techniques", "keywords": ["non-photorealistic rendering", "painting technique", "image saliency"]} {"id": "kp20k_training_376", "title": "Preventive replacement for systems with condition monitoring and additional manual inspections", "abstract": "Researched a problem of both condition monitoring and inspection. Defined two types of preventive replacements. Utilized the delay time concept to model the failure process. Formulated a decision problem with two decision variables simultaneously", "keywords": ["maintenance", "condition monitoring", "inspection", "delay-time", "two-stage failure process"]} {"id": "kp20k_training_377", "title": "Redundant and force-differentiated systems in engineering and nature", "abstract": "Sophisticated load-carrying structures, in nature as well as man-made, share some common properties.
In nature, a clear differentiation of tension, compression and shear is primarily manifested in the properties of materials adapted to these efforts, whereas in engineering they are distributed over different components. For stability and failure safety, redundancy on different levels is also commonly used. The paper aims at collecting and expanding previous methods for the computational treatment of redundant and force-differentiated systems. A common notation is sought, giving and developing criteria for describing the diverse problems from a common structural mechanical viewpoint. From this, new criteria for the existence of solutions, and a method for the treatment of targeted dynamic solutions, are developed. Added aspects to previously described examples aim at emphasizing similarities and differences between engineering and nature, in the forms of a tension truss structure and the human musculoskeletal system", "keywords": ["structures", "equilibrium", "statics", "dynamics", "mechanisms", "redundancy", "target control"]} {"id": "kp20k_training_378", "title": "Simultaneous optimization of the material properties and the topology of functionally graded structures", "abstract": "A level set based method is proposed for the simultaneous optimization of the material properties and the topology of functionally graded structures. The objective of the present study is to determine the optimal material properties (via the material volume fractions) and the structural topology to maximize the performance of the structure in a given application. In the proposed method, the volume fraction and the structural boundary are considered as the design variables, with the former being discretized as a scalar field and the latter being implicitly represented by the level set method. To perform simultaneous optimization, the two design variables are integrated into a common objective functional. Sensitivity analysis is conducted to obtain the descent directions. The optimization process is then expressed as the solution to a coupled Hamilton-Jacobi equation and diffusion partial differential equation. Numerical results are provided for the problem of mean compliance optimization in two dimensions", "keywords": ["topology optimization", "level set method", "dynamic implicit boundary", "functionally graded materials", "heterogeneous objects"]} {"id": "kp20k_training_379", "title": "The impact of head movements on user involvement in mediated interaction", "abstract": "We examine engagement in the conversational behaviours of subjects interacting with a socially expressive system. We found that real-time communication requires more than verbal communication and head nodding; the effect of head nodding depends on precisely synchronizing the on-screen movement with the head movement", "keywords": ["engagement", "nonverbal behaviours", "head movements", "face-to-face interaction", "telepresence robot"]} {"id": "kp20k_training_380", "title": "Parsing images into regions, curves, and curve groups", "abstract": "In this paper, we present an algorithm for parsing natural images into middle-level vision representations: regions, curves, and curve groups (parallel curves and trees). This algorithm is targeted at an integrated solution to image segmentation and curve grouping through Bayesian inference. The paper makes the following contributions. (1) It adopts a layered (or 2.1D-sketch) representation integrating both region and curve models which compete to explain an input image.
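The level set update in the topology-optimization abstract above can be sketched as one explicit step of the Hamilton-Jacobi equation phi_t + V |grad phi| = 0 that advects the implicit boundary {phi = 0} with normal speed V; this is a simplified central-difference sketch under our own assumptions (practical codes use upwind schemes and reinitialization, and here V would come from the compliance sensitivity).

import numpy as np

def level_set_step(phi, V, dt, h=1.0):
    """Move the zero level set with normal speed V (explicit Euler)."""
    gx, gy = np.gradient(phi, h)
    return phi - dt * V * np.sqrt(gx**2 + gy**2)

# Signed-distance-like initialization: a circle of radius 10 in a 64x64 grid.
phi = np.fromfunction(lambda i, j: np.hypot(i - 32, j - 32) - 10, (64, 64))
V = -0.5 * np.ones_like(phi)    # negative speed grows the region
for _ in range(10):
    phi = level_set_step(phi, V, dt=0.5)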
The curve layer occludes the region layer and curves observe a partial-order occlusion relation. (2) A Markov chain search scheme, the Metropolized Gibbs Sampler (MGS), is studied. It consists of several pairs of reversible jumps to traverse the complex solution space. An MGS proposes the next state within the jump scope of the current state according to a conditional probability like a Gibbs sampler and then accepts the proposal with a Metropolis-Hastings step. This paper discusses systematic design strategies for devising reversible jumps for a complex inference task. (3) The proposal probability ratios in jumps are factorized into ratios of discriminative probabilities. The latter are computed in a bottom-up process, and they drive the Markov chain dynamics in a data-driven Markov chain Monte Carlo framework. We demonstrate the performance of the algorithm in experiments with a number of natural images", "keywords": ["image segmentation", "perceptual organization", "curve grouping", "graph partition", "data-driven markov chain monte carlo", "metropolized gibbs sampler"]} {"id": "kp20k_training_381", "title": "Laparoscopic Management of Adnexal Masses", "abstract": "Suspected ovarian neoplasm is a common clinical problem affecting women of all ages. Although the majority of adnexal masses are benign, the primary goal of diagnostic evaluation is the exclusion of malignancy. It has been estimated that approximately 5-10% of women in the United States will undergo a surgical procedure for a suspected ovarian neoplasm during their lifetime. Despite the magnitude of the problem, there is still considerable disagreement regarding the optimal surgical management of these lesions. Traditional management has relied on laparotomy to avoid undertreatment of a potentially malignant process. Advances in detection, diagnosis, and minimally invasive surgical techniques make it necessary now to review this practice in an effort to avoid unnecessary morbidity among patients. Here, we review the literature on the laparoscopic approach to the treatment of the adnexal mass without sacrificing the principles of oncologic surgery. We highlight the potential of minimally invasive surgery and address the risks associated with the laparoscopic approach", "keywords": ["adnexal masses", "ovarian neoplasm", "laparotomy"]} {"id": "kp20k_training_382", "title": "Dealing with plagiarism in the information systems research community: A look at factors that drive plagiarism and ways to address them", "abstract": "Imagine yourself spending years conducting a research project and having it published as an article in a refereed journal, only to see a plagiarized copy of the article later published in another journal. Then imagine yourself being left to fight for your rights alone, and eventually finding out that it would be very difficult to hold the plagiarist accountable for what he or she did. The recent decision by the Association for Information Systems to create a standing committee on member misconduct suggests that while this type of situation may sound outrageous, it is likely to become uncomfortably frequent in the information systems research community if proper measures are not taken by a community-backed organization. In this article, we discuss factors that can drive plagiarism, as well as potential measures to prevent it. Our goal is to discuss alternative ways in which plagiarism can be prevented and dealt with when it arises.
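As a toy analogue of the Metropolized Gibbs Sampler idea above (the paper's reversible jumps traverse a far richer space of regions and curves), the sketch below runs a Metropolis-within-Gibbs sweep on a simple 2-D target; everything here is our own illustrative setup.

import numpy as np

rng = np.random.default_rng(0)

def log_target(x, y):
    """Unnormalized log-density of a toy correlated Gaussian."""
    return -0.5 * (x**2 + y**2 - 1.6 * x * y) / (1 - 0.8**2)

def mgs_step(x, y, scale=1.0):
    """Propose each coordinate locally (Gibbs-style scope) and accept
    with a Metropolis-Hastings test."""
    for coord in (0, 1):
        prop = [x, y]
        prop[coord] += scale * rng.normal()
        if np.log(rng.uniform()) < log_target(*prop) - log_target(x, y):
            x, y = prop
    return x, y

state, samples = (0.0, 0.0), []
for _ in range(5000):
    state = mgs_step(*state)
    samples.append(state)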
We hope to start a debate that provides the basis on which broader mechanisms to deal with plagiarism can be established, which we envision as being associated with and complementary to the committee created by the Association for Information Systems", "keywords": ["ethics", "committees", "community", "plagiarism", "information systems research"]} {"id": "kp20k_training_383", "title": "Percolation in the secrecy graph", "abstract": "The secrecy graph is a random geometric graph which is intended to model the connectivity of wireless networks under secrecy constraints. Directed edges in the graph are present whenever a node can talk to another node securely in the presence of eavesdroppers, which, in the model, is determined solely by the locations of the nodes and eavesdroppers. In the case of infinite networks, a critical parameter is the maximum density of eavesdroppers that can be accommodated while still guaranteeing an infinite component in the network, i.e., the percolation threshold. We focus on the case where the locations of the nodes and eavesdroppers are given by Poisson point processes, and present bounds for different types of percolation, including in-, out- and undirected percolation", "keywords": ["percolation", "branching process", "secrecy graph"]} {"id": "kp20k_training_384", "title": "Evaluation of Region-of-Interest coders using perceptual image quality assessments", "abstract": "Perceptual image assessment is proposed for coder performance evaluation. The proposed assessment uses a linear combination of perceptual measures based solely on features. Region-of-Interest coder perceptual evaluation aims at identifying coder behavior. Some perceptual assessments are adequate to evaluate test coders", "keywords": ["region-of-interest", "image coding", "wavelet", "distortion measure", "quality assessment", "perceptual evaluation", "human visual system", "mean-observed scores", "rate-distortion function"]} {"id": "kp20k_training_385", "title": "achieving anycast in dtns by enhancing existing unicast protocols", "abstract": "Many DTN environments, such as emergency response networks and pocket-switched networks, are based on human mobility and communication patterns, which naturally lead to groups. In these scenarios, group-based communication is central, and hence a natural and useful routing paradigm is anycast, where a node attempts to communicate with at least one member of a particular group. Unfortunately, most existing anycast solutions assume connectivity, and the few specifically for DTNs are single-copy in nature and have only been evaluated in highly limited mobility models. In this paper, we propose a protocol-independent method of enhancing a large number of existing DTN unicast protocols, giving them the ability to perform anycast communication. This method requires no change to the unicast protocols themselves and instead changes their world view by adding a thin layer beneath the routing layer. Through a thorough set of simulations, we also evaluate how different parameters and network conditions affect the performance of these newly transformed anycast protocols", "keywords": ["routing", "anycast", "dtn"]} {"id": "kp20k_training_386", "title": "a framework for supporting data integration using the materialized and virtual approaches", "abstract": "This paper presents a framework for data integration currently under development in the Squirrel project. The framework is based on a special class of mediators, called Squirrel integration mediators.
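For intuition about the secrecy-graph model above, the sketch below samples a Poisson secrecy graph on a square window and counts one node's out-neighbours under a common secrecy rule (a directed edge i->j exists when j is closer to i than i's nearest eavesdropper); this is our own simulation sketch for intuition about the percolation threshold, not the paper's analysis.

import numpy as np

rng = np.random.default_rng(2)

def secrecy_out_degree(lam_nodes, lam_eaves, area=100.0):
    n = max(rng.poisson(lam_nodes * area), 2)   # guard against empty samples
    e = max(rng.poisson(lam_eaves * area), 1)
    side = np.sqrt(area)
    nodes = rng.uniform(0, side, size=(n, 2))
    eaves = rng.uniform(0, side, size=(e, 2))
    src = nodes[0]
    r_eave = np.min(np.linalg.norm(eaves - src, axis=1))  # nearest eavesdropper
    dists = np.linalg.norm(nodes[1:] - src, axis=1)
    return int(np.sum(dists < r_eave))

# Higher eavesdropper density shrinks out-degrees and, eventually,
# destroys the infinite component.
print([secrecy_out_degree(1.0, 0.1) for _ in range(5)])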
These mediators can support the traditional virtual and materialized approaches, and also hybrids of them. In the Squirrel mediators, a relation in the integrated view can be supported as (a) fully materialized, (b) fully virtual, or (c) partially materialized (i.e., with some attributes materialized and other attributes virtual). In general, (partially) materialized relations of the integrated view are maintained by incremental updates from the source databases. Squirrel mediators provide two approaches for doing this: (1) materialize all needed auxiliary data, so that data sources do not have to be queried when processing the incremental updates; or (2) leave some or all of the auxiliary data virtual, and query selected source databases when processing incremental updates. The paper presents formal notions of consistency and \"freshness\" for integrated views defined over multiple autonomous source databases. It is shown that Squirrel mediators satisfy these properties", "keywords": [" framework ", "views", "formalism", "developer", "process", "data", "project", "mediator", "support", "general", "attributes", "relation", "consistency", "paper", "virtualization", "database", "autonomic", "hybrid", "update", "query", "data integrity", "class", "integrability", "incremental"]} {"id": "kp20k_training_387", "title": "Validation and verification of intelligent systems - what are they and how are they different", "abstract": "Researchers and practitioners in the field of expert systems all generally agree that to be useful, any fielded intelligent system must be adequately verified and validated. But what does this mean in concrete terms? What exactly is verification? What exactly is validation? How are they different? Many authors have attempted to define these terms and, as a result, several interpretations have surfaced. It is our opinion that there is great confusion as to what these terms mean, how they are different, and how they are implemented. This paper, therefore, has two aims: to clarify the meaning of the terms validation and verification as they apply to intelligent systems, and to describe how several researchers are implementing these. The second part of the paper, therefore, details some techniques that can be used to perform the verification and validation of systems. Also discussed is the role of testing as part of the above-mentioned processes", "keywords": ["validation", "verification", "evaluation", "expert systems", "intelligent systems"]} {"id": "kp20k_training_388", "title": "Multiple blocking sets and multisets in Desarguesian planes", "abstract": "In AG(2, q^2), the minimum size of a minimal (q - 1)-fold blocking set is known to be q^3 - 1. Here, we construct minimal (q - 1)-fold blocking sets of size q^3 in AG(2, q^2). As a byproduct, we also obtain new two-character multisets in PG(2, q^2). The essential idea in this paper is to investigate q^3-sets satisfying the opposite of Ebert's discriminant condition", "keywords": ["multiple blocking set", "multiset"]} {"id": "kp20k_training_389", "title": "A simple weighting scheme for classification in two-group discriminant problems", "abstract": "This paper introduces a new weighted linear programming model, which is simple and has strong intuitive appeal for two-group classifications. Generally, in applying weights to solve a classification problem in discriminant analysis where the relative importance of every observation is known, larger weights (penalties) will be assigned to those more important observations.
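A toy materialized join view maintained by incremental updates, in the spirit of the Squirrel mediators described above; the relations and names are ours, and real mediators handle far more general view definitions.

# Source relation deltas arrive at the mediator; the view is kept fresh
# without recomputing the join from scratch.
orders = {}                    # source relation: order_id -> customer
customers = {"c1": "Oslo"}     # auxiliary data materialized at the mediator
view = []                      # materialized view: (order_id, customer, city)

def on_insert_order(order_id, customer):
    """Process one source delta: join the new tuple against the
    materialized auxiliary data (approach (1) above, no source query).
    Approach (2) would instead query the customer source here when the
    auxiliary data is left virtual."""
    orders[order_id] = customer
    city = customers.get(customer)
    if city is not None:
        view.append((order_id, customer, city))

on_insert_order("o1", "c1")
print(view)   # [('o1', 'c1', 'Oslo')]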
The perceived importance of an observation is measured here as the unwillingness of the decision-maker to misclassify this observation. For instance, a decision-maker is least willing to see a classification rule that misclassifies a top financially strong firm into the group that contains bankrupt firms. Our weighted linear programming model provides an objective weighting scheme whereby observations can be weighted according to their perceived importance. The more important the observation, the heavier its assigned weight. Results of a simulation experiment that uses contaminated data show that the weighted linear programming model consistently and significantly outperforms existing linear programming and standard statistical approaches in attaining higher average hit-ratios in the 100 replications for each of the 27 cases tested. Scope and purpose Generally, in applying weights to solve a discriminant problem where the relative importance of every observation is known, larger weights (penalties) will be assigned to those more important observations. However, if decision-makers do not have prior or additional information about the observations, it is very difficult to assign weights to the observations. Subjective judgements from decision-makers may be a way of obtaining those weights. An alternative way is to suggest an objective weighting scheme for obtaining classification weights of observations from the data matrix of the training sample. We suggest a new approach, which provides an objective weighting scheme whereby individual observations can be weighted according to their perceived importance. The more important the observation, the heavier its assigned weight will be. The importance of each individual observation is first determined in one of the two stages of our model using more than one discriminant function. Simulation experiments are run to test this new approach", "keywords": ["classification", "discriminant analysis", "linear programming", "statistics"]} {"id": "kp20k_training_390", "title": "HYBRID INTELLIGENT PACKING SYSTEM (HIPS) THROUGH INTEGRATION OF ARTIFICIAL NEURAL NETWORKS, ARTIFICIAL-INTELLIGENCE, AND MATHEMATICAL-PROGRAMMING", "abstract": "A successful solution to the packing problem is a major step toward saving material, and therefore money, by reducing the scrap produced in the cutting process. Although the problem is of great interest, no satisfactory algorithm has been found that can be applied to all the possible situations. This paper models a Hybrid Intelligent Packing System (HIPS) by integrating Artificial Neural Networks (ANNs), Artificial Intelligence (AI), and Operations Research (OR) approaches for solving the packing problem. The HIPS consists of two main modules, an intelligent generator module and a tester module. The intelligent generator module has two components: (i) a rough assignment module and (ii) a packing module. The rough assignment module utilizes an expert system and rules concerning cutting restrictions and allocation goals in order to generate many possible patterns. The packing module is an ANN that packs the generated patterns and performs post-solution adjustments.
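A weighted minimize-sum-of-deviations LP in the spirit of the model described above can be sketched as follows; this is our own formulation for illustration, not the paper's exact two-stage model.

import numpy as np
from scipy.optimize import linprog

def weighted_lp_classifier(X1, X2, w1, w2, eps=1.0):
    """Find a, b so that a.x - b <= -eps for group 1 and a.x - b >= eps
    for group 2, paying weight w_i per unit of deviation d_i >= 0."""
    n1, n2, d = len(X1), len(X2), X1.shape[1]
    n = n1 + n2
    # variable layout: [a (d), b (1), deviations (n)]
    c = np.concatenate([np.zeros(d + 1), np.concatenate([w1, w2])])
    A1 = np.hstack([X1, -np.ones((n1, 1)), -np.eye(n)[:n1]])
    A2 = np.hstack([-X2, np.ones((n2, 1)), -np.eye(n)[n1:]])
    res = linprog(c, A_ub=np.vstack([A1, A2]), b_ub=-eps * np.ones(n),
                  bounds=[(None, None)] * (d + 1) + [(0, None)] * n,
                  method="highs")
    return res.x[:d], res.x[d]

# Heavier weights on "important" observations penalize their misclassification more.
X1 = np.random.rand(10, 2)
X2 = np.random.rand(10, 2) + 1.0
a, b = weighted_lp_classifier(X1, X2, np.ones(10), 2.0 * np.ones(10))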
The tester module, which consists of a mathematical programming model, selects the sets of patterns that will result in a minimum amount of scrap", "keywords": ["cutting and packing", "parallel processing", "data driven", "connectionist", "extensional programming"]} {"id": "kp20k_training_391", "title": "Distributed Scheduling and Resource Allocation for Cognitive OFDMA Radios", "abstract": "Scheduling spectrum access and allocating power and rate resources are tasks that critically affect the performance of wireless cognitive radio (CR) networks. The present contribution develops a primal-dual optimization framework to schedule any-to-any CR communications based on orthogonal frequency division multiple access and allocate power so as to maximize the weighted average sum-rate of all users. Fairness is ensured among CR communicators and possible hierarchies are respected by guaranteeing minimum rate requirements for primary users while allowing secondary users to access the spectrum opportunistically. The framework leads to an iterative channel-adaptive distributed algorithm whereby nodes rely only on local information exchanges with their neighbors to attain global optimality. Simulations confirm that the distributed online algorithm does not require knowledge of the underlying fading channel distribution and converges to the optimum almost surely from any initialization", "keywords": ["cognitive radios", "resource allocation", "quality of service", "distributed online implementation"]} {"id": "kp20k_training_392", "title": "physically based hydraulic erosion simulation on graphics processing unit", "abstract": "Visual simulation of natural erosion on terrains has always been a fascinating research topic in the field of computer graphics. While there are many algorithms already developed to improve the visual quality of terrain, the recent simulation methods revolve around physically-based hydraulic erosion because it can generate realistic natural-looking terrains. However, many such algorithms were tested only on low resolution terrains. When simulated on a higher resolution terrain, most of the current algorithms become computationally expensive. This is why in many applications today, terrains are generated off-line and loaded during the application runtime. This method restricts the number of terrains which can be stored if there is a limitation on storage capacity. Recently, graphics hardware has evolved into an indispensable tool in improving the speed of computation. This has motivated us to develop an erosion algorithm to map to graphics hardware for faster terrain generation. In this paper, we propose a fast and efficient hydraulic erosion procedural technique that utilizes the GPU's powerful computation capability in order to generate high resolution erosion on terrains. Our method is based on the Newtonian physics approach that is implemented on a two-dimensional data structure which stores height fields, water amount, and dissolved sediment and water velocities.
We also present a comprehensive comparison between the CPU and GPU implementations together with the visual results and the statistics on simulation time taken", "keywords": ["terrain", "physically based modeling", "natural phenomena", "visual simulation", "hydraulic erosion"]} {"id": "kp20k_training_393", "title": "novel immune-based framework for securing ad hoc networks", "abstract": "One of the main security issues in mobile ad hoc networks (MANETs) is a malicious node that can falsify a route advertisement, overwhelm traffic without forwarding it, help to forward corrupted data and inject false or incomplete information, among many other security problems. Mapping immune system mechanisms to networking security is the main objective of this paper, which may contribute significantly to securing MANETs. As a step toward providing secure and reliable broadband services, formal specification logic along with a novel immune-inspired security framework (I2MANETs) is introduced. The different immune components are synchronized with the framework through an agent that has the ability to replicate, monitor, detect, classify, and block/isolate the corrupted packets and/or nodes in a federated domain. The framework functions like the human immune system in first response, second response, adaptability, distributability, survivability, and other immune features and properties. Interoperability with different routing protocols is considered. The framework has been implemented in a real environment. Desired and achieved results are presented", "keywords": ["security", "manets", "specification logic", "mobile agent"]} {"id": "kp20k_training_394", "title": "Slabpose columnsort: A new oblivious algorithm for out-of-core sorting on distributed-memory clusters", "abstract": "Our goal is to develop a robust out-of-core sorting program for a distributed-memory cluster. The literature contains two dominant paradigms for out-of-core sorting algorithms: merging-based and partitioning-based. We explore a third paradigm, that of oblivious algorithms. Unlike the two dominant paradigms, oblivious algorithms do not depend on the input keys and therefore lead to predetermined I/O and communication patterns in an out-of-core setting. Predetermined I/O and communication patterns facilitate overlapping I/O, communication, and computation for efficient implementation. We have developed several out-of-core sorting programs using the paradigm of oblivious algorithms. Our baseline implementation, 3-pass columnsort, was based on Leighton's columnsort algorithm. Though efficient in terms of I/O and communication, 3-pass columnsort has a restriction on the maximum problem size. As our first effort toward relaxing this restriction, we developed two implementations: subblock columnsort and M-columnsort. Both of these implementations incur substantial performance costs: subblock columnsort performs additional disk I/O, and M-columnsort needs substantial amounts of extra communication and computation. In this paper we present slabpose columnsort, a new oblivious algorithm that we have designed explicitly for the out-of-core setting. Slabpose columnsort relaxes the problem-size restriction at no extra I/O or communication cost. Experimental evidence on a Beowulf cluster shows that unlike subblock columnsort and M-columnsort, slabpose columnsort runs almost as fast as 3-pass columnsort.
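One pass of a highly simplified height-field erosion update, for intuition about the per-cell grid layout described in the erosion abstract above that maps well to the GPU; this toy sketch (our own, with invented constants) only moves material downhill, whereas the paper's Newtonian model also tracks dissolved sediment and per-cell water velocities.

import numpy as np

def erosion_step(height, water, k_capacity=0.1, k_rain=0.01):
    """Rain falls, then each cell sheds a bounded amount of material
    toward its lower 4-neighbours; total material is conserved."""
    water = water + k_rain
    out = height.copy()
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nbr = np.roll(height, (dy, dx), axis=(0, 1))
        drop = np.clip(height - nbr, 0, None)          # positive where downhill
        moved = np.minimum(0.25 * drop, water * k_capacity)
        out -= moved
        out += np.roll(moved, (-dy, -dx), axis=(0, 1)) # deposit at the neighbour
    return out, water

h = np.random.rand(128, 128)
w = np.zeros_like(h)
for _ in range(50):
    h, w = erosion_step(h, w)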
To the best of our knowledge, our implementations are the first out-of-core multiprocessor sorting algorithms that make no assumptions about the keys and produce output that is perfectly load balanced and in the striped order assumed by the Parallel Disk Model", "keywords": ["columnsort", "out-of-core", "parallel sorting", "distributed-memory cluster", "oblivious algorithms"]} {"id": "kp20k_training_395", "title": "A real-time kinematics on the translational crawl motion of a quadruped robot", "abstract": "It is known that the kinematics of a quadruped robot is complex due to its topology and the redundant actuation in the robot. However, it is fundamental to compute the inverse and direct kinematics for the sophisticated control of the robot in real-time. In this paper, the translational crawl gait of a quadruped robot is introduced and an approach to solving the kinematics of such a crawl motion is proposed. Since the resulting kinematics is simplified, the formulation can be used for the real-time control of the robot. The results of simulation and experiment show that the present method is feasible and efficient", "keywords": ["crawl velocity", "joint position", "joint velocity", "quadruped robot", "real-time kinematics", "trajectory of center-of-gravity", "translational crawl gait"]} {"id": "kp20k_training_396", "title": "Geographical classification of olive oils by the application of CART and SVM to their FT-IR", "abstract": "This paper reports the application of Fourier-transform infrared (FT-IR) spectroscopy to the geographical classification of extra virgin olive oils. Two chemometrical techniques, classification and regression trees (CART) and support vector machines (SVM) based on the Gaussian kernel and the recently introduced Euclidean distance-based Pearson VII Universal Kernel (PUK), were applied to discriminate between Italian and non-Italian and between Ligurian and non-Ligurian olive oils. The PUK has been applied successfully in the literature to regression problems. In this paper the mapping power of this universal kernel for classification was investigated. In this study it was observed that SVM performed better than CART. SVM based on the PUK provide models with a high selectivity and sensitivity (thus a better accuracy) as compared to those obtained using the Gaussian kernel. The wave numbers selected in the classification trees were interpreted, demonstrating that the trees were chemically justified. This study also shows that FT-IR spectroscopy associated with SVM and CART can be used to correctly discriminate between various origins of olive oils, demonstrating that the combination of techniques might be a powerful tool for supporting the claimed origin of olive oils", "keywords": ["ft-ir", "olive oil", "classification and regression trees", "support vector machines"]} {"id": "kp20k_training_397", "title": "ARFNNs under Different Types SVR for Identification of Nonlinear Magneto-Rheological Damper Systems with Outliers", "abstract": "This paper demonstrates different types of support vector regression (SVR) for annealing robust fuzzy neural networks (ARFNNs) applied to the identification of nonlinear magneto-rheological (MR) damper systems with outliers. An SVR performs well in determining the number of rules in the simplified fuzzy inference system and the initial weights for the fuzzy neural networks. In this paper, we independently propose two different types of SVR for the ARFNNs.
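An SVM with a custom Pearson VII Universal Kernel can be sketched as below; the kernel form follows the published PUK of Ustun et al., which we assume matches the one used in the olive-oil study above, and the data here is a random stand-in for FT-IR spectra.

import numpy as np
from scipy.spatial.distance import cdist
from sklearn.svm import SVC

def puk_kernel(X, Y, sigma=1.0, omega=1.0):
    """Pearson VII Universal Kernel (PUK) on Euclidean distances."""
    d = cdist(X, Y)
    return 1.0 / (1.0 + (2 * d * np.sqrt(2 ** (1.0 / omega) - 1) / sigma) ** 2) ** omega

# sklearn accepts a callable kernel that returns the Gram matrix.
X = np.random.rand(60, 5)
y = np.random.randint(0, 2, 60)
clf = SVC(kernel=lambda A, B: puk_kernel(A, B, sigma=0.5, omega=1.0)).fit(X, y)
print(clf.score(X, y))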
Hence, a combination model that fuses a simplified fuzzy inference system, SVR and radial basis function networks is used. Based on these initial structures, the annealing robust learning algorithm (ARLA) can then be used effectively to adjust the parameters of the structures. Simulation results show the superiority of the proposed method with the different types of SVR for nonlinear MR damper systems with outliers", "keywords": ["magneto-rheological damper", "fuzzy neural networks", "support vector regression", "annealing robust learning algorithm"]} {"id": "kp20k_training_398", "title": "Fuzzy linear regression model based on fuzzy scalar product", "abstract": "A new concept and method of imposing imprecise (fuzzy) input and output data upon the conventional linear regression model is proposed in this paper. We introduce the fuzzy scalar (inner) product to formulate the fuzzy linear regression model. In order to invoke the conventional approach of linear regression analysis for real-valued data, we consider the alpha-level linear regression models of the fuzzy linear regression model. We construct the membership functions of fuzzy least squares estimators via the form of the \"Resolution Identity\", which is a well-known formula in fuzzy set theory. In order to obtain the membership value of any given least squares estimate taken from the fuzzy least squares estimator, we transform the original problem into optimization problems. We also provide two computational procedures to solve the optimization problems", "keywords": ["fuzzy number", "fuzzy linear regression model", "fuzzy scalar product", "least squares estimator", "optimization"]} {"id": "kp20k_training_399", "title": "Relating torque and slip in an odometric model for an autonomous agricultural vehicle", "abstract": "This paper describes a method of considering the slip that is experienced by the wheels of an agricultural autonomous guided vehicle such that the accuracy of dead-reckoning navigation may be improved. Traction models for off-road locomotion are reviewed. Using experimental data from an agricultural AGV, a simplified form suitable for vehicle navigation is derived. This simplified model relates measurements of the torques applied to the wheels with wheel slip, and is used as the basis of an observation model for odometric sensor data in the vehicle's extended Kalman filter (EKF) navigation system. The slip model parameters are included as states in the vehicle EKF so that the vehicle may adapt to changing surface properties. Results using real field data and a simulation of the vehicle EKF show that positional accuracy can be increased by a slip-aware odometric model, and that when used as part of a multi-sensor navigation system, the consistency of the EKF state estimator is improved", "keywords": ["navigation", "kalman filter", "odometry", "slip", "traction"]} {"id": "kp20k_training_400", "title": "Evaluation of Folksonomy Induction Algorithms", "abstract": "Algorithms for constructing hierarchical structures from user-generated metadata have caught the interest of the academic community in recent years. In social tagging systems, the output of these algorithms is usually referred to as folksonomies (from folk-generated taxonomies). Evaluation of folksonomies and folksonomy induction algorithms is a challenging issue complicated by the lack of gold standards, the lack of comprehensive methods and tools, as well as a lack of research and empirical/simulation studies applying these methods.
In this article, we report results from a broad comparative study of state-of-the-art folksonomy induction algorithms that we have applied and evaluated in the context of five social tagging systems. In addition to adopting semantic evaluation techniques, we present and adopt a new technique that can be used to evaluate the usefulness of folksonomies for navigation. Our work sheds new light on the properties and characteristics of state-of-the-art folksonomy induction algorithms and introduces a new pragmatic approach to folksonomy evaluation, while at the same time identifying some important limitations and challenges of folksonomy evaluation. Our results show that folksonomy induction algorithms specifically developed to capture intuitions of social tagging systems outperform traditional hierarchical clustering techniques. To the best of our knowledge, this work represents the largest and most comprehensive evaluation study of state-of-the-art folksonomy induction algorithms to date", "keywords": ["algorithms", "experimentation", "folksonomies", "taxonomies", "evaluation", "social tagging systems"]} {"id": "kp20k_training_401", "title": "TELEPORTATION OF N-QUDIT STATE", "abstract": "In this paper, we study the teleportation of an arbitrary N-qudit state with the tensor representation. The necessary and sufficient condition for realizing a successful or perfect teleportation is obtained; as will be shown, it is determined by the measurement matrix T_delta and the quantum channel parameter matrix X. The general expressions of the measurement matrix T_delta are written out, and the quantum channel parameter matrix X is discussed. As an example, we show the details of three-ququart state teleportation", "keywords": ["qudit state", "ququart state", "channel parameter matrix", "measurement matrix", "transformation matrix"]} {"id": "kp20k_training_402", "title": "Bio-Interactive Healthcare Service System Using Lifelog Based Context Computing", "abstract": "Intelligent bio-sensor information processing was developed using lifelog-based context-aware technology to provide a flexible and dynamic range of diagnostic capabilities to satisfy healthcare requirements in ubiquitous and mobile computing environments. To accomplish this, various noise signals were grouped into six categories by context estimation, and noise reduction filters were effectively reconfigured by a neural network and a genetic algorithm. The neural network-based control module effectively selected an optimal filter block by noise context-based clustering in running mode, and filtering performance was improved by the genetic algorithm in evolution mode. Due to its adaptive criteria, the genetic algorithm was used to explore the action configuration for each identified bio-context to implement our concept. Our proposed Bio-interactive healthcare service system adopts the concepts of biological context-awareness with evolutionary computations in working environments modeled and identified as bio-sensor-based environmental contexts.
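One family of folksonomy induction algorithms of the kind evaluated in studies like the one above can be sketched in a Heymann-style manner: visit tags from most to least central and attach each to its most similar, already placed tag; the threshold, toy similarities and names below are our own assumptions.

import numpy as np

def heymann_tree(tags, sim, centrality, root_threshold=0.1):
    """Greedy tag-hierarchy induction: most central tag becomes the top;
    each later tag hangs under its most similar placed tag, or under the
    root (parent None) when no similarity is strong enough."""
    order = sorted(tags, key=lambda t: -centrality[t])
    parent, placed = {order[0]: None}, [order[0]]
    for t in order[1:]:
        best = max(placed, key=lambda p: sim[t][p])
        parent[t] = best if sim[t][best] > root_threshold else None
        placed.append(t)
    return parent

tags = ["music", "rock", "jazz", "guitar"]
sim = {t: {u: (1.0 if t == u else 0.5) for u in tags} for t in tags}  # toy co-occurrence similarities
centrality = {"music": 4, "rock": 3, "jazz": 2, "guitar": 1}
print(heymann_tree(tags, sim, centrality))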
We used an unsupervised learning algorithm for lifelog-based context modeling and a supervised learning algorithm for context identification", "keywords": ["biometric interaction", "context awareness", "interactive healthcare"]} {"id": "kp20k_training_403", "title": "SMB: Collision detection based on temporal coherence", "abstract": "The paper presents a novel collision detection algorithm, termed sort moving boxes (SMB), for a large number of moving 2D/3D objects represented by their axis-aligned bounding boxes (AABBs). The main feature of the algorithm is the full exploitation of the temporal coherence of the objects exhibited in a dynamic environment. In the algorithm, the AABBs are first projected to each Cartesian axis. The projected intervals on the axes are separately sorted by the diminishing increment sort (DIS) and further divided into subsections. By processing all the intervals within the subsections to check if they overlap, a complete contact list can be built. The SMB is a fast and robust collision detection algorithm, particularly for systems involving a large number of moving AABBs, and also supports the dynamic insertion and deletion of objects. Its performance in terms of both expected total detection time and memory requirements is proportional to the total number of AABBs, N, and is not influenced by size differences of AABBs, the space size and packing density over a large range of up to ten times difference. The only assumption made is that the sorted list at one time step will remain an almost sorted list at the next time step, which is valid for most applications in which the movement and deformation of each AABB and the dynamic change of the total number N are approximately continuous", "keywords": ["collision detection", "contact search", "sort", "axis-aligned bounding boxes ", "moving", "temporal coherence"]} {"id": "kp20k_training_404", "title": "An international analysis of the extensions to the IEEE LOMv1.0 metadata standard", "abstract": "We analyzed 44 works using the IEEE LOMv1.0 standard and found 15 types of extensions made to it. Due to interoperability difficulties in Mexico, we compared its extensions with those from the rest of the world. We found that local extensions do not help to increase a system's interoperability ability. We found that the most important action after implementing extensions is to publish them", "keywords": ["metadata", "learning objects", "interoperability", "extensions", "ieee lomv1.0 standard", "metadata application profiles"]} {"id": "kp20k_training_406", "title": "An architectural history of metaphors", "abstract": "This paper presents a review and an historical perspective on the architectural metaphor. It identifies common characteristics and peculiarities, as they apply to given historical periods, and analyses the similarities and divergences. The review provides a vocabulary, which will facilitate an appreciation of existing and new metaphors", "keywords": ["metaphor", "architecture", "art", "traditional or classical art", "ancient prehistoric", "modern and contemporary architecture"]} {"id": "kp20k_training_407", "title": "time-based query performance predictors", "abstract": "Query performance prediction is aimed at predicting the retrieval effectiveness that a query will achieve with respect to a particular ranking model. In this paper, we study query performance prediction for a ranking model that explicitly incorporates the time dimension into ranking.
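The per-axis interval processing behind SMB can be illustrated with a plain sweep over sorted projected intervals; this is a from-scratch sketch of the idea (SMB additionally keeps the per-axis lists nearly sorted across time steps and resorts them with diminishing increment sort to exploit temporal coherence), and the function names are ours.

def overlapping_pairs(boxes):
    """Boxes are (min_corner, max_corner) tuples. For each axis, sort the
    projected intervals and sweep to find axis overlaps; boxes collide
    only if their intervals overlap on every axis."""
    axes = len(boxes[0][0])
    pairs = None
    for ax in range(axes):
        events = sorted((b[0][ax], b[1][ax], i) for i, b in enumerate(boxes))
        hits, active = set(), []
        for lo, hi, i in events:
            active = [(h, j) for h, j in active if h >= lo]  # drop ended intervals
            hits.update((min(i, j), max(i, j)) for _, j in active)
            active.append((hi, i))
        pairs = hits if pairs is None else pairs & hits
    return pairs

boxes = [((0, 0), (2, 2)), ((1, 1), (3, 3)), ((5, 5), (6, 6))]
print(overlapping_pairs(boxes))   # {(0, 1)}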
Different time-based predictors are proposed, analogous to existing keyword-based predictors. To improve prediction performance, we combine different predictors using linear regression and neural networks. Extensive experiments are conducted using queries and relevance judgments obtained by crowdsourcing", "keywords": ["time-aware ranking", "query performance prediction"]} {"id": "kp20k_training_408", "title": "Asymptotically sufficient partitions and quantizations", "abstract": "We consider quantizations of observations represented by finite partitions of observation spaces. Partitions usually decrease the sensitivity of observations to their probability distributions. A sequence of quantizations is considered to be asymptotically sufficient for a statistical problem if the loss of sensitivity is asymptotically negligible. The sensitivity is measured by f-divergences of distributions or the closely related f-informations including the classical Shannon information. It is demonstrated that in some cases the maximization of f-divergences means the same as minimization of distortion of observations in the classical sense considered in mathematical statistics and information theory. The main result of the correspondence is a general sufficient condition for the asymptotic sufficiency of quantizations. Selected applications of this condition are studied, leading to new simple criteria of asymptotic optimality for quantizations of vector-valued observations and observations on general Poisson processes", "keywords": ["abstract observation spaces", "asymptotically sufficient partitions", "asymptotically sufficient quantizations", "euclidean observation spaces", "f-divergences", "f-informations", "general poisson processes", "optimal quantizations", "sufficient statistics"]} {"id": "kp20k_training_409", "title": "The topology aware file distribution problem", "abstract": "We present theoretical results for large-file distribution on general networks of known topology (known link bandwidths and router locations). We show that the problem of distributing a file in minimum time is NP-hard in this model, and we give an O(log n) approximation algorithm, where n is the number of workstations that require the file. We also characterize our method as optimal amongst the class of \"no-link-sharing\" algorithms", "keywords": ["network", "file distribution", "approximation"]} {"id": "kp20k_training_410", "title": "Achieving quality assurance functionality in the food industry using a hybrid case-based reasoning and fuzzy logic approach", "abstract": "Quality control of food inventories in the warehouse is complex as well as challenging because food can easily deteriorate. Currently, this difficult storage problem is managed mostly by using a human-dependent quality assurance and decision-making process. This has, however, occasionally led to unimaginative, arduous and inconsistent decisions due to the injection of subjective human intervention into the process. Therefore, it could be said that current practice is not powerful enough to support high-quality inventory management. In this paper, the development of an integrative prototype decision support system, namely the Intelligent Food Quality Assurance System (IFQAS), is described, which assists by automating the human-based decision-making process in the quality control of food storage.
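The linear-regression combination of predictors mentioned above can be sketched with ordinary least squares; the data below is a random stand-in for per-query predictor scores and effectiveness targets, and a neural network could replace this combination step.

import numpy as np

scores = np.random.rand(200, 3)   # e.g. two keyword-based + one time-based predictor
ap = np.random.rand(200)          # target effectiveness (e.g. average precision)

A = np.hstack([scores, np.ones((200, 1))])        # add an intercept column
coef, *_ = np.linalg.lstsq(A, ap, rcond=None)     # fit the combination weights
predicted = A @ coef

# Prediction quality is commonly reported as correlation with the target.
print(coef, np.corrcoef(predicted, ap)[0, 1])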
The system, which is composed of a Case-based Reasoning (CBR) engine and a Fuzzy rule-based Reasoning (FBR) engine, starts with the receipt of incoming food inventory. With the CBR engine, certain quality assurance operations can be suggested based on the attributes of the food received. Furthermore, the FBR engine can make suggestions on the optimal storage conditions of inventory by systematically evaluating the condition of the food when it is received. With the assistance of the system, holistic monitoring of quality control for the receiving operations and the storage conditions of the food in the warehouse can be performed. It provides consistent and systematic Quality Assurance Guidelines for quality control, which leads to improved customer satisfaction and a minimized defect rate", "keywords": ["food quality", "case-based reasoning", "fuzzy logic", "decision support system", "operation guidelines", "storage conditions"]} {"id": "kp20k_training_411", "title": "Coverage and connectivity in three-dimensional underwater sensor networks", "abstract": "Unlike a terrestrial network, an underwater sensor network can have significant height, which makes it a three-dimensional network. There are many important sensor network design problems where the physical dimensionality of the network plays a significant role. One such problem is determining how to deploy a minimum number of sensor nodes so that all points inside the network are within the sensing range of at least one sensor and all sensor nodes can communicate with each other, possibly over a multi-hop path. The solution to this problem depends on the ratio of the communication range and the sensing range of each sensor. Under a sphere-based communication and sensing model, placing a node at the center of each virtual cell created by truncated octahedron-based tessellation solves this problem when this ratio is greater than 1.7889. However, for smaller values of this ratio, the solution depends on how much communication redundancy the network needs. We provide solutions for both limited and full communication redundancy requirements", "keywords": ["three-dimensional", "coverage", "connectivity", "polyhedron", "node placement", "sphere-based sensing and communication"]} {"id": "kp20k_training_412", "title": "error correction of voicemail transcripts in scanmail", "abstract": "Despite its widespread use, voicemail presents numerous usability challenges: people must listen to messages in their entirety, they cannot search by keywords, and audio files do not naturally support visual skimming. SCANMail overcomes these flaws by automatically generating text transcripts of voicemail messages and presenting them in an email-like interface. Transcripts facilitate quick browsing and permanent archiving. However, errors from the automatic speech recognition (ASR) hinder the usefulness of the transcripts. The work presented here specifically addresses these problems by evaluating user-initiated error correction of transcripts. User studies of two editor interfaces (a grammar-assisted menu and simple replacement by typing) reveal reduced audio playback times and an emphasis on editing important words with the menu, suggesting its value in mobile environments where limited input capabilities are the norm and user privacy is essential.
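A minimal Mamdani-style sketch of the FBR idea above: fuzzify the incoming food's condition, fire a couple of rules, and defuzzify to a storage setpoint. The rule base, membership shapes and numbers are entirely invented for illustration.

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

def storage_temperature(freshness, humidity):
    """Weighted-average defuzzification over two invented rules."""
    fresh_low = tri(freshness, 0.0, 0.25, 0.5)
    fresh_high = tri(freshness, 0.5, 0.75, 1.0)
    humid_high = tri(humidity, 0.5, 0.75, 1.0)
    # rule 1: low freshness -> very cold storage (2 C)
    # rule 2: high freshness and high humidity -> cool storage (8 C)
    w1, w2 = fresh_low, min(fresh_high, humid_high)
    if w1 + w2 == 0:
        return 5.0   # fallback setpoint when no rule fires
    return (w1 * 2.0 + w2 * 8.0) / (w1 + w2)

print(storage_temperature(freshness=0.3, humidity=0.8))   # -> 2.0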
The study also adds to the scarce body of work on ASR confidence shading, suggesting that shading may be more helpful than previously reported", "keywords": ["speech recognition", "voicemail", "editor interfaces", "confidence shading", "error correction"]} {"id": "kp20k_training_413", "title": "Study of stress waves in geomedia and effect of a soil cover layer on wave attenuation using a 1-D finite-difference method", "abstract": "The propagation and attenuation of blast-induced stress waves differ between geomedia such as rock and soil masses. This paper numerically studies the propagation and attenuation of blast-induced elastoplastic waves in deep geomedia by using a one-dimensional (1-D) finite-difference code. Firstly, the elastoplastic Cap models for rock and soil masses are introduced into the governing equations of spherical wave motion and a FORTRAN code based on the finite difference method is developed. Secondly, an underground spherical blast is simulated with this code and verified against the software RENEWTO. The propagation of stress waves in rock and soil masses is then numerically investigated. Finally, the effect of a soil cover layer on the attenuation of stress waves in the rear rock mass is studied. It is determined that large plastic deformation of geomedia can effectively dissipate the energy of stress waves inward and that the developed 1-D finite-difference code coupled with elastoplastic Cap models is convenient and effective in numerical simulations of underground spherical explosions. ", "keywords": ["geomedia", "elastoplastic cap model", "stress-waves", "soil cover layer", "attenuation", "finite difference method"]} {"id": "kp20k_training_414", "title": "Keyed hash function based on a dynamic lookup table of functions", "abstract": "In this paper, we present a novel keyed hash function based on a dynamic lookup table of functions. More specifically, we first exploit the piecewise linear chaotic map (PWLCM) with secret keys to produce four 32-bit initial buffers and then elaborate the lookup table of functions used for selecting composite functions associated with messages. Next, we convert the divided message blocks into ASCII code values, check the equivalent indices and then find the associated composite functions in the lookup table of functions. For each message block, the four buffers are reassigned by the corresponding composite function and then the lookup table of functions is dynamically updated. After all the message blocks are processed, the final 128-bit hash value is obtained by cascading the last reassigned four buffers. Finally, we evaluate our hash function and the results demonstrate that the proposed hash algorithm has good statistical properties, strong collision resistance, high efficiency, and better statistical performance compared with existing chaotic hash functions", "keywords": ["chaos", "keyed hash function", "piecewise linear chaotic map", "lookup table of functions", "transfer function", "composite function"]} {"id": "kp20k_training_415", "title": "Exploring hierarchical multidimensional data with unified views of distribution and correlation", "abstract": "Data analysts explore data by inspecting features such as clustering, distribution and correlation. Much existing research has focused on different visualisations for different data exploration tasks. For example, a data analyst might inspect clustering and correlation with scatterplots, but use histograms to inspect a distribution.
Such visualisations allow an analyst to confirm prior expectations. For example, a scatterplot may confirm an expected correlation or may show deviations from the expected correlation. In order to better facilitate discovery of unexpected features in data, however, a combination of different perspectives may be needed. In this paper, we combine distributional and correlational views of hierarchical multidimensional data. Our unified view supports the simultaneous exploration of data distribution and correlation. By presenting a unified view, we aim to increase the chances of discovery of unexpected data features, and to provide the means to explore such features in detail. Further, our unified view is equipped with a small number of primitive interaction operators which a user composes to facilitate smooth and flexible exploration. ", "keywords": ["data analysis", "multidimensional data", "data distribution", "correlation"]} {"id": "kp20k_training_416", "title": "Application driven network-on-chip architecture exploration & refinement for a complex SoC", "abstract": "This article presents an overview of the design process of an interconnection network, using the technology proposed by Arteris. Section 2 summarizes the various features a NoC is required to implement to be integrated in modern SoCs. Section 3 describes the proposed top-down approach, based on the progressive refinement of the NoC description, from its functional specification (Sect. 4) to its verification (Sect. 8). The approach is illustrated by a typical use-case of a NoC embedded in a hand-held gaming device. The methodology relies on the definition of the performance behavior and expectation (Sect. 5), which can be simulated early and efficiently against various NoC architectures. The system architect is then able to identify bottlenecks and converge towards the NoC implementation fulfilling the requirements of the target application (Sect. 6)", "keywords": ["multimedia system-on-chip", "network-on-chip", "memory-mapped transaction interconnect", "dynamic memory scheduling", "quality-of-service", "performance verification", "architecture exploration", "systemc transaction level modeling"]} {"id": "kp20k_training_417", "title": "Real-valued MVDR beamforming using spherical arrays with frequency invariant characteristic", "abstract": "Complex-valued minimum variance distortionless response (MVDR) beamforming for wideband signals has a very high computational cost. In this paper, we design a novel real-valued MVDR beamformer for spherical arrays. The dependence of the array steering matrix on source signal directions and frequencies is decoupled using spherical harmonic decomposition. Then a compensation network is designed to remove the frequency dependence of the array response and to obtain a new array response determined only by the spherical harmonics of the source directions. All frequency bins of wideband signals can be used together instead of being processed independently. By exploiting the property of the conjugate spherical harmonics, a unitary transform can be found to acquire a real-valued frequency invariant steering matrix (FISM). Based on the FISM, real-valued MVDR (RV-MVDR) is developed to obtain good performance at a low computational cost.
Simulation results demonstrate the performance of our proposed method for beamforming and direction-of-arrival (DOA) estimation in comparison with the complex-valued and real-weighted MVDR methods", "keywords": ["spherical arrays", "spherical harmonic decomposition", "real-valued minimum variance distortionless response", "frequency invariant beamforming", "unitary transform"]} {"id": "kp20k_training_418", "title": "PRIVATE DATABASE QUERIES USING QUANTUM STATES WITH LIMITED COHERENCE TIMES", "abstract": "We describe a method for private database queries using exchange of quantum states with bits encoded in mutually incompatible bases. For technology with limited coherence time, the database vendor can announce the encoding after a suitable delay to allow the user to privately learn one of two items in the database without the ability to also definitely infer the second item. This quantum approach also allows the user to choose to learn other functions of the items, such as the exclusive-or of their bits, but not to gain more information than is equivalent to learning one item, on average. This method is especially useful for items consisting of a few bits because it avoids the substantial overhead of conventional cryptographic approaches", "keywords": ["quantum computing", "private data access", "digital property rights"]} {"id": "kp20k_training_419", "title": "Scheduling for information gathering on sensor network", "abstract": "We investigate a unique wireless sensor network scheduling problem in which all nodes in a cluster send exactly one packet to a designated sink node in an effort to minimize transmission time. However, node transmissions must be sufficiently isolated either in time or in space to avoid collisions. The problem is formulated and solved via graph representation. We prove that an optimal transmission schedule can be obtained efficiently through a pipeline-like schedule when the underlying topology is either a line or a tree. The minimum time required for a line or tree topology with n nodes is 3(n-2). We further prove that our scheduling problem is NP-hard for general graphs. We propose a heuristic algorithm for general graphs. Our heuristic tries to schedule as many independent segments as possible to increase the degree of parallel transmissions. This algorithm is compared to an RTS/CTS based distributed algorithm. Preliminary simulation results indicate that our heuristic algorithm outperforms the RTS/CTS based distributed algorithm (up to 30%) and exhibits stable behavior", "keywords": ["sensor network", "hybrid network", "scheduling", "all-to-one information gathering"]} {"id": "kp20k_training_420", "title": "synchronization analysis and control in chaos system based on complex network", "abstract": "For a certain kind of complex network, the Lorenz chaotic system is used to describe the state equation of the nodes in the network. By constructing a Lyapunov function, it is proved that this network model can achieve synchronization under the adaptive control scheme. The control strategy is simple, effective and easy to apply in future engineering designs. The simulation results show the effectiveness of the control scheme", "keywords": ["synchronization", "chaos system", "adaptive control", "complex network"]} {"id": "kp20k_training_421", "title": "Improved property in organic light-emitting diode utilizing two Al/Alq3 layers", "abstract": "We report on the fabrication of organic light-emitting devices (OLEDs) utilizing two Al/Alq3 layers and two electrodes.
This novel green device has the structure Al(110nm)/tris(8-hydroxyquinoline) aluminum (Alq3)(65nm)/Al(110nm)/Alq3(50nm)/N,N′-diphenyl-N,N′-bis(3-methylphenyl)-1,1′-biphenyl-4,4′-diamine (TPD)(60nm)/ITO(60nm)/Glass. TPD was used as the hole-transporting layer (HTL) and Alq3 was used as the electron-transporting layer (ETL); at the same time, Alq3 was also used as the emitting layer (EL). Al and ITO were used as cathode and anode, respectively. The results showed that the device containing the two Al/Alq3 layers and two electrodes had a higher brightness and electroluminescent efficiency than the devices without this structure. At a current density of 14mA/cm2, the brightness of the device with the two Al/Alq3 layers reached 3693cd/m2, which is higher than the 2537cd/m2 of the Al/Alq3/TPD:Alq3/ITO/Glass device and the 1504.0cd/m2 of the Al/Alq3/TPD/ITO/Glass device. The turn-on voltage of the device with two Al/Alq3 layers was 7V, which is lower than that of the others", "keywords": ["oleds", "emitting layer", "transporting layer"]} {"id": "kp20k_training_422", "title": "Concept development for kindergarten children through a health simulation", "abstract": "According to many dental professionals, the decay process resulting from the accumulation of sugar on teeth is a very difficult concept for young children to learn. Playing the dental hygiene game with ThinkingTags not only brings context into the classroom, but also allows children to work with digital manipulatives that provide rich personal experiences and instant feedback. Instead of watching a demonstration of the accumulation of sugars on a computer screen, or being told about dental health, this simulation allows pre-school children to experience improving or decaying dental health without any real adverse health effects. Small, wearable, microprocessor-driven Tags were brought into the kindergarten classroom to simulate the decay process, providing information about sugars in foods and creating a discussion about teeth. Preliminary analyses suggest that this program was effective and enthusiastically received by this age group", "keywords": ["collaboration", "dialogue", "discourse analysis", "pre-school", "simulation", "wireless"]} {"id": "kp20k_training_423", "title": "High output impedance current-mode four-function filter with reduced number of active and passive elements using the dual output current conveyor", "abstract": "This paper reports a new single-input multi-output current-mode multifunction filter which can simultaneously realise LP, HP, BP and BR filter functions, all at high impedance outputs. The circuit permits orthogonal adjustment of the quality factor Q and ω0, employs only five grounded passive components, and no element matching conditions are imposed. A second order all-pass function can easily be obtained. The passive sensitivities are shown to be low", "keywords": ["current conveyors", "multifunction filters", "current-mode circuits"]} {"id": "kp20k_training_424", "title": "constraint programming for itemset mining", "abstract": "The relationship between constraint-based mining and constraint programming is explored by showing how the typical constraints used in pattern mining can be formulated for use in constraint programming environments. The resulting framework is surprisingly flexible and allows us to combine a wide range of mining constraints in different ways. We implement this approach in off-the-shelf constraint programming systems and evaluate it empirically.
The results show that the approach is not only very expressive, but also works well on complex benchmark problems", "keywords": ["constraint programming", "itemset mining"]} {"id": "kp20k_training_425", "title": "experiences mining open source release histories", "abstract": "Software releases form a critical part of the life cycle of a software project. Typically, each project produces releases in its own way, using various methods of versioning, archiving, announcing and publishing the release. Understanding the release history of a software project can shed light on the project history, as well as the release process used by that project, and how those processes change. However, many factors make automating the retrieval of release history information difficult, such as the many sources of data, a lack of relevant standards and a disparity of tools used to create releases. In spite of the large amount of raw data available, no attempt has been made to create a release history database of a large number of projects in the open source ecosystem. This paper presents our experiences, including the tools, techniques and pitfalls, in our early work to create a software release history database which will be of use to future researchers who want to study and model the release engineering process in greater depth", "keywords": ["release engineering", "data mining"]} {"id": "kp20k_training_426", "title": "Balancing throughput and response time in online scientific Clouds via Ant Colony Optimization (SP2013/2013/00006", "abstract": "The Cloud Computing paradigm focuses on the provisioning of reliable and scalable infrastructures (Clouds) delivering execution and storage services. The paradigm, with its promise of virtually infinite resources, seems well suited to solving resource-greedy scientific computing problems. The goal of this work is to study private Clouds to execute scientific experiments coming from multiple users, i.e., our work focuses on the Infrastructure as a Service (IaaS) model where custom Virtual Machines (VMs) are launched in appropriate hosts available in a Cloud. Correctly scheduling Cloud hosts is therefore very important, and it is necessary to develop efficient scheduling strategies to appropriately allocate VMs to physical resources. The job scheduling problem is however NP-complete, and therefore many heuristics have been developed. In this work, we describe and evaluate a Cloud scheduler based on Ant Colony Optimization (ACO). The main performance metrics to study are the number of users serviced by the Cloud and the total number of created VMs in online (non-batch) scheduling scenarios. In addition, the number of intra-Cloud network messages sent is evaluated. Simulated experiments performed using CloudSim and job data from real scientific problems show that our scheduler succeeds in balancing the studied metrics compared to schedulers based on Random assignment and Genetic Algorithms", "keywords": ["cloud computing", "scientific problems", "job scheduling", "swarm intelligence", "ant colony optimization", "genetic algorithms"]} {"id": "kp20k_training_427", "title": "Exploring the CSCW spectrum using process mining", "abstract": "Process mining techniques allow for extracting information from event logs. For example, the audit trails of a workflow management system or the transaction logs of an enterprise resource planning system can be used to discover models describing processes, organizations, and products.
Traditionally, process mining has been applied to structured processes. In this paper, we argue that process mining can also be applied to less structured processes supported by computer supported cooperative work (CSCW) systems. In addition, the ProM framework is described. Using ProM, a wide variety of process mining activities are supported, ranging from process discovery and verification to conformance checking and social network analysis", "keywords": ["process mining", "business activity monitoring", "business process intelligence", "cscw", "data mining"]} {"id": "kp20k_training_428", "title": "Effects of spatial and temporal variation in environmental conditions on simulation of wildfire spread", "abstract": "Implementation of a wildfire spread model based on the level set method. Investigation of wildfire propagation under stochastic wind and fuel conditions. Local variation in combustion conditions slows the rate of propagation. Local variation in wind direction is found to increase flank spread. A harmonic mean is preferable for spatially varying parameters in spread models", "keywords": ["perimeter propagation", "simulation", "modelling", "fire growth", "level set", "spark"]} {"id": "kp20k_training_429", "title": "Utilization of spatial decision support systems decision-making in dryland agriculture: A Tifton burclover case study", "abstract": "FSAW delineated Wyoming agricultural land into relative ranks for burclover establishment. Defuzzification produced a final output map with crisp scores and a calculated centroid. The calculated centroid map demonstrated the efficacy of SDSS in agricultural decision-making. Effective land suitability ranking validated the value of ex-ante agricultural technologies. The presented information has the potential to determine burclover feasibility in Wyoming", "keywords": ["gis geographic information systems", "idw inverse distance weighting", "fsaw fuzzy simple additive weighting", "madm multiple attribute decision-making", "mcdm multiple criteria decision-making", "sdss spatial decision support systems"]} {"id": "kp20k_training_430", "title": "Propagation engine prototyping with a domain specific language", "abstract": "Constraint propagation is at the heart of constraint solvers. Two main trends co-exist for its implementation: variable-oriented propagation engines and constraint-oriented propagation engines. Those two approaches ensure the same level of local consistency but their efficiency (computation time) can be quite different depending on the instance solved. However, it is usually accepted that there is no best approach in general, and modern constraint solvers implement only one. In this paper, we would like to go a step further by providing a solver-independent language at the modeling stage to enable the design of propagation engines. We validate our proposal with a reference implementation based on the Choco solver and the MiniZinc constraint modeling language", "keywords": ["propagation", "constraint solver", "domain specific language", "implementation"]} {"id": "kp20k_training_431", "title": "A Projection Pursuit framework for supervised dimension reduction of high dimensional small sample datasets", "abstract": "The analysis and interpretation of datasets with a large number of features and few examples has remained a challenging problem in the scientific community, owing to the difficulties associated with the curse-of-the-dimensionality phenomenon.
Projection Pursuit (PP) has shown promise in circumventing this phenomenon by searching low-dimensional projections of the data where meaningful structures are exposed. However, PP faces computational difficulties in dealing with datasets containing thousands of features (typical in genomics and proteomics) due to the vast quantity of parameters to optimize. In this paper we describe and evaluate a PP framework aimed at relieving such difficulties and thus easing the construction of classifier systems. The framework is a two-stage approach, where the first stage performs a rapid compaction of the data and the second stage implements the PP search using an improved version of the SPP method (Guo et al., 2000, [32]). In an experimental evaluation with eight public microarray datasets we showed that some configurations of the proposed framework can clearly outperform eight well-established dimension reduction methods in their ability to pack more discriminatory information into fewer dimensions", "keywords": ["projection pursuit", "classification", "gene expression", "dimension reduction"]} {"id": "kp20k_training_432", "title": "Conservation Functions for 1-D Automata: Efficient Algorithms, New Results, and a Partial Taxonomy", "abstract": "We present theorems that can be used for improved efficiency in the calculation of conservation functions for cellular automata. We report results obtained from implementations of algorithms based on these theorems that show conservation laws for 1-D cellular automata of higher order than any previously known. We introduce the notion of trivial and core conservation functions to distinguish truly new conservation functions from simple extensions of lower-order ones. We then present the complete list of conservation functions up to order 16 for the 256 elementary 1-D binary cellular automata. These include CAs that were not previously known to have nontrivial conservation functions", "keywords": ["cellular automata", "conservation functions", "linear algebra", "classification scheme", "taxonomy"]} {"id": "kp20k_training_433", "title": "A reference bacterial genome dataset generated on the MinION portable single-molecule nanopore sequencer", "abstract": "The MinION is a new, portable single-molecule sequencer developed by Oxford Nanopore Technologies. It measures four inches in length and is powered from the USB 3.0 port of a laptop computer. The MinION measures the change in current resulting from DNA strands interacting with a charged protein nanopore. These measurements can then be used to deduce the underlying nucleotide sequence", "keywords": ["genomics", "nanopore sequencing"]} {"id": "kp20k_training_434", "title": "Serial batching scheduling of deteriorating jobs in a two-stage supply chain to minimize the makespan", "abstract": "For the scheduling problem with a buffer, an optimal algorithm is developed for solving it. For the scheduling problem without a buffer, some useful properties are derived. A heuristic is designed for solving it, and a novel lower bound is also derived. Two special cases are analyzed in detail, and two optimal algorithms are developed for solving them, respectively", "keywords": ["batch scheduling", "supply chain", "deterioration", "transportation", "heuristic"]} {"id": "kp20k_training_435", "title": "Entanglement monotones and maximally entangled states in multipartite qubit systems", "abstract": "We present a method to construct entanglement measures for pure states of multipartite qubit systems.
The key element of our approach is an antilinear operator that we call a comb, in reference to the hairy-ball theorem. For qubits (i.e. spin 1/2) the combs are automatically invariant under SL(2, C). This implies that the filters obtained from the combs are entanglement monotones by construction. We give alternative formulae for the concurrence and the 3-tangle as expectation values of certain antilinear operators. As an application we discuss inequivalent types of genuine four-, five- and six-qubit entanglement", "keywords": ["entanglement monotones", "multipartite entanglement", "antilinear operators"]} {"id": "kp20k_training_436", "title": "Automatic verification of Java programs with dynamic frames", "abstract": "Framing in the presence of data abstraction is a challenging and important problem in the verification of object-oriented programs (Leavens et al., Formal Aspects Comput (FACS) 19:159-189, 2007). The dynamic frames approach is a promising solution to this problem. However, the approach is formalized in the context of an idealized logical framework. In particular, it is not clear that the solution is suitable for use within a program verifier for a Java-like language based on verification condition generation and automated, first-order theorem proving. In this paper, we demonstrate that the dynamic frames approach can be integrated into an automatic verifier based on verification condition generation and automated theorem proving. The approach has been proven sound and has been implemented in a verifier prototype. The prototype has been used to prove correctness of several programming patterns considered challenging in related work", "keywords": ["program verification", "dynamic frames", "frame problem", "data abstraction"]} {"id": "kp20k_training_437", "title": "Properties of the transmission of pulse sequences in a bistable chain of unidirectionally coupled neurons", "abstract": "We study the propagation of pulse sequences in a chain of neurons with sigmoidal input-output relations. The propagating speeds of pulse fronts depend on the widths of the preceding pulses, and adjacent pulse fronts interact attractively. Sequences of pulse widths are then modulated through transmission. Equations for changes in pulse width sequences are derived with a kinematical model of propagating pulse fronts. The transmission of pulse width sequences in the chain is expressed as a linear system with additive noise. The gain of the system function increases exponentially with the number of neurons in a high-frequency region. The power spectrum of variations in pulse widths due to spatiotemporal noise also increases in the same manner. Further, the interaction between pulse fronts preserves the coherence and mutual information of initial and transmitted pulse sequences. Results of an experiment on an analog circuit confirm these properties", "keywords": ["transmission line", "chain of neurons", "pulse", "noise"]} {"id": "kp20k_training_438", "title": "Building geometric feature based maps for indoor service robots", "abstract": "This paper presents an efficient geometric approach to the Simultaneous Localization and Mapping problem based on an Extended Kalman Filter. The map representation and building process is formulated, fully implemented and successfully tested in different indoor environments with different robots. The use of orthogonal shape constraints is proposed to deal with the inconsistency of the estimation.
Built maps are successfully used for the navigation of two different service robots: an interactive tour guide robot and an assistive walking aid for the frail elderly", "keywords": ["simultaneous localization and mapping", "extended kalman filter", "inconsistency", "service robot"]} {"id": "kp20k_training_439", "title": "The cross-entropy method with patching for rare-event simulation of large Markov chains", "abstract": "There are various importance sampling schemes to estimate rare event probabilities in Markovian systems such as Markovian reliability models and Jackson networks. In this work, we present a general state-dependent importance sampling method which partitions the state space and applies the cross-entropy method to each partition. We investigate two versions of our algorithm and apply them to several examples of reliability and queueing models. In all these examples we compare our method with other importance sampling schemes. The performance of the importance sampling schemes is measured by the relative error of the estimator and by the efficiency of the algorithm. The results from experiments show considerable improvements both in the running time of the algorithm and in the variance of the estimator", "keywords": ["cross-entropy", "rare events", "importance sampling", "large-scale markov chains"]} {"id": "kp20k_training_440", "title": "Combined simulation for process control: extension of a general purpose simulation tool", "abstract": "Combined discrete event and continuous views of production processes are important in designing computer control systems for both process industries and manufacturing. The paper presents an extension of the popular Matlab-Simulink simulation tool to facilitate the simulation of the discrete sequential control logic applied to continuous processes. The control system is modelled as a combined system where the discrete and the continuous parts of the system are separated and an interface is introduced between them. The sequential control logic is represented by a Sequential Function Chart (SFC). A SFC blockset is defined to enable graphical composition of the SFC and its integration into the Simulink environment. A simulation mechanism is implemented which is called periodically from the standard Simulink simulation engine, carries out the correct state transition sequence of the discrete model and executes the corresponding SFC actions. Two simulation case studies are given to illustrate the possible application of the developed simulation environment: the simulation of a batch process cell, as an example from the area of process control, and an example of a manufacturing system, i.e. the control of a laboratory-scale modular production system. ", "keywords": ["simulation", "hybrid systems", "petri nets"]} {"id": "kp20k_training_441", "title": "Action recognition feedback-based framework for human pose reconstruction from monocular images", "abstract": "A novel framework based on action recognition feedback for pose reconstruction of an articulated human body from monocular images is proposed in this paper. The intrinsic ambiguity caused by perspective projection makes it difficult to accurately recover articulated poses from monocular images. To alleviate such ambiguity, we exploit high-level motion knowledge as action recognition feedback to discard implausible estimates and generate more accurate pose candidates using the large number of motion constraints present during natural human movement.
The motion knowledge is represented by both local and global motion constraints. The local spatial constraint captures motion correlation between body parts via multiple relevance vector machines, while the global temporal constraint preserves temporal coherence between time-ordered poses via a manifold motion template. Experiments on the CMU Mocap database demonstrate that our method achieves better estimation accuracy than other methods without action recognition feedback", "keywords": ["human pose reconstruction", "action recognition feedback", "motion correlation", "manifold motion template"]} {"id": "kp20k_training_442", "title": "Burr size reduction in drilling by ultrasonic assistance", "abstract": "Accuracy and surface finish play an important role in modern industry. Undesired projections of materials, known as burrs, reduce the part quality and negatively affect the assembly process. A recent and promising method for reducing burr size in metal cutting is the use of ultrasonic assistance, where high-frequency and low-amplitude vibrations are added in the feed direction during cutting. Note that this cutting process is distinct from ultrasonic machining. This paper presents the design of an ultrasonically vibrated workpiece holder, and a two-stage experimental investigation of ultrasonically assisted drilling of A1100-0 aluminum workpieces. The results of 175 drilling experiments with uncoated and TiN-coated drills are reported and analyzed. The effect of ultrasonic assistance on burr size, chip formation, thrust forces and tool wear is studied. The results demonstrate that under suitable ultrasonic vibration conditions, the burr height and width can be reduced in comparison to conventional drilling", "keywords": ["burr", "drilling", "metal cutting", "ultrasonic assistance", "ultrasonic assisted drilling", "vibration assisted drilling"]} {"id": "kp20k_training_443", "title": "Selecting Coherent and Relevant Plots in Large Scatterplot Matrices", "abstract": "The scatterplot matrix (SPLOM) is a well-established technique to visually explore high-dimensional data sets. It is characterized by the number of scatterplots (plots) of which it consists. Unfortunately, this number grows quadratically with the number of the data set's dimensions. Thus, an SPLOM scales very poorly. Consequently, the usefulness of SPLOMs is restricted to a small number of dimensions. Several approaches already exist to explore such small SPLOMs, but they address the scalability problem only indirectly and without solving it. Therefore, we introduce a new greedy approach to manage large SPLOMs with more than 100 dimensions. We establish a combined visualization and interaction scheme that produces intuitively interpretable SPLOMs by combining known quality measures, a pre-process reordering and a perception-based abstraction. With this scheme, the user can interactively find large numbers of relevant plots in large SPLOMs", "keywords": ["visual analytics", "quality measure", "high-dimensional data", "scatterplot matrix"]} {"id": "kp20k_training_444", "title": "The random electrode selection ensemble for EEG signal classification", "abstract": "Pattern classification methods are a crucial direction in the current study of brain-computer interface (BCI) technology.
A simple yet effective ensemble approach for electroencephalogram (EEG) signal classification named the random electrode selection ensemble (RESE) is developed, which aims to overcome the instability drawback of Fisher discriminant feature extraction for BCI applications. Through the random selection of recording electrodes accounting for the physiological background of user-intended mental activities, multiple individual classifiers are constructed. In a feature subspace determined by a couple of randomly selected electrodes, principal component analysis (PCA) is first used to carry out dimensionality reduction. Subsequently, Fisher discriminant analysis is adopted for feature extraction, and a Bayesian classifier with a Gaussian mixture model (GMM) approximating the feature distribution is trained. For a test sample, the outputs from all the Bayesian classifiers are combined to give the final prediction for its label. Theoretical analysis and classification experiments with real EEG signals indicate that the RESE approach is both effective and efficient", "keywords": ["eeg signal classification", "classifier ensemble", "bayesian classifier", "gaussian mixture model", "fisher discriminant analysis"]} {"id": "kp20k_training_445", "title": "Robust schur stability of polynomials with polynomial parameter dependency", "abstract": "The paper considers the robust Schur stability verification of polynomials with coefficients depending polynomially on parameters varying in given intervals. A new algorithm is presented which relies on the expansion of a multivariate polynomial into Bernstein polynomials and is based on the decomposition of the family of polynomials into its symmetric and antisymmetric parts. It is shown how the inspection of both polynomial families on the upper half of the unit circle can be reduced to the analysis of two related polynomial families on the real interval [-1, 1]. Then the Bernstein expansion can be applied in order to check whether both polynomial families have a common zero in this interval", "keywords": ["schur stability", "robust stability", "bernstein polynomials"]} {"id": "kp20k_training_446", "title": "Use of nano-scale double-gate MOSFETs in low-power tunable current mode analog circuits", "abstract": "Use of independently-driven nano-scale double gate (DG) MOSFETs for low-power analog circuits is emphasized and illustrated. In independent drive configuration, the top gate response of DG-MOSFETs can be altered by application of a control voltage on the bottom gate. We show that this could be a powerful method to conveniently tune the response of conventional CMOS analog circuits, especially for current-mode design. Several examples of such circuits, including current mirrors, a differential current amplifier and differential integrators, are illustrated and their performance gauged using TCAD simulations. The topologies and biasing schemes explored here show how nano-scale DG-MOSFETs may pave the way for efficient, mismatch-tolerant and smaller circuits with tunable characteristics", "keywords": ["integrated circuits", "tunable analog circuits", "current mode circuits", "mixed-mode simulations", "dg-mosfet"]} {"id": "kp20k_training_447", "title": "Cooperative triangulation in MSBNs without revealing subnet structures", "abstract": "Multiply sectioned Bayesian networks (MSBNs) provide a coherent framework for probabilistic inference in a cooperative multiagent distributed interpretation system.
Inference in MSBNs can be performed effectively using a compiled representation. The compilation involves the triangulation of the collective dependency structure (a graph) defined in terms of the union of a set of local dependency structures (a set of graphs). Privacy of agents eliminates the option to assemble these graphs at a central location and to triangulate their union. Earlier work solved distributed triangulation in a restricted case. The method is conceptually complex and the correctness of its extension to the general case is difficult to justify. In this paper, we present a new method that is conceptually simpler and efficient. We prove its correctness in the general case and demonstrate its performance experimentally", "keywords": ["triangulation", "chordal graph", "graph theory", "distributed computation", "bayesian networks", "multiply sectioned bayesian networks", "multiagent systems", "cooperation and coordination", "approximate reasoning"]} {"id": "kp20k_training_448", "title": "the complexity of parallel evaluation of linear recurrence", "abstract": "The concept of computers such as C.mmp and ILLIAC IV is to achieve computational speed-up by performing several operations simultaneously with parallel processors. This type of computer organization is referred to as a parallel computer. In this paper, we prove upper bounds on speed-ups achievable by parallel computers for a particular problem, the solution of first order linear recurrences. We consider this problem because it is important in practice and also because it is simply stated, so that we might obtain some insight into the nature of parallel computation by studying it", "keywords": ["processor", "concept", "operability", "organization", "order", "parallel computation", "complexity", "paper", "practical", "evaluation", "parallel", "computation"]} {"id": "kp20k_training_449", "title": "Leukocyte image segmentation using simulated visual attention", "abstract": "Computer-aided automatic analysis of microscopic leukocytes is a powerful diagnostic tool in biomedical fields which could reduce the effects of human error, improve diagnosis accuracy, and save manpower and time. However, it is challenging to segment entire leukocyte populations due to the changing features extracted in the leukocyte image, and this task remains an unsolved issue in blood cell image segmentation. This paper presents an efficient strategy to construct a segmentation model for any leukocyte image using simulated visual attention via learning by on-line sampling. In the sampling stage, two types of visual attention, bottom-up and top-down, together with the movement of the human eye, are simulated. We focus on a few regions of interest and sample high-gradient pixels to form training sets. In the learning stage, the SVM (support vector machine) model is trained in real time to simulate the visual neuronal system and then classifies pixels and extracts leukocytes from the image.
Experimental results show that the proposed method performs better than marker-controlled watershed algorithms with manual intervention and thresholding-based methods", "keywords": ["image segmentation", "visual attention", "machine learning", "leukocyte image", "svm"]} {"id": "kp20k_training_450", "title": "On the construction of an aggregated measure of the development of interval data", "abstract": "We analyse some possibilities for constructing an aggregated measure of the development of socio-economic objects in terms of their composite phenomenon (i.e., a phenomenon described by many statistical features) if the relevant data are expressed as intervals. Such a measure, based on the deviation of the data structure for a given object from the benchmark of development, is a useful tool for ordering, comparing and clustering objects. We present the construction of a composite phenomenon when it is described by interval data and discuss various aspects of stimulation and normalization of the diagnostic features as well as a definition of a benchmark of development (based usually on optimum or expected levels of these features). Our investigation includes the following options for the realization of this purpose: transformation of the interval model into a single-valued version without any significant loss of its statistical properties, standardization of pure intervals, as well as definition of the interval ideal object. For the determination of a distance between intervals, the Hausdorff formula is applied. The simulation study conducted and the empirical analysis showed that the first two variants are especially useful in practice", "keywords": ["multifeature objects", "aggregated measure of development", "interval data", "hausdorff distance"]} {"id": "kp20k_training_451", "title": "user requirements for a web based spreadsheet-mediated collaboration", "abstract": "This paper reports the initial results of a research project to investigate how to develop a web-based spreadsheet-mediated business collaboration system that could notably enhance the business processes presently carried out by Small to Medium-sized Enterprises. Using a scenario-based design approach, a set of user requirements was extracted from an appropriate field study. These requirements were then analysed in the context of well-known usability principles, and a set of design implications was derived based on a selected set of HCI design patterns related to cooperative interaction design. Starting from that knowledge, suitable interactive collaboration scenarios have been drawn, from which a list of user interface requirements for a web-based spreadsheet-mediated collaboration system has been formulated", "keywords": ["artifact mediated collaboration", "hci design patterns", "usability principles", "scenario-based design"]} {"id": "kp20k_training_452", "title": "Automated estimation and analyses of meteorological drought characteristics from monthly rainfall data", "abstract": "The paper describes a new software package for automated estimation, display and analyses of various drought indices, continuous functions of precipitation that allow quantitative assessment of meteorological drought events to be made. The software at present allows up to five different drought indices to be estimated. They include the Decile Index (DI), the Effective Drought Index (EDI), the Standardized Precipitation Index (SPI) and deviations from the long-term mean and median value.
Each index can be estimated from point and spatially averaged rainfall data, and a number of options are provided for the selection of months and the type of analysis, including a running mean, a single value or multiple annual values. The software also allows spell/run analysis to be performed and maps of a specific index to be constructed. The software forms part of a comprehensive computer package, developed earlier, that is designed to perform a multitude of water resources analyses and hydro-meteorological data processing. The 7-step procedure of setting up and running a typical drought assessment application is described in detail. Examples of applications are given primarily in the specific context of South Asia, where the software has been used", "keywords": ["drought indices", "monthly rainfall time series", "spatsim"]} {"id": "kp20k_training_453", "title": "Introduction to the special issue on statistical signal extraction and filtering", "abstract": "The papers of the Special Issue on Statistical Signal Extraction and Filtering are introduced briefly and the invitation to contribute to the next issue to be devoted to this topic is reiterated. There follows an account of the history and the current developments in the areas of Wiener-Kolmogorov and Kalman filtering, which is a leading topic of the present issue. Other topics will be treated in like manner in subsequent introductions", "keywords": ["statistical signal extraction", "kalman filtering", "wiener-kolmogorov filtering"]} {"id": "kp20k_training_454", "title": "Worst-case optimal approximation algorithms for maximizing triplet consistency within phylogenetic networks", "abstract": "The study of phylogenetic networks is of great interest to computational evolutionary biology and numerous different types of such structures are known. This article addresses the following question concerning rooted versions of phylogenetic networks. What is the maximum value of p ∈ [0,1] such that for every input set T of rooted triplets, there exists some network N such that at least p|T| of the triplets are consistent with N? We call an algorithm that computes such a network (where p is maximum) worst-case optimal. Here we prove that the set containing all triplets (the full triplet set) in some sense defines p. Moreover, given a network N that obtains a fraction p′ for the full triplet set (for any p′), we show how to efficiently modify N to obtain a fraction ≥ p′ for any given triplet set T. We demonstrate the power of this insight by presenting a worst-case optimal result for level-1 phylogenetic networks, improving considerably upon the 5/12 fraction obtained recently by Jansson, Nguyen and Sung. For level-2 phylogenetic networks we show that p ≥ 0.61. We emphasize that, because we are taking |T| as a (trivial) upper bound on the size of an optimal solution for each specific input T, the results in this article do not exclude the existence of approximation algorithms that achieve approximation ratio better than p. Finally, we note that all the results in this article also apply to weighted triplet sets", "keywords": ["triplet", "phylogenetic network", "level-k network"]} {"id": "kp20k_training_455", "title": "Direct search of feasible region and application to a crashworthy helicopter seat", "abstract": "The paper proposes a novel approach to identify the feasible region for a constrained optimisation problem.
In engineering applications the search for the feasible region turns out to be extremely useful for understanding the problem, as the feasible region defines the portion of the domain over which the design parameters can range while fulfilling the constraints imposed on performance, manufacturing and regulations. The search for the feasible region is not a trivial task, as non-convex, irregular and disjointed shapes can be found. This paper builds on the above considerations and proposes a recursive feasible-infeasible segment bisection algorithm combined with Support Vector Machine (SVM) techniques to reduce the overall computational effort. The method is discussed and then illustrated by means of three simple analytical test cases in the first part of the paper. A real-world application is finally presented: the search for the survivability zone of a crashworthy helicopter seat under different crash conditions. A finite element model, including an anthropomorphic dummy, is adopted to simulate impacts that are characterised by different deceleration pulses, and the proposed algorithm is used to investigate the influence of pulse shape on impact survivability", "keywords": ["feasible region", "crashworthiness", "support vector machine", "direct search"]} {"id": "kp20k_training_456", "title": "feasibly constructive proofs and the propositional calculus (preliminary version", "abstract": "The motivation for this work comes from two general sources. The first source is the basic open question in complexity theory of whether P equals NP (see [1] and [2]). Our approach is to try to show they are not equal, by trying to show that the set of tautologies is not in NP (of course its complement is in NP). This is equivalent to showing that no proof system (in the general sense defined in [3]) for the tautologies is super in the sense that there is a short proof for every tautology. Extended resolution is an example of a powerful proof system for tautologies that can simulate most standard proof systems (see [3]). The Main Theorem (5.5) in this paper describes the power of extended resolution in a way that may provide a handle for showing it is not super. The second motivation comes from constructive mathematics. A constructive proof of, say, a statement ∀xA must provide an effective means of finding a proof of A for each value of x, but nothing is said about how long this proof is as a function of x. If the function is exponential or super exponential, then even for small values of x the length of the proof of the instance of A may exceed the number of electrons in the universe. In section 2, I introduce the system PV for number theory, and it is this system which I suggest properly formalizes the notion of a feasibly constructive proof", "keywords": ["value", "mathematics", "systems", "examples", "values", "motivation", "functional", "standardization", "theory", "version", "power", "general", "complexity", "paper", "effect"]} {"id": "kp20k_training_457", "title": "SINA: Semantic interpretation of user queries for question answering on interlinked data", "abstract": "The architectural choices underlying Linked Data have led to a compendium of data sources which contain both duplicated and fragmented information on a large number of domains. One way to enable non-expert users to access this data compendium is to provide keyword search frameworks that can capitalize on the inherent characteristics of Linked Data. Developing such systems is challenging for three main reasons.
First, resources across different datasets or even within the same dataset can be homonyms. Second, different datasets employ heterogeneous schemas and each one may only contain a part of the answer for a certain user query. Finally, constructing a federated formal query from keywords across different datasets requires exploiting links between the different datasets on both the schema and instance levels. We present Sina, a scalable keyword search system that can answer user queries by transforming user-supplied keywords or natural-language queries into conjunctive SPARQL queries over a set of interlinked data sources. Sina uses a hidden Markov model to determine the most suitable resources for a user-supplied query from different datasets. Moreover, our framework is able to construct federated queries by using the disambiguated resources and leveraging the link structure underlying the datasets to query. We evaluate Sina over three different datasets. We can answer 25 queries from QALD-1 correctly. Moreover, we perform as well as the best question answering system from the QALD-3 competition by answering 32 questions correctly, while also being able to answer queries on distributed sources. We study the runtime of SINA in its mono-core and parallel implementations and draw preliminary conclusions on the scalability of keyword search on Linked Data", "keywords": ["keyword search", "question answering", "hidden markov model", "sparql", "rdf", "disambiguation"]} {"id": "kp20k_training_458", "title": "Generalized median string computation by means of string embedding in vector spaces", "abstract": "In structural pattern recognition the median string has been established as a useful tool to represent a set of strings. However, its exact computation is complex and carries a high computational burden. In this paper we propose a new approach for the computation of the median string based on string embedding. Strings are embedded into a vector space and the median is computed in the vector domain. We apply three different inverse transformations to go from the vector domain back to the string domain in order to obtain a final approximation of the median string. All of them are based on the weighted mean of a pair of strings. Experiments show that we succeed in computing good approximations of the median string", "keywords": ["string", "generalized median", "embedding", "vector space", "lower bound"]} {"id": "kp20k_training_459", "title": "efficient indexing of the historical, present, and future positions of moving objects", "abstract": "Although significant effort has been put into the development of efficient spatio-temporal indexing techniques for moving objects, little attention has been given to the development of techniques that efficiently support queries about the past, present, and future positions of objects. The provisioning of such techniques is challenging, both because of the nature of the data, which reflects continuous movement, and because of the types of queries to be supported. This paper proposes the BBx-index structure, which indexes the positions of moving objects, given as linear functions of time, at any time. The index stores linearized moving-object locations in a forest of B+-trees. The index supports queries that select objects based on temporal and spatial constraints, such as queries that retrieve all objects whose positions fall within a spatial range during a set of time intervals.
Empirical experiments are reported that offer insight into the query and update performance of the proposed technique", "keywords": ["indexing", "b-tree", "mobile objects"]} {"id": "kp20k_training_460", "title": "towards model-driven unit testing", "abstract": "The Model-Driven Architecture (MDA) approach for constructing software systems advocates a stepwise refinement and transformation process starting from high-level models to concrete program code. In contrast to numerous research efforts that try to generate executable function code from models, we propose a novel approach termed model-driven monitoring. On the model level the behavior of an operation is specified with a pair of UML composite structure diagrams (visual contract), a visual notation for pre- and post-conditions. The specified behavior is implemented by a programmer manually. An automatic translation from our visual contracts to JML assertions allows for monitoring the hand-coded programs during their execution. In this paper we present how we extend our approach to allow for model-driven unit testing, where we utilize the generated JML assertions as test oracles. Further, we present an idea of how to generate sufficient test cases from our visual contracts with the help of model-checking techniques", "keywords": ["test case generation", "visual contracts", "model checking", "design by contract"]} {"id": "kp20k_training_461", "title": "Sliding window-based frequent pattern mining over data streams", "abstract": "Finding frequent patterns in a continuous stream of transactions is critical for many applications such as retail market data analysis, network monitoring, web usage mining, and stock market prediction. Even though numerous frequent pattern mining algorithms have been developed over the past decade, new solutions for handling stream data are still required due to the continuous, unbounded, and ordered sequence of data elements generated at a rapid rate in a data stream. Therefore, extracting frequent patterns from more recent data can enhance the analysis of stream data. In this paper, we propose an efficient technique to discover the complete set of recent frequent patterns from a high-speed data stream over a sliding window. We develop a Compact Pattern Stream tree (CPS-tree) to capture the recent stream data content and efficiently remove the obsolete, old stream data content. We also introduce the concept of dynamic tree restructuring in our CPS-tree to produce a highly compact frequency-descending tree structure at runtime. The complete set of recent frequent patterns is obtained from the CPS-tree of the current window using an FP-growth mining technique. Extensive experimental analyses show that our CPS-tree is highly efficient in terms of memory and time complexity when finding recent frequent patterns from a high-speed data stream", "keywords": ["frequent pattern", "data stream", "sliding window", "tree restructuring"]} {"id": "kp20k_training_462", "title": "modeling cryptographic properties of voice and voice-based entity authentication", "abstract": "Strong and/or multi-factor entity authentication protocols are of crucial importance in building successful identity management architectures. Popular mechanisms to achieve these types of entity authentication are biometrics, and, in particular, voice, for which there are especially interesting business cases in the telecommunication and financial industries, among others.
Despite several studies on the suitability of voice within entity authentication protocols, there has been little or no formal analysis of any such methods. In this paper we embark on formal modeling of seemingly cryptographic properties of voice. The goal is to define a formal abstraction for voice, in terms of algorithms with certain properties, that are of both combinatorial and cryptographic type. While we certainly do not expect to achieve the perfect mathematical model for a human phenomenon, we do hope that capturing some properties of voice in a formal model would help towards the design and analysis of voice-based cryptographic protocols, such as entity authentication. In particular, in this model we design and formally analyze two voice-based entity authentication schemes, the first being a voice-based analogue of the conventional password-transmission entity authentication scheme. We also design and analyze, in the recently introduced bounded-retrieval model [4], one voice-and-password-based entity authentication scheme that is additionally secure against intrusions and brute-force attacks, including dictionary attacks", "keywords": ["biometrics", "modeling human factors", "voice", "entity authentication"]} {"id": "kp20k_training_463", "title": "Inference of finite-state transducers from regular languages", "abstract": "Finite-state transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, where the approaches that are based on building models automatically from training examples are becoming more and more attractive. Finite-state transducers are very adequate to be used in constrained tasks where training samples of pairs of sentences are available. A technique to infer finite-state transducers is proposed in this work. This technique is based on formal relations between finite-state transducers and finite-state grammars. Given a training corpus of input-output pairs of sentences, the proposed approach uses statistical alignment methods to produce a set of conventional strings from which a stochastic finite-state grammar is inferred. This grammar is finally transformed into a resulting finite-state transducer. The proposed methods are assessed through series of machine translation experiments within the framework of the EUTRANS project", "keywords": ["machine translation", "grammatical inference", "formal language theory", "stochastic finite-state transducers", "natural language processing"]} {"id": "kp20k_training_464", "title": "Particle swarm optimization with preference order ranking for multi-objective optimization", "abstract": "A new optimality criterion based on a preference order (PO) scheme is used to identify the best compromise in multi-objective particle swarm optimization (MOPSO). This scheme is more efficient than the Pareto ranking scheme, especially when the number of objectives is very large. Meanwhile, a novel updating formula for the particles' velocity is introduced to improve the search ability of the algorithm. The proposed algorithm has been compared with NSGA-II and two other MOPSO algorithms.
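The alignment-to-transducer pipeline in the finite-state transducer record above can be sketched with a toy prefix-tree transducer built from one-to-one aligned symbol pairs; the real method infers a stochastic grammar first, which is omitted here, and the data are invented:

```python
def build_transducer(pairs):
    """Build a prefix-tree transducer from one-to-one aligned input/output
    symbol sequences. Returns {(state, in_symbol): (out_symbol, next_state)}."""
    delta, n_states = {}, 1
    for src, tgt in pairs:
        state = 0
        for a, b in zip(src, tgt):
            if (state, a) not in delta:
                delta[(state, a)] = (b, n_states)
                n_states += 1
            state = delta[(state, a)][1]
    return delta

def transduce(delta, src):
    state, out = 0, []
    for a in src:
        b, state = delta[(state, a)]  # raises KeyError on unseen input
        out.append(b)
    return out

pairs = [(["la", "casa"], ["the", "house"]),
         (["la", "mesa"], ["the", "table"])]
delta = build_transducer(pairs)
print(transduce(delta, ["la", "mesa"]))  # ['the', 'table']
```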
The experimental results indicate that the proposed approach is effective on highly complex multi-objective optimization problems", "keywords": ["particle swarm", "preference order", "pareto dominance", "multi-objective optimization", "best compromise"]} {"id": "kp20k_training_465", "title": "real-time deformation using modal analysis on graphics hardware", "abstract": "This paper presents an approach for fast simulation of deformable objects that is suitable for interactive applications in computer graphics. Linear modal analysis is often used to simulate small-amplitude deformation. Compared to traditional linear modal analysis, where the CPU has been used to calculate the nodal displacements, the GPU vertex program has been widely adopted in current applications. However, the calculation suffers from great errors due to the limitation of the number of input registers in the GPU vertex pipeline. In our approach, we solve this problem with the fragment program. A series of 2D floating point textures are used to hold the model displacement matrix; the fragment program multiplies this matrix with the modal amplitude and sums up the results. Experiments show that the proposed technique fully utilizes the parallel nature of the GPU, and runs in real time even for complex models", "keywords": ["graphics hardware", "physically based modeling", "deformation", "modal analysis"]} {"id": "kp20k_training_466", "title": "STABILITY ANALYSIS OF A CLASS OF GENERAL PERIODIC NEURAL NETWORKS WITH DELAYS AND IMPULSES", "abstract": "Based on the inequality analysis, matrix theory and spectral theory, a class of general periodic neural networks with delays and impulses is studied. Some sufficient conditions are established for the existence and global exponential stability of a unique periodic solution. Furthermore, the results are applied to some typical impulsive neural network systems as special cases, with a real-life example to show feasibility of our results", "keywords": ["neural network", "periodic solution", "delay", "impulse", "global exponential stability"]} {"id": "kp20k_training_467", "title": "Neuroprotective properties of resveratrol and derivatives", "abstract": "Stilbenoid compounds consist of a family of resveratrol derivatives. They have demonstrated promising activities in vitro and in vivo that indicate they may be useful in the prevention of a wide range of pathologies, such as cardiovascular diseases and cancers, as well as having anti-aging effects. More recently, stilbenoid compounds have shown promise in the treatment and prevention of neurodegenerative disorders, such as Huntington's, Parkinson's, and Alzheimer's diseases. This paper primarily focuses on the impact of stilbenoids in Alzheimer's disease and more specifically on the inhibition of β-amyloid peptide aggregation", "keywords": ["stilbenoid", "alzheimers disease", "amyloid peptide", "inhibition of aggregation"]} {"id": "kp20k_training_468", "title": "A new fuzzy multicriteria decision making method and its application in diversion of water", "abstract": "Taking account of uncertainty in multicriteria decision making problems is crucial because, depending on how it is done, the ranking of alternatives can be completely different. This paper utilizes linguistic values to evaluate the performance of qualitative criteria and proposes using appropriate shapes of fuzzy numbers to evaluate the performance of quantitative criteria for each problem with respect to its particular conditions.
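The Pareto ranking baseline that the preference-order record above improves upon reduces to a dominance test; a minimal sketch (preference-order ranking itself, which orders solutions by the efficiency of their objective subsets, is not implemented here):

```python
import numpy as np

def dominates(f1, f2):
    """Pareto dominance for minimization: f1 dominates f2 if it is no worse
    in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1), np.asarray(f2)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))

def non_dominated(points):
    """Return the Pareto front of a list of objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]
print(non_dominated(pts))  # (3.0, 3.0) is dominated by (2.0, 2.0)
```

With many objectives almost every point becomes non-dominated, which is exactly the selection-pressure problem preference ordering is meant to address.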
In addition, a process to determine the weights of criteria using fuzzy numbers, which considers their competition for greater weights and their influence on each other, is described. A new fuzzy methodology that utilizes the parametric form of fuzzy numbers is proposed to solve such a problem. The case study of diversion of water into the Lake Urmia watershed, which is defined using triangular, trapezoidal, and bell-shape fuzzy numbers, demonstrates the utility of the proposed method. ", "keywords": ["multicriteria decision making", "fuzzy numbers with different shapes", "water resource planning and management"]} {"id": "kp20k_training_469", "title": "Scheduling divisible workloads on heterogeneous platforms", "abstract": "In this paper, we discuss several algorithms for scheduling divisible workloads on heterogeneous systems. Our main contributions are (i) new optimality results for single-round algorithms and (ii) the design of an asymptotically optimal multi-round algorithm. This multi-round algorithm automatically performs resource selection, a difficult task that was previously left to the user. Because it is periodic, it is simpler to implement, and more robust to changes in the speeds of the processors and/or communication links. On the theoretical side, to the best of our knowledge, this is the first published result assessing the absolute performance of a multi-round algorithm. On the practical side, extensive simulations reveal that our multi-round algorithm outperforms existing solutions on a large variety of platforms, especially when the communication-to-computation ratio is not very high (the difficult case)", "keywords": ["scheduling", "divisible tasks", "multi-round algorithms", "asymptotical optimality"]} {"id": "kp20k_training_470", "title": "A connective ethnography of peer knowledge sharing and diffusion in a tween virtual world", "abstract": "Prior studies have shown how knowledge diffusion occurs in classrooms and structured small groups around assigned tasks yet have not begun to account for widespread knowledge sharing in more native, unstructured group settings found in online games and virtual worlds. In this paper, we describe and analyze how an insider gaming practice spread across a group of tween players ages 9-12 years in an after-school gaming club that simultaneously participated in a virtual world called Whyville.net. In order to understand how this practice proliferated, we followed the club members as they interacted with each other and members of the virtual world at large. Employing connective ethnography to trace the movements in learning and teaching this practice, we coordinated data records from videos, tracking data, field notes, and interviews. We found that club members took advantage of the different spaces, people, and times available to them across Whyville, the club, and even home and classroom spaces. By using an insider gaming practice, namely teleporting, rather than the more traditional individual person as our analytical lens, we were able to examine knowledge sharing and diffusion across the gaming spaces, including events in local small groups as well as encounters in the virtual world.
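A single-round divisible-load allocation of the kind discussed in the scheduling record above can be sketched under a deliberately simple cost model: per-unit communication and computation costs, concurrent transfers, and equal finish times. These are all assumptions for illustration, not the paper's platform model:

```python
def single_round_allocation(comm_cost, comp_cost, total_load=1.0):
    """Split a divisible load so all workers finish together. Worker i
    needs (c_i + w_i) time units per unit of load, so equal finish time
    T = alpha_i * (c_i + w_i) gives alpha_i proportional to 1/(c_i + w_i)."""
    rates = [1.0 / (c + w) for c, w in zip(comm_cost, comp_cost)]
    total = sum(rates)
    return [total_load * r / total for r in rates]

alphas = single_round_allocation(comm_cost=[0.1, 0.2, 0.4],
                                 comp_cost=[1.0, 0.5, 2.0])
print([round(a, 3) for a in alphas], "sum =", round(sum(alphas), 3))
```

Multi-round algorithms repeat such allocations in periods, which is what makes the resource selection described above automatic.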
In the discussion, we address methodological issues and design implications of our findings", "keywords": ["virtual worlds", "knowledge sharing", "knowledge diffusion", "connective ethnography", "peer pedagogy"]} {"id": "kp20k_training_471", "title": "Unsupervised classification of SAR images using normalized gamma process mixtures", "abstract": "We propose an image prior for the model-based nonparametric classification of synthetic aperture radar (SAR) images that allows working with an infinite number of mixture components. In order to capture the spatial interactions of the pixel labels, the prior is derived by incorporating a conditional multinomial auto-logistic random field into the Normalized Gamma Process prior. In this way, we obtain an image classification prior that is free from the limitation on the number of classes and includes the smoothing constraint in the classification problem. In this model, we introduce a hyper-parameter that can control the preservation of the important classes and the extinction of the weak ones. The recall rates reported on the synthetic and the real TerraSAR-X images show that the proposed model is capable of accurately classifying the pixels. Unlike the existing methods, it applies a simple iterative update scheme without performing a hierarchical clustering strategy. We demonstrate that the estimation accuracy of the proposed method in the number of classes outperforms the conventional finite mixture models", "keywords": ["normalized gamma process mixtures", "nonparametric bayesian", "image classification", "sar images"]} {"id": "kp20k_training_472", "title": "Two Couple-Resolution Blocking Protocols on Adaptive Query Splitting for RFID Tag Identification", "abstract": "How to accelerate tag identification is an important issue in Radio Frequency Identification (RFID) systems. In some cases, the RFID reader repeatedly identifies the same tags since these tags always stay in its communication range. An anticollision protocol, called the adaptive query splitting protocol (AQS), was proposed to handle these cases. This protocol reserves information obtained from the last process of tag identification so that the reader can quickly identify these staying tags again. This paper proposes two blocking protocols, a couple-resolution blocking protocol (CRB) and an enhanced couple-resolution blocking protocol (ECRB), based on AQS. CRB and ECRB not only have the above-mentioned capability of AQS but also use the blocking technique, which prohibits unrecognized tags from colliding with staying tags, to reduce the number of collisions. Moreover, CRB adopts a couple-resolution technique to couple staying tags by simultaneously transmitting two ID prefixes from the reader, while ECRB allows the reader to send only one ID prefix to interrogate a couple of staying tags. Thus, they only need half the time to identify staying tags. We formally analyze the identification delay of CRB and ECRB in the worst and average cases. Our analytic and simulation results show that they obviously outperform AQS, and ECRB needs fewer transmitted bits than CRB", "keywords": ["anticollision", "blocking protocol", "couple-resolution", "rfid", "tag identification"]} {"id": "kp20k_training_473", "title": "an active measurement system for shared environments", "abstract": "Testbeds composed of end hosts deployed across the Internet enable researchers to simultaneously conduct a wide variety of experiments.
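The query-splitting family that AQS, CRB, and ECRB extend can be illustrated with the basic binary query tree: on a collision, the reader splits the prefix in two. The couple-resolution and blocking refinements are omitted, and the tag IDs are illustrative:

```python
from collections import deque

def query_tree_identify(tag_ids):
    """Basic query-tree (query-splitting) tag identification: the reader
    broadcasts a prefix; if more than one tag matches (a collision), the
    prefix is split into prefix+'0' and prefix+'1'.
    Returns (identified tags, number of reader queries)."""
    identified, queries = [], 0
    frontier = deque([""])
    while frontier:
        prefix = frontier.popleft()
        queries += 1
        matching = [t for t in tag_ids if t.startswith(prefix)]
        if len(matching) == 1:            # exactly one tag answers: identified
            identified.append(matching[0])
        elif len(matching) > 1:           # collision: split the prefix
            frontier.extend((prefix + "0", prefix + "1"))
    return identified, queries

tags = ["0010", "0110", "1011", "1100"]
print(query_tree_identify(tags))  # all four tags identified in 7 queries
```

Protocols like AQS save the final prefixes from one identification round so that staying tags can be re-identified without rebuilding this tree from scratch.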
Active measurement studies of Internet path properties that require precisely crafted probe streams can be problematic in these environments. The reason is that load on the host systems from concurrently executing experiments (as is typical in PlanetLab) can significantly alter probe stream timings. In this paper we measure and characterize how packet streams from our local PlanetLab nodes are affected by experimental concurrency. We find that the effects can be extreme. We then set up a simple PlanetLab deployment in a laboratory testbed to evaluate these effects in a controlled fashion. We find that even relatively low load levels can cause serious problems in probe streams. Based on these results, we develop a novel system called MAD that can operate as a Linux kernel module or as a stand-alone daemon to support real-time scheduling of probe streams. MAD coordinates probe packet emission for all active measurement experiments on a node. We demonstrate the capabilities of MAD, showing that it performs effectively even under very high levels of multiplexing and host system load", "keywords": ["active measurement", "mad"]} {"id": "kp20k_training_474", "title": "Policy-based inconsistency management in relational databases", "abstract": "We define inconsistency management policies (IMPs) for real world applications. We show how IMPs relate to belief revision postulates, CQA, and relational algebra operators. We present several approaches to efficiently implement an IMP-based framework", "keywords": ["inconsistency management", "relational databases"]} {"id": "kp20k_training_475", "title": "A new delay-dependent stability criterion for linear neutral systems with norm-bounded uncertainties in all system matrices", "abstract": "This paper deals with the problem of robust stability for a class of uncertain linear neutral systems. The uncertainties under consideration are of norm-bounded type and appear in all system matrices. A new delay-dependent stability criterion is obtained and formulated in the form of linear matrix inequalities (LMIs). Neither model transformation nor bounding technique for cross terms is involved in the derivation of the stability criterion. Numerical examples show that the results obtained in this paper significantly improve the estimate of the stability limit over some existing results in the literature", "keywords": ["linear systems", "neutral systems", "stability", "time delay", "uncertainty", "linear matrix inequality"]} {"id": "kp20k_training_476", "title": "NEUTRALIZATION: NEW INSIGHTS INTO THE PROBLEM OF EMPLOYEE INFORMATION SYSTEMS SECURITY POLICY VIOLATIONS", "abstract": "Employees' failure to comply with information systems security policies is a major concern for information technology security managers. In efforts to understand this problem, IS security researchers have traditionally viewed violations of IS security policies through the lens of deterrence theory. In this article, we show that neutralization theory, a theory prominent in criminology but not yet applied in the context of IS, provides a compelling explanation for IS security policy violations and offers new insight into how employees rationalize this behavior. In doing so, we propose a theoretical model in which the effects of neutralization techniques are tested alongside those of sanctions described by deterrence theory.
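The scheduling problem MAD addresses can be illustrated with a user-space sketch that emits probes against absolute deadlines taken from a monotonic clock, so that per-iteration jitter does not accumulate. This is only an illustration of the timing discipline; it does not model MAD's kernel-module operation:

```python
import time

def send_probe(seq: int) -> None:
    # Placeholder for the actual packet emission (e.g., a UDP send).
    print(f"probe {seq} at {time.monotonic():.4f}")

def emit_probes(n: int, interval: float) -> None:
    """Emit n probes at fixed intervals, sleeping until absolute deadlines
    computed from a monotonic clock; a late iteration does not shift the
    deadlines of later probes."""
    t0 = time.monotonic()
    for seq in range(n):
        deadline = t0 + seq * interval
        delay = deadline - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        send_probe(seq)

emit_probes(n=5, interval=0.1)
```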
Our empirical results highlight neutralization as an important factor to take into account with regard to developing and implementing organizational security policies and practices", "keywords": ["neutralization theory", "deterrence theory", "is security policies", "is security", "compliance"]} {"id": "kp20k_training_477", "title": "Simplifying complex environments using incremental textured depth meshes", "abstract": "We present an incremental algorithm to compute image-based simplifications of a large environment. We use an optimization-based approach to generate samples based on scene visibility, and from each viewpoint create textured depth meshes (TDMs) using sampled range panoramas of the environment. The optimization function minimizes artifacts such as skins and cracks in the reconstruction. We also present an encoding scheme for multiple TDMs that exploits spatial coherence among different viewpoints. The resulting simplifications, incremental textured depth meshes (ITDMs), reduce preprocessing, storage, rendering costs and visible artifacts. Our algorithm has been applied to large, complex synthetic environments comprising millions of primitives. It is able to render them at 20-40 frames per second on a PC with little loss in visual fidelity", "keywords": ["interactive display", "simplification", "textured depth meshes", "spatial encoding", "walkthrough"]} {"id": "kp20k_training_478", "title": "A Neural Approach to the Underdetermined-Order Recursive Least-Squares Adaptive Filtering", "abstract": "The incorporation of neural architectures in adaptive filtering applications has been addressed in detail. In particular, the Underdetermined-Order Recursive Least-Squares (URLS) algorithm, which lies between the well-known Normalized Least Mean Square and Recursive Least Squares algorithms, is reformulated via a neural architecture. The response of the neural network is seen to be identical to that of the algorithmic approach. Together with the advantage of simple circuit realization, this neural network avoids the drawbacks of digital computation such as error propagation and matrix inversion, which is ill-conditioned in most cases. It is numerically attractive because the quadratic optimization problem performs an implicit matrix inversion. Also, the neural network offers the flexibility of easy alteration of the prediction order of the URLS algorithm, which may be crucial in some applications; this is rather difficult to achieve in a digital implementation, as one would have to use Levinson recursions. The neural network can easily be integrated into a digital system through appropriate digital-to-analog and analog-to-digital converters", "keywords": ["adaptive filtering", "underdetermined recursive least squares", "neural networks", "analog adaptive filter"]} {"id": "kp20k_training_479", "title": "Bottleneck flows in unit capacity networks", "abstract": "The bottleneck network flow problem (BNFP) is a generalization of several well-studied bottleneck problems such as the bottleneck transportation problem (BTP), bottleneck assignment problem (BAP), bottleneck path problem (BPP), and so on. The BNFP can easily be solved as a sequence of O(log n) maximum flow problems on almost unit capacity networks. We observe that this algorithm runs in O(min{m^(3/2), n^(2/3)m} log n) time by showing that the maximum flow problem on an almost unit capacity graph can be solved in O(min{m^(3/2), n^(2/3)m}) time.
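One endpoint of the spectrum in which the URLS algorithm lies is the normalized LMS filter mentioned in the record above; here is a standard NLMS sketch (the signal model and step size are illustrative, and this is not the neural reformulation itself):

```python
import numpy as np

def nlms(x, d, order=4, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter. x: input signal, d: desired signal.
    Returns (final weights, error signal)."""
    w = np.zeros(order)
    e = np.zeros(len(x))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # [x[n], x[n-1], ..., x[n-order+1]]
        y = w @ u                            # filter output
        e[n] = d[n] - y                      # a-priori error
        w += mu * e[n] * u / (u @ u + eps)   # normalized gradient step
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
true_w = np.array([0.6, -0.3, 0.1, 0.05])
d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
w, e = nlms(x, d)
print(np.round(w, 2))  # should approach [0.6, -0.3, 0.1, 0.05]
```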
We then propose a faster algorithm to solve the unit capacity BNFP in O(min{m(n log n)^(2/3), m^(3/2)√(log n)}) time, an improvement by a factor of at least (log n)^(1/3). For dense graphs, the improvement is by a factor of √(log n). On unit capacity simple graphs, we show that the BNFP can be solved in O(m√(n log n)) time, an improvement by a factor of √(log n). As a consequence we have an O(m√(n log n)) algorithm for the BTP with unit arc capacities. ", "keywords": ["algorithms", "combinatorial problems", "graphs", "network flows", "minimum cost flow", "unit capacity"]} {"id": "kp20k_training_480", "title": "Taylor's decomposition on four points for solving third-order linear time-varying systems", "abstract": "In the present paper, the use of three-step difference schemes generated by Taylor's decomposition on four points for the numerical solutions of third-order time-varying linear dynamical systems is presented. The method is illustrated for the numerical analysis of an up-converter used in communication systems", "keywords": ["taylor's decomposition on four points", "third-order differential equation", "three-step difference schemes", "approximation order", "periodically time-varying systems"]} {"id": "kp20k_training_481", "title": "BEM formulation for von Kármán plates", "abstract": "This work deals with nonlinear geometric plates in the context of von Kármán's theory. The formulation is written such that only the boundary in-plane displacement and deflection integral equations for boundary collocations are required. At internal points, only out-of-plane rotation, curvature and in-plane internal force representations are used. Thus, only integral representations of these values are derived. The nonlinear system of equations is derived by approximating all densities in the domain integrals as single values, which therefore reduces the computational effort needed to evaluate the domain value influences. Hyper-singular equations are avoided by approximating the domain values using only internal nodes. The solution is obtained using a Newton scheme for which a consistent tangent operator was derived", "keywords": ["bending plates", "geometrical nonlinearities"]} {"id": "kp20k_training_482", "title": "On X-Variable Filling and Flipping for Capture-Power Reduction in Linear Decompressor-Based Test Compression Environment", "abstract": "Excessive test power consumption and growing test data volume are both serious concerns for the semiconductor industry. Various low-power X-filling techniques and test data compression schemes were developed accordingly to address the above problems. These methods, however, often exploit the very same "don't-care" bits in the test cubes to achieve different objectives and hence may contradict each other. In this paper, we propose novel techniques to reduce scan capture power in a linear decompressor-based test compression environment, by employing algorithmic solutions to fill and flip X-variables supplied to the linear decompressor.
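Of the special cases listed in the bottleneck-flow record above, the bottleneck path problem (BPP) has a compact solution: a max-heap variant of Dijkstra that maximizes the minimum edge capacity along a path. A sketch with an illustrative graph encoding:

```python
import heapq

def widest_path(graph, s, t):
    """Bottleneck (widest) path from s to t: maximize the minimum edge
    capacity on the path. graph: {u: [(v, capacity), ...]}."""
    best = {s: float("inf")}
    heap = [(-best[s], s)]           # max-heap via negated widths
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u == t:
            return width
        if width < best.get(u, 0):   # stale heap entry
            continue
        for v, cap in graph.get(u, []):
            w = min(width, cap)      # bottleneck along the extended path
            if w > best.get(v, 0):
                best[v] = w
                heapq.heappush(heap, (-w, v))
    return 0.0

g = {"s": [("a", 5), ("b", 3)], "a": [("t", 2)], "b": [("t", 4)]}
print(widest_path(g, "s", "t"))  # 3.0: s->b->t has bottleneck min(3, 4)
```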
Experimental results on benchmark circuits demonstrate that our proposed techniques significantly outperform existing solutions", "keywords": ["capture-power reduction", "linear decompressor-based test compression", "x-filling"]} {"id": "kp20k_training_483", "title": "WWW-based access to object-oriented clinical databases: the KHOSPAD project", "abstract": "KHOSPAD is a project aiming at improving the quality of the process of patient care concerning general practitioner-patient-hospital relationships, using current information and networking technologies. The studied application field is a cardiology division, with a hemodynamic laboratory and the population of PTCA patients. Data related to PTCA patients are managed by ARCADIA, an object-oriented database management system developed for the considered clinical setting. We defined a remotely accessible view of the ARCADIA medical record, suitable for general practitioners (GPs) caring for patients after PTCA, during the follow-up period. Using a PC, a modem and the Internet, an authorized GP can remotely consult the medical records of his PTCA patients. Main features of the application are related to the management and display of complex data, specifically characterized by multimedia and temporal features, based on an object-oriented temporal data model", "keywords": ["object-oriented clinical databases", "temporal databases", "www", "internet", "java", "software architecture", "temporal data visualization"]} {"id": "kp20k_training_484", "title": "Fuzzy R-subgroups with thresholds of near-rings and implication operators", "abstract": "Using the belongs-to relation (∈) and the quasi-coincidence-with relation (q) between fuzzy points and fuzzy sets, the concept of an (α, β)-fuzzy R-subgroup of a near-ring, where α, β are any two of {∈, q, ∈∧q, ∈∨q} with α ≠ ∈∧q, is introduced and related properties are investigated. We also introduce the notion of a fuzzy R-subgroup with thresholds, which is a generalization of an ordinary fuzzy R-subgroup and an (∈, ∈∨q)-fuzzy R-subgroup. Finally, we give the definition of an implication-based fuzzy R-subgroup", "keywords": ["fuzzy set", "fuzzy point", "near-ring", "fuzzy r-subgroup", "fuzzy r-subgroup", "level set"]} {"id": "kp20k_training_485", "title": "A time accurate pseudo-wavelet scheme for two-dimensional turbulence", "abstract": "In this paper, we propose a wavelet-Taylor-Galerkin method for solving the two-dimensional Navier-Stokes equations. The discretization in time is performed before the spatial discretization by introducing a second-order generalization of the standard time stepping schemes with the help of a Taylor series expansion in the time step. Wavelet-Taylor-Galerkin schemes taking advantage of the wavelet bases' capabilities to compress both functions and operators are presented. Results for two-dimensional turbulence are shown", "keywords": ["taylor-galerkin method", "wavelets", "navier-stokes equations", "turbulence"]} {"id": "kp20k_training_486", "title": "Audio-augmented paper for therapy and educational intervention for children with autistic spectrum disorder", "abstract": "Physical tokens are artifacts which sustain cooperation between the children and therapists. Therapists anchor and control children's attention through physical tokens. The environment provides the therapists with control of the flow of the therapeutic activity.
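A simpler relative of the X-filling discussed in the capture-power record above is the classic adjacent-fill heuristic, which copies the nearest specified bit into each don't-care position to reduce scan-shift transitions. This sketch is that baseline, not the paper's decompressor-aware filling and flipping:

```python
def adjacent_fill(cube: str) -> str:
    """Fill don't-care bits ('X') in a test cube so that scan transitions
    are minimized: each X copies the nearest specified bit to its left;
    leading Xs copy the first specified bit."""
    bits = list(cube)
    last = next((b for b in bits if b != "X"), "0")
    for i, b in enumerate(bits):
        if b == "X":
            bits[i] = last
        else:
            last = b
    return "".join(bits)

def transitions(v: str) -> int:
    # Number of adjacent bit flips, a simple proxy for shift power.
    return sum(a != b for a, b in zip(v, v[1:]))

cube = "X1XX0XX11X"
filled = adjacent_fill(cube)
print(filled, transitions(filled))  # '1111000111', 2 transitions
```

In a linear-decompressor environment the X values cannot be chosen freely per bit, which is why the paper fills and flips the decompressor's input variables instead.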
The environment provides a good means to stimulate fun and, consequently, to help sustain children's attention in listening tasks", "keywords": ["autism spectrum disorder", "social competence", "social story", "audio-augmented paper", "interaction design", "tangible user interface"]} {"id": "kp20k_training_487", "title": "TREATING EPILEPSY VIA ADAPTIVE NEUROSTIMULATION: A REINFORCEMENT LEARNING APPROACH", "abstract": "This paper presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed EEG signal, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, in particular the reinforcement learning paradigm, to formalize this optimization problem. We present an algorithm which is able to automatically learn an adaptive neurostimulation strategy directly from labeled training data acquired from animal brain tissues. Our results suggest that this methodology can be used to automatically find a stimulation strategy which effectively reduces the incidence of seizures, while also minimizing the amount of stimulation applied. This work highlights the crucial role that modern machine learning techniques can play in the optimization of treatment strategies for patients with chronic disorders such as epilepsy", "keywords": ["epilepsy", "neurostimulation", "reinforcement learning"]} {"id": "kp20k_training_488", "title": "Load-Balanced Parallel Streamline Generation on Large Scale Vector Fields", "abstract": "Because of the ever-increasing size of output data from scientific simulations, supercomputers are increasingly relied upon to generate visualizations. One use of supercomputers is to generate field lines from large scale flow fields. When generating field lines in parallel, the vector field is generally decomposed into blocks, which are then assigned to processors. Since various regions of the vector field can have different flow complexity, processors will require varying amounts of computation time to trace their particles, causing load imbalance, and thus limiting the performance speedup. To achieve load-balanced streamline generation, we propose a workload-aware partitioning algorithm to decompose the vector field into partitions with near equal workloads. Since actual workloads are unknown beforehand, we propose a workload estimation algorithm to predict the workload in the local vector field. A graph-based representation of the vector field is employed to generate these estimates. Once the workloads have been estimated, our partitioning algorithm is hierarchically applied to distribute the workload to all partitions. We examine the performance of our workload estimation and workload-aware partitioning algorithm in several timing studies, which demonstrates that by employing these methods, better scalability can be achieved with little overhead", "keywords": ["flow visualization", "parallel processing", "3d vector field visualization", "streamlines"]} {"id": "kp20k_training_489", "title": "An integrated research tool for X-ray imaging simulation", "abstract": "This paper presents a software simulation package of the entire X-ray projection radiography process, including beam generation, absorber structure and composition, irradiation setup, radiation transport through the absorbing medium, image formation and dose calculation.
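The reinforcement-learning formulation in the neurostimulation record above can be illustrated with generic tabular Q-learning on a toy two-state seizure-risk model. The states, actions, rewards, and environment below are invented for illustration and are not the paper's EEG-driven setup:

```python
import random

random.seed(0)

def q_learning(states, actions, step, episodes=500, alpha=0.1,
               gamma=0.95, eps=0.1):
    """Tabular Q-learning with an epsilon-greedy policy.
    `step(s, a)` must return (next_state, reward, done)."""
    q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s, done = states[0], False
        while not done:
            a = (random.choice(actions) if random.random() < eps
                 else max(actions, key=lambda a_: q[(s, a_)]))
            s2, r, done = step(s, a)
            target = r + (0.0 if done else
                          gamma * max(q[(s2, a_)] for a_ in actions))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

def step(s, a):
    """Toy dynamics: high seizure risk is costly to leave untreated;
    stimulation has a small cost but moves the state to low risk."""
    if s == "high":
        s2, r = ("low", -1.0) if a == "stim" else ("high", -5.0)
    else:
        s2, r = ("high" if random.random() < 0.3 else "low",
                 -1.0 if a == "stim" else 0.0)
    return s2, r, random.random() < 0.05   # episodes end stochastically

q = q_learning(["low", "high"], ["wait", "stim"], step)
print(max(["wait", "stim"], key=lambda a: q[("high", a)]))  # expect 'stim'
```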
Phantoms are created as composite objects from geometrical or voxelized primitives and can be subjected to a simulated irradiation process. The acquired projection images represent the two-dimensional spatial distribution of the energy absorbed in the detector and are formed at any geometry, taking into account energy spectrum, beam geometry and detector response. This software tool is the evolution of a previously presented system, with new functionalities, a new user interface and an expanded range of applications. This has been achieved mainly by the use of combinatorial geometry for phantom design and the implementation of a Monte Carlo code for the simulation of the radiation interaction at the absorber and the detector", "keywords": ["monte carlo", "imaging", "simulation", "projection radiography"]} {"id": "kp20k_training_490", "title": "Requirements and solutions to software encapsulation and engineering in next generation manufacturing systems: OOONEIDA approach", "abstract": "This paper addresses the solutions enabling agile development, deployment and reconfiguration of software-intensive automation systems both in discrete manufacturing and process technologies. As the key enabler for reaching the required level of flexibility of such systems, the paper discusses the issues of encapsulation, integration and re-use of the automation intellectual property (IP). The goals can be fulfilled by the use of a vendor-independent concept of a reusable, portable and scalable software module (function block), as well as by a vendor-independent automation device model. This paper also discusses the requirements of the methodology for the application of such modules in the time- and cost-effective specification, design, validation, realization and deployment of intelligent mechatronic components in distributed industrial automation and control systems. A new global initiative OOONEIDA is presented, that targets these goals through the development of the automation object concept based on the recognized industrial standards IEC61131, IEC61499, IEC61804 and unified modelling language (UML); and through the creation of the technological infrastructure for a new, open-knowledge economy for automation components and automated industrial products. In particular, a web-based repository for standardized automation solutions will be developed to serve as an electronic-commerce facility in industrial automation businesses", "keywords": ["industrial automation", "intelligent manufacturing systems"]} {"id": "kp20k_training_491", "title": "Robust camera pose and scene structure analysis for service robotics", "abstract": "Successful path planning and object manipulation in service robotics applications rely both on a good estimation of the robot's position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper a robust real-time camera pose and scene structure estimation system is proposed. First, the pose of the camera is estimated through the analysis of the so-called tracks. The tracks include key features from the imaged scene and geometric constraints which are used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. robot, the scene is analyzed via a robust depth segmentation and object classification approach.
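The Monte Carlo transport step in the imaging-simulation record above can be illustrated for the simplest case, a monoenergetic pencil beam through a homogeneous slab, where sampled free paths should reproduce the Beer-Lambert law. Scattering and the detector model are omitted, and the coefficients are illustrative:

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=100_000, seed=1):
    """Sample each photon's free path from the exponential attenuation law
    p(x) = mu * exp(-mu * x) and count photons crossing the slab without
    interacting. The estimate should approach exp(-mu * thickness)."""
    rng = random.Random(seed)
    survived = sum(
        -math.log(1.0 - rng.random()) / mu > thickness  # inverse-CDF sampling
        for _ in range(n_photons)
    )
    return survived / n_photons

mu, t = 0.5, 2.0  # attenuation coefficient (1/cm) and slab thickness (cm)
print(transmitted_fraction(mu, t), math.exp(-mu * t))  # both ~0.368
```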
In order to reliably segment the object's depth, a feedback control technique at an image processing level has been used with the purpose of improving the robustness of the robotic vision system with respect to external influences, such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper is based on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system. ", "keywords": ["robot vision systems", "feedback control", "stereo vision", "robustness", "3d reconstruction"]} {"id": "kp20k_training_492", "title": "NML, a schematic extension of F. Esteva and L. Godo's logic MTL", "abstract": "A schematic extension NML of F. Esteva and L. Godo's logic MTL is introduced in this paper. Based on a new left-continuous but discontinuous t-norm, which was proposed by S. Jenei and can be regarded as a kind of distorted nilpotent minimum, the semantics of NML is interpreted and the standard completeness theorem of NML is proved. The fact that the maximum and the minimum are definable from the negation and implication in NML and NM is discovered, which also leads to a modification of the NM axiom system. ", "keywords": ["non-classical logics", "left-continuous t-norm", "mtl system", "nm system", "lukasiewicz system", "nml system"]} {"id": "kp20k_training_493", "title": "verifying safety properties of concurrent java programs using 3-valued logic", "abstract": "We provide a parametric framework for verifying safety properties of concurrent Java programs. The framework combines thread-scheduling information with information about the shape of the heap. This leads to error-detection algorithms that are more precise than existing techniques. The framework also provides the most precise shape-analysis algorithm for concurrent programs. In contrast to existing verification techniques, we do not put a bound on the number of allocated objects. The framework even produces interesting results when analyzing Java programs with an unbounded number of threads. The framework is applied to successfully verify the following properties of a concurrent program: Concurrent manipulation of linked-list based ADT preserves the ADT datatype invariant [19]. The program does not perform inconsistent updates due to interference. The program does not reach a deadlock. The program does not produce run-time errors due to illegal thread interactions. We also find bugs in erroneous versions of such implementations. A prototype of our framework has been implemented", "keywords": ["precise", "deadlock", " framework ", "scheduling", "concurrent program", "invariance", "object", "timing", "shape", "informal", "errors", "error detection", "thread", "interaction", "program", "shape analysis", "prototype", "parametric", "implementation", "concurrency", "verification", "version", "logic", "manipulation", "algorithm", "interference", "update", "bugs"]} {"id": "kp20k_training_494", "title": "Automatic discovery of theorems in elementary geometry", "abstract": "We present here a further development of the well-known approach to automatic theorem proving in elementary geometry via algorithmic commutative algebra and algebraic geometry.
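The left-continuous t-norm semantics behind NM and NML can be made concrete with the standard nilpotent minimum, its residuum, and the definable negation. This is the plain NM connective set, not Jenei's distorted variant that the NML paper builds on:

```python
def nm_tnorm(x: float, y: float) -> float:
    """Nilpotent minimum t-norm: T(x, y) = min(x, y) if x + y > 1, else 0.
    Left-continuous but not continuous, which is what makes the residuum
    below well defined."""
    return min(x, y) if x + y > 1 else 0.0

def nm_residuum(x: float, y: float) -> float:
    """Residual implication of the nilpotent minimum:
    x -> y = 1 if x <= y, else max(1 - x, y)."""
    return 1.0 if x <= y else max(1.0 - x, y)

def nm_negation(x: float) -> float:
    # The negation is definable from implication: ~x = x -> 0.
    return nm_residuum(x, 0.0)

print(nm_tnorm(0.6, 0.7), nm_tnorm(0.4, 0.5))   # 0.6, 0.0
print(nm_residuum(0.8, 0.3), nm_negation(0.8))  # 0.3, ~0.2
```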
Rather than confirming/refuting geometric statements (automatic proving) or finding geometric formulae holding among prescribed geometric magnitudes (automatic derivation), in this paper we consider (following Kapur and Mundy) the problem of dealing automatically with arbitrary geometric statements (i.e., theses that do not follow, in general, from the given hypotheses), aiming to find complementary hypotheses for the statements to become true. First, we introduce some standard algebraic geometry notions in automatic proving, both for self-containment and in order to focus our own contribution. Then we present a rather successful but noncomplete method for automatic discovery that, roughly, proceeds by adding the given conjectural thesis to the collection of hypotheses and then derives some special consequences from this new set of conditions. Several examples are discussed in detail", "keywords": ["automatic theorem proving", "elementary geometry", "grobner basis"]} {"id": "kp20k_training_495", "title": "Using support vector machines with a novel hybrid feature selection method for diagnosis of erythemato-squamous diseases", "abstract": "In this paper, we developed a diagnosis model based on support vector machines (SVM) with a novel hybrid feature selection method to diagnose erythemato-squamous diseases. Our proposed hybrid feature selection method, named improved F-score and Sequential Forward Search (IFSFS), combines the advantages of filter and wrapper methods to select the optimal feature subset from the original feature set. In our IFSFS, we improved the original F-score from measuring the discrimination of two sets of real numbers to measuring the discrimination between more than two sets of real numbers. The improved F-score and Sequential Forward Search (SFS) are combined to find the optimal feature subset in the process of feature selection, where the improved F-score is an evaluation criterion of the filter method, and SFS is an evaluation system of the wrapper method. The best parameters of the SVM kernel function are found by a grid search technique. Experiments have been conducted on different training-test partitions of the erythemato-squamous diseases dataset taken from the UCI (University of California Irvine) machine learning database. Our experimental results show that the proposed SVM-based model with IFSFS achieves 98.61% classification accuracy and contains 21 features. With these results, we conclude that our method is very promising compared to the previously reported results. ", "keywords": ["support vector machines ", "feature selection", "sequential forward search ", "erythemato-squamous diseases"]} {"id": "kp20k_training_496", "title": "Domain-specific languages: From design to implementation - application to video device drivers generation", "abstract": "Domain-specific languages (DSLs) have many potential advantages in terms of software engineering, ranging from increased productivity to the application of formal methods. Although they have been used in practice for decades, there has been little study of methodology or implementation tools for the DSL approach. In this paper, we present our DSL approach and its application to a realistic domain: the generation of video display device drivers. The presentation focuses on the validation of our proposed framework for domain-specific languages, from design to implementation. The framework leads to a flexible design and structure, and provides automatic generation of efficient implementations of DSL programs.
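The multi-class F-score at the heart of IFSFS can be sketched as a between-class over within-class scatter ratio per feature. The exact formula and the SFS wrapper loop from the paper are simplified away, and the data are illustrative:

```python
import numpy as np

def f_scores(X, y):
    """Per-feature multi-class F-score sketch: between-class scatter of the
    feature means divided by the summed within-class variances. Larger
    values indicate a more discriminative feature."""
    X, y = np.asarray(X, float), np.asarray(y)
    overall = X.mean(axis=0)
    between = np.zeros(X.shape[1])
    within = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        between += (Xc.mean(axis=0) - overall) ** 2
        within += Xc.var(axis=0, ddof=1)
    return between / (within + 1e-12)

X = [(1.0, 5.0), (1.2, 3.0), (3.0, 4.9), (3.1, 3.2)]
y = [0, 0, 1, 1]
print(np.round(f_scores(X, y), 2))  # feature 0 separates the classes, feature 1 barely does
```

A wrapper step like SFS would then add features in decreasing score order, keeping each one only if cross-validated accuracy improves.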
Additionally, we describe an example of a complete DSL for video display adaptors and the benefits of the DSL approach for this application. This demonstrates some of the generally claimed benefits of using DSLs: increased productivity, higher-level abstraction, and easier verification. This DSL has been fully implemented with our approach and is available. Compose project URL: http://www.irisa.fr/compose/gal", "keywords": ["gal", "video cards", "device drivers", "domain-specific language", "partial evaluation"]} {"id": "kp20k_training_497", "title": "mapping visual notations to mof compliant models with qvt relations", "abstract": "Model-centric methodologies rely on the definition of domain-specific modeling languages to be able to create domain-specific models. With MOF, the OMG adopted a standard which provides the essential constructs for the definition of semantic language constructs (abstract syntax). However, there are no specifications on how to define the notations (concrete syntax) for abstract syntax elements. Usually, the concrete syntax of MOF compliant languages is described informally. We propose to define MOF-based metamodels for abstract syntax and concrete syntax and to connect them by model transformations specified with QVT Relations in a flexible, declarative way. Using a QVT-based transformation engine, one can easily implement a Model View Controller architecture by integrating modeling tools and metadata repositories", "keywords": ["visual languages", "model transformation", "domain specific languages", "ocl", "qvt relations"]} {"id": "kp20k_training_498", "title": "Financial early warning system model and data mining application for risk detection", "abstract": "One of the biggest problems of SMEs is their tendency toward financial distress because of an insufficient finance background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. The CHAID algorithm has been used for development of the EWS. Thanks to its automated nature, the developed EWS can serve as a tailor-made financial advisor in the decision-making process of firms whose owners have an inadequate financial background. Besides, an application of the model was implemented, covering 7853 SMEs, based on Turkish Central Bank (TCB) 2007 data. By using the EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps have been determined for financial risk mitigation", "keywords": ["chaid", "data mining", "early warning systems", "financial risk", "financial distress", "smes"]} {"id": "kp20k_training_499", "title": "A new wavelet algorithm to enhance and detect microcalcifications", "abstract": "We have proposed a new thresholding technique applied over wavelet coefficients for mammogram enhancement. We have utilized Shannon entropy to find the best threshold t in the wavelet domain. We have utilized Tsallis entropy to find the best threshold t in the wavelet domain. The proposed technique achieves better FROC test results, with 96.5% true positives and 0.36 false positives", "keywords": ["wavelet transform", "shannon entropy", "tsallis entropy", "otsu", "microcalcifications and mammograms"]} {"id": "kp20k_training_500", "title": "An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition", "abstract": "This article provides an overview of the first BioASQ challenge, a competition on large-scale biomedical semantic indexing and question answering (QA), which took place between March and September 2013.
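The core CHAID step behind the early-warning model above is choosing the categorical predictor most strongly associated with the target by a chi-squared independence test; a minimal sketch (category merging and the recursive tree growth are omitted, and the data are invented):

```python
from scipy.stats import chi2_contingency

def best_chaid_split(X, y, feature_names):
    """Pick the categorical predictor with the smallest chi-squared p-value
    against the target. X: list of category tuples, y: list of labels."""
    best = None
    for j, name in enumerate(feature_names):
        cats = sorted({row[j] for row in X})
        labels = sorted(set(y))
        table = [[sum(1 for row, lab in zip(X, y)
                      if row[j] == c and lab == l) for l in labels]
                 for c in cats]
        chi2, p, dof, _ = chi2_contingency(table)
        if best is None or p < best[1]:
            best = (name, p)
    return best

X = [("high", "low"), ("high", "high"), ("low", "low"),
     ("low", "high"), ("high", "low"), ("low", "high")]
y = ["distress", "distress", "ok", "ok", "distress", "ok"]
print(best_chaid_split(X, y, ["leverage", "liquidity"]))  # leverage wins
```

Repeating this selection on each resulting subgroup is what produces risk profiles like the 31 reported above.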
BioASQ assesses the ability of systems to semantically index very large numbers of biomedical scientific articles, and to return concise and user-understandable answers to given natural language questions by combining information from biomedical articles and ontologies", "keywords": ["bioasq competition", "hierarchical text classification", "semantic indexing", "information retrieval", "passage retrieval", "question answering", "multi-document text summarization"]} {"id": "kp20k_training_501", "title": "Regreening the Metropolis: Pathways to More Ecological Cities: Keynote Address", "abstract": "Eighty percent of the American population now lives in metropolitan regions whose geographic extent continues to expand even as many core cities and inner-tier suburbs lose middle-class populations, jobs, and tax base. Urban sprawl and the socioeconomic polarization of metropolitan America have been fostered by public policies including (1) federal subsidies for new infrastructure on the urban fringe; (2) tax policies that favor home ownership over rental properties; (3) local zoning codes; and (4) federal and state neglect of older urban neighborhoods. In the face of diminished access to nature outside of metropolitan areas, locally based efforts to protect and restore greenspaces within urban areas seek to make older communities more habitable and more ecological. Some pathways to more ecological cities include the following", "keywords": ["urban ecology", "city nature", "urban biodiversity", "spirit of place"]} {"id": "kp20k_training_502", "title": "A modified runs test for symmetry", "abstract": "We propose a modification of a Modarres-Gastwirth test for the hypothesis of symmetry about a known center. By means of a Monte Carlo study we show that the modified test outperforms the original Modarres-Gastwirth test for a wide spectrum of asymmetrical alternatives coming from the lambda family and for all assayed sample sizes. We also show that our test is the best runs test among the runs tests we have compared", "keywords": ["runs test", "test of symmetry", "generalized lambda family", "power", "primary secondary "]} {"id": "kp20k_training_503", "title": "Probability-based approaches to VLSI circuit partitioning", "abstract": "Iterative-improvement two-way min-cut partitioning is an important phase in most circuit placement tools, and finds use in many other computer-aided design (CAD) applications. Most iterative improvement techniques for circuit netlists like the Fiduccia-Mattheyses (FM) method compute the gains of nodes using local netlist information that is only concerned with the immediate improvement in the cutset. This can lead to misleading gain information. Krishnamurthy suggested a lookahead (LA) gain calculation method to ameliorate this situation; however, as we show, it leaves room for improvement. We present here a probabilistic gain computation approach called probabilistic partitioner (PROP) that is capable of capturing the future implications of moving a node at the current time. We also propose an extended algorithm SHRINK-PROP that increases the probability of removing recently "perturbed" nets (nets whose nodes have been moved for the first time) from the cutset. Experimental results on medium- to large-size ACM/SIGDA benchmark circuits show that PROP and SHRINK-PROP outperform previous iterative-improvement methods like FM (by about 30% and 37%, respectively) and LA (by about 27% and 34%, respectively).
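The McWilliams-style runs test of symmetry that the Modarres-Gastwirth proposal builds on can be sketched directly: sort observations by distance from the known center, count sign runs, and apply the Wald-Wolfowitz normal approximation. The specific modification studied above is not reproduced here:

```python
import math

def runs_test_symmetry(x, center=0.0):
    """Runs test of symmetry about a known center. Observations are ordered
    by |x - center|; under symmetry the resulting sign sequence behaves like
    fair coin flips, so few runs indicate asymmetry. Returns (runs, z)."""
    signs = [d > 0 for d in sorted((v - center for v in x if v != center),
                                   key=abs)]
    n1 = sum(signs)
    n2 = len(signs) - n1
    n = n1 + n2
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mean = 1 + 2 * n1 * n2 / n
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return runs, (runs - mean) / math.sqrt(var)

sample = [-2.1, -1.4, -0.4, 0.3, 0.5, 1.3, 1.6, 2.2]  # roughly symmetric
print(runs_test_symmetry(sample))  # 7 runs, z ~ 1.9
```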
Both PROP and SHRINK-PROP also obtain much better cutsizes than many recent state-of-the-art partitioners like EIG1, WINDOW, MELO, PARABOLI, GFM and CMetis (by 4.5% to 67%). Our empirical timing results reveal that PROP is appreciably faster than most recent techniques. We also obtain results on the more recent ISPD-98 benchmark suite that show similar substantial mincut improvements by PROP and SHRINK-PROP over FM (24% and 31%, respectively). It is also noteworthy that SHRINK-PROP's results are within 2.5% of those obtained by hMetis, one of the best multilevel partitioners. However, the multilevel paradigm is orthogonal to SHRINK-PROP. Further, since it is a "flat" partitioner, it has advantages over hMetis in partition-driven placement applications", "keywords": ["clustering effect", "iterative improvement", "min-cut partitioning", "probabilistic gain", "vlsi circuit"]} {"id": "kp20k_training_504", "title": "ShengBTE: A solver of the Boltzmann transport equation for phonons", "abstract": "ShengBTE is a software package for computing the lattice thermal conductivity of crystalline bulk materials and nanowires with diffusive boundary conditions. It is based on a full iterative solution to the Boltzmann transport equation. Its main inputs are sets of second- and third-order interatomic force constants, which can be calculated using third-party ab-initio packages. Dirac delta distributions arising from conservation of energy are approximated by Gaussian functions. A locally adaptive algorithm is used to determine each process-specific broadening parameter, which renders the method fully parameter free. The code is free software, written in Fortran and parallelized using MPI. A complementary Python script to help compute third-order interatomic force constants from a minimum number of ab-initio calculations, using a real-space finite-difference approach, is also publicly available for download. Here we discuss the design and implementation of both pieces of software and present results for three example systems: Si, InAs and lonsdaleite. Program title: ShengBTE Catalogue identifier: AESL_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AESL_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License, version 3 No. of lines in distributed program, including test data, etc.: 292052 No. of bytes in distributed program, including test data, etc.: 1989781 Distribution format: tar.gz Programming language: Fortran 90, MPI. Computer: Non-specific. Operating system: Unix/Linux. Has the code been vectorized or parallelized?: Yes, parallelized using MPI. RAM: Up to several GB Classification: 7.9. External routines: LAPACK, MPI, spglib (http://spglib.sourceforge.net/) Nature of problem: Calculation of thermal conductivity and related quantities, determination of scattering rates for allowed three-phonon processes Solution method: Iterative solution, locally adaptive Gaussian broadening Running time: Up to several hours on several tens of processors", "keywords": ["boltzmann transport equation", "thermal conductivity", "phonon"]} {"id": "kp20k_training_505", "title": "Tracing impact in a usability improvement process", "abstract": "Analyzing usability improvement processes as they take place in real-life organizations is necessary to understand the practice of usability work.
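The purely local FM gain that PROP's probabilistic gains generalize is easy to state in code: a move gains for each net it would uncut and loses for each net it would newly cut. A minimal sketch with an illustrative netlist:

```python
def fm_gains(nets, side):
    """Classic FM gain of moving each cell to the opposite partition.
    For every net containing a cell: +1 if the cell is the only one of that
    net on its current side (the move removes the net from the cut), -1 if
    the net has no cell on the other side (the move would cut it).
    nets: list of cell lists; side: {cell: 0 or 1}."""
    gains = {c: 0 for c in side}
    for net in nets:
        for cell in net:
            from_count = sum(1 for c in net if side[c] == side[cell])
            to_count = len(net) - from_count
            if from_count == 1:
                gains[cell] += 1
            if to_count == 0:
                gains[cell] -= 1
    return gains

nets = [["a", "b"], ["a", "c", "d"], ["b", "d"]]
side = {"a": 0, "b": 0, "c": 1, "d": 1}
print(fm_gains(nets, side))  # {'a': 0, 'b': 0, 'c': 0, 'd': 1}
```

The limitation named above is visible here: the gain only sees the immediate cut change, not what later moves the current move enables or blocks.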
This paper describes a case study where the usability of an information system is improved and a relationship between the improvements and the evaluation efforts is established. Results show that evaluation techniques complemented each other by suggesting different kinds of usability improvement. Among the techniques applied, a combination of questionnaires and Metaphors of Human Thinking (MOT) showed the largest mean impact and MOT produced the largest number of impacts. Logging of real-life use of the system over 6 months indicated six aspects of improved usability, where significant differences among evaluation techniques were found. Concerning five of the six aspects, think-aloud evaluations and the above-mentioned combination of questionnaire and MOT performed equally well, and better than MOT. Based on the evaluations, 40 redesign proposals were developed and 30 of these were implemented. Four of the implemented redesigns were considered especially important. These evolved with inspiration from multiple evaluations and were informed by stakeholders with different kinds of expertise. Our results suggest that practitioners should not rely on isolated evaluations. Instead, complementary techniques should be combined, and people with different expertise should be involved. ", "keywords": ["usability engineering", "case study", "usability improvement process", "metaphors of human thinking", "think aloud", "questionnaire"]} {"id": "kp20k_training_506", "title": "HOMAN, a learning based negotiation method for holonic multi-agent systems", "abstract": "Holonic multi-agent systems are a special category of multi-agent systems that best fit to environments with numerous agents and high complexity. As in general multi-agent systems, the agents in the holonic system may negotiate with each other. These systems have their own characteristics and structure, for which a specific negotiation mechanism is required. This mechanism should be simple, fast and operable in real world applications. It would be better to equip negotiators with a learning method which can efficiently use the available information. The learning method should itself be fast, too. Additionally, this mechanism should match the special characteristics of the holonic multi-agent systems. In this paper, we introduce such a negotiation method. Experimental results demonstrate the efficiency of this new approach", "keywords": ["holonic multi-agent systems", "negotiation", "semi-cooperative", "agreement", "regression"]} {"id": "kp20k_training_507", "title": "the portable common runtime approach to interoperability", "abstract": "Operating system abstractions do not always reach high enough for direct use by a language or applications designer. The gap is filled by language-specific runtime environments, which become more complex for richer languages (CommonLisp needs more than C++, which needs more than C). But language-specific environments inhibit integrated multi-lingual programming, and also make porting hard (for instance, because of operating system dependencies). To help solve these problems, we have built the Portable Common Runtime (PCR), a language-independent and operating-system-independent base for modern languages. PCR offers four interrelated facilities: storage management (including universal garbage collection), symbol binding (including static and dynamic linking and loading), threads (lightweight processes), and low-level I/O (including network sockets).
PCR is common because these facilities simultaneously support programs in several languages. PCR supports C, Cedar, Scheme, and CommonLisp intercalling and runs pre-existing C and CommonLisp (Kyoto) binaries. PCR is portable because it uses only a small set of operating system features. The PCR source code is available for use by other researchers and developers", "keywords": ["network", "help", "applications", "use", "portability", "developer", "design", "collect", "direct", "runtime", "linking", "thread", "program", "dependencies", "research", "systems", "environments", "abstraction", "language", "dynamic", "operating system", "process", "support", "source-code", "binding", "complexity", "feature", "interoperability", "storage management", "scheme", "integrability"]} {"id": "kp20k_training_508", "title": "Efficient keyword search over virtual XML views", "abstract": "Emerging applications such as personalized portals, enterprise search, and web integration systems often require keyword search over semi-structured views. However, traditional information retrieval techniques are likely to be expensive in this context because they rely on the assumption that the set of documents being searched is materialized. In this paper, we present a system architecture and algorithm that can efficiently evaluate keyword search queries over virtual (unmaterialized) XML views. An interesting aspect of our approach is that it exploits indices present on the base data and thereby avoids materializing large parts of the view that are not relevant to the query results. Another feature of the algorithm is that by solely using indices, we can still score the results of queries over the virtual view, and the resulting scores are the same as if the view was materialized. Our performance evaluation using the INEX data set in the Quark (Bhaskar et al. in Quark: an efficient XQuery full-text implementation. In: SIGMOD, 2006) open-source XML database system indicates that the proposed approach is scalable and efficient", "keywords": ["keyword search", "xml views", "document projections", "document pruning", "top-k"]} {"id": "kp20k_training_509", "title": "A novel clustering method on time series data", "abstract": "Time series is a very popular type of data which exists in many domains. Clustering time series data has a wide range of applications and has attracted researchers from a wide range of disciplines. In this paper a novel algorithm for shape-based time series clustering is proposed. By using principles from complex networks, it can reduce the data size and improve efficiency without degrading the clustering quality. Firstly, a one-nearest-neighbor network is built based on the similarity of time series objects. In this step, the triangle distance is used to measure the similarity. In the neighbor network, each node represents one time series object and each link denotes a neighbor relationship between nodes. Secondly, the nodes with high degrees are chosen and used to cluster. In the clustering process, the dynamic time warping distance function and a hierarchical clustering algorithm are applied. Thirdly, some experiments are executed on synthetic and real data. The results show that the proposed algorithm has good performance in terms of efficiency and effectiveness.
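The dynamic time warping distance used in the clustering step above has a standard O(nm) dynamic program; a minimal sketch with illustrative sequences:

```python
import numpy as np

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimal
    cumulative cost of aligning them with stretches and compressions."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

s1 = [0.0, 1.0, 2.0, 1.0, 0.0]
s2 = [0.0, 0.0, 1.0, 2.0, 1.0]  # same shape, shifted in time
print(dtw(s1, s2), dtw(s1, [2.0] * 5))  # small vs. large distance
```

Restricting this quadratic computation to the high-degree hub nodes of the 1-NN network is what gives the method above its efficiency gain.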
", "keywords": ["time series", "clustering", "dynamic time warping", "nearest neighbor network"]} {"id": "kp20k_training_510", "title": "An Improved Floating-to-Fixed-Point Conversion Scheme for DCT Quantization Algorithm", "abstract": "Conventional fixed-point implementation of the DCT coefficients quantization algorithm in video compression may result in deteriorated image quality. The paper investigates this problem and proposes an improved floating-to-fixed-point conversion scheme. With a proper scaling factor and a new-established look-up table, the proposed fixed-point scheme can obtain bit-wise consistence to the floating-point realization. Experimental results verify the validity of the proposed method", "keywords": ["floating-to-fixed-point conversion", "discrete cosine transform", "quantization", "video compression"]} {"id": "kp20k_training_511", "title": "A radial basis function network approach for the computation of inverse continuous time variant functions", "abstract": "This Paper presents an efficient approach for the fast computation of inverse continuous time variant functions with the proper use of Radial Basis Function Networks (RBFNs). The approach is based on implementing RBFNs for computing inverse continuous time variant functions via an overall damped least squares solution that includes a novel null space vector for singularities prevention. The singularities avoidance null space vector is derived from developing a sufficiency condition for singularities prevention that conduces to establish some characterizing matrices and an associated performance index", "keywords": ["artificial neural networks", "inverse functions", "radial basis functions network"]} {"id": "kp20k_training_512", "title": "Cryptography on smart cards", "abstract": "This article presents an overview of the cryptographic primitives that are commonly implemented on smart cards. We also discuss attacks that can be mounted on smart cards as well as countermeasures against such attacks", "keywords": ["smart cards", "cryptography"]} {"id": "kp20k_training_513", "title": "The antecedents of customer satisfaction and its link to complaint intentions in online shopping: An integration of justice, technology, and trust", "abstract": "Complaint behaviors are critical to maintaining customer loyalty in an online market. They provide insight into the customer's experience of service failure and help to redress the failures. Previous studies have shown the importance of customer satisfaction as a mediator for complaint intentions. It is important to examine the antecedents of customer satisfaction and its link to complaint intentions. Online shoppers are both buyers of products/services and users of web-based systems. Trust also plays a critical role in forming a psychological state with positive or negative feelings toward e-vendors. In this context, there are three major concerns: justice, technology and trust. This study proposes a research model to combine these issues, in order to investigate complaint intentions. Data were collected from an online survey wherein subjects were encouraged to reflect on recent service failure experiences. The results from testing a structural equation model indicate that distributive and interactional justice contribute significantly to customer satisfaction and, in turn, to complaint intentions, but procedural justice does not. Technology-based features and trust are also important in determining the two target variables. 
The implications for managers and scholars are also discussed", "keywords": ["online shopping", "customer satisfaction", "complaint intention", "justice theory", "expectation-confirmation model", "trust"]} {"id": "kp20k_training_514", "title": "to divide and conquer search ranking by learning query difficulty", "abstract": "Learning to rank plays an important role in information retrieval. In most of the existing solutions for learning to rank, all the queries with their returned search results are learnt and ranked with a single model. In this paper, we demonstrate that it is highly beneficial to divide queries into multiple groups and conquer search ranking based on query difficulty. To this end, we propose a method which first characterizes a query using a variety of features extracted from user search behavior, such as the click entropy and the query reformulation probability. Next, a classification model is built on these extracted features to assign a score to represent how difficult a query is. Based on this score, our method automatically divides queries into groups, and trains a specific ranking model for each group to conquer search ranking. Experimental results on RankSVM and RankNet with a large-scale evaluation dataset show that the proposed method can achieve significant improvement in the task of web search ranking", "keywords": ["learning to rank", "query difficulty"]} {"id": "kp20k_training_515", "title": "Defect reduction in PCB contract manufacturing operations", "abstract": "This study addresses the identification and improvement of a defect-reducing process step in plated-through-hole (PTH) technology of printed circuit board (PCB) assemblies. The process step discussed is a step in which the substrates are baked prior to assembly. While this step is developed to address defect problems faced by both OEMs and contract manufacturers alike, this paper discusses an experiment designed to improve the effect of the baking step that was performed at a PCB contract manufacturing facility. Furthermore, due to the tremendous variations in product complexity, a relatively new statistical process control chart, which tracks defects per million opportunities (DPMO), was used to help evaluate the results. ", "keywords": ["electronics manufacturing", "contract manufacturing", "printed circuit boards", "plated-throughhole technology", "quality control", "statistical process control", "dpmo chart", "outgassing", "yields"]} {"id": "kp20k_training_516", "title": "a new form of dos attack in a cloud and its avoidance mechanism", "abstract": "Data center networks are typically grossly under-provisioned. This is not a problem in a corporate data center, but it could be a problem in a shared infrastructure, such as a co-location facility or a cloud infrastructure. If an application is deployed in such an infrastructure, the application owners need to take into account the infrastructure limitations. They need to build in counter-measures to ensure that the application is secure and it meets its performance requirements. In this paper, we describe a new form of DOS attack, which exploits the network under-provisioning in a cloud infrastructure. We have verified that such an attack could be carried out in practice in one cloud infrastructure. 
We also describe a mechanism to detect and avoid this new form of attack", "keywords": ["bandwidth estimation", "dos attack"]} {"id": "kp20k_training_517", "title": "interdisciplinary applications of mathematical modeling", "abstract": "We demonstrate applications of numerical integration and visualization algorithms in diverse fields including psychological modeling (biometrics); in high energy physics for the study of collisions of elementary particles; and in medical physics for regulating the dosage of proton beam radiation therapy. We discuss the problems and solution methods, as supported by numerical results", "keywords": ["proton beam radiation therapy", "sensory discriminal process", "duo-trio method", "numerical integration and visualization", "feynman diagram", "adaptive partitioning algorithm"]} {"id": "kp20k_training_518", "title": "The probability ranking principle revisited", "abstract": "A theoretical framework for multimedia information retrieval is introduced which guarantees optimal retrieval effectiveness. In particular, a Ranking Principle for Distributed Multimedia-Documents (RPDM) is described together with an algorithm that satisfies this principle. Finally, the RPDM is shown to be a generalization of the Probability Ranking Principle (PRP) which guarantees optimal retrieval effectiveness in the case of text document retrieval. The PRP justifies theoretically the relevance ranking adopted by modern search engines. In contrast to the classical PRP, the new RPDM takes into account transmission and inspection time, and most importantly, aspectual recall rather than simple recall", "keywords": ["multimedia information retrieval", "probability ranking principle", "relevance ranking", "optimal search performance", "maximum retrieval effectiveness"]} {"id": "kp20k_training_519", "title": "how users associate wireless devices", "abstract": "In a wireless world, users can establish connections between devices spontaneously, and unhampered by cables. However, in the absence of cables, what is the natural interaction to connect one device with another? A wide range of device association techniques have been demonstrated, but it has remained an open question what actions users would spontaneously choose for device association. We contribute a study eliciting device association actions from non-technical users without premeditation. Over 700 user-defined actions were collected for 37 different device combinations. We present a classification of user-defined actions, and observations of the users' rationale. Our findings indicate that there is no single most spontaneous action; instead five prominent categories of user-defined actions were found", "keywords": ["spontaneous interaction", "wireless devices", "device association", "input actions"]} {"id": "kp20k_training_520", "title": "Analyticity of weighted central paths and error bounds for semidefinite programming", "abstract": "The purpose of this paper is two-fold. Firstly, we show that every Cholesky-based weighted central path for semidefinite programming is analytic under strict complementarity. This result is applied to homogeneous cone programming to show that the central paths defined by the known class of optimal self-concordant barriers are analytic in the presence of strictly complementary solutions. 
Secondly, we consider a sequence of primal-dual solutions that lies within a prescribed neighborhood of the central path of a pair of primal-dual semidefinite programming problems, and converges to the respective optimal faces. Under the additional assumption of strict complementarity, we derive two necessary and sufficient conditions for the sequence of primal-dual solutions to converge linearly with their duality gaps", "keywords": ["semidefinite programming", "homogeneous cone programming", "weighted analytic center", "error bound"]} {"id": "kp20k_training_521", "title": "Multi-Class Blue Noise Sampling", "abstract": "Sampling is a core process for a variety of graphics applications. Among existing sampling methods, blue noise sampling remains popular thanks to its spatial uniformity and absence of aliasing artifacts. However, research so far has been mainly focused on blue noise sampling with a single class of samples. This could be insufficient for common natural as well as man-made phenomena requiring multiple classes of samples, such as object placement, imaging sensors, and stippling patterns", "keywords": ["multi-class", "blue noise", "sampling", "poisson hard/soft disk", "dart throwing", "relaxation"]} {"id": "kp20k_training_522", "title": "A note on the inventory models for deteriorating items with ramp type demand rate", "abstract": "In this research we study the inventory models for deteriorating items with ramp type demand rate. We first clearly point out some questionable results that appeared in (Mandal, B., Pal, A.K., 1998. Order level inventory system with ramp type demand rate for deteriorating items. Journal of Interdisciplinary Mathematics 1, 49-66 and Wu, K.S., Ouyang, L.Y., 2000. A replenishment policy for deteriorating items with ramp type demand rate (Short Communication). Proceedings of National Science Council ROC (A) 24, 279-286). We then resolve the problem by offering a rigorous and efficient method to derive the optimal solution. In addition, we also propose an extended inventory model with ramp type demand rate and its optimal feasible solution to amend the incompleteness in the previous work. Moreover, we also propose a very good inventory replenishment policy for this kind of inventory model. We believe that our work will provide a solid foundation for the further study of such important inventory models with ramp type demand rate", "keywords": ["inventory", "ramp type demand rate", "deteriorating item"]} {"id": "kp20k_training_523", "title": "Efficient multiple faces tracking based on Relevance Vector Machine and Boosting learning", "abstract": "A multiple faces tracking system is presented based on Relevance Vector Machine (RVM) and Boosting learning. In this system, a face detector based on Boosting learning is used to detect faces at the first frame, and the face motion model and color model are created. The face motion model consists of a set of RVMs that learn the relationship between the motion of the face and its appearance, and the face color model is the 2D histogram of the face region in CrCb color space. In the tracking process, different tracking methods (RVM tracking, local search, giving up tracking) are used according to different states of faces, and the states are changed according to the tracking results. When the full image search condition is satisfied, a full image search is started in order to find newly appearing faces and formerly occluded faces. 
In the full image search and local search, the similarity matrix is introduced to help match faces efficiently. Experimental results demonstrate that this system can (a) automatically find newly appearing faces; (b) recover from occlusion, for example, if the faces are occluded by others and reappear or leave the scene and return; ", "keywords": ["face tracking", "face detection", "multiple faces tracking", "real-time tracking", "probabilistic algorithms", "relevance vector machine", "boosting", "adaboost"]} {"id": "kp20k_training_524", "title": "a design flow for application-specific networks on chip with guaranteed performance to accelerate soc design and verification", "abstract": "Systems on chip (SOC) are composed of intellectual property blocks (IP) and interconnect. While mature tooling exists to design the former, tooling for interconnect design is still a research area. In this paper we describe an operational design flow that generates and configures application-specific network on chip (NOC) instances, given application communication requirements. The NOC can be simulated in SystemC and RTL VHDL. An independent performance verification tool verifies analytically that the NOC instance (hardware) and its configuration (software) together meet the application performance requirements. The Æthereal NOC's guaranteed performance is essential to replace time-consuming simulation by fast analytical performance validation. As a result, application-specific NOCs that are guaranteed to meet the application's communication requirements are generated and verified in minutes, reducing the number of design iterations. A realistic MPEG SOC example substantiates our claims", "keywords": ["requirements", "communication", "network", "software", "applications", "research", "examples", "simulation", "intellectual property", "design", "performance", "verification", "tool", "configurability", "flow", "interconnect", "system on-chip", "timing", "hardware", "systemc", "paper", "network on-chip", "iter"]} {"id": "kp20k_training_525", "title": "Homing-pigeon-based messaging: multiple pigeon-assisted delivery in delay-tolerant networks", "abstract": "In this paper, we consider the applications of delay-tolerant networks (DTNs), where the nodes in a network are located in separated areas, and in each separated area, there exists (at least) an anchor node that provides regional network coverage for the nearby nodes. The anchor nodes are responsible for collecting and distributing messages for the nodes in the vicinity. This work proposes to use a set of messengers (named pigeons) that move around the network to deliver messages among multiple anchor nodes. Each source node (anchor node or Internet access point) owns multiple dedicated pigeons, and each pigeon takes a round trip starting from its home (i.e., the source) through the destination anchor nodes and then returns home, disseminating the messages on its way. We named this the homing-pigeon-based messaging (HoPM) scheme. The HoPM scheme is different from the prior schemes in that each messenger is completely dedicated to its home node for providing messaging service. We obtained the average message delay of the HoPM scheme in DTN through theoretical analysis with three different pigeon scheduling schemes. The analytical model was validated by simulations. We also studied the effects of several key parameters on the system performance and compared the results with previous solutions. 
The results allowed us to better understand the impacts of different scheduling schemes on the system performance of HoPM and demonstrated that our proposed scheme outperforms the previous ones. ", "keywords": ["messenger scheduling", "delay-tolerant network", "partitioned wireless network", "homing-pigeon messaging system", "queueing theory", "traffic modeling and mobility management"]} {"id": "kp20k_training_526", "title": "The West Nile Virus Encephalitis Outbreak in the United States (1999-2000)", "abstract": "Viruses cause most forms of encephalitis. The two main types responsible for epidemic encephalitis are enteroviruses and arboviruses. The City of New York reports about 10 cases of encephalitis yearly. Establishing a diagnosis is often difficult. In August 1999, a cluster of five patients with fever, confusion, and weakness were admitted to a community hospital in Flushing, New York. Flaccid paralysis developed in four of the five patients, and they required ventilatory support. Three less severe cases presented later in the same month. An investigation was conducted by the New York City (NYC) and New York State (NYS) health departments and the national Centers for Disease Control and Prevention (CDC). The West Nile virus (WNV) was identified as the etiologic agent. WNV is an arthropod-borne flavivirus, with a geographic distribution in Africa, the Middle East, and southwestern Asia. It has also been isolated in Australia and sporadically in Europe but never in the Americas. The majority of people infected have no symptoms. Fever, severe myalgias, headache, conjunctivitis, lymphadenopathy, and a roseolar rash can occur. Rarely, encephalitis or meningitis is seen. The NYC outbreak resulted in the first cases of WNV infection in the Western Hemisphere and the first arboviral infection in NYC since yellow fever in the nineteenth century. The WNV is now a public health concern in the United States", "keywords": ["west nile virus", "encephalitis", "arbovirus"]} {"id": "kp20k_training_527", "title": "Existence results for impulsive neutral second-order stochastic evolution equations with nonlocal conditions", "abstract": "In this paper we consider a class of impulsive neutral second-order stochastic evolution equations with nonlocal initial conditions in a real separable Hilbert space. Sufficient conditions for the existence of mild solutions are established by operator theory and the Sadovskii fixed point theorem. An example is provided to illustrate the theory. ", "keywords": ["stochastic evolution equations", "impulsive equation", "nonlocal condition"]} {"id": "kp20k_training_528", "title": "Distribution network design: New problems and related models", "abstract": "We study some complex distribution network design problems, which involve facility location, warehousing, transportation and inventory decisions. Several realistic scenarios are investigated. Two kinds of mathematical programming formulations are proposed for all the introduced problems, together with a proof of their correctness. 
Some formulations extend models proposed by Perl and Daskin (1985) for some warehouse location-routing problems; other formulations are based on flow variables and constraints", "keywords": ["distribution", "location-routing", "integer linear programming models"]} {"id": "kp20k_training_529", "title": "Lyapunov-based nonlinear controllers for obstacle avoidance with a planar n-link doubly nonholonomic manipulator", "abstract": "A mobile manipulator is a robotic system made up of two components: a mobile platform equipped with non-deformable wheels and a manipulator mounted on the platform. Such a combined system requires complex design and control. This paper considers the autonomous navigation problem of a nonholonomic mobile platform and an n-link nonholonomic manipulator fixed to the platform. For this planar n-link doubly nonholonomic manipulator, we present the first ever set of nonlinear continuous controllers for obstacle avoidance. The controllers provide a collision-free trajectory within a constrained workspace cluttered with fixed obstacles of different shapes and sizes whilst satisfying the nonholonomic and kinodynamic constraints associated with the robotic system. An advantage of the proposed method is the ease with which the acceleration-based control laws can be derived from the Lyapunov function. The effectiveness of the nonholonomic planner is demonstrated via computer simulations. ", "keywords": ["lyapunov-based control scheme", "n-link doubly nonholonomic manipulators", "artificial potential fields", "lyapunov stability", "kinodynamic constraints"]} {"id": "kp20k_training_530", "title": "Automatic analysis of trabecular bone structure from knee MRI", "abstract": "We investigated the feasibility of quantifying osteoarthritis (OA) by analysis of the trabecular bone structure in low-field knee MRI. Generic texture features were extracted from the images and subsequently selected by sequential floating forward selection (SFFS), following a fully automatic, uncommitted machine-learning based framework. Six different classifiers were evaluated in cross-validation schemes and the results showed that the presence of OA can be quantified by a bone structure marker. The performance of the developed marker reached a generalization area-under-the-ROC (AUC) of 0.82, which is higher than the established cartilage markers known to relate to the OA diagnosis", "keywords": ["bone structure", "oa", "machine learning", "texture analysis", "classification", "feature selection", "mri"]} {"id": "kp20k_training_531", "title": "An application of fuzzy sets theory to the EOQ model with imperfect quality items", "abstract": "This article investigates the inventory problem for items received with imperfect quality, where, upon the arrival of an order lot, a 100% screening process is performed and the items of imperfect quality are sold as a single batch at a discounted price, prior to receiving the next shipment. The objective is to determine the optimal order lot size to maximize the total profit. We first propose a model with a fuzzy defective rate. Then, the model with a fuzzy defective rate and fuzzy annual demand is presented. For each case, we employ the signed distance, a ranking method for fuzzy numbers, to find the estimate of the total profit per unit time in the fuzzy sense, and then derive the corresponding optimal lot size. 
Numerical examples are provided to illustrate the results of the proposed models", "keywords": ["inventory", "imperfect quality", "fuzzy set", "signed distance"]} {"id": "kp20k_training_532", "title": "Maximin performance of binary-input channels with uncertain noise distributions", "abstract": "We consider uncertainty classes of noise distributions defined by a bound on the divergence with respect to a nominal noise distribution. The noise that maximizes the minimum error probability for binary-input channels is found. The effect of the reduction in uncertainty brought about by knowledge of the signal-to-noise ratio is also studied. The particular class of Gaussian nominal distributions provides an analysis tool for near-Gaussian channels. Asymptotic behavior of the least favorable noise distribution and resulting error probability are studied in a variety of scenarios, namely: asymptotically small divergence with and without power constraint; asymptotically large divergence with and without power constraint; and asymptotically large signal-to-noise ratio", "keywords": ["detection", "gaussian error probability", "hypothesis testing", "kullback-leibler divergence", "least favorable noise"]} {"id": "kp20k_training_533", "title": "Alleviating the problem of local minima in Backpropagation through competitive learning", "abstract": "The backpropagation (BP) algorithm is widely recognized as a powerful tool for training feedforward neural networks (FNNs). However, since the algorithm employs the steepest descent technique to adjust the network weights, it suffers from a slow convergence rate and often produces suboptimal solutions, which are the two major drawbacks of BP. This paper proposes a modified BP algorithm which can remarkably alleviate the problem of local minima confronted by the standard BP (SBP). As one output of the modified training procedure, a bucket of all the possible solutions of weight matrices found during training is acquired, among which the best solution is chosen competitively based upon their performances on a validation dataset. Simulations are conducted on four benchmark classification tasks to compare and evaluate the classification performances and generalization capabilities of the proposed modified BP and SBP", "keywords": ["backpropagation ", "feedforward neural networks ", "local minima", "competitive learning", "classification"]} {"id": "kp20k_training_534", "title": "Towards categorical models for fairness: fully abstract presheaf semantics of SCCS with finite delay", "abstract": "We present a presheaf model for the observation of infinite as well as finite computations. We give a concrete representation of the presheaf model as a category of generalised synchronisation trees and show that it is coreflective in a category of generalised transition systems, which are a special case of the general transition systems of Hennessy and Stirling. This can be viewed as a first step towards representing fairness in categorical models for concurrency. The open map bisimulation is shown to coincide with extended bisimulation of Hennessy and Stirling, which is essentially fair CTL*-bisimulation. We give a denotational semantics of Milner's SCCS with finite delay in the presheaf model, which differs from previous semantics by giving the meanings of recursion by final coalgebras and meanings of finite delay by initial algebras of the process equations for delay. 
Finally, we formulate Milner's operational semantics of SCCS with finite delay in terms of generalised transition systems and prove that the presheaf semantics is fully abstract with respect to extended bisimulation. ", "keywords": ["concurrency", "fairness", "finite delay", "full abstraction", "open maps"]} {"id": "kp20k_training_535", "title": "Determining efficient temperature sets for the simulated tempering method", "abstract": "In statistical physics, the efficiency of tempering approaches strongly depends on ingredients such as the number of replicas R, reliable determination of weight factors and the set of used temperatures, T_R = {T_1, T_2, ..., T_R}. For the simulated tempering (ST) in particular (useful due to its generality and conceptual simplicity), the latter aspect (closely related to the actual R) may be a key issue in problems displaying metastability and trapping in certain regions of the phase space. To determine T_R's leading to accurate thermodynamic estimates while trying to minimize the simulation computational time, here a fixed exchange frequency scheme is considered for the ST. From the temperature of interest T_1, successive T's are chosen so that the exchange frequency between any adjacent pair T_r and T_{r+1} has the same value f. By varying the f's and analyzing the T_R's through relatively inexpensive tests (e.g., time decay towards the steady regime), an optimal situation in which the simulations visit the relevant portions of the phase space much faster and more uniformly is determined. As illustrations, the proposal is applied to three lattice models, BEG, Bell-Lavis, and Potts, in the hard case of extreme first-order phase transitions, always giving very good results, even for R = 3. Also, comparisons with other protocols (constant entropy and arithmetic progression) to choose the set T_R are undertaken. The fixed exchange frequency method is found to be consistently superior, especially for small R's. Finally, distinct instances where the prescription could be helpful (in second-order transitions and for the parallel tempering approach) are briefly discussed", "keywords": ["strong first-order phase transitions", "simulated tempering", "monte carlo methods", "replica temperatures optimal values"]} {"id": "kp20k_training_537", "title": "Attitudes of community pharmacists, university-based pharmacists, and students toward on-line information resources", "abstract": "The study sought to explore the attitudes of community pharmacists, university-based pharmacists, and pharmacy students before and after exposure to computerized systems of on-line information services. A 42-item attitudinal survey was administered to 21 community pharmacists, 7 university clinical pharmacist faculty, and 17 senior pharmacy students, prior to and at the end of a year of access to Grateful Med(R) and BRS Colleague(R). Few significant differences were noted among the participants at baseline. No significant interaction-effect differences for type of participant or system used were found. 
Participants were generally positive about computers in general, the accuracy of on-line information services, their impact on knowledge and confidence, and their usefulness for pharmacists", "keywords": ["pharmacists", "attitudes", "computers", "drug information"]} {"id": "kp20k_training_538", "title": "Comparison of several approaches to the linear approximation of the yield condition and application to the robust design of plane frames for the case of uncertainty", "abstract": "Since the yield condition for frame structures is non-linear, piecewise linear approximations are needed in order to apply linear optimization methods. Four approaches are presented and compared. After the theoretical consideration and comparison of the different approximation methods, they are applied to the robust design of an 18-bar frame in case of uncertainty. Here, the less restrictive methods yield the cheapest design, as expected. It will be shown that the first-level approximation from inside does not cause much higher costs than the other methods. But since its constraints are sufficient in contrast to other approximations, it is recommended", "keywords": ["piecewise linear approximation", "yield condition", "robust optimal design", "stochastic uncertainty", "stochastic applied load", "plane frame"]} {"id": "kp20k_training_540", "title": "A Class of Differential Vector Variational Inequalities in Finite Dimensional Spaces", "abstract": "In this paper, we introduce and study a class of differential vector variational inequalities in finite dimensional Euclidean spaces. We establish a relationship between differential vector variational inequalities and differential scalar variational inequalities. Under various conditions, we obtain the existence and linear growth of solutions to the scalar variational inequalities. In particular, we prove existence theorems for Carathéodory weak solutions of the differential vector variational inequalities. Furthermore, we give a convergence result on the Euler time-stepping procedure for solving the initial-value differential vector variational inequalities", "keywords": ["differential vector variational inequality", "carathéodory weak solution", "existence", "linear growth", "euler time-stepping procedure"]} {"id": "kp20k_training_541", "title": "Adaptive hypermedia", "abstract": "Adaptive hypermedia is a relatively new direction of research at the crossroads of hypermedia and user modeling. Adaptive hypermedia systems build a model of the goals, preferences and knowledge of each individual user, and use this model throughout the interaction with the user, in order to adapt to the needs of that user. The goal of this paper is to present the state of the art in adaptive hypermedia at the eve of the year 2000, and to highlight some prospects for the future. This paper attempts to serve both the newcomers and the experts in the area of adaptive hypermedia by building on an earlier comprehensive review (Brusilovsky, 1996; Brusilovsky, 1998", "keywords": ["hypertext", "hypermedia", "user model", "user profile", "adaptive presentation", "adaptive navigation support", "web-based systems", "adaptation"]} {"id": "kp20k_training_542", "title": "framing design in the third paradigm", "abstract": "This paper develops vocabulary to discuss the phenomena related to the new design paradigm, which considers designing as a situated and constructive activity of meaning making rather than as problem solving. 
The paper studies how design projects proceed from the fuzzy early phases towards the issues of central relevance to designing. A central concept is framing, and it is elaborated with examples from two case studies. Several aspects of framing are explicated (exploratory, anticipatory and social framing), and the related concepts of 'focusing', 'priming', and 'grounding' are explained. The paper concludes that understanding designing as a situated and constructive making of meaning has a bearing on how designing needs to be supported", "keywords": ["design framing", "reflective practice", "user-centered design", "user-driven innovation"]} {"id": "kp20k_training_543", "title": "Interval evaluations in the analytic hierarchy process by possibility analysis", "abstract": "Since a pairwise comparison matrix in the Analytic Hierarchy Process (AHP) is based on human intuition, the given matrix will always include inconsistent elements violating the transitivity property. We propose the Interval AHP by which interval weights can be obtained. The widths of the estimated interval weights represent inconsistency in judging data. Since interval weights can be obtained from inconsistent data, the proposed Interval AHP is more appropriate to human judgment. Assuming crisp values in a pairwise comparison matrix, the interval comparisons including the given crisp comparisons can be obtained by applying the Linear Programming (LP) approach. Using an interval preference relation, the Interval AHP for crisp data can be extended to an approach for interval data, allowing the uncertainty of human judgment in pairwise comparisons to be expressed", "keywords": ["ahp", "interval evaluations", "possibility analysis"]} {"id": "kp20k_training_544", "title": "A model for real-time failure prognosis based on hidden Markov model and belief rule base", "abstract": "As one of the most important aspects of condition-based maintenance (CBM), failure prognosis has attracted increasing attention with the growing demand for higher operational efficiency and safety in industrial systems. Currently there are no effective methods which can predict a hidden failure of a system in real time when there are influences from the changes of environmental factors and no accurate mathematical model exists for the system prognosis, due to its intrinsic complexity and operation in a potentially uncertain environment. Therefore, this paper focuses on developing a new hidden Markov model (HMM) based method which can deal with the problem. Although an accurate model between environmental factors and a failure process is difficult to obtain, some expert knowledge can be collected and represented by a belief rule base (BRB), which is in fact an expert system. As such, combining the HMM with the BRB, a new prognosis model is proposed to predict the hidden failure in real time even when there are influences from the changes of environmental factors. In the proposed model, the HMM is used to capture the relationships between the hidden failure and monitored observations of a system. The BRB is used to model the relationships between the environmental factors and the transition probabilities among the hidden states of the system including the hidden failure, which is the main contribution of this paper. Moreover, a recursive algorithm for online updating of the prognosis model is developed. 
An experimental case study is examined to demonstrate the implementation and potential applications of the proposed real-time failure prognosis method", "keywords": ["failure prognosis", "belief rule base", "expert systems", "hidden markov model", "environmental factors"]} {"id": "kp20k_training_545", "title": "Computing the Volume of a Union of Balls: A Certified Algorithm", "abstract": "Balls and spheres are amongst the simplest 3D modeling primitives, and computing the volume of a union of balls is an elementary problem. Although a number of strategies addressing this problem have been investigated in several communities, we are not aware of any robust algorithm, and present the first such algorithm. Our calculation relies on the decomposition of the volume of the union into convex regions, namely the restrictions of the balls to their regions in the power diagram. Theoretically, we establish a formula for the volume of a restriction, based on Gauss' divergence theorem. The proof being constructive, we develop the associated algorithm. On the implementation side, we carefully analyse the predicates and constructions involved in the volume calculation, and present a certified implementation relying on interval arithmetic. The result is certified in the sense that the exact volume belongs to the interval computed. Experimental results are presented on hand-crafted models illustrating various difficulties, as well as on the 58,898 models found in the July 10, 2009 release of the Protein Data Bank", "keywords": ["algorithms", "design", "reliability", "theory", "computational geometry", "union of balls", "alpha-shapes", "medial axis transform", "volume calculation", "structural biology", "protein modeling", "macro-molecular models", "van der waals models", "certified numerics", "interval arithmetic", "c plus plus design"]} {"id": "kp20k_training_546", "title": "Simple polynomial multiplication algorithms for exact conditional tests of linearity in a logistic model", "abstract": "The linear logistic model is often employed in the analysis of binary response data. The well-known asymptotic chi-square and likelihood ratio tests are usually used to test the assumption of linearity in such a model. For small, sparse, or skewed data, the asymptotic theory is however dubious and exact conditional chi-square and likelihood ratio tests may provide reliable alternatives. In this article, we propose efficient polynomial multiplication algorithms to compute exact significance levels as well as exact powers of these tests. Two options for implementing these algorithms, namely the cell-wise and stage-wise approaches, will be discussed. When sample sizes are large, we propose an efficient Monte Carlo method for estimating the exact significance levels and exact powers. Real data are used to demonstrate the performance with an application of the proposed algorithms", "keywords": ["dose-response data", "exact significance level", "exact power computation", "polynomial multiplication algorithm"]} {"id": "kp20k_training_547", "title": "On the definitions of anonymity for ring signatures", "abstract": "This paper studies the relations among several definitions of anonymity for ring signature schemes in the same attack environment. It is shown that one intuitive and two technical definitions we consider are asymptotically equivalent, and the indistinguishability-based technical definition is the strongest, i.e., the most secure when achieved, when the exact reduction cost is taken into account. 
We then extend our result to the threshold case where a subset of members cooperate to create a signature. The threshold setting makes the notion of anonymity more complex and yields a greater variety of definitions. We explore several notions and observe that a certain relation does not seem to hold, unlike in the simple single-signer case. Nevertheless, we see that an indistinguishability-based definition is the most favorable in the threshold case. We also study the notion of linkability and present a simple scheme that achieves both anonymity and linkability", "keywords": ["ring signature", "anonymity", "linkability"]} {"id": "kp20k_training_548", "title": "scalable proximity estimation and link prediction in online social networks", "abstract": "Proximity measures quantify the closeness or similarity between nodes in a social network and form the basis of a range of applications in social sciences, business, information technology, computer networks, and cyber security. It is challenging to estimate proximity measures in online social networks due to their massive scale (with millions of users) and dynamic nature (with hundreds of thousands of new nodes and millions of edges added daily). To address this challenge, we develop two novel methods to efficiently and accurately approximate a large family of proximity measures. We also propose a novel incremental update algorithm to enable near real-time proximity estimation in highly dynamic social networks. Evaluation based on a large amount of real data collected in five popular online social networks shows that our methods are accurate and can easily scale to networks with millions of nodes. To demonstrate the practical values of our techniques, we consider a significant application of proximity estimation: link prediction, i.e., predicting which new edges will be added in the near future based on past snapshots of a social network. Our results reveal that (i) the effectiveness of different proximity measures for link prediction varies significantly across different online social networks and depends heavily on the fraction of edges contributed by the highest degree nodes, and (ii) combining multiple proximity measures consistently yields the best link prediction accuracy", "keywords": ["matrix factorization", "embedding", "social network", "link prediction", "proximity measure", "sketch"]} {"id": "kp20k_training_549", "title": "Applications of regional strain energy in compliant structure design for energy absorption", "abstract": "Topology optimization of regional strain energy is studied in this paper. Unlike the conventional mean compliance formulation, this paper considers two main functions of a structure: rigidity and compliance. For normal usage, rigidity is chosen as the design objective. For compliant design, a portion of the structure absorbs energy, while another part maintains the structural integrity. Therefore, we implemented a regional strain energy formulation for topology optimization. Sensitivity with respect to regional strain energy is derived from the adjoint method. 
Numerical results from the proposed formulation are presented", "keywords": ["topology optimization", "compliant structure", "energy absorption"]} {"id": "kp20k_training_550", "title": "conversion of control dependence to data dependence", "abstract": "Program analysis methods, especially those which support automatic vectorization, are based on the concept of interstatement dependence where a dependence holds between two statements when one of the statements computes values needed by the other. Powerful program transformation systems that convert sequential programs to a form more suitable for vector or parallel machines have been developed using this concept [AllK 82, KKLW 80]. The dependence analysis in these systems is based on data dependence. In the presence of complex control flow, data dependence is not sufficient to transform programs because of the introduction of control dependences. A control dependence exists between two statements when the execution of one statement can prevent the execution of the other. Control dependences do not fit conveniently into dependence-based program translators. One solution is to convert all control dependences to data dependences by eliminating goto statements and introducing logical variables to control the execution of statements in the program. In this scheme, action statements are converted to IF statements. The variables in the conditional expression of an IF statement can be viewed as inputs to the statement being controlled. The result is that control dependences between statements become explicit data dependences expressed through the definitions and uses of the controlling logical variables. This paper presents a method for systematically converting control dependences to data dependences in this fashion. The algorithms presented here have been implemented in PFC, an experimental vectorizer written at Rice University", "keywords": ["fit", "definition", "express", "concept", "presence", "program analysis", "control flow", "paper", "control dependence", "program transformation", "transformation", "control", "program", "translation", "variability", "dependencies", "vectorization", "method", "experimentation", "systems", "values", "dependence analysis", "data dependence", "parallel", "data", "support", "complexity", "algorithm", "conversation", "action", "scheme"]} {"id": "kp20k_training_551", "title": "An integration scheme for electromagnetic scattering using plane wave edge elements", "abstract": "Finite element techniques for the simulation of electromagnetic wave propagation are, like all conventional element based approaches for wave problems, limited by the ability of the polynomial basis to capture the sinusoidal nature of the solution. The Partition of Unity Method (PUM) has recently been applied successfully, in finite and boundary element algorithms, to wave propagation. In this paper, we apply the PUM approach to the edge finite elements in the solution of Maxwell's equations. The electric field is expanded in a set of plane waves, the amplitudes of which become the unknowns, allowing each element to span a region containing multiple wavelengths. However, it is well known that, with PUM enrichment, the burden of computation shifts from the solver to the evaluation of oscillatory integrals during matrix assembly. A full electromagnetic scattering problem is not simulated or solved in this paper. This paper is an addition to the work of Ledger and concentrates on efficient methods of evaluating the oscillatory integrals that arise. 
A semi-analytical scheme of the Filon type is presented", "keywords": ["edge elements", "partition of unity", "maxwells equations", "oscillatory integrals"]} {"id": "kp20k_training_552", "title": "Search-based metamodel matching with structural and syntactic measures", "abstract": "Metamodel matching using search-based software engineering. The use of syntactic measures improves the results of metamodel matching. We compared our approach to four ontology-based approaches. Our results show that our search-based approach was significantly better than state-of-the-art matching tools", "keywords": ["model matching", "search-based software engineering", "simulated annealing"]} {"id": "kp20k_training_553", "title": "A recursion-based broadcast paradigm in wormhole routed networks", "abstract": "A novel broadcast technique for wormhole-routed parallel computers based on recursion is presented in this paper. It works by partitioning the interconnection graph into a number of higher-level subgraphs. Then, we identify the Transmission SubGraph (TSG) in each subgraph. Both the higher-level subgraphs and the TSGs are recursively defined, i.e., we split each level i subgraph into several level i + 1 subgraphs and identify level i + 1 TSGs accordingly. We first split and scatter the source message into the TSG of the original graph. Next, in each recursive round, message transmissions go from lower-level TSGs to higher-level TSGs and all transmissions at the same level happen concurrently. The algorithm proceeds recursively from lower-level subgraphs to higher-level subgraphs until each highest-level subgraph (a single node) gets the complete message. We have applied this general paradigm to a number of topologies including two or higher dimension mesh/torus and hypercube. Our results show considerable improvements over all other algorithms for a wide range of message sizes under both one-port and all-port models", "keywords": ["hypercube", "massive parallel computer", "mesh", "one-to-all broadcast", "parallel processing", "torus", "wormhole routing"]} {"id": "kp20k_training_554", "title": "Stress analysis of three-dimensional contact problems using the boundary element method", "abstract": "This paper presents a technique based on the boundary element method[1] to analyse three-dimensional contact problems. The formulation is implemented for the frictionless and infinite friction conditions. Following a review of the basic nature of contact problems, the analytical basis of the direct formulation of the boundary element method is described. The numerical implementation employs linear triangular elements for the representation of the boundary and variables of the bodies in contact. Opposite nodal points in similar element pairs are defined on the two surfaces in the area which are expected to come into contact under the increasing load. The use of appropriate contact conditions enables the integral equations for the two bodies to be coupled together. Following an iteration procedure, the size of the contact zone is determined by finding a boundary solution compatible with the contact conditions. Different examples have been analysed in order to verify the applicability of the proposed method to various contact situations. 
The results have been compared with those obtained using the finite element method in conjunction with the ABAQUS[2] and IDEAS[3] packages, and they are shown to be in good agreement", "keywords": ["stress analysis", "three-dimensional contact problems", "boundary element method"]} {"id": "kp20k_training_555", "title": "co-evolving application code and design models by exploiting meta-data", "abstract": "Evolvability and adaptability are intrinsic properties of today's software applications. Unfortunately, the urgency of evolving/adapting a system often drives the developer to directly modify the application code, neglecting to update its design models. Moreover, most development environments support code refactoring without supporting the refactoring of the design information. Refactoring, evolution and in general every change to the code should be reflected in the design models, so that these models consistently represent the application and can be used as documentation in the successive maintenance steps. Code evolution should evolve not only the application code but also its design models. Unfortunately, co-evolving the application code and its design is a hard job to carry out automatically, since there is an evident and notorious gap between these two representations. We propose a new approach to code evolution (in particular to code refactoring) that supports the automatic co-evolution of the design models. The approach relies on a set of predefined meta-data that the developer should use to annotate the application code and to highlight the refactoring performed on the code. Then, these meta-data are retrieved through reflection and used to automatically and coherently update the application design models", "keywords": ["co-evolution", "reflection", "software evolution", "refactoring", "meta-data"]} {"id": "kp20k_training_556", "title": "Differential Effects of Donepezil on Methamphetamine and Cocaine Dependencies", "abstract": "Donepezil, a cholinesterase inhibitor, has been widely used as a medicine for Alzheimer's disease. Recently, a study showed that donepezil inhibited addictive behaviors induced by cocaine, including cocaine-conditioned place preference (CPP) and locomotor sensitization to cocaine. In the present study, we investigated the effects of donepezil on methamphetamine (METH)-induced behavioral changes in mice. In counterbalanced CPP tests, the intraperitoneal (i.p.) administration of 3 mg/kg donepezil prior to 2 mg/kg METH i.p. failed to inhibit METH CPP, whereas pretreatment with 3 mg/kg donepezil abolished the CPP for cocaine (10 mg/kg, i.p.). Similarly, in locomotor sensitization experiments, i.p. administration of 1 mg/kg donepezil prior to 2 mg/kg METH i.p. failed to inhibit locomotor sensitivity to METH, whereas pretreatment with 1 mg/kg donepezil significantly inhibited locomotor sensitivity to cocaine (10 mg/kg, i.p.). These results suggest that donepezil may be a useful tool for treating cocaine dependence but not for treating METH dependence. 
The differences in the donepezil effects on addictive behaviors induced by METH and cocaine might be due to differences in the involvement of acetylcholine in the mechanisms of METH and cocaine dependencies", "keywords": ["methamphetamine", "cocaine", "donepezil", "conditioned place preference", "sensitization", "mice"]} {"id": "kp20k_training_558", "title": "A Gaussian function model for simulation of complex environmental sensing", "abstract": "Sensors can be used to sense not only simple behaviors but also complex ones. Previous work has demonstrated how agent-based modeling can be used to model sensing of complex behavior in Complex Environments", "keywords": ["complex adaptive system", "environmental sensing", "gaussian function", "mathematical model"]} {"id": "kp20k_training_559", "title": "An integer programming-based search technique for error-prone structures of LDPC codes", "abstract": "In this paper, an efficient, general framework is presented for finding common, devastating error-prone structures (EPS) of any finite-length low-density parity-check (LDPC) code. The smallest stopping set for the binary erasure channel (BEC), the smallest fully absorbing set, the smallest absorbing set, and the smallest elementary trapping set for the binary symmetric channel (BSC) are found and the dominant EPS are enumerated. The method involves integer programming optimization techniques, which guarantees that the results are provably optimal", "keywords": ["trapping sets", "stopping sets", "absorbing sets", "integer programming"]} {"id": "kp20k_training_560", "title": "Chemosensitization of tumors by resveratrol", "abstract": "Because tumors develop resistance to chemotherapeutic agents, the cancer research community continues to search for effective chemosensitizers. One promising possibility is to use dietary agents that sensitize tumors to the chemotherapeutics. In this review, we discuss how the use of resveratrol can sensitize tumor cells to chemotherapeutic agents. The tumors shown to be sensitized by resveratrol include lung carcinoma, acute myeloid leukemia, promyelocytic leukemia, multiple myeloma, prostate cancer, oral epidermoid carcinoma, and pancreatic cancer. The chemotherapeutic agents include vincristine, adriamycin, paclitaxel, doxorubicin, cisplatin, gefitinib, 5-fluorouracil, velcade, and gemcitabine. The chemosensitization of tumor cells by resveratrol appears to be mediated through its ability to modulate multiple cell-signaling molecules, including drug transporters, cell survival proteins, cell proliferative proteins, and members of the NF-κB and STAT3 signaling pathways. Interestingly, this nutraceutical has also been reported to suppress apoptosis induced by paclitaxel, vincristine, and daunorubicin in some tumor cells. The potential mechanisms underlying this dual effect are discussed. Overall, studies suggest that resveratrol can be used to sensitize tumors to standard cancer chemotherapeutics", "keywords": ["apoptosis", "cancer therapy", "chemoresistance", "chemosensitization", "resveratrol", "tumor"]} {"id": "kp20k_training_561", "title": "Towards Scalable Summarization of Consumer Videos Via Sparse Dictionary Selection", "abstract": "The rapid growth of consumer videos requires an effective and efficient content summarization method to provide a user-friendly way to manage and browse the huge amount of video data. 
Compared with most previous methods that focus on sports and news videos, the summarization of personal videos is more challenging because of their unconstrained content and the lack of any pre-imposed video structures. We formulate video summarization as a novel dictionary selection problem using sparsity consistency, where a dictionary of key frames is selected such that the original video can be best reconstructed from this representative dictionary. An efficient global optimization algorithm is introduced to solve the dictionary selection model with a convergence rate of O(1/K^2) (where K is the iteration counter), in contrast to the O(1/sqrt(K)) rate of traditional sub-gradient descent methods. Our method provides a scalable solution for both key frame extraction and video skim generation, because one can select an arbitrary number of key frames to represent the original videos. Experiments on a human-labeled benchmark dataset and comparisons to the state-of-the-art methods demonstrate the advantages of our algorithm", "keywords": ["group sparse", "key frame", "lasso", "scene analysis", "video analysis", "video skim", "video summarization"]} {"id": "kp20k_training_562", "title": "Sufficient completeness verification for conditional and constrained TRS", "abstract": "We present a procedure for checking sufficient completeness of conditional and constrained term rewriting systems containing axioms for constructors which may be constrained (by e.g. equalities, disequalities, ordering, membership, ...). Such axioms allow one to specify complex data structures like e.g. sets, sorted lists or powerlists. Our approach is integrated into a framework for inductive theorem proving based on tree grammars with constraints, a formalism which permits an exact representation of languages of ground constructor terms in normal form. The procedure is presented as an inference system which is shown sound and complete. A precondition of one inference of this system refers to an (undecidable) property called strong ground reducibility, which is discharged to the above inductive theorem proving system. We have successfully applied our method to several examples, yielding readable proofs and, in the case of a negative answer, a counter-example suggesting how to complete the specification. Moreover, we show that it is a decision procedure when the TRS is unconditional but constrained, for an expressive class of constrained constructor axioms. ", "keywords": ["sufficient completeness", "conditional and constrained term rewriting", "narrowing", "tree grammars"]} {"id": "kp20k_training_563", "title": "Scheduling Parallel Programs by Work Stealing with Private Deques", "abstract": "Work stealing has proven to be an effective method for scheduling parallel programs on multicore computers. To achieve high performance, work stealing distributes tasks between concurrent queues, called deques, which are assigned to each processor. Each processor operates on its deque locally except when performing load balancing via steals. Unfortunately, concurrent deques suffer from two limitations: 1) local deque operations require expensive memory fences in modern weak-memory architectures, 2) they can be very difficult to extend to support various optimizations and flexible forms of task distribution strategies needed by many applications, e.g., those that do not fit nicely into the divide-and-conquer, nested data parallel paradigm. 
For these reasons, there has been a lot of recent interest in implementations of work stealing with non-concurrent deques, where deques remain entirely private to each processor and load balancing is performed via message passing. Private deques eliminate the need for memory fences from local operations and enable the design and implementation of efficient techniques for reducing task-creation overheads and improving task distribution. These advantages, however, come at the cost of communication. It is not known whether work stealing with private deques enjoys the theoretical guarantees of concurrent deques and whether it can be effective in practice. In this paper, we propose two work-stealing algorithms with private deques and prove that the algorithms guarantee theoretical bounds similar to those of work stealing with concurrent deques. For the analysis, we use a probabilistic model and consider a new parameter, the branching depth of the computation. We present an implementation of the algorithm as a C++ library and show that it compares well to Cilk on a range of benchmarks. Since our approach relies on private deques, it enables implementing flexible task creation and distribution strategies. As a specific example, we show how to implement task coalescing and steal-half strategies, which can be important in fine-grain, non-divide-and-conquer algorithms such as graph algorithms, and apply them to the depth-first-search problem", "keywords": ["work stealing", "nested parallelism", "dynamic load balancing"]} {"id": "kp20k_training_564", "title": "Multilevel Huffman coding: An efficient test-data compression method for IP cores", "abstract": "A new test-data compression method suitable for cores of unknown structure is introduced in this paper. The proposed method encodes the test data provided by the core vendor using a new, very effective compression scheme based on multilevel Huffman coding. Each Huffman codeword corresponds to three different kinds of information, and thus, significant compression improvements compared to the already known techniques are achieved. A simple architecture is proposed for decoding the compressed data on chip. Its hardware overhead is very low and comparable to that of the most efficient methods in the literature. Moreover, the major part of the decompressor can be shared among different cores, which reduces the hardware overhead of the proposed architecture considerably. Additionally, the proposed technique offers increased probability of detection of unmodeled faults since the majority of the unknown values of the test sets are replaced by pseudorandom data generated by a linear feedback shift register", "keywords": ["embedded testing techniques", "huffman encoding", "intellectual property cores", "linear feedback shift registers ", "test-data compression"]} {"id": "kp20k_training_565", "title": "Variable selection in regression models using nonstandard optimisation of information criteria", "abstract": "The question of variable selection in a regression model is a major open research topic in econometrics. Traditionally two broad classes of methods have been used. One is sequential testing and the other is information criteria. The advent of large datasets used by institutions such as central banks has exacerbated this model selection problem. A solution in the context of information criteria is provided in this paper.
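The nonstandard-optimisation idea that the following sentences develop can be made concrete with a small sketch: simulated annealing over variable subsets, scoring each candidate regression by BIC. The cooling schedule, scoring, and synthetic data below are illustrative assumptions, not the paper's exact setup.

```python
# Simulated annealing over {0,1}^p subsets of regressors, minimizing BIC.
import numpy as np

def bic(X, y, mask):
    Xs = np.column_stack([np.ones(len(y)), X[:, np.flatnonzero(mask)]])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    n, k = len(y), Xs.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

def anneal(X, y, steps=2000, t0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    mask = rng.integers(0, 2, X.shape[1]).astype(bool)
    score = bic(X, y, mask)
    best, best_score = mask.copy(), score
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-9              # linear cooling
        cand = mask.copy()
        cand[rng.integers(X.shape[1])] ^= True       # flip one variable in/out
        c_score = bic(X, y, cand)
        if c_score < score or rng.random() < np.exp((score - c_score) / t):
            mask, score = cand, c_score              # accept (possibly uphill)
            if score < best_score:
                best, best_score = mask.copy(), score
    return best, best_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] - 2 * X[:, 3] + rng.normal(size=200)     # only vars 0 and 3 matter
print(anneal(X, y))
```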
The solution rests on the judicious selection of a subset of models for consideration using nonstandard optimisation algorithms for information criterion minimisation. In particular, simulated annealing and genetic algorithms are considered. Both a Monte Carlo study and an empirical forecasting application to UK CPI inflation suggest that the proposed methods are worthy of further consideration", "keywords": ["simulated annealing", "genetic algorithms", "information criteria", "model selection", "forecasting", "inflation"]} {"id": "kp20k_training_566", "title": "Highly nonlinear photonic crystal fiber with ultrahigh birefringence using a nano-scale slot core", "abstract": "A new type of slot photonic crystal fiber is proposed. An ultrahigh nonlinear coefficient of up to 3.5739×10^4 W^-1 km^-1 can be achieved for the quasi-TM mode. The modal birefringence at 1.55 μm is up to 0.5015. The proposed PCF is suitable for all-optical signal processing", "keywords": ["photonic crystal fiber", "nonlinearity", "birefringence", "chromatic dispersion", "slot core"]} {"id": "kp20k_training_567", "title": "what will system level design be when it grows up", "abstract": "We have seen a growing new interest in Electronic System Level (ESL) architectures, design methods, tools and implementation fabrics in the last few years. But the picture of what types of, and approaches to, building embedded systems will become the most widely-accepted norms in the future remains fuzzy at best. Everyone wants to know where systems and system design is going "when it grows up", if it ever "grows up". Some of the key questions that need to be answered include which applications will be key system drivers, what SW & HW architectures will suit best, how programmable and configurable will they be, will systems designers need to deal with physical implementation issues or will that be hidden behind fabric abstractions and programming models, and what will those abstractions and models be? Moreover, will these abstractions stabilize and still be useful as the underlying technology keeps developing at high speed? This panel consists of proponents of a number of alternative visions for where we will end up, and how we will get there", "keywords": ["process variability", "system-level compensation", "parametric yield"]} {"id": "kp20k_training_568", "title": "The effectiveness of bootstrap methods in evaluating skewed auditing populations: A simulation study", "abstract": "This article describes a comparison among four bootstrap methods: the percentile, reflective, bootstrap-t, and variance stabilized bootstrap-t using a simple new stabilization procedure. The four methods are employed in constructing upper confidence bounds for the mean error in a wide variety of audit populations. The simulation results indicate that the variance stabilized bootstrap-t bound is to be preferred. It exhibits reliable coverage while maintaining reasonable tightness", "keywords": ["confidence bounds", "dollar unit sampling", "t-pivot"]} {"id": "kp20k_training_569", "title": "Evaluation of arctic multibeam sonar data quality using nadir crossover error analysis and compilation of a full-resolution data product", "abstract": "Characterize uncertainty in multi-source multibeam data sets. Highest spatial resolution compilation for the Canada Basin and Chukchi Borderland.
Fully resolvable pdf for interpretation of Arctic seafloor morphology", "keywords": ["arctic ocean", "canada basin", "chukchi", "crossover analysis", "multibeam", "ecs"]} {"id": "kp20k_training_570", "title": "Software Trace Cache for commercial applications", "abstract": "In this paper we address the important problem of instruction fetch for future wide issue superscalar processors. Our approach focuses on understanding the interaction between software and hardware techniques targeting an increase in the instruction fetch bandwidth. That is the objective, for instance, of the Hardware Trace Cache (HTC). We design a profile based code reordering technique which targets a maximization of the sequentiality of instructions, while still trying to minimize instruction cache misses. We call our software approach the Software Trace Cache (STC). We evaluate our software approach, and then compare it with the HTC and the combination of both techniques. Our results on PostgreSQL show that for large codes with few loops and deterministic execution sequences the STC offers better results than an HTC. Also, both the software and hardware approaches combine well to obtain improved results", "keywords": ["instruction fetch", "code layout", "software trace cache"]} {"id": "kp20k_training_571", "title": "Goal state optimization algorithm considering computational resource constraints and uncertainty in task execution time", "abstract": "A search methodology with goal state optimization considering computational resource constraints is proposed. The combination of an extended graph search methodology and parallelization of task execution and online planning makes it possible to solve the problem. The uncertainty of the task execution time is also considered. The problem can be solved by utilizing a random-based and/or a greedy-based graph-searching methodology. The proposed method is evaluated using a rearrangement problem of 20 movable objects with uncertainty in the task execution time, and the effectiveness is shown with simulation results", "keywords": ["robot motion planning", "parallelization of action and plan", "rearrangement planning", "graph searching", "resource constraints"]} {"id": "kp20k_training_572", "title": "A 270-MHz CMOS quadrature modulator for a GSM transmitter", "abstract": "This paper describes a 270-MHz CMOS quadrature modulator (QMOD) for a global system for mobile communications (GSM) transmitter. QMOD consists of two attenuators and two doubly-balanced modulators (DBMs) and is fabricated using a 0.35-μm CMOS process. A carrier leakage level of -35.7 dBc and an image rejection level of -45.1 dBc are achieved. Its total chip area is 880 μm × 550 μm and it consumes 1.0 mA from a 3.0 V power supply", "keywords": ["cmos gsm transmitter qmod"]} {"id": "kp20k_training_573", "title": "Mining multi-tag association for image tagging", "abstract": "Automatic media tagging plays a critical role in modern tag-based media retrieval systems. Existing tagging schemes mostly perform tag assignment based on community contributed media resources, where the tags are provided by users interactively. However, such social resources usually contain dirty and incomplete tags, which severely limit the performance of these tagging methods. In this paper, we propose a novel automatic image tagging method aiming to automatically discover more complete tags associated with information importance for test images. Given an image dataset, all the near-duplicate clusters are discovered.
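The transaction-style association mining this abstract describes next can be sketched in a few lines: each cluster "document" is treated as a transaction, and a candidate tag set is expanded by pairwise confidence. The documents and threshold below are toy assumptions; the paper's weighted association rules are richer than this.

```python
# Expand a candidate tag set using pairwise association confidence mined
# from cluster "documents" (each document = one transaction of tags).
from collections import Counter
from itertools import permutations

docs = [{"beach", "sea", "sunset"}, {"beach", "sea", "sand"},
        {"city", "night", "lights"}, {"beach", "sand", "sunset"}]

support = Counter()
pair_support = Counter()
for d in docs:
    for t in d:
        support[t] += 1
    for a, b in permutations(d, 2):
        pair_support[(a, b)] += 1

def expand(candidates, min_conf=0.6):
    expanded = set(candidates)
    for a in candidates:
        for b in support:
            conf = pair_support[(a, b)] / support[a]   # conf(a -> b)
            if b not in expanded and conf >= min_conf:
                expanded.add(b)
    return expanded

print(expand({"beach"}))   # likely adds "sea", "sand", "sunset"
```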
For each near-duplicate cluster, all the tags occurring in the cluster form the cluster's "document". Given a test image, we first initialize the candidate tag set from its near-duplicate cluster's document. The candidate tag set is then expanded by considering the implicit multi-tag associations mined from all the clusters' documents, where each cluster's document is regarded as a transaction. To further reduce noisy tags, a visual relevance score is also computed for each candidate tag to the test image based on a new tag model. Tags with very low scores can be removed from the final tag set. Extensive experiments conducted on a real-world web image dataset, NUS-WIDE, demonstrate the promising effectiveness of our approach", "keywords": ["image tagging", "tag completion", "tag denoising", "weighted association rule mining"]} {"id": "kp20k_training_574", "title": "Improve the performance of co-training by committee with refinement of class probability estimations", "abstract": "Semi-supervised learning is a popular machine learning technique where only a small number of labeled examples are available and a large pool of unlabeled examples can be obtained easily. In co-training by committee, a paradigm of semi-supervised learning, it is necessary to pick out a fixed number of the most confident examples according to the ranking of class probability values at each iteration. Unfortunately, the class probability values may repeat, which results in the problem that some unlabeled instances share the same probability and will be picked out randomly. This brings a negative effect on the improvement of the performance of classifiers. In this paper, we propose a simple method to deal with this problem under the intuition that different probabilities are crucial. The distance metric between unlabeled instances and labeled instances can be combined with the probabilities of class membership from the committee. Two distance metrics are considered to assign each unlabeled example a unique probability value. To show that our method can select higher-quality examples and reduce the introduction of noise, a data editing technique is used for comparison with our method. Experimental results verify the effectiveness of our method and the data editing technique, and also confirm that the method for the first distance metric is generally better than the data editing technique", "keywords": ["co-training", "semi-supervised learning", "ensemble learning", "class probability", "distance metric", "data editing"]} {"id": "kp20k_training_575", "title": "A unified RANS-LES model: Computational development, accuracy and cost", "abstract": "Large eddy simulation (LES) is computationally extremely expensive for the investigation of wall-bounded turbulent flows at high Reynolds numbers. A way to reduce the computational cost of LES by orders of magnitude is to combine LES equations with Reynolds-averaged Navier-Stokes (RANS) equations used in the near-wall region. A large variety of such hybrid RANS-LES methods are currently in use, raising the question of which hybrid RANS-LES method represents the optimal approach. The properties of an optimal hybrid RANS-LES model are formulated here by taking reference to fundamental properties of fluid flow equations. It is shown that unified RANS-LES models derived from an underlying stochastic turbulence model have the properties of optimal hybrid RANS-LES models. The rest of the paper is organized in two parts.
First, a priori and a posteriori analyses of channel flow data are used to find the optimal computational formulation of the theoretically derived unified RANS-LES model and to show that this computational model, which is referred to as the linear unified model (LUM), also has all the properties of an optimal hybrid RANS-LES model. Second, a posteriori analyses of channel flow data are used to study the accuracy and cost features of the LUM. The following conclusions are obtained. (i) Compared to RANS, which require evidence for their predictions, the LUM has the significant advantage that the quality of predictions is relatively independent of the RANS model applied. (ii) Compared to LES, the significant advantage of the LUM is a cost reduction of high-Reynolds number simulations by a factor of 0.07·Re^0.46. For coarse grids, the LUM has a significant accuracy advantage over corresponding LES. (iii) Compared to other usually applied hybrid RANS-LES models, it is shown that the LUM provides significantly improved predictions", "keywords": ["stochastic turbulence model", "rans", "les", "unified ransles models", "channel flow application"]} {"id": "kp20k_training_576", "title": "On Kelly networks with shuffling", "abstract": "We consider Kelly networks with shuffling of customers within each queue. Specifically, each arrival, departure or movement of a customer from one queue to another triggers a shuffle of the other customers at each queue. The shuffle distribution may depend on the network state and on the customer that triggers the shuffle. We prove that the stationary distribution of the network state remains the same as without shuffling. In particular, Kelly networks with shuffling have the product form. Moreover, the insensitivity property is preserved for symmetric queues", "keywords": ["product form", "insensitivity", "symmetric queues", "shuffling"]} {"id": "kp20k_training_577", "title": "log-based receiver-reliable multicast for distributed interactive simulation", "abstract": "Reliable multicast communication is important in large-scale distributed applications. For example, reliable multicast is used to transmit terrain and environmental updates in distributed simulations. To date, proposed protocols have not supported these applications' requirements, which include wide-area data distribution, low-latency packet loss detection and recovery, and minimal data and management overhead within fine-grained multicast groups, each containing a single data source. In this paper, we introduce the notion of Log-Based Receiver-reliable Multicast (LBRM) communication, and we describe and evaluate a collection of log-based receiver-reliable multicast optimizations that provide an efficient, scalable protocol for high-performance simulation applications.
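The log-based, receiver-driven recovery named in this abstract can be sketched as follows: receivers detect sequence gaps themselves and request repairs from a logging server. The class and message names are hypothetical, not the LBRM protocol's actual messages.

```python
# Receiver-reliable recovery: the receiver notices gaps and pulls repairs
# from a log server, rather than the sender tracking per-receiver state.
class LBRMReceiver:
    def __init__(self, log_server):
        self.expected = 0
        self.log_server = log_server              # holds a log of all packets

    def on_packet(self, seq, payload):
        if seq > self.expected:                   # gap => packets were lost
            for missing in range(self.expected, seq):
                self.recover(missing)
        self.expected = max(self.expected, seq + 1)
        return payload

    def recover(self, seq):
        payload = self.log_server[seq]            # unicast repair request
        print(f"recovered packet {seq}: {payload!r}")

log = {i: f"update-{i}" for i in range(10)}
rx = LBRMReceiver(log)
for seq in [0, 1, 4, 5]:                          # packets 2 and 3 were dropped
    rx.on_packet(seq, log[seq])
```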
We argue that these techniques provide value to a broader range of applications and that the receiver-reliable model is an appropriate one for communication in general", "keywords": ["communication", "requirements", "value", "applications", "data distributed", "recovery", "efficiency", "examples", "simulation", "scalability", "log", "large-scale", "high-performance", "collect", "multicast", "general", "group", "model", "distributed simulation", "paper", "distributed application", "evaluation", "minimal", "management", "interaction", "detection", "latency", "optimality", "reliability", "data", "distributed", "packet-loss", "update"]} {"id": "kp20k_training_578", "title": "design of relational views over network schemas", "abstract": "An algorithm is presented for designing relational views over network schemas to: (1) support general query and update capability, (2) preserve the information content of the data base and (3) provide independence from its physical organization. The proposed solution is applicable to many existing CODASYL databases without data or schema conversion. The particular declarations of a CODASYL schema which supply sources of logical data definition are first identified. Then the view design algorithm is derived on the basis of a formal analysis of the semantic constraints established by these declarations. A new form of data structure diagram is also introduced to visualize these constraints", "keywords": ["network", "diagrams", "analysis", "design", "definition", "data structures", "schema", "general", "constraint", "informal", "contention", "organization", "views", "visualization", "formalism", "semantic", "data base", "data", "support", "physical", "relation", "algorithm", "database", "conversation", "update", "query"]} {"id": "kp20k_training_579", "title": "On the coverings by tolerance classes", "abstract": "A tolerance is a reflexive and symmetric, but not necessarily transitive, binary relation. Contrary to what happens with equivalence relations, when dealing with tolerances one must distinguish between blocks (maximal subsets where the tolerance is a total relation) and classes (the class of an element is the set of those elements tolerable with it). Both blocks and classes of a tolerance on a set define coverings of this set, but not every covering of a set is defined in this way. The characterization of those coverings that are families of blocks of some tolerance has been known for more than a decade now. In this paper we give a characterization of those coverings of a finite set that are families of classes of some tolerance", "keywords": ["tolerance", "similarity relation", "class", "neighborhood", "block"]} {"id": "kp20k_training_580", "title": "Boundary conditions control for a shallow-water model", "abstract": "A variational data assimilation technique was used to estimate optimal discretization of interpolation operators and derivatives at the nodes adjacent to the rigid boundary. Assimilation of artificially generated observational data in the shallow-water model in a square box and assimilation of real observations in the model of the Black sea are discussed. It is shown in both experiments that controlling the discretization of operators near a rigid boundary can bring the model solution closer to observations both in the assimilation window and beyond it. This type of control also makes it possible to improve the climatic variability of the model.
", "keywords": ["variational data assimilation", "boundary conditions", "shallow water model", "black sea model"]} {"id": "kp20k_training_581", "title": "A simple local smoothing scheme in strongly singular boundary integral representation of potential gradient", "abstract": "A new approach for computation of potential gradient at and near boundary is introduced. A strongly singular boundary integral representation of potential gradient, whose integral density is the potential gradient, is derived and analysed. Applying the concept of the osculating circle, a local smoothing procedure which computes a continuous approximation of potential gradient from the results of a 2D Boundary Element Method (BEM) analysis using linear elements is proposed and evaluated. This approximation is used in the integral representation derived as an integral density which fulfills the continuity requirements. Numerical experiments demonstrate, for quasiuniform meshes, an O(h2) accuracy of potential gradient computed by both the local smoothing procedure on smooth parts of the boundary and by the integral representation on smooth boundary parts and near smooth boundary parts for points inside the domain. A consequence of the latter result is that no significant increase in the error appears near the boundary, boundary layer effect thus being eliminated in this approach", "keywords": ["potential theory", "potential gradient computation", "boundary element method", "boundary layer effect", "superconvergence"]} {"id": "kp20k_training_582", "title": "functional modularity for genetic programming", "abstract": "In this paper we introduce, formalize, and experimentally validate a novel concept of functional modularity for Genetic Programming (GP). We rely on module definition that is most natural for GP: a piece of program code (subtree). However, as opposed to syntax-based approaches that abstract from the actual computation performed by a module, we analyze also its semantic using a set of fitness cases. In particular, the central notion of this approach is subgoal , an entity that embodies module's desired semantic and is used to evaluate module candidates. As the cardinality of the space of all subgoals is exponential with respect to the number of fitness cases, we introduce monotonicity to assess subgoals' potential utility for searching for good modules. For a given subgoal and a sample of modules, monotonicity measures the correlation of subgoal's distance from module's semantics and the fitness of the solution the module is part of. In the experimental part we demonstrate how these concepts may be used to describe and quantify the modularity of two simple problems of Boolean function synthesis. In particular, we conclude that monotonicity usefully differentiates two problems with different nature of modularity, allows us to tell apart the useful subgoals from the other ones, and may be potentially used for problem decomposition and enhance the efficiency of evolutionary search", "keywords": ["modularity", "problem decomposition", "genetic programming"]} {"id": "kp20k_training_584", "title": "answering approximate queries over autonomous web databases", "abstract": "To deal with the problem of empty or too little answers returned from a Web database in response to a user query, this paper proposes a novel approach to provide relevant and ranked query results. Based on the user original query, we speculate how much the user cares about each specified attribute and assign a corresponding weight to it. 
This original query is then rewritten as an approximate query by relaxing the query criteria range. The relaxation order of all specified attributes and the degree of relaxation of each specified attribute vary with the attribute weights. For the approximate query results, we generate users' contextual preferences from the database workload and use them to create a priori orders of tuples in an off-line preprocessing step. Only a few representative orders are saved, each corresponding to a set of contexts. Then, these orders and associated contexts are used at query time to expeditiously provide ranked answers. Results of a preliminary user study demonstrate that our query relaxation and results ranking methods can capture the user's preferences effectively. The efficiency and effectiveness of our approach are also demonstrated by experimental results", "keywords": ["top-k", "query results ranking", "query relaxation", "web database"]} {"id": "kp20k_training_585", "title": "Stochastic finite learning of the pattern languages", "abstract": "The present paper proposes a new learning model, called stochastic finite learning, and shows the whole class of pattern languages to be learnable within this model. This main result is achieved by providing a new and improved average-case analysis of the Lange-Wiehagen (New Generation Computing, 8, 361-370) algorithm learning the class of all pattern languages in the limit from positive data. The complexity measure chosen is the total learning time, i.e., the overall time taken by the algorithm until convergence. The expectation of the total learning time is carefully analyzed and exponentially shrinking tail bounds for it are established for a large class of probability distributions. For every pattern π containing k different variables it is shown that Lange and Wiehagen's algorithm possesses an expected total learning time of O(α̂^k E[Λ] log_{1/β}(k)), where α̂ and β are two easily computable parameters arising naturally from the underlying probability distributions, and E[Λ] is the expected example string length. Finally, assuming a bit of domain knowledge concerning the underlying class of probability distributions, it is shown how to convert learning in the limit into stochastic finite learning", "keywords": ["inductive learning", "pattern languages", "average-case analysis", "learning in the limit", "stochastic finite learning"]} {"id": "kp20k_training_587", "title": "PROBABILISTIC QUANTUM KEY DISTRIBUTION", "abstract": "This work presents a new concept in quantum key distribution called the probabilistic quantum key distribution (PQKD) protocol, which is based on the measurement uncertainty in quantum phenomena. It allows two mutually untrusted communicants to negotiate an unpredictable key whose randomness is guaranteed by the laws of quantum mechanics. In contrast to conventional QKD (e.g., BB84) in which one communicant has to trust the other for key distribution or quantum key agreement (QKA) in which the communicants have to artificially contribute subkeys to a negotiating key, PQKD is a natural and simple method for distributing a secure random key.
The communicants in the illustrated PQKD take Einstein-Podolsky-Rosen (EPR) pairs as quantum resources and then use entanglement swapping and Bell measurements to negotiate an unpredictable key", "keywords": ["quantum information", "quantum cryptography", "quantum key agreement", "quantum key distribution"]} {"id": "kp20k_training_588", "title": "An experiment with reflective middleware to support grid-based flood monitoring", "abstract": "Flooding is a growing problem, which affects more than 10% of the U.K. population. The cost of damage caused by flooding correlates closely with the warning time given before a flood event, making flood monitoring and prediction critical to minimizing the cost of flood damage. This paper describes a wireless sensor network (WSN) for flood warning, which is capable of not only integrating with remote fixed-network grids for computationally intensive flood modelling purposes but also performing on-site grid computation. This functionality is supported by the reflective and component-based GridKit middleware, which provides support for both WSN and grid application domains.", "keywords": ["grid", "wsn", "middleware"]} {"id": "kp20k_training_589", "title": "rate control for delay-sensitive traffic in multihop wireless networks", "abstract": "We propose two multipath rate control algorithms that guarantee bounded end-to-end delay in multihop wireless networks. Our work extends the previous research on optimal rate control and scheduling in multihop wireless networks to support inelastic delay requirements. Using the relationship between dual variables and packet delay, we develop two alternative solutions that are independent of any queuing model assumption, contrary to the previous research. In the first solution, we derive lower bounds on source rates that achieve the required delay bounds. We then develop a distributed algorithm comprising scheduling and rate control functions, which requires each source to primarily check the feasibility of its QoS before initiating its session. In the second solution we eliminate the admission control phase by developing an algorithm that converges to the utility function weights that ensure the required delay bounds for all flows. Both solutions carry out scheduling at a slower timescale than rate control, and consequently are more efficient than previous cross-layer algorithms. We show through numerical examples that even when there are no delay constraints, the proposed algorithms significantly reduce the delay compared to the previous solutions", "keywords": ["delay", "multihop wireless networks", "qos", "cross-layer optimization", "rate control"]} {"id": "kp20k_training_590", "title": "Optimality of KLT for high-rate transform coding of Gaussian vector-scale mixtures: Application to reconstruction, estimation, and classification", "abstract": "The Karhunen-Loeve transform (KLT) is known to be optimal for high-rate transform coding of Gaussian vectors for both fixed-rate and variable-rate encoding. The KLT is also known to be suboptimal for some non-Gaussian models. This paper proves high-rate optimality of the KLT for variable-rate encoding of a broad class of non-Gaussian vectors: Gaussian vector-scale mixtures (GVSM), which extend the Gaussian scale mixture (GSM) model of natural signals. A key concavity property of the scalar GSM (same as the scalar GVSM) is derived to complete the proof.
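As a numerical aside, the KLT at issue in this abstract can be demonstrated in a few lines: decorrelate a correlated Gaussian source with the eigenvectors of its covariance before scalar quantization. The crude uniform quantizer below is purely illustrative, not the high-rate analysis of the paper.

```python
# KLT for transform coding: rotate into the eigenbasis of the covariance,
# quantize the (decorrelated) coefficients, then rotate back.
import numpy as np

rng = np.random.default_rng(0)
C = np.array([[4.0, 1.8], [1.8, 1.0]])            # source covariance
x = rng.multivariate_normal([0, 0], C, size=10000)

eigvals, V = np.linalg.eigh(np.cov(x.T))          # KLT basis = eigenvectors
y = x @ V                                         # transform coefficients

q = 0.5                                           # uniform quantizer step
y_hat = q * np.round(y / q)
x_hat = y_hat @ V.T                               # inverse transform

mse = np.mean((x - x_hat) ** 2)
print(f"coeff correlation ~0: {np.corrcoef(y.T)[0, 1]:.4f}, MSE: {mse:.5f}")
```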
Optimality holds under a broad class of quadratic criteria, which include mean-squared error (MSE) as well as generalized f-divergence loss in estimation and binary classification systems. Finally, the theory is illustrated using two applications: signal estimation in multiplicative noise and joint optimization of classification/reconstruction systems", "keywords": ["chernoff distance", "classification", "estimation", "f-divergence", "gaussian scale mixture", "high-resolution quantization", "karhunen-loeve transform ", "mean-squared error ", "multiplicative noise", "quadratic criterion"]} {"id": "kp20k_training_591", "title": "The Norepinephrine Transporter and Pheochromocytoma", "abstract": "Pheochromocytomas are rare neuroendocrine tumors of chromaffin cell origin that synthesize and secrete excess quantities of catecholamines and other vasoactive peptides. Pheochromocytomas also express the norepinephrine transporter (NET), a molecule that is used clinically as a means of incorporating radiolabelled substrates such as 131I-MIBG (iodo-metaiodobenzylguanidine) into pheochromocytoma tumor cells. This allows the diagnostic localization of these tumors and, more recently, 131I-MIBG has been used in trials in the treatment of pheochromocytoma, potentially giving rise to NET as a therapeutic target. However, because of varying levels or activities of the transporter, the ability of 131I-MIBG to be consistently incorporated into tumor cells is limited, and therefore various strategies to increase NET functional activity are being investigated, including the use of traditional chemotherapeutic agents such as cisplatin or doxorubicin. Other aspects of NET discussed in this short review include the regulation of the transporter and how novel protein-protein interactions between NET and structures such as syntaxin 1A may hold the key to innovative ways to increase the therapeutic value of 131I-MIBG", "keywords": ["norepinephrine transporter", "pc12 cells", "uptake assay", "cisplatin"]} {"id": "kp20k_training_592", "title": "MetaEasy: A Meta-Analysis Add-In for Microsoft Excel", "abstract": "Meta-analysis is a statistical methodology that combines or integrates the results of several independent clinical trials considered by the analyst to be 'combinable' (Huque 1988). However, completeness and user-friendliness are uncommon both in specialised meta-analysis software packages and in mainstream statistical packages that have to rely on user-written commands. We implemented the meta-analysis methodology in a Microsoft Excel add-in which is freely available and incorporates more meta-analysis models (including the iterative maximum likelihood and profile likelihood) than are usually available, while paying particular attention to the user-friendliness of the package", "keywords": ["meta-analysis", "forest plot", "excel", "vba", "maximum likelihood", "profile likelihood"]} {"id": "kp20k_training_593", "title": "Adaptive data collection strategies for lifetime-constrained wireless sensor networks", "abstract": "Communication is a primary source of energy consumption in wireless sensor networks. Due to resource constraints, the sensor nodes may not have enough energy to report every reading to the base station over a required network lifetime. This paper investigates data collection strategies in lifetime-constrained wireless sensor networks. Our objective is to maximize the accuracy of data collected by the base station over the network lifetime.
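The importance-aware reporting rule this abstract develops below reduces to a simple sketch: a node spends its limited update budget only when a reading deviates enough from the last reported value. Threshold, budget, and data are illustrative assumptions.

```python
# Deviation-triggered reporting under a fixed per-node update budget.
def adaptive_updates(readings, threshold, budget):
    last_reported = None
    sent = []
    for t, r in enumerate(readings):
        deviates = last_reported is None or abs(r - last_reported) > threshold
        if deviates and budget > 0:
            sent.append((t, r))                   # report to the base station
            last_reported = r
            budget -= 1
    return sent

temps = [20.0, 20.1, 20.1, 23.5, 23.6, 19.0, 19.1, 25.0]
print(adaptive_updates(temps, threshold=1.0, budget=3))
# -> [(0, 20.0), (3, 23.5), (5, 19.0)]; later deviations exceed the budget
```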
Instead of sending sensor readings periodically, the relative importance of the readings is considered in data collection: the sensor nodes send data updates to the base station when the new readings differ substantially from the previous ones. We analyze the optimal update strategy and develop adaptive update strategies for both individual and aggregate data collections. We also present two methods to cope with message losses in wireless transmission. To make full use of the energy budgets, we design an algorithm to allocate the numbers of updates allowed to be sent by the sensor nodes based on their topological relations. Experimental results using real data traces show that, compared with the periodic strategy, adaptive strategies significantly improve the accuracy of data collected by the base station", "keywords": ["data collection", "energy efficiency", "network lifetime", "data accuracy", "sensor network"]} {"id": "kp20k_training_594", "title": "Image fusion-based contrast enhancement", "abstract": "The goal of contrast enhancement is to improve the visibility of image details without introducing unrealistic visual appearances and/or unwanted artefacts. While global contrast-enhancement techniques enhance the overall contrast, their dependence on the global content of the image limits their ability to enhance local details. They also result in significant changes in image brightness and introduce saturation artefacts. Local enhancement methods, on the other hand, improve image details but can produce block discontinuities, noise amplification and unnatural image modifications. To remedy these shortcomings, this article presents a fusion-based contrast-enhancement technique which integrates information to overcome the limitations of different contrast-enhancement algorithms. The proposed method balances the requirement of local and global contrast enhancements and a faithful representation of the original image appearance, an objective that is difficult to achieve using traditional enhancement methods. Fusion is performed in a multi-resolution fashion using Laplacian pyramid decomposition to account for the multi-channel properties of the human visual system. For this purpose, metrics are defined for contrast, image brightness and saturation. The performance of the proposed method is evaluated using visual assessment and quantitative measures for contrast, luminance and saturation. The results show the efficiency of the method in enhancing details without affecting the colour balance or introducing saturation artefacts and illustrate the usefulness of fusion techniques for image enhancement applications", "keywords": ["contrast enhancement", "image fusion", "pyramidal image decomposition", "gaussian pyramid decomposition", "image blending", "luminance"]} {"id": "kp20k_training_595", "title": "Reachability analysis for uncertain SSPs", "abstract": "Stochastic Shortest Path problems (SSPs) can be efficiently dealt with by the Real-Time Dynamic Programming algorithm (RTDP). Yet, RTDP requires that a goal state is always reachable. This article presents an algorithm that checks goal reachability, especially in the complex case of an uncertain SSP where only a possible interval is known for each transition probability. This gives an analysis method for determining if SSP algorithms such as RTDP are applicable, even if the exact model is not known. As this is a time-consuming algorithm, we also present a simple process that often speeds it up dramatically.
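One simple check in the spirit of this abstract: treat a transition as possible whenever the upper bound of its probability interval is positive, and test by backward search that the goal is (possibly) reachable from every state. This is a schematic, necessary-condition stand-in under that assumption, not the paper's algorithm; the toy model is hypothetical.

```python
# Goal reachability for an interval-probability SSP via backward BFS over
# the "possible" successor graph (edges with positive upper bound).
from collections import deque

def goal_reachable_from_all(states, goal, transitions):
    """transitions[s][a] = {s2: (p_lo, p_hi)}."""
    succ = {s: {s2 for acts in transitions.get(s, {}).values()
                for s2, (_, p_hi) in acts.items() if p_hi > 0} for s in states}
    can_reach = {goal}
    frontier = deque([goal])
    while frontier:
        cur = frontier.popleft()
        for s in states:
            if s not in can_reach and cur in succ[s]:
                can_reach.add(s)
                frontier.append(s)
    return can_reach == set(states)

T = {"s0": {"a": {"s1": (0.2, 0.7), "s0": (0.3, 0.8)}},
     "s1": {"a": {"goal": (0.0, 0.4), "s0": (0.6, 1.0)}},
     "goal": {}}
print(goal_reachable_from_all({"s0", "s1", "goal"}, "goal", T))  # True
```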
Yet, the main improvement still needed is to turn to a symbolic analysis in order to avoid a complete state-space enumeration", "keywords": ["stochastic shortest-path problems", "uncertain model", "reachability analysis"]} {"id": "kp20k_training_596", "title": "embodiment in brain-computer interaction", "abstract": "With emerging opportunities for using Brain-Computer Interaction (BCI) in gaming applications, there is a need to understand the opportunities and constraints of this interaction paradigm. To complement existing laboratory-based studies, there is also a call for the study of BCI in real world contexts. In this paper we present such a real world study of a simple BCI game called MindFlex, played as a social activity in the home. In particular, drawing on the philosophical traditions of embodied interaction, we highlight the importance of considering the body in BCI and not simply what is going on in the head. The study shows how people use bodily actions to facilitate control of brain activity but also to make their actions and intentions visible to, and interpretable by, others playing and watching the game. It is the public availability of these bodily actions during BCI that allows action to be socially organised, understood and coordinated with others and through which social relationships can be played out. We discuss the implications of this perspective and findings for BCI", "keywords": ["play", "embodied interaction", "gaming", "brain-computer interaction"]} {"id": "kp20k_training_597", "title": "formally measuring agreement and disagreement in ontologies", "abstract": "Ontologies are conceptual models of particular domains, and domains can be modeled differently, representing different opinions, beliefs or perspectives. In other words, ontologies may disagree with some particular pieces of information and among themselves. Assessing such agreements and disagreements is very useful in a variety of scenarios, in particular when integrating external elements of information into existing ones. In this paper, we present a set of measures to evaluate the agreement and disagreement of an ontology with a statement or with other ontologies. Our work goes beyond the naive approach of checking for logical inconsistencies; it relies on a complete formal framework based on the semantics of the considered ontologies. The experiments realized on several concrete scenarios show the validity of our approach and the usefulness of measuring agreement and disagreement in ontologies", "keywords": ["controversy", "ontologies", "agreement", "disagreement", "consensus"]} {"id": "kp20k_training_598", "title": "Minimal Realizations of Linear Systems: The "Shortest Basis" Approach", "abstract": "Given a discrete-time linear system C, a shortest basis for C is a set of linearly independent generators for C with the least possible lengths. A basis B is a shortest basis if and only if it has the predictable span property (i.e., has the predictable delay and degree properties, and is non-catastrophic), or alternatively if and only if it has the subsystem basis property (for any interval J, the generators in B whose span is in J form a basis for the subsystem C(J)). The dimensions of the minimal state spaces and minimal transition spaces of C are simply the numbers of generators in a shortest basis B that are active at any given state or symbol time, respectively.
A minimal linear realization for C in controller canonical form follows directly from a shortest basis for C, and a minimal linear realization for C in observer canonical form follows directly from a shortest basis for the orthogonal system C^⊥. This approach seems conceptually simpler than that of classical minimal realization theory", "keywords": ["linear systems", "minimal realizations"]} {"id": "kp20k_training_599", "title": "A Low-Latency Multi-layer Prefix Grouping Technique for Parallel Huffman Decoding of Multimedia Standards", "abstract": "Huffman coding is a popular and important lossless compression scheme for various multimedia applications. This paper presents a low-latency parallel Huffman decoding technique with efficient memory usage for multimedia standards. First, the multi-layer prefix grouping technique is proposed for sub-group partitioning. It exploits the prefix characteristic in Huffman codewords to solve the problem of table size explosion. Second, a two-level table lookup approach is introduced which can promptly branch to the correct sub-group by level-1 table lookup and decode the symbols by level-2 table lookup. Third, two optimization approaches are developed; one is to reduce the branch cycles and the other is parallel processing between two-level table lookup and direct table lookup approaches to fully utilize the advantage of VLIW parallel processing. An AAC Huffman decoding example is realized on the Parallel Architecture Core DSP (PAC DSP) processor. The simulation results show that the proposed method can reduce decoding cycles by about 89% and table size by about 33% compared to the linear search method", "keywords": ["huffman coding", "prefix grouping", "parallel processing", "vliw dsp processor", "multimedia"]} {"id": "kp20k_training_600", "title": "Analysis and numerical simulation of strong discontinuities in finite strain poroplasticity", "abstract": "This paper presents an analysis of strong discontinuities in coupled poroplastic media in the finite deformation range. A multi-scale framework is developed for the characterization of these solutions involving a discontinuous deformation (or displacement) field in this coupled setting. The strong discontinuities are used as a tool for the modeling of the localized dissipative effects characteristic of the localized failures of typical poroplastic systems. This is accomplished through the inclusion of a cohesive-frictional law relating the resolved stresses on the discontinuity and the accumulated fluid content on it with the displacement and fluid flow jumps across the discontinuity surface. The formulation considers the limit of vanishing small scales, hence recovering a problem in the large scale involving the usual regular displacement and pore pressure variables, while capturing correctly these localized dissipative mechanisms. All the couplings between the mechanical and fluid problems, from the modeling of the solid's response through effective stresses and tractions to the geometric coupling that is a consequence of the assumed finite deformation setting, are taken into account in these considerations. The multi-scale structure of the theoretical formulation is fully employed in the development of new enhanced strain finite elements to capture these discontinuous solutions with no regularization of the singular fields appearing in the formulation.
Several numerical simulations are presented showing the properties and performance of the proposed localized models and the enhanced finite elements used in their numerical implementation", "keywords": ["porous media", "coupled poro-elastoplasticity", "finite deformations", "strain localization", "strong discontinuity", "enhanced finite element methods"]} {"id": "kp20k_training_601", "title": "TOWARD REAL NOON-STATE SOURCES", "abstract": "Path-entangled N-photon systems described by NOON states are the main ingredient of many quantum information and quantum imaging protocols. Our analysis aims to lead the way toward the implementation of both NOON-state sources and their applications. To this end, we study the functionality of "real" NOON-state sources by quantifying the effect real experimental apparatuses have on the actual generation of the desired NOON state. In particular, since the conditional generation of NOON states strongly relies on photon counters, we evaluate the dependence of both the reliability and the signal-to-noise ratio of "real" NOON-state sources on detection losses. We find a surprising result: NOON-state sources relying on nondetection are much more reliable than NOON-state sources relying on single-photon detection. Also, the comparison of the resources required to implement these two protocols comes out in favor of NOON-state sources based on nondetection. A scheme to improve the performance of "real" NOON-state sources based on single-photon detection is also proposed and analyzed", "keywords": ["noon-state preparation", "path-entanglement", "efficiency", "quantum optics"]} {"id": "kp20k_training_602", "title": "Using TPACK as a framework to understand teacher candidates' technology integration decisions", "abstract": "This research uses the technological pedagogical and content knowledge (TPACK) framework as a lens for understanding how teacher candidates make decisions about the use of information and communication technology in their teaching. Pre- and post-treatment assessments required elementary teacher candidates at Brigham Young University to articulate how and why they would integrate technology in three content teaching design tasks. Researchers identified themes from student rationales that mapped to the TPACK constructs. Rationales simultaneously supported subcategories of knowledge that could be helpful to other researchers trying to understand and measure TPACK. The research showed significant student growth in the use of rationales grounded in content-specific knowledge and general pedagogical knowledge, while rationales related to general technological knowledge remained constant", "keywords": ["information and communication technology", "pedagogical content knowledge", "pre-service teacher education", "technological pedagogical content knowledge", "technology integration"]} {"id": "kp20k_training_603", "title": "exploiting temporal coherence in global illumination", "abstract": "Producing high quality animations featuring rich object appearance and compelling lighting effects is very time consuming using traditional frame-by-frame rendering systems. In this paper we present a number of global illumination and rendering solutions that exploit temporal coherence in lighting distribution for subsequent frames to improve the computation performance and overall animation quality.
Our strategy relies on extending into the temporal domain well-known global illumination techniques such as density estimation photon tracing, photon mapping, and bi-directional path tracing, which were originally designed to handle static scenes only", "keywords": ["density estimation", "temporal coherence", "irradiance cache", "bi-directional path tracing", "global illumination"]} {"id": "kp20k_training_604", "title": "Effectiveness of cognitive-load based adaptive instruction in genetics education", "abstract": "Research addressing the issue of instructional control in computer-assisted instruction has revealed mixed results. Prior knowledge level seems to play a mediating role in the students' ability to effectively use given instructional control. This study examined the effects of three types of instructional control (non-adaptive program control, learner control, adaptive program control) and prior knowledge (high school, 1st year and 2nd year college students) on effectiveness and efficiency of learning in a genetics training program. The results revealed that adaptive program control led to the highest training performance but not to superior post-test or far-transfer performance. Furthermore, adaptive program control proved to be more efficient in terms of learning outcomes of the test phase than the other two instructional control types. College students outperformed the high school students on all aspects of the study, thereby underscoring the importance of prior knowledge in learning effectiveness and efficiency. Lastly, the interaction effects showed that for each prior knowledge level different levels of support were beneficial to learning", "keywords": ["cognitive load", "adaptive instruction", "learner control", "non-adaptive program control", "learning efficiency", "problem selection algorithm"]} {"id": "kp20k_training_605", "title": "Sub-pixel mapping based on artificial immune systems for remote sensing imagery", "abstract": "We propose an artificial immune sub-pixel mapping framework for remote sensing imagery. The sub-pixel mapping problem is transformed into an optimization problem. The proposed algorithm can obtain better sub-pixel mapping results by immune operators. Experimental results demonstrate that the proposed approach outperforms the previous methods", "keywords": ["sub-pixel mapping", "remote sensing", "artificial immune systems", "clonal selection", "classification"]} {"id": "kp20k_training_606", "title": "Modeling of the quenching of blast products from energetic materials by expansion into vacuum", "abstract": "Condensed phase energetic materials include propellants and explosives. Their detonation or burning products generate dense, high pressure states that are often adjacent to regions that are at vacuum or near-vacuum conditions. An important chemical diagnostic experiment is the time of flight mass spectroscopy experiment that initiates an energetic material sample via an impact from a flyer plate; the products then expand into a vacuum. The rapid expansion quenches the reaction in the products so that the products can be differentiated by molecular weight detection as they stream past a detector. Analysis of this experiment requires a gas dynamic simulation of the products of a reacting multi-component gas that flows into a vacuum region. Extreme computational difficulties can arise if flow near the vacuum interface is not carefully and accurately computed.
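For the inert ideal gas that is the starting point of the algorithm discussed next, the gas-vacuum interface state has a classical closed form: the vacuum front of a rarefaction into vacuum moves at u + 2c/(gamma - 1). A tiny illustration under that standard result (the numbers are arbitrary):

```python
# Classical vacuum-front speed for an ideal gas expanding into vacuum.
import math

def vacuum_front_speed(u, p, rho, gamma):
    """u_vac = u + 2c/(gamma - 1), with sound speed c = sqrt(gamma*p/rho)."""
    c = math.sqrt(gamma * p / rho)
    return u + 2.0 * c / (gamma - 1.0)

# Air-like gas at rest, 1 atm, 1.2 kg/m^3.
print(f"vacuum front speed: {vacuum_front_speed(0.0, 101325.0, 1.2, 1.4):.1f} m/s")
```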
We modify an algorithm proposed by Munz [1], which computed the fluxes appropriate to a gas-vacuum interface for an inert ideal gas, and extend it to a multi-component mixture of reacting chemical components with general, non-ideal equations of state. We illustrate how to incorporate that extension in the context of a complete set of algorithms for a general, cell-based flow solver. A key step is to use the local exact solution for an isentropic expansion fan for the mixture, which connects the computed flow states to the vacuum. Regularity conditions (i.e. the Liu-Smoller conditions) are necessary conditions that must be imposed on the equation of state of the multicomponent fluid in the limit of a vacuum state. We show that the Jones-Wilkins-Lee (JWL) equation of state meets these requirements", "keywords": ["vacuum riemann problem", "vacuum tracking", "multi-component reacting flow", "time of flight mass spectroscopy", "petn", "jwl", "miegruneisen equation of state"]} {"id": "kp20k_training_607", "title": "modeling multiple-event situations across news articles", "abstract": "Readers interested in the context of an event covered in the news, such as the dismissal of a lawsuit, can benefit from easily finding out about the overall news situation, the legal trial, of which the event is a part. Guided by abstract models of news situation types such as legal trials, corporate acquisitions, and kidnappings, Brussell is a system that presents situation instances it creates by reading multiple articles about the specific events that comprise them. We discuss how these situation models are structured and how they drive the creation of particular instances", "keywords": ["news situations"]} {"id": "kp20k_training_608", "title": "Model selection for least squares support vector regressions based on small-world strategy", "abstract": "Model selection plays a key role in the application of support vector machine (SVM). In this paper, a method of model selection based on the small-world strategy is proposed for least squares support vector regression (LS-SVR). In this method, the model selection is treated as a single-objective global optimization problem in which a generalization performance measure serves as the fitness function. To get better optimization performance, the main idea of depending more heavily on dense local connections in the small-world phenomenon is considered, and a new small-world optimization algorithm based on tabu search, called the tabu-based small-world optimization (TSWO), is proposed by employing tabu search to construct the local search operator. Therefore, the hyper-parameters with the best generalization performance can be chosen as the global optimum based on the powerful search ability of TSWO. Experiments on six complex multimodal functions are conducted, demonstrating that TSWO performs better in avoiding premature convergence of the population in comparison with the genetic algorithm (GA) and particle swarm optimization (PSO).
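A schematic sketch of a tabu-style neighborhood search over an LS-SVR hyper-parameter grid, echoing the TSWO idea above: the quadratic fitness surrogate stands in for the paper's generalization measures, and the small-world long-range links are omitted for brevity, so this is an assumption-laden caricature rather than TSWO itself.

```python
# Tabu-based local search over a (gamma, sigma) hyper-parameter grid.
import math

gammas = [2.0 ** k for k in range(-4, 5)]          # regularization grid
sigmas = [2.0 ** k for k in range(-4, 5)]          # RBF width grid

def fitness(g, s):                                 # hypothetical smooth surrogate
    return (math.log2(g) - 1.0) ** 2 + (math.log2(s) + 2.0) ** 2

def tabu_search(start, iters=50, tabu_len=5):
    cur = best = start
    tabu = [start]
    for _ in range(iters):
        i, j = cur
        neighbors = [(a, b) for a, b in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                     if 0 <= a < len(gammas) and 0 <= b < len(sigmas)
                     and (a, b) not in tabu]
        if not neighbors:
            break
        cur = min(neighbors, key=lambda n: fitness(gammas[n[0]], sigmas[n[1]]))
        tabu = (tabu + [cur])[-tabu_len:]          # bounded tabu list
        if fitness(gammas[cur[0]], sigmas[cur[1]]) < fitness(gammas[best[0]], sigmas[best[1]]):
            best = cur
    return gammas[best[0]], sigmas[best[1]]

print(tabu_search((0, 0)))  # converges to (2.0, 0.25) on this surrogate
```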
Moreover, the effectiveness of the leave-one-out bound of LS-SVM on regression problems is tested on a noisy sinc function and benchmark data sets, and the numerical results show that model selection using TSWO almost always obtains smaller generalization errors than using GA and PSO under the three generalization performance measures adopted", "keywords": ["model selection", "least squares support vector machines", "small-world", "tabu search"]} {"id": "kp20k_training_609", "title": "A model of seepage field in the tailings dam considering the chemical clogging process", "abstract": "The radial collector well, an important water drainage structure, has been widely applied in tailings dams. Chemical clogging frequently occurs around the vertical shaft of a radial collector well due to abundant dissolved oxygen and heavy metals in the groundwater flow of a tailings dam. Considering the contribution of water discharge from both the vertical shaft and the horizontal screen laterals, and the chemical clogging occurring around the vertical shaft, a new model was developed on the basis of the Multi-Node Well (MNW2) package of MODFLOW. Moreover, two cases were calculated by the newly developed model. The results indicate that the model considering chemical clogging occurring around the vertical shaft well is reasonable. Owing to the decrease in hydraulic conductivity caused by chemical clogging, the groundwater level in the dam body increases constantly and the water discharge of the radial collector well declines by 10-15%. For an ordinary vertical well, it decreases by 30%. Therefore, chemical clogging around a radial collector well can raise the groundwater level and affect dam-body safety", "keywords": ["groundwater flow", "radial collector well", "chemical clogging", "mathematical model", "modflow", "tailing dam"]} {"id": "kp20k_training_610", "title": "A symmetrisation method for non-associated unified hardening model", "abstract": "This paper presents a simple method for symmetrising the asymmetric elastoplastic matrix arising from non-associated flow rules. The symmetrisation is based on mathematical transformation and does not alter the incremental stress-strain relationship. The resulting stress increment is identical to that obtained using the original asymmetric elastoplastic matrix. The symmetrisation method is applied to integrate the Unified Hardening (UH) model where the elastoplastic matrix is asymmetric due to stress transformation. The performance of the method is verified through finite element analysis (FEA) of boundary value problems such as triaxial extension tests and bearing capacity of foundations. It is found that the symmetrisation method can improve the convergence of the FEA and reduce computational time significantly for non-associated elastoplastic models", "keywords": ["three-dimensional", "non-associated flow rule", "elastoplastic matrix", "symmetrisation", "finite element analyses"]} {"id": "kp20k_training_611", "title": "ROFL: Routing on flat labels", "abstract": "It is accepted wisdom that the current Internet architecture conflates network locations and host identities, but there is no agreement on how a future architecture should distinguish the two. One could sidestep this quandary by routing directly on host identities themselves, and eliminating the need for network-layer protocols to include any mention of network location. The key to achieving this is the ability to route on flat labels.
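Routing on flat labels is commonly illustrated with a DHT-style construction; the Chord-like successor routing below conveys the general idea of location-free lookup but is not ROFL's specific mechanism (the identifier space and names are toy choices).

```python
# Flat-label routing sketch: hash host identities onto a ring and route
# each key to its numerically next node (the "successor").
import hashlib

M = 2 ** 16                                        # small identifier space

def label(name):
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

nodes = sorted(label(f"host-{i}") for i in range(8))

def successor(key):
    for n in nodes:                                # first node at/after the key
        if n >= key:
            return n
    return nodes[0]                                # wrap around the ring

key = label("some-flat-host-identity")
print(f"key {key} is routed to node {successor(key)}")
```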
In this paper we take an initial stab at this challenge, proposing and analyzing our ROFL routing algorithm. While its scaling and efficiency properties are far from ideal, our results suggest that the idea of routing on flat labels cannot be immediately dismissed", "keywords": ["algorithms", "design", "experimentation", "routing", "naming", "internet architecture"]} {"id": "kp20k_training_612", "title": "A branch-and-cut approach for a generic multiple-product, assembly-system design problem", "abstract": "This paper presents two new models to deal with different tooling requirements in the generic multiple-product assembly-system design (MPASD) problem and proposes a new branch-and-cut solution approach, which adds cuts at each node in the search tree. It employs the facet generation procedure (FGP) to generate facets of underlying knapsack polytopes. In addition, it uses the FGP in a new way to generate additional cuts and incorporates two new methods that exploit special structures of the MPASD problem to generate cuts. One new method is based on a principle that can be applied to solve generic 0-1 problems by exploiting embedded integral polytopes. The approach includes new heuristic and pre-processing methods, which are applied at the root node to manage the size of each instance. This paper establishes benchmarks for MPASD through an experiment in which the approach outperformed IBM's Optimization Subroutine Library (OSL), a commercially available solver", "keywords": ["programming : integer", "cutting planes", "production scheduling", "flexible manufacturing line balancing"]} {"id": "kp20k_training_613", "title": "MANAGING COGNITIVE AND MIXED-MOTIVE CONFLICTS IN CONCURRENT ENGINEERING", "abstract": "In collaborative activities such as concurrent engineering (CE), conflicts arise due to differences in goals, information available, and the understanding of the task. Such conflicts can be categorized into two types: mixed-motive and cognitive. Mixed-motive conflicts are essentially due to interest differentials among stakeholders. Cognitive conflicts can occur even when the stakeholders do not differ in their respective utilities, but simply because they offer multiple cognitive perspectives on the problem. Because conflicts in CE occur within a wider context of cooperative problem solving, the imperative for solving conflicts in such situations is strong. This paper argues that mechanisms for managing conflicts in CE should bear a strong conceptual mapping to the nature of the underlying conflict. Moreover, since CE activities are performed in collaborative settings, such mechanisms should accommodate information processing at multiple referent levels. We discuss the nature of both types of conflicts and the requirements of mechanisms for managing them. The functionalities of an implementation that addresses these requirements are illustrated through an example of a CE task", "keywords": ["cognitive conflict", "mixed-motive conflict", "cognitive feedback", "design rationale"]} {"id": "kp20k_training_614", "title": "Designing robust emergency medical service via stochastic programming", "abstract": "This paper addresses the problem of designing robust emergency medical services. In this respect, the main issue to consider is the inherent uncertainty which characterizes real-life situations. Several approaches can be used to design robust mathematical models which are able to hedge against uncertain conditions.
Here we use the stochastic programming framework and, in particular, the probabilistic paradigm. More specifically, we develop a stochastic programming model with probabilistic constraints aimed at solving both the location and the dimensioning problems, i.e. where service sites must be located and how many emergency vehicles must be assigned to each site, in order to achieve a reliable level of service and minimize the overall costs. In doing so, we consider the randomness of the system as far as the demand for emergency service is concerned. The numerical results, which have been collected on a large set of test problems, demonstrate the validity of the proposed model, particularly in dealing with the trade-off between quality of service and cost management", "keywords": ["stochastic programming", "facility location", "health services", "emergency services"]} {"id": "kp20k_training_615", "title": "Recommendation of optimized information seeking process based on the similarity of user access behavior patterns", "abstract": "Differing from many studies of recommendation that provide final results directly, our study focuses on providing an optimized process of information seeking to users. Based on process mining, we propose an integrated adaptive framework to support and facilitate individualized recommendation based on a gradual adaptation model that gradually adapts to a target user's transitions of needs and behaviors of information access, including various search-related activities, over different time spans. In detail, successful information seeking processes are extracted from the information seeking histories of users. Furthermore, these successful information seeking processes are optimized as a series of action units to support target users whose information access behavior patterns are similar to those of the reference users. Based on these, the optimized information seeking processes are recommended to the target users according to their transitions of interest focus. In addition to describing some definitions and measures introduced, we further present an optimized process recommendation model and show the system architecture. Finally, we discuss the simulation and scenario for the proposed system", "keywords": ["personalized recommendation", "behavior patterns", "information seeking process"]} {"id": "kp20k_training_616", "title": "Analytical mechanics solution for mechanism motion and elastic deformation hybrid problem of beam system", "abstract": "Based on the dynamics of flexible multi-body systems and the finite element method, a beam system dynamics model is built for solving the mixed motion-deformation problem and tracing the whole process of mechanism motion. The kinetic control equation and constraint equation are derived, in which mechanism motion and elastic deformation are described using hybrid coordinates, and the spatial position matrix of the element is described using Euler quaternions. Numerical examples show that the method can trace and solve for the trajectory and internal forces of the system", "keywords": ["dynamics of flexible multi-body systems", "hybrid coordinates description", "euler quaternion", "beam element"]} {"id": "kp20k_training_617", "title": "Availability analysis of shared backup path protection under multiple-link failure scenario in WDM networks", "abstract": "Dedicated protection and shared protection are the main protection schemes in optical wavelength division multiplexing (WDM) networks.
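As a hedged illustration of the probabilistic-constraint idea in the emergency-services abstract above: a brute-force toy in which per-site vehicle counts are chosen at minimum cost so that random demand is fully served in at least 95% of sampled scenarios. Sites, costs, and demands are all invented, and the paper's actual model is a full stochastic program.

    # Hedged sketch: scenario-based chance constraint for vehicle dimensioning.
    import itertools
    import random

    random.seed(0)
    scenarios = [[random.randint(0, 3) for _ in range(2)] for _ in range(200)]
    SITE_COST, VEHICLE_COST = 50.0, 10.0

    def reliable(vehicles, alpha=0.05):
        served = sum(all(v >= d for v, d in zip(vehicles, s)) for s in scenarios)
        return served >= (1 - alpha) * len(scenarios)   # P(demand served) >= 95%

    best = None
    for vehicles in itertools.product(range(5), repeat=2):  # counts per site
        if reliable(vehicles):
            cost = sum(SITE_COST * (v > 0) + VEHICLE_COST * v for v in vehicles)
            if best is None or cost < best[0]:
                best = (cost, vehicles)
    print("cheapest 95%-reliable plan (cost, vehicles per site):", best)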
Shared protection techniques surpass dedicated protection techniques by providing the same level of availability as dedicated protection with reduced spare capacity. Satisfying the service availability levels defined by the user's service-level agreement (SLA) in a cost-effective and resource-efficient way is a major challenge for network operators. Hence, evaluating the availability of the shared protection scheme is of great interest. We recently developed an analytical model to estimate the network availability of a WDM network with shared-link connections under multiple link failures. However, this model requires information on all possible combinations of the unshared protection paths, which is somewhat cumbersome. In this paper, we propose a more practical analytical model for evaluating the availability of a WDM network with shared-link connections under multiple link failures. The proposed model requires only an estimate of the set of shared paths of each protection path. The estimated availability of the proposed model accurately matches that of the previous model. Finally, we compare the previous model with the proposed model to demonstrate the merits and demerits of both models, illustrating the threshold at which each model performs better based on the computational complexity. The proposed model significantly contributes to the related areas by providing network operators with a practical tool to quantitatively evaluate the system availability and, thus, the expected survivability degree of WDM optical networks with shared connections under multiple link failures", "keywords": ["wdm networks", "multiple link-failures", "shared-link connections", "availability analysis"]} {"id": "kp20k_training_618", "title": "Evolving RBF neural networks for time-series forecasting with EvRBF", "abstract": "This paper is focused on automatically determining the parameters of radial basis function neural networks (the number of neurons, and their respective centers and radii). While this task is often done by hand, or based on hill-climbing methods which are highly dependent on initial values, in this work evolutionary algorithms are used to automatically build a radial basis function neural network (RBF NN) that solves a specified problem, in this case related to currency exchange rate forecasting. The evolutionary algorithm EvRBF has been implemented using the evolutionary computation framework Evolving Objects (EO), which allows direct evolution of problem solutions. Thus no internal representation is needed, and specific solution domain knowledge can be used to construct specific evolutionary operators, as well as cost or fitness functions. The results obtained are compared with the existing literature, showing an improvement over the published methods", "keywords": ["rbf", "evolutionary algorithms", "eo", "functional estimation", "time-series forecasting", "currency exchange"]} {"id": "kp20k_training_619", "title": "Modified centralized ROCOF based load shedding scheme in an islanded distribution network", "abstract": "Two new centralized adaptive under-frequency load shedding methods are proposed. DG unit operation and loads' willingness to pay (WTP) are considered. The objective is to minimize the penalties resulting from load shedding", "keywords": ["distributed generation", "under frequency load shedding", "rate of change of frequency of load", "islanded operation"]} {"id": "kp20k_training_620", "title": "horn-ok-please", "abstract": "Road congestion is a common problem worldwide.
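A hedged sketch of the availability arithmetic behind the WDM abstract above, assuming independent link failures: an unprotected path is up only if all its links are up, a dedicated backup adds its availability when the working path is down, and sharing discounts the backup by the probability it is actually free. All numbers, and the sharing discount itself, are invented simplifications of the paper's model.

    # Hedged sketch: connection availability under independent link failures.
    from math import prod

    def path_availability(link_avail):
        """An unprotected path is up only if every link on it is up."""
        return prod(link_avail)

    def protected_availability(work, backup, p_backup_free=1.0):
        """Up if the working path is up, or the backup is up and unclaimed."""
        aw, ab = path_availability(work), path_availability(backup)
        return aw + (1 - aw) * ab * p_backup_free

    work = [0.999, 0.998, 0.999]           # per-link availabilities (invented)
    backup = [0.997, 0.996, 0.998, 0.997]  # longer backup path

    print("unprotected:", path_availability(work))
    print("dedicated  :", protected_availability(work, backup))
    print("shared     :", protected_availability(work, backup, p_backup_free=0.9))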
Existing Intelligent Transport Systems (ITS) are mostly inapplicable in developing regions due to high cost and assumptions of orderly traffic. In this work, we develop a low-cost technique to estimate vehicular speed based on vehicular honks. Honks are a characteristic feature of the chaotic road conditions common in many developing regions like India and South-East Asia. We envision a system where dynamic road-traffic information is learnt using inexpensive, wireless-enabled on-road sensors. The analyzed information can then be sent to mobile road users; this would fit well with the burgeoning mobile market in developing regions. The core of our technique comprises a pair of roadside acoustic sensors separated by a fixed distance. If a moving vehicle honks between the two sensors, its speed can be estimated from the Doppler shift of the honk frequency. In this context, we have developed algorithms for honk detection, honk matching across sensors, and speed estimation. Based on the speed estimates, we subsequently detect road congestion. We have done extensive experiments in semi-controlled settings as well as real road scenarios under different traffic conditions. Using over 18 hours of road-side recordings, we show that our speed estimation technique is effective in real conditions. Further, we use our data to characterize traffic state as free-flowing versus congested using a variety of metrics: the vehicle speed distribution, and the number and duration of honks. Our results show clear statistical divergence of congested versus free-flowing traffic states, and a threshold-based classification accuracy of 70-100% in most situations", "keywords": ["its", "sensor network", "audio signal processing"]} {"id": "kp20k_training_621", "title": "Triangular mesh offset for generalized cutter", "abstract": "In 3-axis NC (Numerical Control) machining, various cutters are used, and the offset compensation for these cutters is important for gouge-free tool path generation. This paper introduces a triangular mesh offset method for a generalized cutter defined by the APT (Automatically Programmed Tools) definition or a parametric curve. An offset vector is computed according to the geometry of a cutter and the normal vector of a part surface. A triangular mesh is offset to the CL (Cutter Location) surface using the multiple normal vectors of a vertex and the offset vector computation method. A tool path for a generalized cutter is generated on the CL surface, and a machining test shows that the proposed offset method is useful for NC machining", "keywords": ["offset", "apt cutter", "parabolic cutter", "triangular mesh", "cl surface", "tool path", "nc machining"]} {"id": "kp20k_training_623", "title": "A new segmentation method for phase change thermography sequence", "abstract": "A new segmentation method for image sequences is proposed in order to extract isotherms from a phase change thermography sequence (PCTS). Firstly, the PCTS is transformed into a series of synthesized images by compression and conversion, so the isotherm extraction can be transformed into the segmentation of a series of synthesized images. Secondly, a virtual illumination model is constructed to eliminate the glint of the aircraft model. In order to obtain the parameters of the virtual illumination model, a coordination-optimization method is employed and all parameters are obtained according to the similarity constraint. Finally, the resulting isotherms are obtained after the threshold coefficients are compensated.
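A hedged sketch of the Doppler relation that the honk-based estimation above relies on: a honk heard by a stationary roadside sensor is shifted up while the vehicle approaches and down while it recedes, and the two observed frequencies give the speed in closed form. The frequencies below are invented; a real pipeline must first detect and match honks across sensors.

    # Hedged sketch: vehicle speed from the Doppler shift of a honk.
    SPEED_OF_SOUND = 343.0  # m/s in air at about 20 C

    def speed_from_doppler(f_approach: float, f_recede: float) -> float:
        """f_approach = f0*c/(c-v) and f_recede = f0*c/(c+v), solved for v."""
        return SPEED_OF_SOUND * (f_approach - f_recede) / (f_approach + f_recede)

    # Invented example: a horn heard at 412 Hz approaching and 389 Hz receding.
    v = speed_from_doppler(412.0, 389.0)
    print(f"estimated speed: {v:.1f} m/s ({v * 3.6:.0f} km/h)")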
The final results demonstrate the effectiveness of the proposed segmentation method", "keywords": ["image segmentation", "phase change thermography sequence", "illumination model", "threshold coefficient"]} {"id": "kp20k_training_624", "title": "Multi-color continuous-variable entangled optical beams generated by NOPOs", "abstract": "We propose an alternative scalable way to efficiently generate multi-color entangled optical beams, utilizing the tripartite entanglement existing among the three fields (signal, idler, and pump) from a nondegenerate optical parametric oscillator (NOPO) operating above the threshold. The special case of two cascaded NOPOs is studied, and it is shown that five beams with very different frequencies are generated by NOPO-A (one of the retained signal and idler beams, and the reflected pump beam) and NOPO-B (the output signal and idler beams, and the reflected pump beam). These beams are theoretically demonstrated to be continuous-variable (CV) entangled with each other by applying the positivity-of-the-partial-transpose criterion for the inseparability of multipartite CV entanglement. The symplectic eigenvalues of the partial transposition covariance matrix of the obtained optical entangled state are numerically calculated in terms of experimentally reachable system parameters. The optimal operation conditions to achieve high five-color entanglement are presented. As the cavity parameters and the nonlinear crystals of the two NOPOs can be chosen freely, the frequencies of the submodes in the entangled state are thus adjustable to match the transition frequencies of atoms or the low-loss fiber-optic communication window. The calculated results provide direct references for future experiments to efficiently generate multi-color entangled optical beams by means of NOPOs operating above the threshold", "keywords": ["non-degenerate optical parametric oscillator", "multi-color entangled state", "continuous-variable quantum entanglement"]} {"id": "kp20k_training_625", "title": "BCHED - Energy Balanced Sub-Round Local Topology Management for Wireless Sensor Network", "abstract": "Topology control based on cluster structure is an important method to improve the energy efficiency of wireless sensor network (WSN) systems. The frequent re-clustering of classical control methods, such as LEACH, is apt to cause serious energy consumption. Some improved methods reduce the re-clustering frequency, but these methods sometimes lead to energy imbalance in the stable communication period. In this paper, a hierarchical topology control method, BCHED, is proposed. With a double-round clustering mechanism, BCHED activates a local re-clustering process between two rounds of data transmission, and with an optional cluster-head exchanging mechanism, BCHED reorganizes the node clusters according to their residual energy distribution. Experimental results show that, with BCHED, the energy balance performance of the WSN system is significantly improved, and the system lifetime can be effectively extended", "keywords": ["wireless sensor network", "topology control", "network clustering"]} {"id": "kp20k_training_626", "title": "Hierarchical reconstruction for discontinuous Galerkin methods on unstructured grids with a WENO-type linear reconstruction and partial neighboring cells", "abstract": "The hierarchical reconstruction (HR) [Y.-J. Liu, C.-W. Shu, E. Tadmor, M.-P. Zhang, Central discontinuous Galerkin methods on overlapping cells with a non-oscillatory hierarchical reconstruction, SIAM J. Numer.
Anal. 45 (2007) 2442-2467] is applied to the piecewise quadratic discontinuous Galerkin method on two-dimensional unstructured triangular grids. A variety of limiter functions have been explored in the construction of piecewise linear polynomials in every hierarchical reconstruction stage. We show that on triangular grids, the use of center-biased limiter functions is essential in order to recover the desired order of accuracy. Several new techniques have been developed in the paper: (a) we develop a WENO-type linear reconstruction in each hierarchical level, which solves the accuracy degeneracy problem of previous limiter functions and is essentially independent of the local mesh structure; (b) we find that HR using partial neighboring cells significantly reduces over/under-shoots, and further improves the resolution of the numerical solutions. The method is compact and therefore easy to implement. Numerical computations for scalar and systems of nonlinear hyperbolic equations are performed. We demonstrate that the procedure can generate essentially non-oscillatory solutions while keeping the resolution and desired order of accuracy for smooth solutions", "keywords": ["hierarchical reconstruction", "discontinuous galerkin methods", "unstructured grids", "hyperbolic conservation laws"]} {"id": "kp20k_training_627", "title": "Spatial-temporal model for demand and allocation of waste landfills in growing urban regions", "abstract": "Shortage of land for waste disposal is a serious and growing potential problem in most large urban regions. However, no practical studies have been reported in the literature that incorporate the process of consumption and depletion of landfill space in urban regions over time and analyse its implications for the management of waste. An evaluation of existing models of waste management indicates that they can provide significant insights into the design of solid waste management activities. However, these models do not integrate the spatial and temporal aspects of waste disposal that are essential to understand and measure the problem of shortage of land. The lack of adequate models is caused in part by limitations of the methodologies the existing models are based upon, such as the limitations of geographic information systems (GIS) in handling dynamic processes, and the limitations of systems analysis in incorporating spatial physical properties. This indicates that new methods need to be introduced in waste management modelling. Moreover, existing models generally do not link waste management to the process of urban growth. This paper presents a model to spatially and dynamically model the demand for and allocation of facilities for urban solid waste disposal in growing urban regions. The model developed here consists of a loosely-coupled system that integrates GIS (geographic information systems) and cellular automata (CA) in order to give it spatial and dynamic capabilities. The model combines three sub-systems: (1) a CA-based model to simulate spatial urban growth into the future; (2) a spreadsheet calculation for designing waste disposal options and hence evaluating demand for landfill space over time; and (3) a model developed within a GIS to evaluate the availability and suitability of land for landfill over time and then simulate the allocation of landfills in the available land.
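As a hedged illustration of the CA sub-system named in the landfill abstract above: one step of a toy cellular-automaton growth rule in which a vacant cell urbanizes with probability proportional to its number of urban neighbors. Grid, seed density, and the rule itself are invented stand-ins for the paper's calibrated model.

    # Hedged sketch: one step of a toy cellular-automaton urban-growth model.
    import numpy as np

    rng = np.random.default_rng(42)
    grid = (rng.random((50, 50)) < 0.05).astype(int)   # 1 = urban, 0 = vacant

    def step(grid, base_p=0.02):
        h, w = grid.shape
        padded = np.pad(grid, 1)
        # Count urban cells in each Moore neighborhood (8 neighbors).
        neighbors = sum(padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        p_urbanize = base_p * neighbors               # more neighbors, more growth
        new_urban = (rng.random(grid.shape) < p_urbanize) & (grid == 0)
        return grid | new_urban.astype(int)

    for year in range(10):
        grid = step(grid)
    print("urban cells after 10 steps:", int(grid.sum()))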
The proposed model has been set up and tested with data from a real source (the city of Porto Alegre, Brazil), and has successfully assessed the demand for landfills and their allocation over time under a range of decision-making scenarios regarding waste disposal systems, urban growth patterns and land evaluation criteria", "keywords": ["urban solid waste", "waste management", "landfill", "dynamic modelling", "geographical information systems"]} {"id": "kp20k_training_628", "title": "Dynamic delamination modelling using interface elements", "abstract": "Existing techniques in explicit dynamic Finite Element (FE) codes for the analysis of delamination in composite structures and components can be simplistic, using simple stress-based failure functions to initiate and propagate delaminations. This paper presents an interface modelling technique for explicit FE codes. The formulation is based on damage mechanics and uses only two constants for each delamination mode: firstly, a stress threshold for damage to commence, and secondly, the critical energy release rate for the particular delamination mode. The model has been implemented into the LLNL DYNA3D Finite Element (FE) code and the LS-DYNA3D commercial FE code. The interface element modelling technique is applied to a series of common fracture-toughness-based delamination problems, namely the DCB, ENF and MMB tests. The tests are modelled using a simple dynamic relaxation technique, and serve to validate the methodology before application to more complex problems. Explicit Finite Element codes, such as DYNA3D, are commonly used to solve impact-type problems. A modified Boeing impact test at two energy levels is used to illustrate the application of the interface element technique, and its coupling to existing in-plane failure models. Simulations are also performed without interface elements to demonstrate the need to include the interface when modelling impact on composite components", "keywords": ["finite elements", "composite failure", "delamination modelling", "impact"]} {"id": "kp20k_training_629", "title": "A new Steiner patch based file format for Additive Manufacturing processes", "abstract": "A new Steiner patch based Additive Manufacturing file format has been developed. The Steiner format uses a triangular rational Bezier representation of Steiner patches. The Steiner format has high geometric fidelity and low approximation error. The Steiner patches can be easily sliced, and closed-form solutions can be obtained. AM parts manufactured using the Steiner format have very low profile and form errors", "keywords": ["additive manufacturing ", "steiner patches", "standard tessellation language file", "additive manufacturing file format", "chordal errors", "geometric dimensioning and tolerancing errors"]} {"id": "kp20k_training_630", "title": "Empirical challenges and solutions in constructing a high-performance metasearch engine", "abstract": "Purpose - This paper seeks to disclose in detail the important role of missing documents, broken links and duplicate items in the results merging process of a metasearch engine. It aims to investigate some related practical challenges and proposes some solutions. The study also aims to employ these solutions to improve an existing model for results aggregation. Design/methodology/approach - This research measures the increase in retrieval effectiveness of an existing results merging model obtained as a result of the proposed improvements.
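A hedged sketch of the two-constant damage formulation in the delamination abstract above: a bilinear cohesive law per mode, defined only by an onset stress and a critical energy release rate. The numbers are invented, and real interface elements couple modes inside an FE solver; this only shows the single-mode traction-separation logic.

    # Hedged sketch: bilinear cohesive interface law from two constants.
    def make_cohesive_law(sigma_max=60e6, G_c=500.0, K=1e14):
        """sigma_max [Pa]: onset stress; G_c [J/m^2]: critical energy release
        rate; K [Pa/m]: initial penalty stiffness. All values invented."""
        d0 = sigma_max / K           # separation at damage onset
        df = 2.0 * G_c / sigma_max   # separation at failure (triangle area = G_c)

        def traction(d, d_max=[0.0]):
            d_max[0] = max(d_max[0], d)            # damage is irreversible
            if d_max[0] <= d0:
                damage = 0.0
            elif d_max[0] >= df:
                damage = 1.0
            else:
                damage = df * (d_max[0] - d0) / (d_max[0] * (df - d0))
            return (1.0 - damage) * K * d
        return traction

    law = make_cohesive_law()
    for d in (1e-7, 5e-7, 2e-6, 9e-6, 1.2e-5):
        print(f"separation {d:.1e} m -> traction {law(d) / 1e6:7.2f} MPa")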
The 50 queries of the 2002 TREC web track were employed as a standard test collection, based on a snapshot of the World Wide Web, to explore and evaluate the retrieval effectiveness of the suggested method. Three popular web search engines (Ask, Bing and Google) were selected as the underlying resources of the metasearch engine. Each of the 50 queries was passed to all three search engines. For each query the top ten non-sponsored results of each search engine were retrieved. The returned result lists of the search engines were aggregated using a proposed algorithm that takes the practical issues of the process into consideration. The effectiveness of the result lists generated was measured using a well-known performance indicator called \"TSAP\" (TREC-style average precision). Findings - Experimental results demonstrate that the proposed model increases the performance of an existing results merging system by 14.39 percent on average. Practical implications - The findings of this research would be helpful for metasearch engine designers as well as providing motivation to the vendors of web search engines to improve their technology. Originality/value - This study provides some valuable concepts, practical challenges, solutions and experimental results in the field of web metasearching that have not been previously investigated", "keywords": ["metasearch", "missing documents", "broken links", "duplicate documents", "data fusion", "rank aggregation", "owa operator", "searching", "information searches", "information retrieval"]} {"id": "kp20k_training_631", "title": "Maintaining awareness using policies; Enabling agents to identify relevance of information", "abstract": "The field of computer supported cooperative work aims at providing information technology models, methods, and tools that help individuals cooperate. The present paper is based on three main observations from the literature. First, one of the problems in utilizing information technology for cooperation is to identify the relevance of information, called awareness. Second, research in computer supported cooperative work proposes the use of agent technologies to help individuals maintain their awareness. Third, the literature lacks formalized methods for how software agents can identify awareness. This paper addresses the problem of awareness identification. The main contribution of this paper is to propose and evaluate a formalized structure, called Policy-based Awareness Management (PAM). PAM extends the logic of general awareness in order to identify the relevance of information. PAM formalizes existing policies into the Directory Enabled Networks-next generation (DEN-ng) structure and uses them as a source for awareness identification. The formalism is demonstrated by applying PAM to the space shuttle Columbia disaster that occurred in 2003. The paper also argues that the efficacy and cost-efficiency of the logic of general awareness will be increased by PAM. This is evaluated by simulation of hypothetical scenarios as well as a case study. ", "keywords": ["computer supported cooperative work", "awareness", "intelligent agents", "policy"]} {"id": "kp20k_training_632", "title": "Extension headers for IPv6 anycast", "abstract": "Anycast is a new communication paradigm defined in IPv6. Unlike unicast and multicast routing, routers on the internetwork deliver an anycast datagram to the nearest available node.
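A hedged sketch of the TSAP-style scoring mentioned just above: in TREC-style average precision, a relevant document at rank i contributes 1/i and the sum is averaged over the cutoff N. The merged list and judgments are invented, and the paper's exact variant may differ in detail.

    # Hedged sketch: TSAP@N scoring of a merged metasearch result list.
    def tsap(ranked_ids, relevant, n=10):
        hits = [1.0 / i for i, doc in enumerate(ranked_ids[:n], start=1)
                if doc in relevant]
        return sum(hits) / n

    merged = ["d3", "d7", "d1", "d9", "d4", "d2", "d8", "d5", "d6", "d0"]
    relevant = {"d3", "d9", "d2"}                     # invented judgments
    print(f"TSAP@10 = {tsap(merged, relevant):.3f}")  # (1/1 + 1/4 + 1/6) / 10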
By shifting the task of resolving destinations from the source node to the internetwork, anycasting is highly flexible and cost-effective in the routing process, and inherently load-balanced and robust in server selection. To achieve these objectives, not only \"distance\" but also other metrics, such as load balance, reliability, and QoS, can and should be taken into account in anycast routing. The IPv6 basic header is designed in a simple and fixed-length format for the purpose of efficient forwarding. Extra data and options needed for packet processing are encoded into extension headers. Such a design makes it possible to add extension headers for special purposes. In this paper, we define routing extension headers for IPv6 anycasting to enable various types of anycast routing mechanisms. Scenarios are also provided to demonstrate how to apply them. ", "keywords": ["anycasting", "ipv6", "extension header", "routing header"]} {"id": "kp20k_training_633", "title": "The calculus of constructions as a framework for proof search with set variable instantiation", "abstract": "We show how a procedure developed by Bledsoe for automatically finding substitution instances for set variables in higher-order logic can be adapted to provide increased automation in proof search in the Calculus of Constructions (CC). Bledsoe's procedure operates on an extension of first-order logic that allows existential quantification over set variables. This class of variables can also be identified in CC. The existence of a correspondence between higher-order logic and higher-order type theories such as CC is well-known. CC can be viewed as an extension of higher-order logic where the basic terms of the language, the simply-typed lambda-terms, are replaced with terms containing dependent types. We show how Bledsoe's techniques can be incorporated into a reformulation of a search procedure for CC given by Dowek and extended to handle terms with dependent types. We introduce a notion of search context for CC which allows us to separate the operations of assumption introduction and backchaining. Search contexts allow a smooth integration of the step which finds solutions to set variables. We discuss how the procedure can be restricted to obtain procedures for set variable instantiation in sublanguages of CC such as the Logical Framework (LF) and higher-order hereditary Harrop formulas (hohh). The latter serves as the logical foundation of the lambda Prolog logic programming language. ", "keywords": ["proof search", "higher order logic", "type theory", "set theory", "calculus of constructions"]} {"id": "kp20k_training_634", "title": "Learning temporal nodes Bayesian networks", "abstract": "Temporal nodes Bayesian networks (TNBNs) are an alternative to dynamic Bayesian networks for temporal reasoning, with much simpler and more efficient models in some domains. TNBNs are composed of temporal nodes, temporal intervals, and probabilistic dependencies. However, methods for learning this type of model from data have not yet been developed. In this paper, we propose a learning algorithm to obtain the structure and temporal intervals for TNBNs from data. The method consists of three phases: (i) obtain an initial approximation of the intervals, (ii) obtain a structure using a standard algorithm, and (iii) refine the intervals for each temporal node based on a clustering algorithm. We evaluated the method with synthetic data from three different TNBNs of different sizes.
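A hedged sketch of the interval phases, (i) and (iii), of the TNBN learning method just described: cluster observed event delays and turn each cluster into a temporal interval. The data are invented and plain 1-D k-means stands in for the paper's clustering step.

    # Hedged sketch: temporal intervals for a TNBN node via 1-D clustering.
    import numpy as np

    rng = np.random.default_rng(7)
    delays = np.concatenate([rng.normal(5, 1, 40),     # "early" events
                             rng.normal(20, 3, 40)])   # "late" events

    def kmeans_1d(x, k=2, iters=50):
        centers = np.sort(rng.choice(x, k, replace=False))
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            centers = np.array([x[labels == j].mean() for j in range(k)])
        return labels

    labels = kmeans_1d(delays)
    intervals = [(round(delays[labels == j].min(), 1),
                  round(delays[labels == j].max(), 1)) for j in range(2)]
    print("temporal intervals:", intervals)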
Our method obtains the best score using a combined measure of interval quality and prediction accuracy, and a competitive structural quality with lower running times, compared to other related algorithms. We also present a real-world application of the algorithm with data obtained from a combined-cycle power plant in order to diagnose temporal faults. ", "keywords": ["bayesian networks", "temporal reasoning", "learning"]} {"id": "kp20k_training_635", "title": "Solving bilevel programs with the KKT-approach", "abstract": "Bilevel programs (BL) form a special class of optimization problems. They appear in many models in economics, game theory and mathematical physics. BL programs show a more complicated structure than standard finite problems. We study the so-called KKT-approach for solving bilevel problems, where the lower-level minimality condition is replaced by the KKT- or the FJ-condition. This leads to a specially structured mathematical program with complementarity constraints. We analyze the KKT-approach from a generic viewpoint and reveal the advantages and possible drawbacks of this approach for solving BL problems numerically", "keywords": ["bilevel problems", "kkt-condition", "fj-condition", "mathematical programs with complementarity constraints", "genericity", "critical points"]} {"id": "kp20k_training_636", "title": "Modeling and evaluating of typical advanced peer-to-peer botnet", "abstract": "In this paper, we present a general model for an advanced peer-to-peer (P2P) botnet, in which the performance of the botnet can be systematically studied. From the model, we can derive five performance metrics to describe the robustness, security and efficiency of the botnet. Additionally, we analyze the relationship between the performance metrics and the model feature metrics of the botnet, which is helpful for studying the botnet under different model feature metrics. Furthermore, the proposed model can be easily applied to other types of botnets. Finally, taking robustness and security into consideration, an optimization scheme for designing an optimal P2P botnet is proposed", "keywords": ["botnet", "peer-to-peer", "modeling", "optimization scheme"]} {"id": "kp20k_training_637", "title": "Learning a coverage set of maximally general fuzzy rules by rough sets", "abstract": "Expert systems have been widely used in domains where mathematical models cannot be easily built, human experts are not available, or the cost of querying an expert is high. Machine learning or data mining can extract desirable knowledge or interesting patterns from existing databases and ease the development bottleneck in building expert systems. In the past we proposed a method [Hong, T.P., Wang, T.T., Wang, S.L. (2000). Knowledge acquisition from quantitative data using the rough-set theory. Intelligent Data Analysis (in press).], which combined the rough set theory and the fuzzy set theory to produce all possible fuzzy rules from quantitative data. In this paper, we propose a new algorithm to deal with the problem of producing a set of maximally general fuzzy rules for coverage of training examples from quantitative data. A rule is maximally general if no other rule exists that is both more general and of larger confidence. The proposed method first transforms each quantitative value into a fuzzy set of linguistic terms using membership functions and then calculates the fuzzy lower approximations and the fuzzy upper approximations.
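A hedged sketch of the fuzzification step the rule-learning abstract above begins with: each quantitative value is mapped to membership degrees in linguistic terms through triangular membership functions. The attribute, terms, and breakpoints are invented; the rough-set approximation steps that follow are omitted.

    # Hedged sketch: fuzzifying a quantitative value into linguistic terms.
    def triangular(x, a, b, c):
        """Membership of x in a triangle with feet a, c and peak b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

    # Invented linguistic terms for a "temperature" attribute.
    TERMS = {"low": (-10, 0, 15), "medium": (5, 20, 35), "high": (25, 40, 55)}

    def fuzzify(x):
        return {term: round(triangular(x, *abc), 3) for term, abc in TERMS.items()}

    print(fuzzify(18.0))   # {'low': 0.0, 'medium': 0.867, 'high': 0.0}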
The maximally general fuzzy rules are then generated based on these fuzzy approximations by an iterative induction process. The rules derived can then be used to build a prototype knowledge base in a fuzzy expert system", "keywords": ["machine learning", "fuzzy set", "rough set", "data mining", "expert system"]} {"id": "kp20k_training_638", "title": "An ontological conceptualization approach for awareness in domain-independent collaborative modeling systems: Application to a model-driven development method", "abstract": "One of the most important aspects of collaborative systems is the concept of awareness, which refers to the perception and knowledge of the group and its activities. Support for the design and automatic development of awareness mechanisms within collaborative systems is hard to find. Furthermore, awareness conceptualizations are usually partial and differ greatly between the proposals of different authors. In response to these problems, we propose an awareness ontology that conceptualizes some of the most important aspects of awareness in a specific kind of system: collaborative systems for carrying out modeling activities. The awareness ontology brings together and extends a series of ontologies we have developed in the past. The ontology is prepared to better meet the specific implementation needs of a model-driven development approach. In order to validate the usefulness of this ontology, we relate its concepts to the awareness dimensions set out in Gutwin and Greenberg's framework, and we apply the ontology to two systems presently in use", "keywords": ["awareness", "cscw", "collaborative modeling", "collaborative systems development", "ontologies"]} {"id": "kp20k_training_639", "title": "Approximating k-node connected subgraphs via critical graphs", "abstract": "We present two new approximation algorithms for the problem of finding a k-node connected spanning subgraph (directed or undirected) of minimum cost. The best known approximation guarantees for this problem were O(min{k, n/sqrt(n-k)}) for both directed and undirected graphs, and O(ln k) for undirected graphs with n >= 6k^2, where n is the number of nodes in the input graph. Our first algorithm has approximation ratio O((n/(n-k)) ln^2 k), which is O(ln^2 k) except for very large values of k, namely k = n - o(n). This algorithm is based on a new result on l-connected p-critical graphs, which is of independent interest in the context of graph theory. Our second algorithm uses the primal-dual method and has approximation ratio O(sqrt(n) ln k) for all values of n, k. Combining these two gives an algorithm with approximation ratio O(ln k * min{sqrt(k), (n/(n-k)) ln k}), which asymptotically improves the best known approximation guarantee for directed graphs for all values of n, k, and for undirected graphs for k > sqrt(n/6). Moreover, this is the first algorithm that has an approximation guarantee better than Theta(k) for all values of n, k.
Our approximation ratio also provides an upper bound on the integrality gap of the standard LP-relaxation", "keywords": ["connectivity", "approximation", "graphs", "network design"]} {"id": "kp20k_training_640", "title": "Development of an AutoWEP distributed hydrological model and its application to the upstream catchment of the Miyun Reservoir", "abstract": "Based on the physically characterized distributed hydrological modeling scheme WEP-L, a more generalized and expandable method, AutoWEP, has been developed, equipped with updated modules for pre-processing and automatic parameter identification. Sub-basin scale classifications of land use and soil are undertaken by incorporating remote sensing data and geographic information system techniques. In the process of developing the AutoWEP modeling scheme, a new concept of parameter partitioning is proposed, and an automatic delineation of parameter partitions is achieved through programming. The sensitivity analysis algorithm LH-OAT and the parameter optimization algorithm SCE-UA are embedded in the model. Its application to the upstream watershed of the Miyun Reservoir shows that AutoWEP features time savings, improved efficiency and suitable generalization, and yields acceptable simulations over a long time series", "keywords": ["autowep modeling", "parameter identification", "parameter partition", "sensitivity analysis", "parameter optimization"]} {"id": "kp20k_training_641", "title": "towards insider threat detection using web server logs", "abstract": "Malicious insiders represent one of the most difficult categories of threats an organization must consider when mitigating operational risk. Insiders by definition possess elevated privileges; have knowledge about control measures; and may be able to bypass security measures designed to prevent, detect, or react to unauthorized access. In this paper, we discuss our initial research efforts focused on the detection of malicious insiders who exploit internal organizational web servers. The objective of the research is to apply lessons learned in network monitoring domains and enterprise log management to investigate various approaches for detecting insider threat activities using standardized tools and a common event expression framework", "keywords": ["log management", "web server logs", "insider threat", "common event expression", "insider threat detection"]} {"id": "kp20k_training_642", "title": "weight similarity measurement model based, object oriented approach for bug databases mining to detect similar and duplicate bugs", "abstract": "In this paper, data mining is applied to a bug database to discover similar and duplicate bugs. Whenever a new bug is entered into the bug database through the bug tracking system, it is matched against the existing bugs, and duplicate and similar bugs are mined from the bug database. Similar bugs are resolved in almost the same manner. So if a new bug is found to be similar to an existing bug that has already been resolved, its resolution will take less time, since part of the bug analysis is already available; hence it saves time. Traditionally, developers have had to manually identify duplicate bug reports, but this identification process is time-consuming and exacerbates the already high cost of software maintenance. So if similar and duplicate bugs can be found automatically, it is a cost- and time-saving activity.
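As a hedged illustration of the similarity matching the bug-mining abstract is building toward: a minimal tf-idf cosine comparison of a new report against existing ones. Reports and weighting are invented; the paper's weight similarity measurement model is its own, different scheme.

    # Hedged sketch: ranking existing bug reports by similarity to a new one.
    import math
    from collections import Counter

    existing = ["app crashes when saving file to disk",
                "login page throws error on wrong password",
                "crash on save of large file"]
    new_bug = "application crash while saving a file"

    def tfidf_vectors(docs):
        n = len(docs)
        tokenized = [doc.split() for doc in docs]
        df = Counter(t for doc in tokenized for t in set(doc))
        return [{t: c * math.log((1 + n) / (1 + df[t]))    # smoothed idf
                 for t, c in Counter(doc).items()} for doc in tokenized]

    def cosine(u, v):
        dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
        norm = math.sqrt(sum(x * x for x in u.values()))
        norm *= math.sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    vecs = tfidf_vectors(existing + [new_bug])
    for doc, vec in zip(existing, vecs):
        print(f"{cosine(vec, vecs[-1]):.2f}  {doc}")       # duplicate candidates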
Based on this concept, a weight similarity measurement model based, object-oriented approach is described in this paper to discover similar and duplicate bugs in the bug database", "keywords": ["information retrieval", "bug object", "similarity measurement", "duplicate bug"]} {"id": "kp20k_training_643", "title": "recursive modeling for completed code generation", "abstract": "Model-Driven Development is promising for software development because it can reduce the complexity and cost of developing large software systems. The basic idea is the use of different kinds of models during the software development process, transformations between them, and automatic code generation at the end of the development. But unlike the structural parts, fully-automated code generation from the behavior parts is still hard; where it works at all, it is restricted to specific application areas using a domain-specific language (DSL). This paper proposes an approach to model the behavior parts of a system and to embed them into the structural models. The underlying idea is the recursive refinement of activity elements in an activity diagram. With this, the detail of the generated code depends on the depth at which the refinements are done; i.e., if the lowest level of activities is mapped onto activity executors, the completed code can be obtained", "keywords": ["activity executor ", "recursive modeling", "code generation", "mdd"]} {"id": "kp20k_training_644", "title": "On the theoretical comparison of low-bias steady-state estimators", "abstract": "The time-average estimator is typically biased in the context of steady-state simulation, and its bias is of order 1/t, where t represents simulated time. Several \"low-bias\" estimators have been developed that have a lower-order bias and, to first order, the same variance as the time-average. We argue that this kind of first-order comparison is insufficient, and that a second-order asymptotic expansion of the mean square error (MSE) of the estimators is needed. We provide such an expansion for the time-average estimator in both the Markov and regenerative settings. Additionally, we provide a full bias expansion and a second-order MSE expansion for the Meketon-Heidelberger low-bias estimator, and show that its MSE can be asymptotically higher or lower than that of the time-average depending on the problem. The situation is different in the context of parallel steady-state simulation, where a reduction in bias that leaves the first-order variance unaffected is arguably an improvement in performance", "keywords": ["low-bias estimators", "steady-state simulation", "mean-square error expansion"]} {"id": "kp20k_training_645", "title": "The multisymplectic numerical method for Gross-Pitaevskii equation", "abstract": "For a Bose-Einstein condensate placed in a rotating trap and confined along the z-axis, a multisymplectic difference scheme is constructed in this paper to investigate the evolution of vortices. First, we look for a steady-state solution of the imaginary-time G-P equation.
Then, we numerically study the vortices' development in real time, starting with the imaginary-time solution as the initial value", "keywords": ["multisymplectic methods", "bose-einstein condensate", "two-dimensional g-p equation", "vortices"]} {"id": "kp20k_training_646", "title": "SYSTEM-DESIGN, DATA-COLLECTION AND EVALUATION OF A SPEECH DIALOG SYSTEM", "abstract": "This paper describes design issues of a speech dialogue system, the evaluation of the system, and the collection of spontaneous speech in a transportation guidance domain. As it is difficult to collect spontaneous speech and to use a real system for the collection and evaluation, the phenomena related to dialogues have not been quantitatively clarified yet. The authors constructed a speech dialogue system which operates in almost real time, with acceptable recognition accuracy and flexible dialogue control. The system was used for spontaneous speech collection in a transportation guidance domain. The system performance evaluated in the domain is an understanding rate of 84.2% for utterances within the predefined grammar and lexicon. Some statistics of the collected spontaneous speech are also given", "keywords": ["speech dialog system", "spontaneous speech", "continuous speech recognition", "speech understanding"]} {"id": "kp20k_training_647", "title": "abstract convex evolutionary search", "abstract": "Geometric crossover is a formal class of crossovers which includes many well-known recombination operators across representations. In this paper, we present a general result showing that all evolutionary algorithms using geometric crossover with no mutation perform the same form of convex search regardless of the underlying representation, the specific selection mechanism, the specific offspring distribution, the specific search space, and the problem at hand. We then start investigating a few representation/space-independent geometric conditions on the fitness landscape - various forms of generalized concavity - that, when matched with the convex evolutionary search, guarantee, to different extents, improvement of offspring over parents for any choice of parents. This is a first step towards showing that the convexity relation between search and landscape may play an important role in explaining the performance of evolutionary algorithms in a general setting across representations", "keywords": ["representations", "convex search", "evolutionary algorithms"]} {"id": "kp20k_training_648", "title": "Area measurement of large closed regions with a mobile robot", "abstract": "How can a mobile robot measure the area of a closed region that is beyond its immediate sensing range? This problem, which we name blind area measurement, is inspired by scout worker ants who assess potential nest cavities. We first review the insect studies that have shown that these scouts, who work in the dark, seem to assess arbitrary closed spaces and reliably reject nest sites that are too small for the colony. We briefly describe the hypothesis that these scouts use \"Buffon's needle method\" to measure the area of the nest. Then we evaluate and analyze this method for mobile robots to measure large closed regions. We use a simulated mobile robot system to evaluate the performance of the method through systematic experiments. The results showed that the method can reliably measure the area of large and rather open closed regions regardless of their shape and compactness.
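A hedged sketch of the "Buffon's needle" hypothesis referred to above: if a first track of total length L1 is crossed N times while walking a second, independent track of length L2 inside a cavity of area A, then E[N] is about 2*L1*L2/(pi*A), so A can be estimated as 2*L1*L2/(pi*N). The Monte Carlo check below, on a known square with unit-length segments, is an invented verification, and edge effects are ignored.

    # Hedged sketch: Buffon-style area estimation from track intersections.
    import math
    import random

    random.seed(1)
    SIDE = 10.0                           # known square, area 100, for checking

    def random_segments(count, seg_len=1.0):
        segs = []
        for _ in range(count):
            x, y = random.uniform(0, SIDE), random.uniform(0, SIDE)
            th = random.uniform(0, math.pi)
            segs.append(((x, y), (x + seg_len * math.cos(th),
                                  y + seg_len * math.sin(th))))
        return segs

    def crosses(seg1, seg2):
        (p, q), (r, s) = seg1, seg2
        d = lambda a, b, c: (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        return d(p, q, r) * d(p, q, s) < 0 and d(r, s, p) * d(r, s, q) < 0

    track1, track2 = random_segments(400), random_segments(400)
    n = sum(crosses(a, b) for a in track1 for b in track2)
    L1 = L2 = 400 * 1.0
    print("estimated area:", 2 * L1 * L2 / (math.pi * n), "(true: 100.0)")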
Moreover, the method's performance seems to be undisturbed by the existence of objects and by partial barriers placed inside these regions. Finally, at a smaller scale, we partially verified some of these results on a real mobile robot platform", "keywords": ["mobile robot", "stigmergy", "area measurement", "ants", "buffon's needle", "area coverage"]} {"id": "kp20k_training_649", "title": "Minimum pilot power for service coverage in WCDMA networks", "abstract": "Pilot power management is an important issue for efficient resource utilization in WCDMA networks. In this paper, we consider the problem of minimizing pilot power subject to a coverage constraint. The constraint can be used to model various levels of coverage requirement, among which full coverage is a special case. The pilot power minimization problem is NP-hard, as it generalizes the set covering problem. Our solution approach for this problem consists of mathematical programming models and methods. We present a linear-integer mathematical formulation for the problem. To solve the problem for large-scale networks, we propose a column generation method embedded into an iterative rounding procedure. We apply the proposed method to a range of test networks originating from realistic network planning scenarios, and compare the results to those obtained by two ad hoc approaches. The numerical experiments show that our algorithm is able to find near-optimal solutions with a reasonable amount of computing effort for large networks. Moreover, optimized pilot power considerably outperforms the ad hoc approaches, demonstrating that efficient pilot power management is an important component of radio resource optimization. As another part of our numerical study, we examine the trade-off between service coverage and pilot power consumption", "keywords": ["wcdma", "pilot power", "coverage", "optimization"]} {"id": "kp20k_training_650", "title": "ASYMPTOTICALLY STABLE MULTI-VALUED MANY-TO-MANY ASSOCIATIVE MEMORY NEURAL NETWORK AND ITS APPLICATION IN IMAGE RETRIEVAL", "abstract": "As an important artificial neural network, the associative memory model can be employed to mimic human thinking and machine intelligence. In this paper, first, a multi-valued many-to-many Gaussian associative memory model (M^3GAM) is proposed by introducing the Gaussian unidirectional associative memory model (GUAM) and the Gaussian bidirectional associative memory model (GBAM) into Hattori et al.'s multi-module associative memory model ((MMA)^2). Second, the M^3GAM's asymptotical stability is proved theoretically in both synchronous and asynchronous update modes, which ensures that the stored patterns become the M^3GAM's stable points. Third, by substituting the general similarity metric for the negative squared Euclidean distance in M^3GAM, the generalized multi-valued many-to-many Gaussian associative memory model (GM^3GAM) is presented, which makes the M^3GAM a special case of it. Finally, we investigate the M^3GAM's application in association-based image retrieval, and the computer simulation results verify the M^3GAM's robust performance", "keywords": ["artificial neural network", "associative memory model", "asymptotical stability", "similarity metric", "association-based image retrieval"]} {"id": "kp20k_training_651", "title": "Identity-Based Threshold Proxy Signature from Bilinear Pairings", "abstract": "Delegation of rights is a common practice in the real world.
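As a hedged aside on the pilot-power problem above, which generalizes set covering: a tiny greedy heuristic that repeatedly picks the (cell, power) option covering the most still-uncovered test points per unit power. The instance is invented and the greedy rule only illustrates the covering structure; the paper itself uses a column generation method with iterative rounding.

    # Hedged sketch: greedy covering of test points by (cell, power) options.
    # (A real model picks at most one power level per cell.)
    options = {                      # (cell, pilot power) -> covered points
        ("A", 1.0): {1, 2},          # invented coverage sets and costs
        ("A", 2.0): {1, 2, 3},
        ("B", 1.0): {3, 4},
        ("B", 2.0): {3, 4, 5},
        ("C", 1.5): {2, 5, 6},
    }
    universe = set().union(*options.values())

    uncovered, chosen = set(universe), []
    while uncovered:
        (cell, power), pts = min(
            options.items(),
            key=lambda kv: kv[0][1] / max(len(kv[1] & uncovered), 1e-9))
        if not pts & uncovered:
            break                    # no remaining option helps
        chosen.append((cell, power))
        uncovered -= pts
    print("chosen pilot settings:", chosen)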
We present two identity-based threshold proxy signature schemes, which allow an original signer to delegate her signing capability to a group of n proxy signers, and require a consensus of t or more proxy signers in order to generate a valid signature. In addition to being identity-based, privacy protection for proxy signers and security assurance are two distinct features of this work. Our first scheme provides partial privacy protection to proxy signers such that all signers' identities are revealed, whereas none of the t participating signers is specified. On the other hand, all proxy signers remain anonymous in the second scheme. This provides full privacy protection to all proxy signers; however, each valid signature contains a tag that allows one to trace all the participating proxy signers. Both of our proposed schemes satisfy unforgeability under chosen-message attack, and satisfy many other necessary conditions for proxy signatures", "keywords": ["identity-based signature", "proxy signature", "threshold", "privacy protection", "authentication"]} {"id": "kp20k_training_652", "title": "Semi-autonomous navigation of a robotic wheelchair", "abstract": "The present work considers the development of a wheelchair for people with special needs, which is capable of navigating semi-autonomously within its workspace. This system is expected to prove useful to people with impaired mobility and limited fine motor control of the upper extremities. Among the implemented behaviors of this robotic system are the avoidance of obstacles, the motion in the middle of the free space, and the following of a moving target specified by the user (e.g., a person walking in front of the wheelchair). The wheelchair is equipped with sonars, which are used for distance measurement in preselected critical directions, and with a panoramic camera with a 360-degree field of view, which is used for following a moving target. After suitably processing the color sequence of the panoramic images using the color histogram of the desired target, the orientation of the target with respect to the wheelchair is determined, while its distance is determined by the sonars. The motion control laws developed for the system use the sensory data and take into account the non-holonomic kinematic constraints of the wheelchair, in order to guarantee certain desired features of the closed-loop system, such as stability. Moreover, they are as simplified as possible to minimize implementation requirements. An experimental prototype has been developed at ICS-FORTH, based on a commercially available wheelchair. The sensors, the computing power and the electronics needed for the implementation of the navigation behaviors and of the user interfaces (touch screen, voice commands) were developed as add-on modules and integrated with the wheelchair", "keywords": ["wheelchairs", "robot navigation", "non-holonomic mobile robots", "person following", "sensor-based control", "panoramic cameras"]} {"id": "kp20k_training_653", "title": "Learning protein secondary structure from sequential and relational data", "abstract": "We propose a method for sequential supervised learning that exploits explicit knowledge of short- and long-range dependencies. The architecture consists of a recursive and bi-directional neural network that takes as input a sequence along with an associated interaction graph. The interaction graph models (partial) knowledge about long-range dependency relations.
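A hedged aside on the t-of-n mechanism in the threshold proxy signature abstract above: threshold schemes are classically built from Shamir secret sharing, in which a degree-(t-1) polynomial hides the secret and any t shares recover it by Lagrange interpolation at zero. The toy prime field below is far too small for real use, and the paper's pairing-based construction is much richer.

    # Hedged sketch: Shamir t-of-n secret sharing, the classic threshold block.
    import random

    P = 2**31 - 1                      # toy Mersenne prime field
    random.seed(3)

    def share(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num = den = 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P              # Lagrange basis at x = 0
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, P - 2, P)) % P
        return secret

    shares = share(secret=123456789, t=3, n=5)
    print(reconstruct(shares[:3]), reconstruct(shares[2:5]))   # any 3 suffice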
We tested the method on the prediction of protein secondary structure, a task in which relations due to beta-strand pairings and other spatial proximities are known to have a significant effect on prediction accuracy. In this particular task, interactions can be derived from knowledge of protein contact maps at the residue level. Our results show that prediction accuracy can be significantly boosted by the integration of interaction graphs", "keywords": ["recursive neural networks", "relational learning", "protein secondary structure prediction", "protein contact maps"]} {"id": "kp20k_training_654", "title": "Document replication strategies for geographically distributed web search engines", "abstract": "Large-scale web search engines are composed of multiple data centers that are geographically distant from each other. Typically, a user query is processed in a data center that is geographically close to the origin of the query, over a replica of the entire web index. Compared to a centralized, single-center search engine, this architecture offers lower query response times as the network latencies between the users and data centers are reduced. However, it does not scale well with increasing index sizes and query traffic volumes because queries are evaluated on the entire web index, which has to be replicated and maintained in all data centers. As a remedy to this scalability problem, we propose a document replication framework in which documents are selectively replicated on data centers based on regional user interests. Within this framework, we propose three different document replication strategies, each optimizing a different objective: reducing the potential search quality loss, the average query response time, or the total query workload of the search system. For all three strategies, we consider two alternative types of capacity constraints on the index sizes of data centers. Moreover, we investigate the performance impact of query forwarding and result caching. We evaluate our strategies via detailed simulations, using a large query log and a document collection obtained from the Yahoo! web search engine", "keywords": ["web search", "distributed information retrieval", "document replication", "query processing", "query forwarding", "result caching"]} {"id": "kp20k_training_655", "title": "Collision correction using a cross-layer design architecture for dedicated short range communications vehicle safety messaging", "abstract": "This paper presents a new physical (PHY) and medium access control (MAC) cross-layer frame collision correction (CC) architecture for correcting Dedicated Short Range Communications (DSRC) safety messages. Conditions suitable for the use of this design are presented, which can be used for optimization. At its basic level, the CC at the PHY uses a new decision-making block that uses information from the MAC layer for the channel estimator and equalizer. This requires a cache of previously received frames, and pre-announcement of frame repetitions by the MAC. We present the theoretical equations behind the CC mechanism, and describe the components required to implement the cross-layer CC using deployment and sequence diagrams.
Simulation results show that, especially under high user load, reception reliability of the DSRC safety messages increases and the packet error rate (PER) decreases", "keywords": ["vehicle safety", "cross-layer design", "collision mitigation", "physical layer", "dsrc", "ofdm"]} {"id": "kp20k_training_656", "title": "A general model of unit testing efficacy", "abstract": "Much of software engineering is targeted towards identifying and removing existing defects while preventing the injection of new ones. Defect management is therefore one important software development process whose principal aim is to ensure that the software produced reaches the required quality standard before it is shipped into the marketplace. In this paper, we report on the results of research conducted to develop a predictive model of the efficacy of one important defect management technique, that of unit testing. We have taken an empirical approach. We commence with a number of assumptions that led to a theoretical model which describes the relationship between effort expended and the number of defects remaining in a software code module tested (the latter measure being termed correctness). This model is general enough to capture the possibility that debugging of a software defect is not perfect and could lead to new defects being injected. The model is examined empirically against actual data and validated as a good predictive model under specific conditions. The work has been done in such a way that models are derived not only for the case of overall correctness but also for specific types of correctness such as correctness arising from the removal of defects contributing to shortcomings in reliability (R-type), functionality (F-type), usability (U-type) and maintainability (M-type) aspects of the program subject to defect management", "keywords": ["software process", "software quality", "process efficacy", "unit testing efficacy model", "defect management", "functionality", "reliability", "usability", "maintainability"]} {"id": "kp20k_training_657", "title": "Constitutive modeling of materials and contacts using the disturbed state concept: Part 1 Background and analysis", "abstract": "Computer methods have opened a new era for accurate and economic analysis and design of engineering problems. They account for many significant factors such as arbitrary geometries, nonhomogeneities in material composition, complex boundary conditions, nonlinear material behavior (constitutive modeling) and complex loading conditions, which were difficult to include in conventional and closed-form solution procedures. Constitutive modeling characterizes the mechanical behavior of solids and contacts (e.g. interfaces and joints), and plays perhaps the most important role for realistic solutions from procedures in computational mechanics. A great number of constitutive models, from simple to advanced, have been proposed. Most of them account for specific characteristics of the material. However, a deforming material may experience, simultaneously, many characteristics such as elastic, plastic and creep strains, different loading (stress) paths, volume change under shear stress, microcracking leading to fracture and failure, strain softening or degradation, and healing or strengthening. Hence, there is a need for developing unified models that account for these characteristics.
The main objective of these two papers is to present a brief review of the available constitutive models, and identify their capabilities and limitations; then a novel and unified approach, called the disturbed state concept (DSC) with hierarchical single surface (HISS) plasticity, is presented, including its theoretical background, constitutive parameters and their determination, and validation at the test specimen and boundary value problem levels. The general capabilities of the DSC/HISS approach are emphasized by its application to a wide range of materials and contacts (interfaces and joints). Because of its generality, the DSC contains many previous models as special cases. The presentation is divided into two papers. This paper (Part 1) contains the review of various models, and then a description of the DSC/HISS model and its analysis for issues such as mesh dependence and localization. Part 1 also demonstrates the capability of the DSC/HISS model to define the behavior of both solids and contacts. Validations of the DSC/HISS model at the specimen and boundary value problem levels for a wide range of materials and contacts are included in the companion paper, Part 2. The idea of the DSC is considered to be relatively simple, and it can be easily implemented in computer procedures. It is believed that the DSC can provide a realistic and unified approach for constitutive modeling for a wide range of materials and contacts", "keywords": ["constitutive modeling", "solids", "interfaces", "unified dsc model", "computer methods", "applications"]} {"id": "kp20k_training_658", "title": "An Efficient Neumann Series-Based Algorithm for Thermoacoustic and Photoacoustic Tomography with Variable Sound Speed", "abstract": "We present an efficient algorithm for reconstructing an unknown source in thermoacoustic and photoacoustic tomography based on the recent advances in understanding the theoretical nature of the problem. We work with variable sound speeds that also might be discontinuous across some surface. The latter problem arises in brain imaging. The algorithmic development is based on an explicit formula in the form of a Neumann series. We present numerical examples with nontrapping, trapping, and piecewise smooth speeds, as well as examples with data on a part of the boundary. These numerical examples demonstrate the robust performance of the Neumann series-based algorithm", "keywords": ["thermoacoustic tomography", "photoacoustic tomography", "inverse problems", "neumann series", "variable sound speed"]} {"id": "kp20k_training_659", "title": "Documentary genre and digital recordkeeping: red herring or a way forward", "abstract": "The purpose of this paper is to provide a preliminary assessment of the utility of the genre concept for digital recordkeeping. The exponential growth in the volume of records created since the 1940s has been a key motivator for the development of strategies that do not involve the review or processing of individual documents or files. Automation now allows processes at a level of granularity that is rarely, if at all, possible in the case of manual processes, without loss of cognisance of context. For this reason, it is timely to revisit concepts that may have been disregarded because of a perceived limited effectiveness in contributing anything to theory or practice.
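For readers unfamiliar with the device named in the thermoacoustic/photoacoustic abstract above, the generic Neumann series (a standard fact of operator theory, not the paper's specific error operator) solves \((I - K)x = b\) whenever \(\|K\| < 1\):

\[
(I - K)^{-1} = \sum_{j=0}^{\infty} K^{j}, \qquad x_m = \sum_{j=0}^{m} K^{j} b \;\longrightarrow\; x \quad (m \to \infty).
\]

Truncating the series yields an iterative reconstruction whose error decays geometrically; the cited work establishes an expansion of this form for the source term even when the sound speed is variable and possibly discontinuous.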
In this paper, the genre concept and its employability in the management of current and archival digital records are considered, as a form of social contextualisation of a document and as an attractive entry point of granularity at which to implement automation of appraisal processes. Particular attention is paid to the structurational view of genre and its connections with recordkeeping theory", "keywords": ["genre", "structurational theory", "recordkeeping continuum"]} {"id": "kp20k_training_660", "title": "Existence and multiplicity of positive periodic solutions for a class of higher-dimension functional differential equations with impulses", "abstract": "This paper deals with the existence of multiple periodic solutions for n-dimensional functional differential equations with impulses. By employing the Krasnoselskii fixed point theorem, we obtain some easily verifiable sufficient criteria which extend previous results. ", "keywords": ["positive periodic solution", "functional differential equations", "impulse", "the krasnoselskii fixed point theorem"]} {"id": "kp20k_training_661", "title": "Modelling and querying geographical data warehouses", "abstract": "A number of proposals for integrating geographical (Geographical Information Systems-GIS) and multidimensional (data warehouse-DW and online analytical processing-OLAP) processing are found in the database literature. However, most of the current approaches do not take into account the use of a GDW (geographical data warehouse) metamodel or query language to make available the simultaneous specification of multidimensional and spatial operators. To address this, this paper discusses the UML class diagram of a GDW metamodel and proposes its formal specifications. We then present a formal metamodel for a geographical data cube and propose the Geographical Multidimensional Query Language (GeoMDQL) as well. GeoMDQL is based on well-known standards such as the MultiDimensional eXpressions (MDX) language and OGC simple features specification for SQL and has been specifically defined for spatial OLAP environments based on a GDW. We also present the GeoMDQL syntax and a discussion regarding the taxonomy of GeoMDQL query types. Additionally, aspects related to the GeoMDQL architecture implementation are described, along with a case study involving the Brazilian public healthcare system in order to illustrate the proposed query language. ", "keywords": ["solar", "geographical data warehouse", "geographical and multidimensional query language "]} {"id": "kp20k_training_662", "title": "DDAS: Distance and direction awareness system for intelligent vehicles", "abstract": "Wireless technology has been widely used for applications of wireless Internet access. With wireless transmission technology now mature, the new demand for wireless applications is moving toward deploying wireless devices on transportation systems such as buses, trains and vehicles. Statistics of car accident cases show that car accidents are often caused by drivers failing to notice other approaching cars while driving. Without the assistance of an automotive personal computer system (also called an Auto PC), a driver moving at high speed must rely on himself/herself to look out for all surrounding vehicles using limited vision and acoustic recognition.
If the Auto PC is able to provide useful surrounding information, such as the directions and distances to nearby vehicles, to drivers, unnecessary collisions could obviously be avoided, especially in cases of changing lanes, crossing intersections and making turns. In this paper, we introduce the concept of an automatic distance and direction awareness system (DDAS) and describe the designed embedded DDAS integrated with three-wheel and four-wheel robot cars", "keywords": ["embedded", "smart antenna", "vehicle", "wireless", "zigbee"]} {"id": "kp20k_training_663", "title": "Investigating models for preservice teachers' use of technology to support student-centered learning", "abstract": "The study addressed two limitations of previous research on factors related to teachers' integration of technology in their teaching. It attempted to test a structural equation model (SEM) of the relationships among a set of variables influencing preservice teachers' use of technology specifically to support student-centered learning. A review of the literature led to a path model that provided the design and analysis for the study, which involved 206 preservice teachers in the United States. The results show that the proposed model had a moderate fit to the observed data, and a more parsimonious model was found to have a better fit. In addition, preservice teachers' self-efficacy of teaching with technology had the strongest influence on technology use, which was mediated by their perceived value of teaching and learning with technology. School contextual factors had a moderate influence on technology use. Moreover, the effect of preservice teachers' training on student-centered technology use was mediated by both perceived value and self-efficacy of technology. The implications for teacher preparation include close collaboration between the teacher education program and field experience, focusing on specific technology uses ", "keywords": ["elementary education", "improving classroom teaching", "pedagogical issues", "secondary education"]} {"id": "kp20k_training_664", "title": "a risc approach to process groups", "abstract": "ISIS [1], developed at Cornell University, is a system for building applications consisting of cooperating, distributed processes. Group management and group communication are two basic building blocks provided by ISIS. ISIS has been very successful, and there is currently a demand for a version that will run on many different environments and transport protocols, and will scale to many process groups. Furthermore, performance is an important issue. For this purpose, ISIS is being redesigned and rebuilt from scratch [2]. Of particular importance to us is getting the new ISIS system to run well on modern microkernel technology, notably MACH [3] and Chorus [4]. The basic reasoning behind these plans is that microkernels appear to offer satisfactory support for memory management and communication between processes on the same machine, but that support for applications that run on multiple machines is weak. The current IPC mechanisms are adequate only for the simpler distributed applications, as they do not address any of the internal management issues of distribution. The new ISIS system has several well-defined layers. The lowest layers, which implement multicast transport and failure detection, are near completion and currently run on SUN OS using SUN LWP threads, on MACH using C Threads, and on the x-kernel [5].
This system can use several different network protocols at the same time, such as IP, UDP (with or without multicast support), and raw Ethernet. This enables processes on SUN OS, MACH, and Chorus to multicast among each other, even though the environments are very dissimilar. The system makes use of available hardware multicast if possible. It also queues messages if a backlog appears, so that multiple messages may be packed together in a single packet. Using this strategy, the number of messages per second can become very large, and in the current (simple) implementation about 10,000 per second can be sent between distributed SUN OS user processes, a figure that approaches the speed of local light-weight remote procedure call mechanisms. (The current round-trip time on SUN OS over Ethernet is about 3 milliseconds.)", "keywords": ["communication", "network protocol", "applications", "use", "scale", "intern", "performance", "building block", "addressing", "multicast", "scratch", "queue", "timing", "group", "distributed application", "ethernet", "lighting", "user", "locality", "failure", "management", "thread", "transport protocol", "reasoning", "strategies", "detection", "systems", "technologies", "environments", "message", "implementation", "process", "memorialized", "layer", "support", "version", "hardware", "distributed", "transport", "group communication", "completeness"]} {"id": "kp20k_training_665", "title": "ON AFFINE SCALING ALGORITHMS FOR NONCONVEX QUADRATIC-PROGRAMMING", "abstract": "We investigate the use of interior algorithms, especially the affine-scaling algorithm, to solve nonconvex - indefinite or negative definite - quadratic programming (QP) problems. Although the nonconvex QP with a polytope constraint is a \"hard\" problem, we show that the problem with an ellipsoidal constraint is \"easy\". When the \"hard\" QP is solved by successively solving the \"easy\" QP, the sequence of points monotonically converges to a feasible point satisfying both the first and the second order optimality conditions", "keywords": ["nonconvex quadratic programming", "affine-scaling algorithm", "interior algorithms", "np-hard problems"]} {"id": "kp20k_training_666", "title": "Multi-agent simulation of group behavior in E-Government policy decision", "abstract": "To study complex group behavior in E-Government policy decisions, this study proposes a multi-agent qualitative simulation approach using EGGBM (E-Government Group Behavior Model). Causal reasoning is employed to analyze it from a systems perspective. Then, a multi-agent simulation decision system based on Java-Repast is developed. Moreover, three validation experiments are designed to show that EGGBM can faithfully represent the actual situation. Finally, an application example is given to show that this method can help policy-makers choose appropriate policies to improve the level of accepting information technology (LAIT) of groups.
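The message-packing strategy described in the ISIS abstract above can be pictured with the following sketch (the length-prefixed framing and the 1400-byte payload budget are our illustrative assumptions, not ISIS internals; messages are assumed smaller than the budget):

```python
# Hypothetical sketch: drain a backlog of small messages into as few
# packets as possible, so per-packet overhead is amortized over many
# messages when the sender falls behind.
import struct

MAX_PACKET = 1400  # assumed payload budget per packet, in bytes

def pack_backlog(queue):
    """queue: list of byte strings; returns a list of packed packets.
    Each message is length-prefixed so the receiver can split them again."""
    packets, current, size = [], bytearray(), 0
    for msg in queue:
        framed = struct.pack("!H", len(msg)) + msg  # 2-byte length prefix
        if size + len(framed) > MAX_PACKET and current:
            packets.append(bytes(current))          # flush the full packet
            current, size = bytearray(), 0
        current += framed
        size += len(framed)
    if current:
        packets.append(bytes(current))
    return packets
```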
It is shown that this approach offers a promising new avenue for the study of group behavior in governmental organizations", "keywords": ["group behavior", "repast", "multi-agent", "e-government", "causal reasoning"]} {"id": "kp20k_training_667", "title": "Independent component analysis for unaveraged single-trial MEG data decomposition and single-dipole source localization", "abstract": "This paper presents a novel method for decomposing and localizing unaveraged single-trial magnetoencephalographic data based on the independent component analysis (ICA) approach associated with pre- and post-processing techniques. In the pre-processing stage, recorded single-trial raw data are first decomposed into uncorrelated signals with the reduction of high-power additive noise. In the stage of source separation, the decorrelated source signals are further decomposed into independent source components. In the post-processing stage, we perform a source localization procedure to seek a single-dipole map of decomposed individual source components, e.g., evoked responses. The first results of applying the proposed robust ICA approach to single-trial data with phantom and auditory evoked field tasks indicate the following. (1) A source signal is successfully extracted from unaveraged single-trial phantom data. The accuracy of dipole estimation for the decomposed source is even better than that obtained by averaging over all trials. (2) Not only can the behavior and location of individual neuronal sources be obtained, but the activity strength (amplitude) of the evoked response in each stimulation trial can also be obtained and visualized. Moreover, the dynamics of individual neuronal sources, such as the trial-by-trial variations of the amplitude and location, can be observed", "keywords": ["magnetoencephalography ", "single-trial data analysis", "phantom experiment", "auditory evoked fields ", "robust pre-whitening technique", "independent component analysis ", "single-dipole source localization"]} {"id": "kp20k_training_668", "title": "The effects of learning style and hypermedia prior experience on behavioral disorders knowledge and time on task: a case-based hypermedia environment", "abstract": "This study involved 17 graduate students enrolled in a Behavioral Disorders course. As a part of the course, they engaged in an extensive case-based hypermedia program designed to enhance their ability to solve student emotional and behavioral problems. Results include: (1) students increased their knowledge about behavioral disorders; (2) those students with more hypermedia experience spent more time using the hypermedia program; (3) those students who acquired greater knowledge also wrote better student reports; and (4) students, regardless of learning style (as measured by Kolb's Learning Style Inventory), benefited equally from using the hypermedia program", "keywords": ["learning style", "hypermedia", "behavioral disorders"]} {"id": "kp20k_training_669", "title": "Rough Sets, Coverings and Incomplete Information", "abstract": "Rough sets are often induced by descriptions of objects based on the precise observations of an insufficient number of attributes. In this paper, we study generalizations of rough sets to incomplete information systems, involving imprecise observations of attributes. The precise role of covering-based approximations of sets that extend the standard rough sets in the presence of incomplete information about attribute values is described.
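The standard (Pawlak) approximations that the rough-sets abstract above generalizes to coverings can be computed directly; the snippet below is a self-contained illustration of that baseline, where granularity alone already makes a set ill-known:

```python
def approximations(classes, X):
    """Pawlak rough-set approximations of X under a partition.
    classes: disjoint sets (equivalence classes) covering the universe."""
    X = set(X)
    lower = set().union(*[c for c in classes if c <= X])  # classes inside X
    upper = set().union(*[c for c in classes if c & X])   # classes meeting X
    return lower, upper

classes = [{1, 2}, {3, 4}, {5}]
lower, upper = approximations(classes, {1, 2, 3})
print(lower, upper)             # {1, 2} {1, 2, 3, 4}
print(len(lower) / len(upper))  # Pawlak accuracy measure: 0.5
```

Under a covering, the classes may overlap and the attribute values themselves may be ill-known, which is precisely why the paper brackets each of these approximations between two nested sets.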
In this setting, a covering encodes a set of possible partitions of the set of objects. A natural semantics of two possible generalisations of rough sets to the case of a covering (or a non-transitive tolerance relation) is laid bare. It is shown that uncertainty due to granularity of the description of sets by attributes and uncertainty due to incomplete information are superposed, whereby upper and lower approximations themselves (in Pawlak's sense) become ill-known, each being bracketed by two nested sets. The notion of measure of accuracy is extended to the incomplete information setting, and the generalization of this construct to fuzzy attribute mappings is outlined", "keywords": ["rough sets", "possibility theory", "covering", "fuzzy sets"]} {"id": "kp20k_training_670", "title": "exploiting power budgeting in thermal-aware dynamic placement for reconfigurable systems", "abstract": "In this paper, a novel thermal-aware dynamic placement planner for reconfigurable systems is presented, which targets transient temperature reduction. Rather than solving time-consuming differential equations to obtain the hotspots, we propose a fast and accurate heuristic model based on power budgeting to plan the dynamic placements of the design statically, while considering the boundary conditions. Based on our heuristic model, we have developed a fast optimization technique to plan the dynamic placements at design time. Our results indicate that our technique is two orders of magnitude faster while the quality of the placements generated in terms of temperature and interconnection overhead is the same, if not better, compared to the thermal-aware placement techniques which perform thermal simulations inside the search engine", "keywords": ["temperature", "dynamic reconfiguration", "reconfigurable systems", "placement", "computer aided design"]} {"id": "kp20k_training_671", "title": "Optimized independent components for parameter regression", "abstract": "In this paper, a modified ICR algorithm is proposed for quality prediction purposes. The disadvantage of original Independent Component Regression (ICR) is that the extracted Independent Components (ICs) are not informative for quality prediction and interpretation. In the proposed method, to enhance the causal relationship between the extracted ICs and quality variables, a dual-objective optimization which combines the cost function w^T X^T Y v in Partial Least Squares (PLS) and the approximations of negentropy in Independent Component Analysis (ICA) is constructed in the first step for feature extraction. It simultaneously considers both the quality-correlation and the independence, and then the ICR-MLR (Multiple Linear Regression) method is used to obtain the regression coefficients. The proposed method is applied to the quality prediction in continuous annealing process and Tennessee Eastman process. Applications indicate that the proposed approach effectively captures the relations in the process variables and the use of the proposed method instead of the original PLS and ICR improves the regression matching and prediction ability. ", "keywords": ["pls", "ica", "negentropy", "feature extraction"]} {"id": "kp20k_training_672", "title": "Discriminant Bag of Words based representation for human action recognition", "abstract": "Human action recognition based on Bag of Words representation. Discriminant codebook learning for better action class discrimination.
Unified framework for the determination of both the optimized codebook and linear data projections", "keywords": ["bag of words", "discriminant learning", "codebook learning"]} {"id": "kp20k_training_673", "title": "Unsupervised connectionist algorithms for clustering an environmental data set: A comparison", "abstract": "Various unsupervised algorithms for vector quantization can be found in the literature. Being based on different assumptions, they do not all yield exactly the same results on the same problem. To better understand these differences, this article presents an evaluation of some unsupervised neural networks, considered among the most useful for quantization, in the context of a real-world problem: radioelectric wave propagation. Radio wave propagation is highly dependent upon environmental characteristics (e.g. those of the city, country, mountains, etc.). Within the framework of a cell net planning its radiocommunication strategy, we are interested in determining a set of environmental classes, sufficiently homogeneous, to which a specific prediction model of radio electrical field can be applied. Of particular interest are techniques that allow improved analysis of results. Firstly, Mahalanobis distance, taking data correlation into account, is used to make assignments. Secondly, studies of class dispersion and homogeneity, using both a data structure mapping representation and statistical analysis, emphasize the importance of the global properties of each algorithm. In conclusion, we discuss the advantages and disadvantages of each method on real problems", "keywords": ["neural networks", "unsupervised learning", "vector quantization", "radiocommunication"]} {"id": "kp20k_training_674", "title": "Preference-based multi-objective evolutionary algorithms for power-aware application mapping on NoC platforms", "abstract": "Networks-on-chip (NoC) are considered the next generation of communication infrastructure in embedded systems. In the platform-based design methodology, an application is implemented by a set of collaborative intellectual property (IP) blocks. The selection of the most suited set of IPs as well as their physical mapping onto the NoC infrastructure to efficiently implement the application at hand are two hard combinatorial problems that occur during the synthesis process of NoC-based embedded system implementation. In this paper, we propose an innovative preference-based multi-objective evolutionary methodology to perform the assignment and mapping stages. We use the well-known and efficient multi-objective evolutionary algorithms NSGA-II and microGA as kernels. The optimization processes of assignment and mapping are both driven by the minimization of the required silicon area and imposed execution time of the application, considering that the decision maker's preference is a pre-specified value of the overall power consumption of the implementation", "keywords": ["network-on-chip", "ip assignment", "ip mapping", "multi-objective design"]} {"id": "kp20k_training_675", "title": "2D dry granular free-surface transient flow over complex topography with obstacles. Part II: Numerical predictions of fluid structures and benchmarking", "abstract": "Dense granular flows are present in geophysics and in several industrial processes, which has led to an increasing interest in the knowledge and understanding of the physics that govern their propagation.
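For the Bag-of-Words action-recognition highlights above, the basic encoding step they build on can be sketched as follows (a generic baseline with hypothetical names; the discriminant learning of the codebook and projections is the paper's contribution and is not reproduced here):

```python
# Generic BoW encoding: assign local descriptors to nearest codewords and
# represent the whole video as a normalized codeword histogram.
import numpy as np

def bow_histogram(descriptors, codebook):
    """descriptors: (N, d) local features; codebook: (K, d) codewords."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    assignments = d2.argmin(axis=1)  # nearest codeword per descriptor
    hist = np.bincount(assignments, minlength=len(codebook)).astype(float)
    return hist / hist.sum()         # L1-normalized BoW vector
```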
For this reason, a wide range of laboratory experiments on gravity-driven flows have been carried out during the last two decades. The present work is focused on geomorphological processes and, following previous work, a series of laboratory studies which constitute a further step in mimicking natural phenomena are described and simulated. Three situations are considered with some common properties: a two-dimensional configuration, variable slope of the topography and the presence of obstacles. The setup and measurement technique employed during the development of these experiments are explained in detail in the companion work. The first experiment is based on a single obstacle, the second one is performed against multiple obstacles and the third one studies the influence of a dike on which overtopping occurs. Due to the impact of the flow against the obstacles, fast moving shocks appear, and a variety of secondary waves emerge. In order to delve into the physics of these types of phenomena, a shock-capturing numerical scheme is used to simulate the cases. The suitability of the mathematical models employed in this work has been previously validated. Comparisons between computed and experimental data are presented for the three cases. The computed results show that the numerical tool is able to predict faithfully the overall behavior of this type of complex dense granular flow", "keywords": ["granular flow", "landslides", "numerical modeling", "obstacles"]} {"id": "kp20k_training_676", "title": "Context sharing in a real world ubicomp deployment", "abstract": "While the application of ubicomp systems to explore context sharing has received a large amount of interest, only a very small number of studies have been carried out which involve real world use outside of the lab. This article presents an in-depth analysis of the context sharing behaviours that built up around use of the Hermes interactive office door display system during its deployment. The Hermes system provided a groupware application supporting asynchronous messaging facilities, analogous to a digital form of Post-it notes, in order to explore the use of situated display systems to support awareness and coordination in an office environment. From this analysis we distil a set of issues relating to context sharing ranging from privacy concerns to ease of use; each supported through qualitative data from user interviews and questionnaires", "keywords": ["context sharing", "ubiquitous computing", "longitudinal deployment", "situated displays"]} {"id": "kp20k_training_677", "title": "Holding-time-aware dynamic traffic grooming algorithms based on multipath routing for WDM optical networks", "abstract": "This paper investigates approaches for the traffic grooming problem that consider connection holding-times and bandwidth availability. Moreover, solutions can indicate the splitting of connections into two or more sub-streams via multipath routing, fine-tuned by traffic grooming to better utilize network resources.
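The splitting of a connection into sub-streams mentioned in the grooming abstract above can be illustrated by a simple greedy allocation over candidate paths (our sketch, not the proposed algorithms, which additionally weigh holding times and grooming opportunities):

```python
# Illustrative greedy multipath split: serve a demand from the paths with
# the most spare capacity first; return None if the request must block.
def split_over_paths(demand, paths):
    """paths: list of (path_id, free_capacity); returns [(path_id, amount)]."""
    allocation, remaining = [], demand
    for pid, free in sorted(paths, key=lambda p: -p[1]):
        if remaining <= 0:
            break
        take = min(free, remaining)
        if take > 0:
            allocation.append((pid, take))
            remaining -= take
    return allocation if remaining <= 0 else None  # None: request blocked

print(split_over_paths(10, [("p1", 6), ("p2", 3), ("p3", 5)]))
# [('p1', 6), ('p3', 4)] -- two sub-streams instead of one blocked request
```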
Algorithms are proposed, and results of simulations over a variety of realistic scenarios indicate that they significantly reduce the blocking of connection requests while promoting a fairer distribution of network resources than state-of-the-art solutions", "keywords": ["traffic grooming", "holding time awareness", "load balancing", "multipath routing", "wdm"]} {"id": "kp20k_training_678", "title": "Three Classes of Maximal Hyperclones", "abstract": "In this paper, we present three classes of maximal hyperclones. They are determined by three classes of Rosenberg's relations: nontrivial equivalence relations, central relations and h-regular relations", "keywords": ["clone", "maximal clone", "hyperclone", "maximal hyperclone"]} {"id": "kp20k_training_679", "title": "The knowledge acquisition workshops: A remarkable convergence of ideas", "abstract": "Intense interest in knowledge-acquisition research began 25 years ago, stimulated by the excitement about knowledge-based systems that emerged in the 1970s followed by the realities of the AI Winter that arrived in the 1980s. The knowledge-acquisition workshops that responded to this interest led to the formation of a vibrant research community that has achieved remarkable consensus on a number of issues. These viewpoints include (1) the rejection of the notion of knowledge as a commodity to be transferred from one locus to another, (2) an acceptance of the situated nature of human expertise, (3) emphasis on knowledge acquisition as the modeling of problem solving, and (4) the pursuit of reusable patterns in problem solving and in domain descriptions that can facilitate both modeling and system implementation. The Semantic Web community will benefit greatly by incorporating these perspectives in its work", "keywords": ["knowledge acquisition", "knowledge-based systems", "semantic web", "workshops and conferences"]} {"id": "kp20k_training_680", "title": "Topological Persistence for Medium Access Control", "abstract": "The primary function of the medium access control (MAC) protocol is managing access to the shared communication channel. From the viewpoint of the transmitters, the MAC protocol determines each transmitter's channel occupancy, the fraction of time that it spends transmitting over the channel. In this paper, we define a set of topological persistences that conform to both network topology and traffic load. We employ these persistences as target occupancies for the MAC layer protocol. A centralized algorithm is developed for calculating topological persistences and its correctness is established. A distributed algorithm and implementation are developed that can operate within scheduled and contention-based MAC protocols. In the distributed algorithm, network resources are allocated through auctions at each receiver in which transmitters participate as bidders to converge on the topological allocation. Very low overhead is achieved by piggybacking auction and bidder communication on existing data packets. The practicality of the distributed algorithm is demonstrated in a wireless network via simulation using the ns-2 network simulator.
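A toy version of the receiver-side auction described above, with proportional allocation assumed purely for illustration (the paper's bidding and convergence rules are more involved), shows how bids translate into channel occupancies:

```python
# Hypothetical proportional auction: each transmitter bids for occupancy
# and the receiver divides its unit of channel time in proportion to bids.
def allocate_occupancy(bids):
    """bids: dict transmitter_id -> nonnegative bid; returns occupancy
    fractions that sum to at most 1."""
    total = sum(bids.values())
    if total == 0:
        return {tx: 0.0 for tx in bids}
    return {tx: b / total for tx, b in bids.items()}

print(allocate_occupancy({"A": 2.0, "B": 1.0, "C": 1.0}))
# {'A': 0.5, 'B': 0.25, 'C': 0.25}
```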
Simulation results show fast convergence to the topological solution and, once operating with topological persistences, improved performance compared to IEEE 802.11 in delay, throughput, and drop rate", "keywords": ["wireless networks", "medium access control"]} {"id": "kp20k_training_681", "title": "Plate on layered foundation analyzed by a semi-analytical and semi-numerical method", "abstract": "A semi-analytical and semi-numerical method is developed for the analysis of plate-layered soil systems. Applying a Hankel transform, an expression relating the surface settlement and the reaction of the layered soil is derived. Such a reaction can be treated as a load acting on the plate in addition to the applied external load. Having the plate modeled by eight-noded isoparametric elements, the governing equations of the plate can be formed and solved. Numerical examples, including square, trapezoidal and circular plates resting on elastic layered soil, are given to demonstrate the advantages, accuracy and versatility of this method", "keywords": ["raft on foundation", "layered foundation", "fundamental solution", "transfer matrix method", "finite element method"]} {"id": "kp20k_training_682", "title": "Semi-divisible triangular norms", "abstract": "Semi-divisibility of left-continuous triangular norms is a weakening of the divisibility (i.e., continuity) axiom for t-norms. In this contribution we focus on the class of semi-divisible t-norms and show the following properties: Each semi-divisible t-norm with Ran(n_T) = [0, 1] is nilpotent. Semi-divisibility of an ordinal sum t-norm is determined by the corresponding property of its first component (which can be a proper t-subnorm, too). Finally, negations with finite range derived from semi-divisible t-norms are studied", "keywords": ["triangular norm", "residual implication", "ordinal sum"]} {"id": "kp20k_training_683", "title": "Expert system for remnant life prediction of defected components under fatigue and creep-fatigue loadings", "abstract": "Life prediction and management of cracked high temperature structures is a matter of great importance for both economic and safety reasons. To implement such a task, many fields such as material science, structure engineering and mechanics science etc. are involved and expertise is generally required. In terms of the methodology of advanced time-dependent fracture mechanics, this paper develops an expert system to realize an appropriate combination of material database, condition database and knowledge database. Many assessment criteria including the multi-defects interaction and combination, invalidation criterion and creep-fatigue interaction are employed in the inference engine of the expert system. The over-conservativeness of life prediction from the traditional method is reasonably reduced and therefore the accuracy of the predicted life is improved. Consequently, the intelligent and expert life management of cracked high temperature structures is realized, which provides a powerful tool in practice. ", "keywords": ["high temperature structure", "life management", "expert system", "creep-fatigue interaction", "multiple cracks"]} {"id": "kp20k_training_685", "title": "Walkneta biologically inspired network to control six-legged walking", "abstract": "To investigate walking we perform experimental studies on animals in parallel with software and hardware simulations of the control structures and the body to be controlled.
Therefore, the primary goal of our simulation studies is not so much to develop a technical device, but to develop a system which can be used as a scientific tool to study insect walking. To this end, the animat should copy essential properties of the animals. In this review, we will first describe the basic behavioral properties of hexapod walking, as they are known from stick insects. Then we describe a simple neural network called Walknet which exemplifies these properties and also shows some interesting emergent properties. The latter arise mainly from the use of the physical properties to simplify explicit calculations. The model is simple too, because it uses only static neuronal units. Finally, we present some new behavioral results", "keywords": ["walking", "leg coordination", "positive feedback", "six-legged robot", "stick insect", "situatedness", "decentralized control"]} {"id": "kp20k_training_686", "title": "Modelling the scatter of EN curves using a serial hybrid neural network", "abstract": "If structural reliability is estimated by following a strain-based approach, a material's strength should be represented by the scatter of the εN (EN) curves that link the strain amplitude with the corresponding statistical distribution of the number of cycles-to-failure. The basic shape of the εN curve is usually modelled by the Coffin-Manson relationship. If a loading mean level also needs to be considered, the original Coffin-Manson relationship is modified to account for the non-zero mean level of the loading, which can be achieved by using a Smith-Watson-Topper modification of the original Coffin-Manson relationship. In this paper, a methodology for estimating the dependence of the statistical distribution of the number of cycles-to-failure on the Smith-Watson-Topper modification is presented. The statistical distribution of the number of cycles-to-failure was modelled with a two-parametric Weibull probability density function. The core of the presented methodology is represented by a multilayer perceptron neural network combined with the Weibull probability density function using a size parameter that follows the Smith-Watson-Topper analytical model. The article presents the theoretical background of the methodology and its application in the case of experimental fatigue data. The results show that it is possible to model εN curves and their scatter for different influential parameters, such as the specimen's diameter and the testing temperature", "keywords": ["serial hybrid neural network", "weibull pdf", "en curves", "fatigue life scatter", "smith-watson-topper parameter"]} {"id": "kp20k_training_687", "title": "rate-distortion problem for physics based distributed sensing", "abstract": "We consider the rate-distortion problem for sensing the continuous space-time physical temperature in a circular ring on which a heat source is applied over space and time, and which is also allowed to cool by radiation or convection to its surrounding medium. The heat source is modelled as a continuous space-time stochastic process which is bandlimited over space and time. The temperature field is the result of a circular convolution over space and a continuous-time causal filtering over time of the heat source with the Green's function corresponding to the heat equation, which is space and time invariant. The temperature field is sampled at uniform spatial locations by a set of sensors and it has to be reconstructed at a base station.
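For reference, the two relationships named in the EN-curve abstract above are usually written as follows (textbook forms; \(\sigma'_f, \varepsilon'_f, b, c\) are the fatigue strength/ductility coefficients and exponents, \(E\) the elastic modulus, \(N_f\) the number of cycles to failure, \(\sigma_{\max}\) the maximum stress of the cycle):

\[
\varepsilon_a = \frac{\sigma'_f}{E}\,(2N_f)^{b} + \varepsilon'_f\,(2N_f)^{c} \quad \text{(Coffin-Manson)},
\]
\[
\sigma_{\max}\,\varepsilon_a = \frac{(\sigma'_f)^2}{E}\,(2N_f)^{2b} + \sigma'_f\,\varepsilon'_f\,(2N_f)^{b+c} \quad \text{(Smith-Watson-Topper)}.
\]

The paper then treats \(N_f\) at a given amplitude not as a single value but as a two-parameter Weibull variable whose size parameter follows the Smith-Watson-Topper curve.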
The goal is to minimize the mean-square-error per second, for a given number of nats per second, assuming ideal communication channels between sensors and base station. We find a) the centralized R_c(D) function of the temperature field, where all the space-time samples can be observed and encoded jointly. Then, we obtain b) the R_{s-i}(D) function, where each sensor independently encodes its samples optimally over time, and c) the R_{st-i}(D) function, where each sensor is constrained to encode independently over time as well. We also study two distributed prediction-based approaches: a) with perfect feedback from the base station, where temporal prediction is performed at the base station and each sensor performs differential encoding, and b) without feedback, where each sensor locally performs temporal prediction", "keywords": ["sensor networks", "prediction", "green's function", "distributed sampling", "heat equation", "local coding", "centralized coding", "feedback", "distributed coding", "temperature field", "rate-distortion", "spatio-temporal correlation"]} {"id": "kp20k_training_688", "title": "developing a media space for remote synchronous parent-child interaction", "abstract": "While supporting family communication has traditionally been a domain of interest for interaction designers, few research initiatives have explicitly investigated remote synchronous communication between children and parents. We discuss the design of the ShareTable, a media space that supports synchronous interaction with children by augmenting videoconferencing with a camera-projector system to allow for shared viewing of physical artifacts. We present an exploratory evaluation of this system, highlighting how such a media space may be used by families for learning and play activities. The ShareTable was positively received by our participants and preferred over standard videoconferencing. Informed by the results of our exploratory evaluation, we discuss the next design iteration of the ShareTable and directions for future investigations in this area", "keywords": ["distributed families", "media space", "computer-mediated communication", "parents and children"]} {"id": "kp20k_training_689", "title": "Cross-Noise-Coupled Architecture of Complex Bandpass Delta Sigma AD Modulator", "abstract": "Complex bandpass Delta Sigma AD modulators can provide superior performance to a pair of real bandpass Delta Sigma AD modulators of the same order. They process just input I and Q signals, not image signals, and AD conversion can be realized with low power dissipation, so that they are desirable for low-IF receiver applications. This paper proposes a new architecture for complex bandpass Delta Sigma AD modulators with cross-noise-coupled topology, which effectively raises the order of the complex modulator and achieves higher SQNDR (Signal to Quantization Noise and Distortion Ratio) with low power dissipation. By providing the cross-coupled quantization noise injection to internal I and Q paths, noise coupling between two quantizers can be realized in complex form, which enhances the order of noise shaping in the complex domain, and provides a higher-order NTF using a lower-order loop filter in the complex Delta Sigma AD modulator. The proposed higher-order modulator can be realized just by adding some passive capacitors and switches; an additional integrator circuit composed of an operational amplifier is not necessary, and the performance of the complex modulator can be effectively raised without more power dissipation.
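The order-raising effect of noise coupling claimed in the abstract above can be seen from a standard single-path identity (textbook background, stated here for the real-valued case): if the loop filter alone gives \(V = \mathrm{STF}\,U + \mathrm{NTF}_0\,E\), then feeding the quantization error back to the quantizer input delayed by one sample replaces \(E\) by \((1 - z^{-1})E\), so

\[
\mathrm{NTF}(z) = \mathrm{NTF}_0(z)\,\bigl(1 - z^{-1}\bigr),
\]

i.e. one extra order of shaping without an additional op-amp integrator. In the cross-noise-coupled complex modulator, the injected error terms couple the I and Q paths, which moves the added zero to a complex in-band location instead of dc.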
We have performed simulations with MATLAB to verify the effectiveness of the proposed architecture. The simulation results show that the proposed architecture achieves the higher-order enhancement and an improved SQNDR for the complex bandpass Delta Sigma AD modulator", "keywords": ["complex bandpass delta sigma ad modulator", "noise coupling", "feedforward", "multibit"]} {"id": "kp20k_training_690", "title": "Facial motion cloning", "abstract": "We propose a method for automatically copying facial motion from one 3D face model to another, while preserving the compliance of the motion to the MPEG-4 Face and Body Animation (FBA) standard. Despite the enormous progress in the field of Facial Animation, producing a new animatable face from scratch is still a tremendous task for an artist. Although many methods exist to animate a face automatically based on procedural methods, these methods still need to be initialized by defining facial regions or similar, and they lack flexibility because the artist can only obtain the facial motion that a particular algorithm offers. Therefore a very common approach is interpolation between key facial expressions, usually called morph targets, containing either speech elements (visemes) or emotional expressions. Following the same approach, the MPEG-4 Facial Animation specification offers a method for interpolation of facial motion from key positions, called Facial Animation Tables, which are essentially morph targets corresponding to all possible motions specified in MPEG-4. The problem of this approach is that the artist needs to create a new set of morph targets for each new face model. In the case of MPEG-4 there are 86 morph targets, which is a lot of work to create manually. Our method solves this problem by cloning the morph targets, i.e. by automatically copying the motion of vertices, as well as geometry transforms, from source face to target face while maintaining the regional correspondences and the correct scale of motion. It requires the user only to identify a subset of the MPEG-4 Feature Points in the source and target faces. The scale of the movement is normalized with respect to MPEG-4 normalization units (FAPUs), meaning that the MPEG-4 FBA compliance of the copied motion is preserved. Our method is therefore suitable not only for cloning of free facial expressions, but also of MPEG-4 compatible facial motion, in particular the Facial Animation Tables. We believe that Facial Motion Cloning offers dramatic time savings to artists producing morph targets for facial animation or MPEG-4 Facial Animation Tables", "keywords": ["facial animation", "morph targets", "mpeg-4", "fba", "vrml", "text-to-speech", "virtual characters", "virtual humans"]} {"id": "kp20k_training_691", "title": "An analysis of the Intel security architecture and implementations", "abstract": "An in-depth analysis of the processor families identifies architectural properties that may have unexpected, and undesirable, results in secure computer systems. In addition, reported implementation errors in some processor versions render them undesirable for secure systems because of potential security and reliability problems. In this paper, we discuss the imbalance in scrutiny for hardware protection mechanisms relative to software, and why this imbalance is increasingly difficult to justify as hardware complexity increases.
We illustrate this difficulty with examples of architectural subtleties and reported implementation errors", "keywords": ["hardware security architecture", "hardware implementation error", "microprocessor", "computer security", "penetration testing", "covert channels"]} {"id": "kp20k_training_692", "title": "Realtime concatenation technique for skeletal motion in humanoid animation", "abstract": "In this paper, we propose a realtime concatenation technique between basic skeletal motions, obtained by motion capture and other techniques, to generate a lifelike behavior for a humanoid character (avatar). We execute several experiments to show the advantages and properties of our technique, and also report the results. Finally, we describe our applied system called WonderSpace which leads participants into exciting and attractive virtual worlds with humanoid characters in cyberspace. Our concatenation technique has the following features: (1) based on a blending method between a preceding motion and a succeeding motion by a transition function, (2) realizing \"smooth transition,\" \"monotone transition,\" and \"equivalent transition\" by the transition function called paste function, (3) generating a connecting interval by making the backward and forward predictions for the preceding and succeeding motions, (4) executing the prediction under the hypothesis of \"the smooth stopping state\" or \"the state of connecting motion\", (5) controlling the prediction intervals by the parameter indicating the importance of the motion, and (6) realizing realtime calculation", "keywords": ["3d computer graphics", "web3d", "interactive", "3d virtual world", "3d character", "blending function"]} {"id": "kp20k_training_693", "title": "Call-by-value is dual to call-by-name", "abstract": "The rules of classical logic may be formulated in pairs corresponding to De Morgan duals: rules about & are dual to rules about V. A line of work, including that of Filinski (1989), Griffin (1990), Parigot (1992), Danos, Joinet, and Schellinx (1995), Selinger (1998,2001), and Curien and Herbelin (2000), has led to the startling conclusion that call-by-value is the de Morgan dual of call-by-name. This paper presents a dual calculus that corresponds to the classical sequent calculus of Gentzen (1935) in the same way that the lambda calculus of Church (1932,1940) corresponds to the intuitionistic natural deduction of Gentzen (1935). The paper includes crisp formulations of call-by-value and call-by-name that are obviously dual; no similar formulations appear in the literature. The paper gives a CPS translation and its inverse, and shows that the translation is both sound and complete, strengthening a result in Curien and Herbelin (2000). Note. This paper uses color to clarify the relation of types and terms, and of source and target calculi. If the URL below is not in blue, please download the color version, which can be found in the ACM Digital Library archive for ICFP 2003, at http://portal.acm.org/proceedings/icfp/archive, or by googling 'wadler dual", "keywords": ["curry-howard correspondence", "sequent calculus", "natural deduction", "de morgan dual", "logic", "lambda calculus", "lambda mu calculus"]} {"id": "kp20k_training_694", "title": "Universal automata and NFA learning", "abstract": "The aim of this paper is to develop a new algorithm that, with a complete sample as input, identifies the family of regular languages by means of nondeterministic finite automata. It is a state-merging algorithm.
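The elementary operation behind the state-merging algorithm named above can be sketched generically (an illustrative helper, not the paper's algorithm, which additionally checks each merge for consistency against the complete sample before keeping it):

```python
# Merging state q into state p of an NFA: redirect q's incoming and
# outgoing transitions to p. Repeated merges shrink the automaton while
# (under the paper's conditions) preserving the target language.
def merge_states(delta, p, q):
    """delta: dict (state, symbol) -> set of target states."""
    new_delta = {}
    for (s, a), targets in delta.items():
        s2 = p if s == q else s
        t2 = {p if t == q else t for t in targets}
        new_delta.setdefault((s2, a), set()).update(t2)
    return new_delta

nfa = {(0, "a"): {1}, (1, "a"): {2}, (2, "b"): {0}}
print(merge_states(nfa, 0, 2))
# {(0, 'a'): {1}, (1, 'a'): {0}, (0, 'b'): {0}}
```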
One of its main features is that the convergence (which is proved) is achieved independently of the order in which the states are merged, that is, the merging of states may be done \"randomly\". ", "keywords": ["grammatical inference", "finite automata", "universal automaton"]} {"id": "kp20k_training_695", "title": "Effect of load models on assessment of energy losses in distributed generation planning", "abstract": "Distributed Generation (DG) is gaining in significance due to the keen public awareness of the environmental impacts of electric power generation and significant advances in several generation technologies which are much more environmentally friendly (wind power generation, micro-turbines, fuel cells, and photovoltaics) than conventional coal, oil and gas-fired plants. Accurate assessment of energy losses when DG is connected is gaining in significance due to the developments in the electricity marketplace, such as increasing competition, real-time pricing and spot pricing. However, inappropriate modelling can give rise to misleading results. This paper presents an investigation into the effect of load models on the predicted energy losses in DG planning. Following a brief introduction, the paper proposes a detailed voltage-dependent load model, for DG planning use, which considers three categories of loads: residential, industrial and commercial. The paper proposes a methodology to study the effect of load models on the assessment of energy losses based on time series simulations to take into account both the variations of renewable generation and load demand. A comparative study of energy losses between the use of a traditional constant load model and the voltage-dependent load model, at various load levels, is carried out using a 38-node example power system. Simulations presented in the paper indicate that the load model to be adopted can significantly affect the results of DG planning", "keywords": ["distributed generation", "load model", "energy losses", "voltage profile", "load forecasting"]} {"id": "kp20k_training_696", "title": "generational stack collection and profile-driven pretenuring", "abstract": "This paper presents two techniques for improving garbage collection performance: generational stack collection and profile-driven pretenuring. The first is applicable to stack-based implementations of functional languages while the second is useful for any generational collector. We have implemented both techniques in a generational collector used by the TIL compiler (Tarditi, Morrisett, Cheng, Stone, Harper, and Lee 1996), and have observed decreases in garbage collection times of as much as 70% and 30%, respectively. Functional languages encourage the use of recursion which can lead to a long chain of activation records. When a collection occurs, these activation records must be scanned for roots. We show that scanning many activation records can take so long as to become the dominant cost of garbage collection. However, most deep stacks unwind very infrequently, so most of the root information obtained from the stack remains unchanged across successive garbage collections. Generational stack collection greatly reduces the stack scan cost by reusing information from previous scans. Generational techniques have been successful in reducing the cost of garbage collection (Ungar 1984).
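The stack-reuse idea described above can be caricatured as a watermark scheme (invented names; a real collector must also lower the watermark whenever the stack pops below it):

```python
# Sketch of generational stack scanning: frames below the watermark of
# the previous collection rarely change, so only frames pushed since
# then are rescanned for roots; cached roots cover the deep frames.
def scan_stack(frames, watermark, cached_roots):
    """frames: stack frames, oldest first; watermark: index one past the
    deepest frame scanned at the previous collection; cached_roots:
    roots previously found below the watermark."""
    fresh = []
    for frame in frames[watermark:]:  # only the new, hot frames
        fresh.extend(frame["roots"])
    return cached_roots + fresh, len(frames)  # roots and the new watermark
```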
Various complex heap arrangements and tenuring policies have been proposed to increase the effectiveness of generational techniques by reducing the cost and frequency of scanning and copying. In contrast, we show that by using profile information to make lifetime predictions, pretenuring can avoid copying data altogether. In essence, this technique uses a refinement of the generational hypothesis (most data die young) with a locality principle concerning the age of data: most allocation sites produce data that immediately dies, while a few allocation sites consistently produce data that survives many collections", "keywords": ["lifetime", "activation", "generation", "use", "policy", "recursion", "performance", "collect", "paper", "scan", "informal", "locality", "predict", "records", "stack", "arrangement", "implementation", "data", "compilation", "complexity", "profiles", "refine", "effect", "cost", "age", "functional languages", "allocation"]} {"id": "kp20k_training_697", "title": "Regulated Secretion in Chromaffin Cells", "abstract": "ARFs constitute a family of structurally related proteins that forms a subset of the ras GTPases. In chromaffin cells, secretagogue-evoked stimulation triggers the rapid translocation of ARF6 from secretory granules to the plasma membrane and the concomitant activation of PLD in the plasma membrane. Both PLD activation and catecholamine secretion are strongly inhibited by a synthetic peptide corresponding to the N-terminal domain of ARF6. ARNO, a potential guanine nucleotide exchange factor for ARF6, is expressed and localized in the plasma membrane of chromaffin cells. Using permeabilized cells, we found that the introduction of anti-ARNO antibodies into the cytosol inhibits both PLD activation and catecholamine secretion. Chromaffin cells express PLD1 at the plasma membrane. We found that microinjection of the catalytically inactive PLD1(K898R) dramatically reduces catecholamine secretion monitored by amperometry, most likely by interfering with a late postdocking step of calcium-regulated exocytosis. We propose that ARNO-ARF6 participate in the exocytotic reaction by controlling the plasma membrane-bound PLD1. By generating fusogenic lipids at the exocytotic sites, PLD1 may represent an essential component of the fusion machinery in neuroendocrine cells", "keywords": ["arf", "arno", "chromaffin", "exocytosis", "phospholipase d", "secretory granule"]} {"id": "kp20k_training_698", "title": "Analysis of elastic wave propagation in a functionally graded thick hollow cylinder using a hybrid mesh-free method", "abstract": "In this paper, a hybrid mesh-free method based on generalized finite difference (GFD) and Newmark finite difference (NFD) methods is presented to calculate the velocity of elastic wave propagation in functionally graded materials (FGMs). The physical domain to be considered is a thick hollow cylinder made of functionally graded material in which mechanical properties are graded in the radial direction only. A power-law variation of the volume fractions of the two constituents is assumed for mechanical property variation. The cylinder is excited by shock loading to obtain the time history of the radial displacement. The velocity of elastic wave propagation in the functionally graded cylinder is calculated from the periodic behavior of the radial displacement in the time domain.
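The power-law grading assumed in the FGM abstract above is commonly written as follows (generic textbook form, not quoted from the paper; \(r_i\) and \(r_o\) are the inner and outer radii, \(n\) the grading exponent, and \(P\) any effective mechanical property with surface values \(P_i\) and \(P_o\)):

\[
V(r) = \left(\frac{r - r_i}{r_o - r_i}\right)^{n}, \qquad P(r) = P_o\,V(r) + P_i\,\bigl(1 - V(r)\bigr),
\]

so the property varies smoothly from its inner-surface value to the outer-surface value as the volume fraction \(V(r)\) of the second constituent grows with radius.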
The effects of various grading patterns and various constitutive mechanical properties on the velocity of elastic wave propagation in functionally graded cylinders are studied in detail. Numerical results demonstrate the efficiency of the proposed method in simulating the wave propagation in FGMs", "keywords": ["mesh-free methods", "functionally graded materials", "thick hollow cylinder", "thermal shock", "wave propagation"]} {"id": "kp20k_training_699", "title": "A nonparametric methodology for evaluating convergence in a multi-input multi-output setting", "abstract": "The paper presents a novel nonparametric methodology to evaluate convergence. We develop two new indexes to evaluate β-convergence and σ-convergence. The indexes developed allow evaluations using multiple inputs and outputs. The methodology complements productivity assessments based on the Malmquist index. The methodology is applied to Portuguese construction companies operating in 2008–2010", "keywords": ["convergence", "productivity", "malmquist index", "data envelopment analysis", "construction industry"]} {"id": "kp20k_training_700", "title": "the effects of interaction frequency on the optimization performance of cooperative coevolution", "abstract": "Cooperative coevolution is often used to solve difficult optimization problems by means of problem decomposition. Its performance on this task is influenced by many design decisions. It would be useful to have some knowledge of the performance effects of these decisions, in order to make the more beneficial ones. In this paper we study the effects on performance of the frequency of interaction between populations. We show them to be problem-dependent and use dynamics analysis to explain this dependency", "keywords": ["dynamics", "cooperative coevolution", "performance"]} {"id": "kp20k_training_701", "title": "Brain-inspired method for solving fuzzy multi-criteria decision making problems (BIFMCDM)", "abstract": "We propose a brain-inspired method for solving fuzzy decision making problems. We study a website ranking problem for an e-alliance. Processing fuzzy information as just an abstract element could lead to wrong decisions", "keywords": ["brain informatics", "fuzzy sets theory", "multi criteria decision making", "simulation", "web intelligence", "world wide wisdom web "]} {"id": "kp20k_training_702", "title": "Environmental model access and interoperability: The GEO Model Web initiative", "abstract": "The Group on Earth Observation (GEO) Model Web initiative utilizes a Model as a Service approach to increase model access and sharing. It relies on gradual, organic growth leading towards dynamic webs of interacting models, analogous to the World Wide Web. The long term vision is for a consultative infrastructure that can help address \"what if\" and other questions that decision makers and other users have. Four basic principles underlie the Model Web: open access, minimal barriers to entry, service-driven, and scalability; any implementation approach meeting these principles will be a step towards the long term vision. Implementing a Model Web encounters a number of technical challenges, including information modelling, minimizing interoperability agreements, performance, and long term access, each of which has its own implications. 
For example, a clear information model is essential for accommodating the different resources published in the Model Web (model engines, model services, etc.), and a flexible architecture, capable of integrating different existing distributed computing infrastructures, is required to address the performance requirements. Architectural solutions, in keeping with the Model Web principles, exist for each of these technical challenges. There are also a variety of other key challenges, including difficulties in making models interoperable; calibration and validation; and social, cultural, and institutional constraints. Although the long term vision of a consultative infrastructure is clearly an ambitious goal, even small steps towards that vision provide immediate benefits. A variety of activities are now in progress that are beginning to take those steps. ", "keywords": ["model web", "composition as a service ", "model as a service ", "geoss", "environmental modelling", "interoperability"]} {"id": "kp20k_training_703", "title": "non-uniform micro-channel design for stacked 3d-ics", "abstract": "Micro-channel cooling shows great potential in removing high density heat in 3D circuits. The current micro-channel heat sink designs spread the entire surface to be cooled with micro-channels. This approach, though it might provide sufficient cooling, requires quite high pumping power. In this paper, we investigate the non-uniform allocation of micro-channels to provide sufficient cooling with less pumping power. Specifically, we decide the count, location and pumping pressure drop/flow rate of micro-channels such that acceptable cooling is achieved at minimum pumping power. Thermal wake effect and runtime pressure drop/flow rate control are also considered. The experiments showed that, compared with the conventional design which spreads micro-channels all over the chip, our non-uniform microchannel design achieves 55–60% pumping power saving", "keywords": ["power", "3d-ic", "micro-channel", "liquid cooling"]} {"id": "kp20k_training_704", "title": "Neurocomputing techniques to dynamically forecast spatiotemporal air pollution data", "abstract": "Real time monitoring, forecasting and modeling air pollutant concentrations in major urban centers is one of the top priorities of all local and national authorities globally. This paper studies and analyzes the parameters related to the problem, aiming at the design and development of an effective machine learning model and its corresponding system, capable of forecasting dangerous levels of ozone (O3) concentrations in the city center of Athens and more specifically in the Athinas air quality monitoring station. This is a multi parametric case, so an effort has been made to combine a vast number of data vectors from several operational nearby measurement stations. 
The final result was the design and construction of a group of artificial neural networks capable of estimating O3 concentrations in real-time mode and also having the capacity of forecasting the same values for future time intervals of 1, 2, 3 and 6 h, respectively", "keywords": ["artificial neural networks", "machine learning", "multi parametric ann", "pollution of the atmosphere", "ozone estimation and forecasting"]} {"id": "kp20k_training_705", "title": "Revisiting rational bubbles in the G-7 stock markets using the Fourier unit root test and the nonparametric rank test for cointegration", "abstract": "This paper re-investigates whether rational bubbles existed in the G-7 stock markets during the period of January 2000-June 2009 using the newly developed Fourier unit root test and a nonparametric rank test for cointegration. The empirical results from our Fourier unit root test indicate that the null hypothesis of I(1) unit root in stock prices can be rejected for Canada, France, Italy and the UK. However, the empirical results from the rank test reveal that rational bubbles did not exist in the G-7 stock markets during the sample period. ", "keywords": ["fourier unit root test", "rank test for nonlinear cointegration", "g7 stock markets", "rational bubbles"]} {"id": "kp20k_training_706", "title": "Orchestrating Stream Graphs Using Model Checking", "abstract": "In this article we use model checking to statically distribute and schedule Synchronous DataFlow (SDF) graphs on heterogeneous execution architectures. We show that model checking is capable of providing an optimal solution and it arrives at these solutions faster (in terms of algorithm runtime) than equivalent ILP formulations. Furthermore, we also show how different types of optimizations such as task parallelism, data parallelism, and state sharing can be included within our framework. Finally, comparison of our approach with the current state-of-the-art heuristic techniques shows the pitfalls of these techniques and gives a glimpse of how these heuristic techniques can be improved", "keywords": ["languages", "performance", "streaming", "dataflow", "parallelization", "compiler"]} {"id": "kp20k_training_707", "title": "Model-averaged Wald confidence intervals", "abstract": "The process of model averaging has become increasingly popular as a method for performing inference in the presence of model uncertainty. In the frequentist setting, a model-averaged estimate of a parameter is calculated as the weighted sum of single-model estimates, often using weights derived from an information criterion such as AIC or BIC. A standard method for calculating a model-averaged confidence interval is to use a Wald interval centered around the model-averaged estimate. We propose a new method for construction of a model-averaged Wald confidence interval, based on the idea of model averaging tail areas of the sampling distributions of the single-model estimates. We use simulation to compare the performance of the new method and existing methods, in terms of coverage rate and interval width. The new method consistently outperforms existing methods in terms of coverage, often for little increase in the interval width. We also consider choice of model weights, and find that AIC weights are preferable to either AICc or BIC weights in terms of coverage", "keywords": ["model averaging", "model weight", "model uncertainty", "wald interval", "coverage rate"]} {"id": "kp20k_training_708", "title": "Dynamics of the difference equation x_{n+1} = (x_n + p x_{n-k})/(x_n + q)", "abstract": "We study the invariant interval, the character of semicycles, the global stability, and the boundedness of the difference equation", "keywords": ["local asymptotic stability", "invariant interval", "semicycle behavior", "global asymptotic stability", "boundedness"]} {"id": "kp20k_training_709", "title": "Cross-validation based single response adaptive design of experiments for Kriging metamodeling of deterministic computer simulations", "abstract": "A new approach for single response adaptive design of deterministic computer experiments is presented. The approach is called SFCVT, for Space-Filling Cross-Validation Tradeoff. SFCVT uses metamodeling to obtain an estimate of cross-validation errors, which are maximized subject to a constraint on space filling to determine sample points in the design space. The proposed method is compared, using a test suite of forty-four numerical examples, with three DOE methods from the literature. The numerical test examples can be classified into symmetric and asymmetric functions. Symmetric examples refer to functions for which the extreme points are located symmetrically in the design space and asymmetric examples are those for which the extreme regions are not located in a symmetric fashion in the design space. Based upon the comparison results for the numerical examples, it is shown that SFCVT performs better than an existing adaptive and a non-adaptive DOE method for asymmetric multimodal functions with high nonlinearity near the boundary, and is comparable for symmetric multimodal functions and other test problems. The proposed approach is integrated with a multi-scale heat exchanger optimization tool to reduce the computational effort involved in the design of novel air-to-water heat exchangers. The resulting designs are shown to be significantly more compact than mainstream heat exchanger designs", "keywords": ["design optimization", "design of experiments", "kriging metamodeling", "heat exchanger design"]} {"id": "kp20k_training_710", "title": "Bottlenecks and Hubs in Inferred Networks Are Important for Virulence in Salmonella typhimurium", "abstract": "Recent advances in experimental methods have provided sufficient data to consider systems as large networks of interconnected components. High-throughput determination of protein-protein interaction networks has led to the observation that topological bottlenecks, proteins defined by high centrality in the network, are enriched in proteins with systems-level phenotypes such as essentiality. Global transcriptional profiling by microarray analysis has been used extensively to characterize systems, for example, examining cellular response to environmental conditions and effects of genetic mutations. These transcriptomic datasets have been used to infer regulatory and functional relationship networks based on co-regulation. We use the context likelihood of relatedness (CLR) method to infer networks from two datasets gathered from the pathogen Salmonella typhimurium: one under a range of environmental culture conditions and the other from deletions of 15 regulators found to be essential in virulence. Bottleneck and hub genes were identified from these inferred networks, and we show for the first time that these genes are significantly more likely to be essential for virulence than their non-bottleneck or non-hub counterparts. Networks generated using simple similarity metrics (correlation and mutual information) did not display this behavior. 
Overall, this study demonstrates that topology of networks inferred from global transcriptional profiles provides information about the systems-level roles of bottleneck genes. Analysis of the differences between the two CLR-derived networks suggests that the bottleneck nodes are either mediators of transitions between system states or sentinels that reflect the dynamics of these transitions", "keywords": ["bottlenecks", "network inference", "salmonella typhimurium", "virulence"]} {"id": "kp20k_training_711", "title": "Semantic manipulation of users' queries and modeling the health and nutrition preferences", "abstract": "People depend on popular search engines to look for the desired health and nutrition information. Many search engines cannot semantically interpret and enrich the users' natural language queries easily and hence do not retrieve the personalized information that fits the users' needs. One reason for retrieving irrelevant information is the fact that people have different preferences where each one likes and dislikes certain types of food. In addition, some people have specific health conditions that restrict their food choices and encourage them to take other foods. Moreover, the cultures where people live influence food choices while the search engines are not aware of these cultural habits. Therefore, it will be helpful to develop a system that semantically manipulates users' queries and models the users' preferences to retrieve personalized health and food information. In this paper, we harness semantic Web technology to capture users' preferences, construct a nutritional and health-oriented user profile, model the users' preferences and use them to organize the related knowledge so that users can retrieve personalized health and food information. We present an approach that uses the personalization techniques based on integrated domain ontologies, pre-constructed by domain experts, to retrieve relevant food and health information that is consistent with people's needs. We implemented the system, and the empirical results show high precision and recall with superior user satisfaction", "keywords": ["semantic query manipulation", "user profile ontology", "personalization"]} {"id": "kp20k_training_712", "title": "Swift and stable polygon growth and broken line offset", "abstract": "The problem of object growing (offsetting the object boundary by a certain distance) is an important and widely studied problem. In this paper we propose a new approach for offsetting the boundary of an object described by segments which are not necessarily connected. This approach avoids many destructive special cases that arise in some heuristic-based approaches. Moreover, the method developed in this paper is stable in that it does not fail because of missing segments. Also, the time required for the computation of the offset is relatively short and therefore inexpensive, i.e. it is expected to be O(n*log n)", "keywords": ["offset technique", "cad/cam", "algorithms", "trimmed offsets"]} {"id": "kp20k_training_713", "title": "ILP-based multistage placement of PMUs with dynamic monitoring constraints", "abstract": "A multistage planning of PMUs placement for power systems is proposed. The methodology takes into account network expansion plans. System observability is maximized over time. The methodology identifies nodes to locate PMUs based on security criteria. 
The stepwise approach allows the utilities to develop a path for PMU placement", "keywords": ["phasor measurement unit ", "multistage pmu placement", "observability", "integer linear programming ", "coherency recognition", "community detection"]} {"id": "kp20k_training_714", "title": "Indoor solar energy harvesting for sensor network router nodes", "abstract": "A unique method has been developed to scavenge energy from monocrystalline solar cells to power wireless router nodes used in indoor applications. The system's energy harvesting module consists of solar cells connected in a series-parallel combination to scavenge energy from 34 W fluorescent lights. A set of ultracapacitors was used as the energy storage device. Two router nodes were used as a router pair at each route point to minimize power consumption. Test results show that the harvesting circuit which acted as a plug-in to the router nodes manages energy harvesting and storage, and enables near-perpetual, harvesting-aware operation of the router node", "keywords": ["wireless sensor networks", "energy harvesting", "energy scavenging", "solar energy", "router nodes"]} {"id": "kp20k_training_715", "title": "Detecting data records in semi-structured web sites based on text token clustering", "abstract": "This paper describes a new approach to the use of clustering for automatic data detection in semi-structured web pages. Unlike most existing web information extraction approaches that usually apply wrapper induction techniques to manually labelled web pages, this approach avoids the pattern induction process by using clustering techniques on unlabelled pages. In this approach, a variant Hierarchical Agglomerative Clustering (HAC) algorithm called K-neighbours-HAC is developed which uses the similarities of the data format (HTML tags) and the data content (text string values) to group similar text tokens into clusters. We also develop a new method to label text tokens to capture the hierarchical structure of HTML pages and an algorithm for mapping labelled text tokens to XML. The new approach is tested and compared with several common existing wrapper induction systems on three different sets of web pages. The results suggest that the new approach is effective for data record detection and that it outperforms these common existing approaches examined on these web sites. Compared with the existing approaches, the new approach does not require training and successfully avoids the explicit pattern induction process, and accordingly the entire data detection process is simpler", "keywords": ["automatic data detection", "web information extraction", "text token clustering", "html tags", "semi-structured web sites"]} {"id": "kp20k_training_716", "title": "The design of GSC FieldLog: ontology-based software for computer aided geological field mapping", "abstract": "Databases containing geological field information are increasingly being constructed directly in the field. The design of such databases is often challenged by opposing needs: (1) the individual need to maintain flexibility of database structure and contents, to accommodate unexpected field situations; and (2) the corporate need to retain compatibility between distinct field databases, to accommodate their interoperability. The FieldLog mapping software balances these needs by exploiting a domain ontology developed for field information, one that enables field database flexibility and facilitates compatibility. 
The ontology consists of cartographic, geospatial, geological and metadata objects that form a common basis for interoperability and that can be instantiated by users into customized field databases. The design of the FieldLog software, its foundation on this ontology, and the resulting benefits to usability are presented in this paper. The discussion concentrates on satisfying the flexibility requirement by implementing the ontology as a generic data model within an object-relational database environment; issues of interoperability are not considered in detail. Benefits of this ontology-driven approach are also developed within a description of the FieldLog application, including (1) improved usability due to a user interface based on the geological components of the ontology, and (2) diminished technical prerequisites as users are shielded from the many database and GIS technicalities handled by the ontology", "keywords": ["geological mapping", "field data", "data model", "ontology", "gis"]} {"id": "kp20k_training_717", "title": "Systemic disease sequelae in chronic inflammatory diseases and chronic psychological stress: comparison and pathophysiological model", "abstract": "In chronic inflammatory diseases (CIDs), the neuroendocrine–immune crosstalk is important to allocate energy-rich substrates to the activated immune system. Since the immune system can request energy-rich substrates independent of the rest of the body, I refer to it as the selfish immune system, an expression that was taken from the theory of the selfish brain, giving the brain a similar position. In CIDs, the theory predicts the appearance of long-term disease sequelae, such as metabolic syndrome. Since long-standing energy requirements of the immune system determine disease sequelae, the question arose as to whether chronic psychological stress due to chronic activation of the brain causes similar sequelae. Indeed, there are many similarities; however, there are also differences. A major difference is the behavior of body weight (constant in CIDs versus loss or gain in stress). To explain this discrepancy, a new pathophysiological theory is presented that places inflammation and stress axes in the middle", "keywords": ["chronic inflammatory disease", "rheumatoid arthritis", "psychological stress", "systemic disease sequelae"]} {"id": "kp20k_training_718", "title": "Setting parameters by example", "abstract": "We introduce a class of \"inverse parametric optimization\" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other \"optimal subgraph\" problems and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming", "keywords": ["inverse optimization", "parametric search", "shortest paths", "minimum spanning tree", "vehicle routing", "adaptive user interfaces", "alpha-beta search", "evaluation function", "randomized algorithms", "ellipsoid method"]} {"id": "kp20k_training_719", "title": "PedVis: A Structured, Space-Efficient Technique for Pedigree Visualization", "abstract": "Public genealogical databases are becoming increasingly populated with historical data and records of the current population's ancestors. 
As this increasing amount of available information is used to link individuals to their ancestors, the resulting trees become deeper and more dense, which justifies the need for using organized, space-efficient layouts to display the data. Existing layouts are often only able to show a small subset of the data at a time. As a result, it is easy to become lost when navigating through the data or to lose sight of the overall tree structure. On the contrary, leaving space for unknown ancestors allows one to better understand the tree's structure, but leaving this space becomes expensive and allows fewer generations to be displayed at a time. In this work, we propose that the H-tree based layout be used in genealogical software to display ancestral trees. We will show that this layout presents an increase in the number of displayable generations, provides a nicely arranged, symmetrical, intuitive and organized fractal structure, increases the user's ability to understand and navigate through the data, and accounts for the visualization requirements necessary for displaying such trees. Finally, user-study results indicate potential for user acceptance of the new layout", "keywords": ["genealogy", "pedigree", "h-tree"]} {"id": "kp20k_training_720", "title": "The evolution of mobile communications in Europe: The transition from the second to the third generation", "abstract": "This paper analyses the evolution of the mobile communications industry in the European Union. The research focuses its interest on the different roles played by the regulator in Europe and in other regions of the world (mainly the US). The diffusion of GSM was extraordinarily fast in Europe, mainly due to the adoption of a unified standard from inception. This rapid diffusion has resulted in an important competitive advantage for European operators. Interestingly, while the regulator acted similarly in the case of UMTS, the development of the latter has faced many problems and, presently, its diffusion is still low (about 5% in the EU). The paper also offers basic information on market structure that may be useful for extracting some preliminary conclusions about the degree of rivalry within the industry and the differences that can be observed between European countries", "keywords": ["european mobile communications", "2g", "3g", "regulation", "market structure"]} {"id": "kp20k_training_721", "title": "A Decision Support System for Design of Transmission System of Low Power Tractors", "abstract": "A decision support system (DSS) was developed in the Visual Basic 6.0 programming language to design the transmission system of low horsepower agricultural tractors, which involved the design of the clutch and gearbox. The DSS provided a graphical user interface by linking databases to support decisions on the design of the transmission system for low horsepower tractors on the basis of a modified ASABE draft model. The developed program for the design of the tractor transmission system calculated clutch size, gear ratios, number of teeth on each gear, and various gear design parameters. The related deviation was computed for the design of the transmission system of tractors based on measured and predicted (simulated) values. The related deviation was less than 7% for the design of the clutch plate outer diameter and less than 3% for the inner diameter. There was less than 1% variation between the predicted results by the developed DSS and those obtained from actual measurement for the design of the gear ratio. 
The DSS program was user-friendly and efficient for predicting the design of the transmission system for different tractor models to meet the requirements of research institutions and industry. ", "keywords": ["decision support system", "tractor transmission system", "low power tractors", "simulation", "asabe equation"]} {"id": "kp20k_training_722", "title": "Distributed automation: PABADIS versus HMS", "abstract": "Distributed control systems (DCS) have gained huge interest in the automation business. Several approaches have been proposed which aim at the design and application of DCS to improve system flexibility and robustness. Important approaches are (among others) the holonic manufacturing systems (HMS) and the plant automation based on distributed systems (PABADIS) approach. PABADIS deals with plant automation systems in a distributed way using generic mobile and stationary agents and plug and participate facilities within a flat structure as key points of the developed control architecture. HMS deals with a similar structure, but aims more at a control hierarchy of special agents. This paper gives a description of the PABADIS project and makes comparisons between the two concepts, showing advantages and disadvantages of both systems. Based on this paper, it will be possible to observe the abilities and drawbacks of distributed agent-based control systems", "keywords": ["distributed control systems ", "holonic manufacturing systems ", "manufacturing execution system", "multiagent system", "plant automation based on distributed systems "]} {"id": "kp20k_training_723", "title": "A new supervised learning algorithm for multiple spiking neural networks with application in epilepsy and seizure detection", "abstract": "A new Multi-Spiking Neural Network (MuSpiNN) model is presented in which information from one neuron is transmitted to the next in the form of multiple spikes via multiple synapses. A new supervised learning algorithm, dubbed Multi-SpikeProp, is developed for training MuSpiNN. The model and learning algorithm employ the heuristic rules and optimum parameter values presented by the authors in a recent paper that improved the efficiency of the original single-spiking Spiking Neural Network (SNN) model by two orders of magnitude. The classification accuracies of MuSpiNN and Multi-SpikeProp are evaluated using three increasingly more complicated problems: the XOR problem, the Fisher iris classification problem, and the epilepsy and seizure detection (EEG classification) problem. It is observed that MuSpiNN learns the XOR problem in twice the number of epochs compared with the single-spiking SNN model but requires only one-fourth the number of synapses. For the iris and EEG classification problems, a modular architecture is employed to reduce each 3-class classification problem to three 2-class classification problems and improve the classification accuracy. 
For the complicated EEG classification problem a classification accuracy in the range of 90.7%–94.8% was achieved, which is significantly higher than the 82% classification accuracy obtained using the single-spiking SNN with SpikeProp", "keywords": ["spiking neural networks", "epilepsy", "eeg classification", "supervised learning"]} {"id": "kp20k_training_724", "title": "Investigations about replication of empirical studies in software engineering: A systematic mapping study", "abstract": "Two recent mapping studies which were intended to verify the current state of replication of empirical studies in Software Engineering (SE) identified two sets of studies: empirical studies actually reporting replications (published between 1994 and 2012) and a second group of studies that are concerned with definitions, classifications, processes, guidelines, and other research topics or themes about replication work in empirical software engineering research (published between 1996 and 2012). In this current article, our goal is to analyze and discuss the contents of the second set of studies about replications to increase our understanding of the current state of the work on replication in empirical software engineering research. We applied the systematic literature review method to build a systematic mapping study, in which the primary studies were collected by two previous mapping studies covering the period 1996–2012, complemented by manual and automatic search procedures that collected articles published in 2013. We analyzed 37 papers reporting studies about replication published in the last 17 years. These papers explore different topics related to concepts and classifications, present guidelines, and discuss theoretical issues that are relevant for our understanding of replication in our field. We also investigated how these 37 papers have been cited in the 135 replication papers published between 1994 and 2012. Replication in SE still lacks a set of standardized concepts and terminology, which has a negative impact on the replication work in our field. To improve this situation, it is important that the SE research community engage in an effort to create and evaluate taxonomies, frameworks, guidelines, and methodologies to fully support the development of replications", "keywords": ["replications", "experiments", "empirical studies", "mapping study", "systematic literature review", "software engineering"]} {"id": "kp20k_training_725", "title": "worst-case analysis of memory allocation algorithms", "abstract": "Various memory allocation problems can be modeled by the following abstract problem. Given a list A = (α_1, α_2, ..., α_n) of real numbers in the range (0, 1], place these in a minimum number of bins so that no bin holds numbers summing to more than 1. We let A* be the smallest number of bins into which the numbers of list A may be placed. Since a general placement algorithm for attaining A* appears to be impractical, it is important to determine good heuristic methods for assigning numbers to bins. 
We consider four such simple methods and analyze the worst-case performance of each, closely bounding the maximum of the ratio of the number of bins used by each method applied to list A to the optimal quantity A*", "keywords": ["place", "placement", "case", "general", "method", "heuristics", "algorithm", "memory allocation", "abstraction", "optimality", "analysis", "performance"]} {"id": "kp20k_training_726", "title": "Building a financial diagnosis system based on fuzzy logic production system", "abstract": "The purpose of this study is to build a financial expert system based on fuzzy theory and the Fuzzy LOgic Production System (FLOPS), which is an expert tool for processing ambiguity. The study consists of four parts. For the first part, the basic features of expert systems are presented. For the second part, fuzzy concepts and the evolution of classical expert systems to fuzzy expert systems will be presented. For the third part, the expert system shell (FLOPS) used in this study will be described. For the last part, the financial diagnosis system will be presented, developed by using Wall's seven ratios, the traditional seven ratios and also 34 ratios selected by a financial expert. After analyzing and investigating these three kinds of methods, the financial diagnosis system will be developed as a fuzzy expert system which uses a membership function based on averages and standard deviations. At the last step, the new approach will be tried by increasing the fuzzy sets for five membership functions. Some practical examples will be given. Throughout the paper, the way of building a financial diagnosis system based on a fuzzy expert system is stressed", "keywords": ["financial analysis", "expert system", "fuzzy theory", "area of the paper, expert systems and ai-based systems"]} {"id": "kp20k_training_727", "title": "A hybrid Boundary Element-Wave Based Method for an efficient solution of bounded acoustic problems with inclusions", "abstract": "This paper presents a novel hybrid approach for the efficient solution of bounded acoustic problems with arbitrarily shaped inclusions. The hybrid method couples the Wave Based Method (WBM) and the Boundary Element Method (BEM) in order to benefit from the prominent advantages of both. The WBM is based on an indirect Trefftz approach; as such, it uses exact solutions of the governing equations to approximate the field variables. It has a high computational advantage as compared to conventional element based methods, when applied to moderately complex geometries. The BEM, on the other hand, can tackle complex geometries with ease. However, it can be computationally expensive. The hybrid Boundary Element-Wave Based Method (BE-WBM) combines the best properties of the two; it makes use of the efficient WBM for the moderately complex bounded domains and utilizes the flexibility of the BEM for the complex objects that reside in the bounded domains. The accuracy and the efficiency of the method are demonstrated with three numerical examples, where the hybrid BE-WBM is shown to be more efficient than a quadratic Finite Element Method (FEM). While the hybrid method provides an efficient solution for bounded problems with inclusions, it also brings certain conceptual advantages over the FEM. The fact that it is a boundary-type method with an easy refinement concept reduces the modeling effort in the preprocessing step. 
Moreover, for certain optimization scenarios such as optimization of the position of inclusions, the FEM becomes disadvantageous because of its domain discretization requirements for each iteration. On the other hand, the hybrid method allows reuse of the fixed geometries and only needs recalculation of the coupling matrices without further preprocessing. As such, the hybrid method combines efficiency with versatility", "keywords": ["wave based method", "boundary element method", "helmholtz problem", "bounded acoustic problem", "inclusions", "trefftz method"]} {"id": "kp20k_training_728", "title": "A fuzzy clustering-based binary threshold bispectrum estimation approach", "abstract": "A fuzzy clustering bispectrum estimation approach is proposed in this paper and applied to rolling element bearing fault recognition. The method combines basic higher order spectrum theory and the fuzzy clustering technique in data mining. At first, all the bispectrum estimation results of the training samples and test samples are subjected to binarization threshold processing and turned into binary feature images. Then, the binary feature images of the training samples are used to construct object templates including kernel images and domain images. Every fault category has one object template. At last, by calculating the distances between the test samples' binary feature images and the different object templates, the object classification and pattern recognition can be effectively accomplished. Bearings are the most important and most easily damaged components in rotating machinery. Furthermore, there exist large amounts of noise jamming and nonlinear coupling components in bearing vibration signals. Higher Order Cumulants, which can quantitatively describe nonlinear characteristic signals closely related to mechanical faults, are introduced in this paper to de-noise the raw bearing vibration signals and obtain the bispectrum estimation pictures. Finally, the rolling bearing fault diagnosis experiment results showed that the classification was completely correct", "keywords": ["fault recognition", "fuzzy clustering", "bispectrum estimation", "bearing fault"]} {"id": "kp20k_training_729", "title": "Scalability of write-ahead logging on multicore and multisocket hardware", "abstract": "The shift to multi-core and multi-socket hardware brings new challenges to database systems, as the software parallelism determines performance. Even though database systems traditionally accommodate simultaneous requests, a multitude of synchronization barriers serialize execution. Write-ahead logging is a fundamental, omnipresent component in ARIES-style concurrency and recovery, and one of the most important yet-to-be addressed potential bottlenecks, especially in OLTP workloads making frequent small changes to data. In this paper, we identify four logging-related impediments to database system scalability. Each issue challenges a different level in the software architecture: (a) the high volume of small-sized I/O requests may saturate the disk, (b) transactions hold locks while waiting for the log flush, (c) extensive context switching overwhelms the OS scheduler with threads executing log I/Os, and (d) contention appears as transactions serialize accesses to in-memory log data structures. We demonstrate these problems and address them with techniques that, when combined, comprise a holistic, scalable approach to logging. 
Our solution achieves a 20–69% speedup over a modern database system when running log-intensive workloads, such as the TPC-B and TATP benchmarks, in a single-socket multiprocessor server. Moreover, it achieves a log insert throughput of over 2.2 GB/s for small log records on the single-socket server, roughly 20 times higher than the traditional way of accessing the log using a single mutex. Furthermore, we investigate techniques for scaling the performance of logging to multi-socket servers. We present a set of optimizations which partly ameliorate the latency penalty that comes with multi-socket hardware, and then we investigate the feasibility of applying a distributed log buffer design at the socket level", "keywords": ["log manager", "early lock release", "flush pipelining", "log buffer contention", "consolidation array", "scaling to multisockets"]} {"id": "kp20k_training_730", "title": "Time-Driven Priority Router Implementation: Analysis and Experiments", "abstract": "Low complexity solutions to provide deterministic quality over packet switched networks while achieving high resource utilization have been an open research issue for many years. Service differentiation combined with resource overprovisioning has been considered an acceptable compromise and widely deployed given that the amount of traffic requiring quality guarantees has been limited. This approach is not viable, though, as new bandwidth-hungry applications, such as video on demand, telepresence, and virtual reality, populate networks, invalidating the rationale that made it acceptable so far. Time-driven priority represents a potentially interesting solution. However, the fact that the network operation is based on a time reference shared by all nodes raises concerns about the complexity of the nodes, from the point of view of both their hardware and software architecture. This work analyzes the implications that the timing requirements of time-driven priority have on network nodes and shows how proper operation can be ensured even when system components introduce timing uncertainties. Experimental results on a time-driven priority router implementation based on a personal computer both validate the analysis and demonstrate the feasibility of the technology even on an architecture that is not designed for operating under timing constraints", "keywords": ["architecture related performance", "experiments on a network testbed", "packet scheduling", "time-driven priority"]} {"id": "kp20k_training_731", "title": "Computationally sound symbolic security reduction analysis of the group key exchange protocols using bilinear pairings", "abstract": "The security of the group key exchange protocols has been widely studied in the cryptographic community in recent years. Current work usually applies either the computational approach or the symbolic approach for security analysis. The symbolic approach is more efficient than the computational approach, because it can be easily automated. However, compared with the computational approach, it has to overcome three challenges: (1) the computational soundness is unclear; (2) the number of participants must be fixed; and (3) the advantage of efficiency disappears if the number of participants is large. This paper proposes a computationally sound symbolic security reduction approach to resolve these three issues. 
On the one hand, combined with the properties of the bilinear pairings, the universally composable symbolic analysis (UCSA) approach is extended from the two-party protocols to the group key exchange protocols. Meanwhile, the computational soundness of the symbolic approach is guaranteed. On the other hand, for the group key exchange protocols which satisfy the syntax of the simple protocols proposed in this paper, the security is proved to be unrelated to the number of participants. As a result, the symbolic approach just needs to deal with the protocols among three participants. This gives the symbolic approach the ability to handle an arbitrary number of participants. Therefore, the advantage of efficiency is still guaranteed. The proposed approach can also be applied to other types of cryptographic primitives besides bilinear pairing for computationally sound and efficient symbolic analysis of group key exchange protocols", "keywords": ["universally composable symbolic analysis", "computational soundness", "bilinear pairing", "group key exchange protocol"]} {"id": "kp20k_training_732", "title": "Dynamic gradient method for PEBS detection in power system transient stability assessment", "abstract": "In methods for assessing the critical clearing time based on the transient energy function, the dominant procedures in use for detecting the exit point across the potential energy boundary surface (PEBS) are the ray and the gradient methods. Because both methods rely on the geometrical characteristics of the post-fault potential energy surface, they may yield erroneous results. In this paper, a more reliable method for PEBS detection is proposed. It is called the dynamic gradient method to indicate that from a given system state, a small portion of the trajectory of the gradient system is approximated and tested for convergence toward the post-fault stable equilibrium point. It is shown that a trade-off between computing time and reliability can be found as the number of machines in the system becomes greater. The method is illustrated on 3-machine and 10-machine systems", "keywords": ["lyapunov methods", "transient stability", "electric power systems", "gradient system", "potential energy boundary surface ", "critical clearing time"]} {"id": "kp20k_training_733", "title": "Exact Matrix Completion via Convex Optimization", "abstract": "We consider a problem of considerable practical interest: the recovery of a data matrix from a sampling of its entries. Suppose that we observe m entries selected uniformly at random from a matrix M. Can we complete the matrix and recover the entries that we have not seen? We show that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries. We prove that if the number m of sampled entries obeys m >= C n^{1.2} r log n for some positive numerical constant C, then with very high probability, most n × n matrices of rank r can be perfectly recovered by solving a simple convex optimization program. This program finds the matrix with minimum nuclear norm that fits the data. The condition above assumes that the rank is not too large. However, if one replaces the 1.2 exponent with 1.25, then the result holds for all values of the rank. Similar results hold for arbitrary rectangular matrices as well. 
Our results are connected with the recent literature on compressed sensing, and show that objects other than signals and images can be perfectly reconstructed from very limited information", "keywords": ["matrix completion", "low-rank matrices", "convex optimization", "duality in optimization", "nuclear norm minimization", "random matrices", "noncommutative khintchine inequality", "decoupling", "compressed sensing"]} {"id": "kp20k_training_734", "title": "A general framework for expressing preferences in causal reasoning and planning", "abstract": "We consider the problem of representing arbitrary preferences in causal reasoning and planning systems. In planning, a preference may be seen as a goal or constraint that is desirable, but not necessary, to satisfy. To begin, we define a very general query language for histories, or interleaved sequences of world states and actions. Based on this, we specify a second language in which preferences are defined. A single preference defines a binary relation on histories, indicating that one history is preferred to the other. From this, one can define global preference orderings on the set of histories, the maximal elements of which are the preferred histories. The approach is very general and flexible; thus it constitutes a base language in terms of which higher-level preferences may be defined. To this end, we investigate two fundamental types of preferences that we call choice and temporal preferences. We consider concrete strategies for these types of preferences and encode them in terms of our framework. We suggest how to express aggregates in the approach, allowing, e.g. the expression of a preference for histories with lowest total action costs. Last, our approach can be used to express other approaches and so serves as a common framework in which such approaches can be expressed and compared. We illustrate this by indicating how an approach due to Son and Pontelli can be encoded in our approach, as well as the language PDDL3", "keywords": ["knowledge representation", "logical representations of preferences", "preferences", "planning"]} {"id": "kp20k_training_735", "title": "A cost sensitive decision tree algorithm with two adaptive mechanisms", "abstract": "An adaptive cut point selection mechanism is designed to build a classifier. An adaptive attribute removing mechanism will remove the redundant attributes. We adopt these two mechanisms to design an algorithm for classifier construction. Experimental results show the effectiveness and feasibility of our algorithm", "keywords": ["adaptive mechanisms", "cost sensitive", "decision tree", "granular computing"]} {"id": "kp20k_training_736", "title": "Solving the Buckley-Leverett equation with gravity in a heterogeneous porous medium", "abstract": "Immiscible two-phase flow in porous media can be described by the fractional flow model. If capillary forces are neglected, then the saturation equation is a non-linear hyperbolic conservation law, known as the Buckley-Leverett equation. This equation can be numerically solved by the method of Godunov, in which the saturation is computed from the solution of Riemann problems at cell interfaces. At a discontinuity of permeability this solution has to be constructed from two flux functions. In order to determine a unique solution an entropy inequality is needed. In this article an entropy inequality is derived from a regularisation procedure, where the physical capillary pressure term is added to the Buckley-Leverett equation. 
This entropy inequality determines unique solutions of Riemann problems for all initial conditions. It leads to a simple recipe for the computation of interface fluxes for the method of Godunov", "keywords": ["buckley-leverett equation", "two-phase flow", "heterogeneous porous medium", "fractional flow model", "riemann problem", "entropy inequality", "godunov scheme"]} {"id": "kp20k_training_737", "title": "An exploratory study of enterprise resource planning adoption in Greek companies", "abstract": "Purpose - To examine enterprise resource planning (ERP) adoption in Greek companies, and explore the effects of uncertainty on the performance of these systems and the methods used to cope with uncertainty. Design/methodology/approach - This research was exploratory and six case studies were generated. This work was part of a larger project on the adoption, implementation and integration of ERP systems in Greek enterprises. A taxonomy of ERP adoption research was developed from the literature review and used to underpin the issues investigated in these cases. The results were compared with the literature on ERP adoption in the USA and UK. Findings - There were major differences between ERP adoption in Greek companies and companies in other countries. The adoption, implementation and integration of ERP systems were fragmented in Greek companies. This fragmentation demonstrated that the internal enterprise's culture, resources available, skills of employees, and the way ERP systems are perceived, treated and integrated within the business and in the supply chain, play critical roles in determining the success/failure of ERP systems adoption. A warehouse management system was adopted by some Greek enterprises to cope with uncertainty. Research limitations/implications - A comparison of ERP adoption was made between the USA, UK and Greece, and may limit its usefulness elsewhere. Practical implications - Practical advice is offered to managers contemplating adopting ERP. Originality/value - A new taxonomy of ERP adoption research was developed, which refocused the ERP implementation and integration into related critical success/failure factors and total integration issues, thus providing a more holistic ERP adoption framework", "keywords": ["uncertainty management", "greece", "resource management"]} {"id": "kp20k_training_738", "title": "Classification of newborn EEG maturity with Bayesian averaging over decision trees", "abstract": "EEG experts can assess a newborn's brain maturity by visual analysis of age-related patterns in sleep EEG. It is highly desirable to make the results of assessment most accurate and reliable. However, the expert analysis is limited in capability to provide the estimate of uncertainty in assessments. Bayesian inference has been shown to provide the most accurate estimates of uncertainty by using Markov Chain Monte Carlo (MCMC) integration over the posterior distribution. The use of MCMC makes it possible to approximate the desired distribution by sampling the areas of interest in which the density of the distribution is high. In practice, the posterior distribution can be multimodal, so that the existing MCMC techniques cannot provide proportional sampling from the areas of interest. The lack of prior information makes MCMC integration more difficult when a model parameter space is large and cannot be explored in detail within a reasonable time. In particular, the lack of information about EEG feature importance can affect the results of Bayesian assessment of EEG maturity. 
In this paper we explore how the posterior information about EEG feature importance can be used to reduce the negative influence of disproportional sampling on the results of Bayesian assessment. We found that the MCMC integration tends to oversample the areas in which a model parameter space includes one or more features whose importance, counted in terms of their posterior use, is low. Using this finding, we proposed to cure the results of MCMC integration and then described the results of testing the proposed method on a set of sleep EEG recordings", "keywords": ["eeg", "brain maturity", "bayesian model averaging", "markov chain monte carlo"]} {"id": "kp20k_training_739", "title": "Parametric estimation of the continuous non-stationary spectrum and its dynamics in surface EMG studies", "abstract": "The frequency spectrum of surface electromyographic signals (SEMGs) exhibits a non-stationary nature even in the case of constant level isometric muscle contractions due to changes related to muscle fatigue processes. These changes can be evaluated by methods for estimation of time-varying (TV) spectrum. The most widely adopted non-parametric approach is a short time Fourier transform (STFT), from which changes of mean frequency (MF) as well as other parameters for qualitative description of spectrum variation can be calculated. A similar idea of a sliding-window generalisation can also be used in the case of parametric spectrum analysis methods. We applied such an approach to obtain TV linear models of SEMGs, although its large variance due to independence of estimations in consecutive windows represents a major drawback. This variance causes unrealistic abrupt changes in the curve of overall spectrum dynamics, calculated either as the second derivative of the MF or, as we propose, as the autoregressive moving average (ARMA) distance between subsequent linear models forming the TV parametric spectrum. A smoother estimation is therefore sought and another method proves to be superior to a simple sliding-window technique. It supposes that trajectories of TV linear model coefficients can be described as linear combinations of known basis functions. We demonstrate that the latter method is very appropriate for description of slowly changing spectra of SEMGs and that dynamics measures obtained from such estimations can be used as an additional indication of the fatigue process", "keywords": ["surface electromyography", "muscle fatigue", "time-varying linear modelling", "dynamic signals"]} {"id": "kp20k_training_740", "title": "A multiple-case design methodology for studying MRP success and CSFs", "abstract": "We used a multiple-case design to study materials requirements planning (MRP) implementation outcome in 10 manufacturing companies in Singapore. Using a two-phased data collection approach (pre-interview questionnaires and personal interviews), we sought to develop a comprehensive and operationally acceptable measure of MRP success. Our measure consists of two linked components. They are a satisfaction score (a quantitative measure) and a complementary measure based on comments from the interviewees regarding the level of usage and acceptance of the system. We also extended and consolidated a seven-factor critical success factor (CSF) framework using this methodology. 
CSFs are important, but knowing the linkages between them is even more important, because these linkages tell us which CSFs to emphasize at various stages of the project", "keywords": ["materials requirements planning ", "critical success factors ", "multiple-case design", "mrp implementation outcome"]} {"id": "kp20k_training_741", "title": "Hybrid heuristic-waterfilling game theory approach in MC-CDMA resource allocation", "abstract": "This paper discusses the power allocation with fixed rate constraint problem in multi-carrier code division multiple access (MC-CDMA) networks, which has been solved from a game-theoretic perspective by the use of an iterative water-filling algorithm (IWFA). The problem is analyzed under various interference density configurations, and its reliability is studied in terms of solution existence and uniqueness. Moreover, numerical results reveal the approach's shortcoming; thus a new method combining swarm intelligence and IWFA is proposed to make practicable the use of game-theoretic approaches in realistic MC-CDMA system scenarios. The contribution of this paper is twofold: (i) to provide a complete analysis of the existence and uniqueness of the game solution, from simple to more realistic and complex interference scenarios; (ii) to propose a hybrid power allocation optimization method combining swarm intelligence, game theory and IWFA. To corroborate the effectiveness of the proposed method, an outage probability analysis in realistic interference scenarios, and a complexity comparison with the classical IWFA are presented. ", "keywords": ["power-rate allocation control", "siso multi-rate mc-cdma", "game theory", "iterative water-filling algorithm", "qos"]} {"id": "kp20k_training_742", "title": "Direct type-specific conic fitting and eigenvalue bias correction", "abstract": "A new method to fit specific types of conics to scattered data points is introduced. Direct, specific fitting of ellipses and hyperbolae is achieved by imposing a quadratic constraint on the conic coefficients, whereby an improved partitioning of the design matrix is devised so as to improve computational efficiency and numerical stability by eliminating redundant aspects of the fitting procedure. Fitting of parabolas is achieved by determining an orthogonal basis vector set in the Grassmannian space of the quadratic terms' coefficients. The linear combination of the basis vectors that fulfills the parabolic condition and has a minimum residual norm is determined using Lagrange multipliers. This is the first known direct solution for parabola-specific fitting. Furthermore, the inherent bias of a linear conic fit is addressed. We propose a linear method of correcting this bias, producing better geometric fits which are still constrained to a specific conic type", "keywords": ["curve fitting", "conics", "constrained least squares"]} {"id": "kp20k_training_743", "title": "Communication with WWW in Czech", "abstract": "This paper describes UIO, a multi-domain question-answering system for the Czech language that looks for answers on the web. UIO exploits two fields, namely natural language interfaces to databases and question answering. In its current version, UIO can be used for asking questions about train and coach timetables, cinema and theatre performances, currency exchange rates, name-days and the Diderot Encyclopaedia. Much effort has been made to make the addition of a new domain very easy. No limits concerning words or the form of a question need to be set in UIO.
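The quadratic-constraint idea in kp20k_training_742 can be illustrated with the classic direct least-squares ellipse fit (constraint 4ac - b^2 = 1); this sketch does not reproduce the paper's improved design-matrix partitioning or its parabola-specific solution:

```python
import numpy as np

def fit_ellipse_direct(x, y):
    # Conic f = a x^2 + b x y + c y^2 + d x + e y + f; rows of D evaluate it.
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                       # scatter matrix
    C = np.zeros((6, 6))              # encodes a'Ca = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # Minimizing a'Sa subject to a'Ca = 1 leads to S a = lambda C a; the
    # ellipse solution is the eigenvector with the single positive eigenvalue.
    w, V = np.linalg.eig(np.linalg.solve(S, C))
    return V[:, np.argmax(w.real)].real

t = np.linspace(0, 2 * np.pi, 60)
rng = np.random.default_rng(0)
x = 3.0 * np.cos(t) + 0.5 + 0.02 * rng.standard_normal(t.size)  # noise keeps S invertible
y = 1.5 * np.sin(t) - 2.0 + 0.02 * rng.standard_normal(t.size)
print(fit_ellipse_direct(x, y))
```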
Users can ask syntactically correct as well as incorrect questions, or use keywords. A Czech morphological analyser and a bottom-up chart parser are employed for analysis of the question. The database of multi-word expressions is automatically updated when a new item has been found on the web. For all domains UIO has an accuracy rate of about 80", "keywords": ["question answering", "natural language processing"]} {"id": "kp20k_training_744", "title": "efficient coloring of a large spectrum of graphs", "abstract": "We have developed a new algorithm and software for graph coloring by systematically combining several algorithm and software development ideas that had a crucial impact on the algorithm's performance. The algorithm explores the divide-and-conquer paradigm, global search for constrained independent sets using a computationally inexpensive objective function, assignment of most-constrained vertices to least-constraining colors, reuse and locality exploration of intermediate solutions, search time management, post-processing lottery-scheduling iterative improvement, and statistical parameter determination and validation. The algorithm was tested on a set of real-life examples. We found that hard-to-color real-life examples are common, especially in domains where problem modeling results in denser graphs. Systematic experimentation demonstrated that for numerous instances the algorithm outperformed all other implementations reported in the literature in solution quality and run-time", "keywords": ["statistics", "scheduling", "reuse", "quality", "color", "examples", "efficiency", "software development", "performance", "domain", "spread spectrum communication", "divide-and-conquer", "object", "timing", "model", "combinational", "management", "search", "locality", "exploration", "software", "experimentation", "determinism", "ism frequency band", "digital radio", "rf cmos", "transceiver", "functional", "assignment", "implementation", "process", "graph", "algorithm", "graph coloring", "iter", "global"]} {"id": "kp20k_training_745", "title": "Boolean Equations and Boolean Inequations", "abstract": "In this paper we consider Boolean inequations of the form f(X) != 0. We also consider the system of a Boolean inequation and a Boolean equation, f(X) != 0 and g(X) = 0, and we describe all the solutions of this system", "keywords": ["boolean inequations", "boolean functions"]} {"id": "kp20k_training_746", "title": "Time efficient centralized gossiping in radio networks", "abstract": "In this paper we study the gossiping problem (all-to-all communication) in radio networks where all nodes are aware of the network topology. We start our presentation with a deterministic gossiping algorithm that works in at most n units of time in any radio network of size n. This algorithm is optimal in the worst-case scenario, since there exist radio network topologies, such as lines, stars and complete graphs, in which radio gossiping cannot be completed in less than n communication rounds. Furthermore, we show that there does not exist any radio network topology in which the gossiping task can be solved in less than [log(n - 1)] + 2 rounds. We also show that this lower bound can be matched from above for a fraction of all possible integer values of n, and for all other values of n we propose a solution which accomplishes gossiping in [log(n - 1)] + 2 rounds. Then we show an almost optimal radio gossiping algorithm in trees, which misses the optimal time complexity by a single round.
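One ingredient of kp20k_training_744, assigning most-constrained vertices to least-constraining colors, can be sketched as a DSATUR-style greedy pass; this is only one heuristic from the paper's combination, under assumed data structures:

```python
def greedy_color(adj):
    """Color the most saturated (most-constrained) uncolored vertex first,
    giving it the smallest feasible color. `adj` maps vertex -> set of
    neighbours; the divide-and-conquer and iterative-improvement stages of
    the full algorithm are not reproduced here."""
    colors = {}
    while len(colors) < len(adj):
        # Pick the uncolored vertex with the most distinctly-colored neighbours.
        v = max((u for u in adj if u not in colors),
                key=lambda u: len({colors[w] for w in adj[u] if w in colors}))
        used = {colors[w] for w in adj[v] if w in colors}
        colors[v] = next(c for c in range(len(adj)) if c not in used)
    return colors

# 4-cycle plus a chord: 3 colors suffice.
print(greedy_color({0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}))
```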
Finally, we study asymptotically optimal O(D)-time gossiping (where D is the diameter of the network) in graphs with maximum degree Delta = O(D^(1 - 1/(i+1)) / log^(i) n), for any integer constant i >= 0 and D large enough. ", "keywords": ["radio networks", "broadcasting and gossiping", "centralized algorithms"]} {"id": "kp20k_training_747", "title": "Tree kernel-based protein-protein interaction extraction from biomedical literature", "abstract": "There is a surge of research interest in protein-protein interaction (PPI) extraction from biomedical literature. While most of the state-of-the-art PPI extraction systems focus on dependency-based structured information, the rich structured information inherent in constituent parse trees has not been extensively explored for PPI extraction. In this paper, we propose a novel approach to tree kernel-based PPI extraction, where the tree representation generated from a constituent syntactic parser is further refined using the shortest dependency path between two proteins derived from a dependency parser. Specifically, all the constituent tree nodes associated with the nodes on the shortest dependency path are kept intact, while other nodes are removed safely to make the constituent tree concise and precise for PPI extraction. Compared with previously used constituent tree setups, our dependency-motivated constituent tree setup achieves the best results across five commonly used PPI corpora. Moreover, our tree kernel-based method outperforms other single kernel-based ones and performs comparably with some multiple kernel ones on the most commonly tested AIMed corpus. ", "keywords": ["protein-protein interaction", "convolution tree kernel", "constituent parse tree", "shortest dependency path"]} {"id": "kp20k_training_748", "title": "A review of content-based image retrieval systems in medical applications - clinical benefits and future directions", "abstract": "Content-based visual information retrieval (CBVIR) or content-based image retrieval (CBIR) has been one of the most vivid research areas in the field of computer vision over the last 10 years. The availability of large and steadily growing amounts of visual and multimedia data, and the development of the Internet underline the need to create thematic access methods that offer more than simple text-based queries or requests based on matching exact database fields. Many programs and tools have been developed to formulate and execute queries based on the visual or audio content and to help browsing large multimedia repositories. Still, no general breakthrough has been achieved with respect to large varied databases with documents of differing sorts and with varying characteristics. Many questions with respect to speed, semantic descriptors or objective image interpretations are still unanswered. In the medical field, images, and especially digital images, are produced in ever-increasing quantities and used for diagnostics and therapy. The Radiology Department of the University Hospital of Geneva alone produced more than 12,000 images a day in 2002. Cardiology is currently the second largest producer of digital images, especially with videos of cardiac catheterization (approximately 1800 exams per year containing almost 2000 images each). The total amount of cardiologic image data produced in the Geneva University Hospital was around 1 TB in 2002. Endoscopic videos can equally produce enormous amounts of data.
With digital imaging and communications in medicine (DICOM), a standard for image communication has been set, and patient information can be stored with the actual image(s), although a few problems still prevail with respect to the standardization. In several articles, content-based access to medical images for supporting clinical decision-making has been proposed that would ease the management of clinical data, and scenarios for the integration of content-based access methods into picture archiving and communication systems (PACS) have been created. This article gives an overview of available literature in the field of content-based access to medical image data and on the technologies used in the field. Section 1 gives an introduction into generic content-based image retrieval and the technologies used. Section 2 explains the propositions for the use of image retrieval in medical practice and the various approaches. Example systems and application areas are described. Section 3 describes the techniques used in the implemented systems, their datasets and evaluations. Section 4 identifies possible clinical benefits of image retrieval systems in clinical practice as well as in research and education. New research directions are being defined that can prove to be useful. This article also identifies explanations for some of the outlined problems in the field, as it looks like many propositions for systems are made from the medical domain and research prototypes are developed in computer science departments using medical datasets. Still, there are very few systems that seem to be used in clinical practice. It needs to be stated as well that the goal is not, in general, to replace text-based retrieval methods as they exist at the moment but to complement them with visual search tools. ", "keywords": ["medical image retrieval", "content-based search", "visual information retrieval", "pacs"]} {"id": "kp20k_training_749", "title": "Infinitesimal Plane-Based Pose Estimation", "abstract": "Estimating the pose of a plane given a set of point correspondences is a core problem in computer vision with many applications including Augmented Reality (AR), camera calibration and 3D scene reconstruction and interpretation. Despite much progress over recent years there is still the need for a more efficient and more accurate solution, particularly in mobile applications where the run-time budget is critical. We present a new analytic solution to the problem which is far faster than current methods based on solving Pose from n Points (PnP) and is in most cases more accurate. Our approach involves a new way to exploit redundancy in the homography coefficients. This uses the fact that when the homography is noisy it will estimate the true transform between the model plane and the image better at some regions on the plane than at others. Our method is based on locating a point where the transform is best estimated, and using only the local transformation at that point to constrain pose. This involves solving pose with a local non-redundant 1st-order PDE. We call this framework Infinitesimal Plane-based Pose Estimation (IPPE), because one can think of it as solving pose using the transform about an infinitesimally small region on the surface. We show experimentally that IPPE leads to very accurate pose estimates. Because IPPE is analytic it is both extremely fast and allows us to fully characterise the method in terms of degeneracies, number of returned solutions, and the geometric relationship of these solutions.
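As a toy illustration of the content-based querying surveyed in kp20k_training_748, the following sketch ranks images by colour-histogram intersection; real medical CBIR systems use far richer descriptors, and everything here is an assumed minimal setup:

```python
import numpy as np

def colour_histogram(img, bins=8):
    # img: H x W x 3 uint8 array; joint colour histogram, normalised to sum 1.
    h, _ = np.histogramdd(img.reshape(-1, 3).astype(float),
                          bins=(bins,) * 3, range=((0, 256),) * 3)
    return (h / h.sum()).ravel()

def retrieve(query, database):
    # Rank database images by histogram intersection with the query.
    q = colour_histogram(query)
    scores = [np.minimum(q, colour_histogram(im)).sum() for im in database]
    return np.argsort(scores)[::-1]

rng = np.random.default_rng(0)
db = [rng.integers(0, 256, size=(64, 64, 3)) for _ in range(10)]
print(retrieve(db[3], db))  # db[3] should rank itself first
```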
This characterisation is not possible with state-of-the-art PnP methods", "keywords": ["plane", "pose", "sfm", "pnp", "homography"]} {"id": "kp20k_training_750", "title": "Evolutionary learning of spiking neural networks towards quantification of 3D MRI brain tumor tissues", "abstract": "This paper presents a new classification technique for 3D MR images, based on a third-generation network of spiking neurons. The implementation of multi-dimensional co-occurrence matrices for the identification of pathological tumor tissue and normal brain tissue features is assessed. The results show the ability of the spiking classifier, iteratively trained with a genetic algorithm, to automatically and simultaneously recover tissue-specific structural patterns and achieve segmentation of the tumor part. The spiking network classifier has been validated and tested on various real-time and Harvard benchmark datasets, where appreciable performance in terms of mean square error, accuracy and computational time is obtained. The spiking network employed Izhikevich neurons as nodes in a multi-layered structure. The classifier has been compared with the computational power of multi-layer neural networks with sigmoidal neurons. The results on misclassified tumors are analyzed and suggestions for future work are discussed", "keywords": ["3d magnetic resonance imaging", "multi-dimensional co-occurrence matrices", "spiking neural networks", "izhikevich neurons", "genetic algorithm"]} {"id": "kp20k_training_751", "title": "Inferential queueing and speculative push", "abstract": "Communication latencies within critical sections constitute a major bottleneck in some classes of emerging parallel workloads. In this paper, we argue for the use of two mechanisms to reduce these communication latencies: Inferentially Queued locks (IQLs) and Speculative Push (SP). With IQLs, the processor infers the existence, and limits, of a critical section from the use of synchronization instructions and joins a queue of lock requestors, reducing synchronization delay. The SP mechanism extracts information about program structure by observing IQLs. SP allows the cache controller, responding to a request for a cache line that likely includes a lock variable, to predict the data sets the requestor will modify within the associated critical section. The controller then pushes these lines from its own cache to the target cache, as well as writing them to memory. Overlapping the protected data transfer with that of the lock can substantially reduce the communication latencies within critical sections. By pushing data in exclusive state, the mechanism can collapse a read-modify-write sequence within a critical section into a single local cache access. The write-back to memory allows the receiving cache to ignore the push. Neither mechanism requires any programmer or compiler support nor any instruction set changes. Our experiments demonstrate that IQLs and SP can improve performance of applications employing frequent synchronization", "keywords": ["synchronization", "data forwarding", "inferential queueing", "critical sections", "migratory sharing"]} {"id": "kp20k_training_752", "title": "The thermal failure process of the quantum cascade laser", "abstract": "We report the thermal failure process of the quantum cascade laser. Firstly, high temperature and strain in the active region are verified by Raman spectra, and conspicuous characteristics of catastrophic failure are observed by scanning electron microscopy.
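The node model named in kp20k_training_750 is the Izhikevich neuron, whose published update equations make for a compact simulation sketch (the a, b, c, d values below are the standard regular-spiking parameters, not settings from the paper):

```python
import numpy as np

def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    # Izhikevich model: v' = 0.04 v^2 + 5 v + 140 - u + I, u' = a (b v - u),
    # with reset v <- c, u <- u + d whenever v reaches 30 mV.
    v, u, spikes = c, b * c, []
    for t, i_t in enumerate(I):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + i_t)
        u += dt * a * (b * v - u)
        if v >= 30.0:
            spikes.append(t)
            v, u = c, u + d
    return spikes

# A step current produces the classic regular-spiking response.
print(len(izhikevich(np.full(1000, 10.0))), "spikes")
```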
Secondly, the defects generate serious structure disorder due to the high temperature of the active region and the resulting strain relaxation. Thirdly, abundant atomic diffusion in the active region and substrate is observed. The structure disorder and the change of element composition in the active region directly lead to failure of the quantum cascade laser. The theoretical analysis fits well with the results of experimental studies", "keywords": ["quantum cascade lasers ", "thermal", "failure", "high temperature"]} {"id": "kp20k_training_754", "title": "Stable advection-reaction-diffusion with arbitrary anisotropy", "abstract": "Turing first theorized that many biological patterns arise through the processes of reaction and diffusion. Subsequently, reaction-diffusion systems have been studied in many fields, including computer graphics. We first show that for visual simulation purposes, reaction-diffusion equations can be made unconditionally stable using a variety of straightforward methods. Second, we propose an anisotropy embedding that significantly expands the space of possible patterns that can be generated. Third, we show that by adding an advection term, the simulation can be coupled to a fluid simulation to produce visually appealing flows. Fourth, we couple fast marching methods to our anisotropy embedding to create a painting interface to the simulation. Unconditional stability is maintained throughout, and our system runs at interactive rates. Finally, we show that on the Cell processor, it is possible to implement reaction-diffusion on top of an existing fluid solver with no significant performance impact. ", "keywords": ["physically based animation", "fluid simulation", "computer animation"]} {"id": "kp20k_training_755", "title": "Alexander duality and moments in reliability modelling", "abstract": "There are strong connections between coherent systems in reliability for systems which have components with a finite number of states and certain algebraic structures. A special case is binary systems where there are two states: fail and not fail. The connection relies on an order property in the system and a way of coding states alpha = (alpha_1, ..., alpha_d) with monomials x^alpha = x_1^(alpha_1) ... x_d^(alpha_d). The algebraic entities are the Scarf complex and the associated Alexander duality. The failure ''event'' can be studied using these ideas, and identities and bounds can be derived when the algebra is combined with probability distributions on the states. The x^alpha coding aids the use of moments mu_alpha = E(X^alpha) with respect to the underlying distribution", "keywords": ["reliability", "monomial ideals", "scarf complex", "alexander dual", "hilbert series", "moments"]} {"id": "kp20k_training_756", "title": "Failure identification for linear repetitive processes", "abstract": "This paper investigates the fault detection and isolation (FDI) problem for discrete-time linear repetitive processes using a geometric approach, starting from a 2-D model for these processes that incorporates a representation of the failure. Based on this model, the FDI problem is formulated in the geometric setting and sufficient conditions for solvability of this problem are given. Moreover, the process's behaviour in the presence of noise is considered, leading to the development of a statistical approach for determining a decision threshold.
Finally, an FDI procedure is developed based on an asymptotic observer reconstruction of the state vector", "keywords": ["fault detection and isolation", "fdi", "geometric approach", "linear repetitive processes", "multidimensional systems"]} {"id": "kp20k_training_757", "title": "Determinants of web site information by Spanish city councils", "abstract": "Purpose - The purpose of this research is to analyse the web sites of large Spanish city councils with the objective of assessing the extent of information disseminated on the internet and determining what factors are affecting the observed levels of information disclosure. Design/methodology/approach - The study takes as its reference point the existing literature on the examination of the quality of web sites, in particular the provisions of the Web Quality Model (WQM) and the importance of content as a key variable in determining web site quality. In order to quantify the information on city council web sites, a Disclosure Index has been designed which takes into account the content, navigability and presentation of the web sites. In order to test which variables determine the information provided on the web sites, our investigation draws on the studies about voluntary disclosure in the public sector, and six linear regression models have been estimated. Findings - The empirical evidence obtained reveals low disclosure levels among Spanish city council web sites. In spite of this, almost 50 per cent of the city councils have reached the "approved" level and of these, around a quarter obtained good marks. Our results show that disclosure levels depend on political competition, public media visibility and the access to technology and educational levels of the citizens. Practical implications - The strategy of communication on the internet by local Spanish authorities is limited in general to an ornamental web presence, one that does not respond efficiently to the requirements of the digital society. During the coming years, local Spanish politicians will have to strive to take advantage of the opportunities that the internet offers to increase both the relational and informational capacity of municipal web sites as well as the digital information transparency of their public management. Originality/value - The internet is a potent channel of communication that is modifying the way in which people access and relate to information and each other. The public sector is not unaware of these changes and is incorporating itself gradually into the new network society. This study systematises the analysis of local administration web sites, showing the lack of digital transparency, and orients politicians in the direction to follow in order to introduce improvements in their electronic relationships with the public", "keywords": ["information strategy", "worldwide web", "internet", "local authorities", "spain"]} {"id": "kp20k_training_758", "title": "The development of regional collaboration for resource efficiency: A network perspective on industrial symbiosis", "abstract": "Three development patterns of industrial symbiosis systems are proposed and empirically examined. Industrial symbiosis networks build on and strengthen the disparity of firms' capabilities in building symbiotic relations. Due to this disparity, self-organized industrial symbiosis networks favor the most capable firms and grow preferentially. Coordinating agencies improve disadvantaged firms' capabilities, and change the preferential growth to a homogeneous one.
Strong government engagement helps disadvantaged firms and facilitates non-preferential symbiosis development in a region", "keywords": ["industrial ecosystem", "by-product", "preferential growth", "homogeneous growth", "complex network", "degree correlation"]} {"id": "kp20k_training_759", "title": "Effective diagnosis of heart disease through neural networks ensembles", "abstract": "In the last decades, several tools and various methodologies have been proposed by researchers for developing effective medical decision support systems. Moreover, new methodologies and new tools continue to be developed and presented day by day. Diagnosing heart disease is an important issue, and many researchers have investigated the development of intelligent medical decision support systems to improve the ability of physicians. In this paper, we introduce a methodology which uses SAS base software 9.1.3 for diagnosing heart disease. A neural network ensemble method is at the centre of the proposed system. This ensemble-based method creates new models by combining the posterior probabilities or the predicted values from multiple predecessor models. In this way, more effective models can be created. We performed experiments with the proposed tool. We obtained 89.01% classification accuracy from the experiments made on the data taken from the Cleveland heart disease database. We also obtained 80.95% and 95.91% sensitivity and specificity values, respectively, in heart disease diagnosis", "keywords": ["heart disease", "sas base software", "neural networks", "ensemble based model"]} {"id": "kp20k_training_760", "title": "Soil carbon model Yasso07 graphical user interface", "abstract": "In this article, we present graphical user interface software for the litter decomposition and soil carbon model Yasso07 and an overview of the principles and formulae it is based on. The software can be used to test the model and use it in simple applications. Yasso07 is applicable to upland soils of different ecosystems worldwide, because it has been developed using data covering the global climate conditions and representing various ecosystem types. As input information, Yasso07 requires data on litter input to soil, climate conditions, and land-use change if any. The model predictions are given as probability densities representing the uncertainties in the parameter values of the model and those in the input data; the user interface calculates these densities using a built-in Monte Carlo simulation", "keywords": ["decomposition", "monte carlo simulation", "software", "soil carbon", "statistical modelling", "uncertainty estimation"]} {"id": "kp20k_training_761", "title": "time synchronization attacks in sensor networks", "abstract": "Time synchronization is a critical building block in distributed wireless sensor networks. Because sensor nodes may be severely resource-constrained, traditional time-synchronization protocols cannot be used in sensor networks. Various time-synchronization protocols tailored for such networks have been proposed to solve this problem. However, none of these protocols have been designed with security in mind. If an adversary were able to compromise a node, he might prevent a network from effectively executing certain applications, such as sensing or tracking an object, or he might even disable the network by disrupting a fundamental service such as a TDMA-based channel-sharing scheme.
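The core mechanism of kp20k_training_759, averaging posterior probabilities over several predecessor models, can be sketched generically with scikit-learn; the synthetic data and small perceptrons below are stand-ins for the paper's SAS-based setup and the Cleveland data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical 13-feature binary problem, echoing the shape of heart-disease data.
X, y = make_classification(n_samples=600, n_features=13, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Ensemble: average the posterior probabilities of differently-seeded networks.
probs = np.mean(
    [MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=s)
     .fit(Xtr, ytr).predict_proba(Xte) for s in range(5)],
    axis=0)
print("ensemble accuracy:", (probs.argmax(axis=1) == yte).mean())
```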
In this paper we give a survey of the most common time synchronization protocols and outline the possible attacks on each protocol. In addition, we discuss how different sensor network applications are affected by time synchronization attacks, and we propose some countermeasures for these attacks", "keywords": ["sharing", "network", "clock drift and skew", "sensor", "authentication", "applications", "security", "time-synchronization", "survey", "object", "building block", "tracking", "paper", "distributed", "critic", "wireless sensor network", "attack", "resource", "senses", "sensor network", "scheme", "tdma"]} {"id": "kp20k_training_762", "title": "Enhanced Privacy ID: A Direct Anonymous Attestation Scheme with Enhanced Revocation Capabilities", "abstract": "Direct Anonymous Attestation (DAA) is a scheme that enables the remote authentication of a Trusted Platform Module (TPM) while preserving the user's privacy. A TPM can prove to a remote party that it is a valid TPM without revealing its identity and without linkability. In the DAA scheme, a TPM can be revoked only if the DAA private key in the hardware has been extracted and published widely so that verifiers obtain the corrupted private key. If the unlinkability requirement is relaxed, a TPM suspected of being compromised can be revoked even if the private key is not known. However, with the full unlinkability requirement intact, if a TPM has been compromised but its private key has not been distributed to verifiers, the TPM cannot be revoked. Furthermore, a TPM cannot be revoked by the issuer if the TPM is found to be compromised after the DAA issuing has occurred. In this paper, we present a new DAA scheme called the Enhanced Privacy ID (EPID) scheme that addresses the above limitations. While still providing unlinkability, our scheme provides a method to revoke a TPM even if the TPM private key is unknown. This expanded revocation property makes the scheme useful for other applications such as driver's licenses. Our EPID scheme is efficient and provably secure in the same security model as DAA, i.e., in the random oracle model under the strong RSA assumption and the decisional Diffie-Hellman assumption", "keywords": ["security and protection", "anonymity", "privacy", "cryptographic protocols", "trusted computing"]} {"id": "kp20k_training_763", "title": "using predicate path information in hardware to determine true dependences", "abstract": "Predicated Execution has been put forth as a method for improving processor performance by removing hard-to-predict branches. As part of the process of turning a set of basic blocks into a predicated region, both paths of a branch are combined into a single path. There can be multiple definitions from disjoint paths that reach a use. Waiting to find out the correct definition that actually reaches the use can cause pipeline stalls. In this paper we examine a hardware optimization that dynamically collects and analyzes path information to determine valid dependences for predicated regions of code. We then use this information for an in-order VLIW predicated processor, so that instructions can continue towards execution without having to wait on operands from false dependences.
Our results show that using our Disjoint Path Analysis System provides speedups of over 6% and elimination of false RAW dependences of up to 14%, due to the detection of erroneous dependences in if-converted regions of code", "keywords": ["predicated execution", "path analysis", "dependence analysis"]} {"id": "kp20k_training_764", "title": "Simulation of DNA damage clustering after proton irradiation using an adapted DBSCAN algorithm", "abstract": "In this work the Density Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm was adapted to early stage DNA damage clustering calculations. The resulting algorithm takes into account the distribution of energy deposits induced by ionising particles and a damage probability function that depends on the total energy deposit amount. Proton track simulations were carried out in small micrometric volumes representing small DNA containments. The algorithm was used to determine the damage concentration clusters and thus to deduce the DSB/SSB ratios created by protons between 500 keV and 50 MeV. The obtained results are compared to other calculations and to available experimental data of fibroblast and plasmid cell irradiations, both extracted from the literature", "keywords": ["data mining", "clustering", "monte-carlo", "dna damage", "radiobiology", "proton irradiation"]} {"id": "kp20k_training_765", "title": "building the academic strategy program", "abstract": "Purpose -- to present the application of the theory and best practice of the balanced scorecard (BSC) method, to create a BSC strategic program for academic education decision makers, and to present a framework for a strategy program for research and education in a faculty. Methodology/approach -- based on the investigation of a number of successful projects on this topic and on the authors' experience with the balanced scorecard approach to educational strategy, the program is created and modelled. Findings -- the balanced scorecard strategic program is developed and it allows enhancing the leadership capability across the university. Practical implications -- the program can help faculty and staff to formulate and measure strategic management decisions and to create a competitive educational and research environment at the university. Originality/value -- the value of the program is in integrating competences, experience, best practices and tools within one new program design. The paper shows how to translate the academic strategy into different strategic objectives and goals, how to model them and how to communicate academic research and education processes to realize important improvements in the cost and quality of academic services", "keywords": ["balanced scorecard program"]} {"id": "kp20k_training_766", "title": "Nonlinear transport in quantum point contact structures", "abstract": "We have investigated the magnetotransport properties under nonlinear conditions in quantum point contact structures fabricated on high mobility AlGaAs/GaAs two-dimensional electron gas (2DEG) layers. Nonlinearities in the I-V characteristics are observed at the threshold for conduction when biased initially from the tunneling regime, as observed previously. We observe that this non-ideality is enhanced by a magnetic field normal to the plane of the 2DEG.
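A baseline for the adaptation described in kp20k_training_764 is plain density-based clustering of simulated energy deposits; the sketch below uses scikit-learn's DBSCAN with per-deposit weights as a loose nod to the paper's damage-probability function, and all coordinates and energies are invented:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)
# Two artificial clusters of 3-D energy-deposit positions (units arbitrary).
hits = np.vstack([rng.normal(loc, 2.0, size=(30, 3)) for loc in (0.0, 50.0)])
energy = rng.uniform(5.0, 40.0, size=len(hits))   # made-up deposit energies

# Weighted DBSCAN: heavier deposits count for more toward core-point status.
labels = DBSCAN(eps=3.2, min_samples=5).fit_predict(
    hits, sample_weight=energy / energy.mean())
print("clusters found:", len(set(labels)) - (1 if -1 in labels else 0))
```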
This behavior is interpreted in terms of corrections to the Landauer model extended to nonequilibrium conditions", "keywords": ["nanostructures", "transport", "semiconductors", "quantum dots"]} {"id": "kp20k_training_767", "title": "MRM: A matrix representation and mapping approach for knowledge acquisition", "abstract": "Knowledge acquisition plays a critical role in constructing a knowledge-based system (KBS). It is the most time-consuming phase and has been recognized as the bottleneck of KBS development. This paper presents a matrix representation and mapping (MRM) approach to facilitate the effectiveness of knowledge acquisition in building a KBS. The proposed MRM approach, which is based on matrix representation and mapping operations, comprises six consecutive steps for generating rules. The procedure in each step is elaborated. A case study on the preliminary diagnosis of an automotive system is employed to illustrate how the MRM approach works", "keywords": ["knowledge-based systems", "knowledge acquisition", "general sorting", "matrix representation and mapping", "rule generation"]} {"id": "kp20k_training_768", "title": "Integrated Obstacle Avoidance and Path Following Through a Feedback Control Law", "abstract": "The article proposes a novel approach to path following in the presence of obstacles, with unique characteristics. First, the approach proposes an integrated method for obstacle avoidance and path following based on a single feedback control law, which produces commands to actuators directly executable by a robot with unicycle kinematics. Second, the approach offers a new solution to the well-known dilemma that one has to face when dealing with multiple sensor readings, i.e., whether it is better to summarize a huge amount of sensor data, to consider only the closest sensor reading, to consider all sensor readings separately to compute the resulting force vector, or to build a local map. The approach requires very little memory and computational resources, thus being implementable even on simpler robots moving in unknown environments", "keywords": ["obstacle avoidance", "mobile robots", "path following"]} {"id": "kp20k_training_769", "title": "A review of learning vector quantization classifiers", "abstract": "In this work, we present a review of the state of the art of learning vector quantization (LVQ) classifiers. A taxonomy is proposed which integrates the most relevant LVQ approaches to date. The main concepts associated with modern LVQ approaches are defined. A comparison is made among eleven LVQ classifiers using one real-world and two artificial datasets", "keywords": ["learning vector quantization", "supervised learning", "neural networks", "margin maximization", "likelihood ratio maximization"]} {"id": "kp20k_training_770", "title": "sketching concurrent data structures", "abstract": "We describe PSketch, a program synthesizer that helps programmers implement concurrent data structures. The system is based on the concept of sketching, a form of synthesis that allows programmers to express their insight about an implementation as a partial program: a sketch. The synthesizer automatically completes the sketch to produce an implementation that matches a given correctness criterion. PSketch is based on a new counterexample-guided inductive synthesis algorithm (CEGIS) that generalizes the original sketch synthesis algorithm from Solar-Lezama et al. to cope efficiently with concurrent programs.
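The oldest member of the family reviewed in kp20k_training_769 is LVQ1, which fits in a few lines and anchors the taxonomy; a minimal sketch, assuming prototypes are initialised elsewhere:

```python
import numpy as np

def lvq1(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    # LVQ1: move the nearest prototype toward a same-class sample and away
    # from a differently-labelled one.
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            k = np.argmin(np.linalg.norm(W - x, axis=1))   # winning prototype
            sign = 1.0 if proto_labels[k] == label else -1.0
            W[k] += sign * lr * (x - W[k])
    return W

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
print(lvq1(X, y, X[[0, 50]], y[[0, 50]]))  # one prototype per class
```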
The new algorithm produces a correct implementation by iteratively generating candidate implementations, running them through a verifier and, if they fail, learning from the counterexample traces to produce a better candidate, converging to a solution in a handful of iterations. PSketch also extends Sketch with higher-level sketching constructs that allow the programmer to express her insight as a "soup" of ingredients from which complicated code fragments must be assembled. Such sketches can be viewed as syntactic descriptions of huge spaces of candidate programs (over 10^8 candidates for some sketches we resolved). We have used the PSketch system to implement several classes of concurrent data structures, including lock-free queues and concurrent sets with fine-grained locking. We have also sketched some other concurrent objects including a sense-reversing barrier and a protocol for the dining philosophers problem; all these sketches resolved in under an hour", "keywords": ["spin", "sat", "sketching", "concurrency", "synthesis"]} {"id": "kp20k_training_771", "title": "Evaluating change in user error when using ruggedized handheld devices", "abstract": "There are no significant differences between user error and age. Lack of corrective software may not impact user error as much as expected. Keypad devices had more character errors while touchscreen devices had more word errors", "keywords": ["user error", "handheld devices", "generation"]} {"id": "kp20k_training_772", "title": "Support for situation awareness in trustworthy ubiquitous computing application software", "abstract": "Due to the dynamic and ephemeral nature of ubiquitous computing (ubicomp) environments, it is especially important that the application software in ubicomp environments is trustworthy. In order to have trustworthy application software in ubicomp environments, situation-awareness (SAW) in the application software is needed to enforce flexible security policies and detect violations of security policies. In this paper, an approach is presented to provide development and runtime support to incorporate SAW in trustworthy ubicomp application software. The development support is to provide SAW requirement specification and automated code generation to achieve SAW in trustworthy ubicomp application software, and the runtime support is for context acquisition, situation analysis and situation-aware communication. To realize our approach, the improved Reconfigurable Context-Sensitive Middleware (RCSM) is developed to provide the above development and runtime support. ", "keywords": ["trustworthy ubiquitous application software", "situation-awareness", "situation-aware interface definition language ", "situation-aware middleware", "situation-aware security policies", "development and runtime support"]} {"id": "kp20k_training_773", "title": "Validation of temperature-perturbation and CFD-based modelling for the prediction of the thermal urban environment: the Lecce (IT) case study", "abstract": "Two modelling approaches for air temperature prediction in cities are evaluated. Daily trends of air temperature are well captured. ENVI-met requires an ad-hoc tuning of surface boundary conditions. ADMS-TH model performance depends on the accuracy of energy balance terms.
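The CEGIS loop underlying kp20k_training_770 can be caricatured in a few lines: induce a candidate from examples, verify, and feed any counterexample back. The toy below synthesises an integer hole in x + ?? against an opaque specification; it illustrates the loop's shape, not PSketch itself:

```python
def cegis(spec, holes=range(-100, 101), domain=range(-50, 51)):
    examples = [0]
    while True:
        # Inductive step: pick a hole value consistent with all examples so far.
        c = next(h for h in holes if all(x + h == spec(x) for x in examples))
        # Verification step: brute-force search for a counterexample.
        cex = next((x for x in domain if x + c != spec(x)), None)
        if cex is None:
            return c                     # candidate verified on the whole domain
        examples.append(cex)             # learn from the failing trace

print(cegis(lambda x: x + 7))  # -> 7
```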
ADMS-TH shows an overall better performance than ENVI-met", "keywords": ["urban temperature", "envi-met model", "adms-temperature and humidity model", "land use parameters", "morphological analysis", "lecce city"]} {"id": "kp20k_training_774", "title": "Comparison of software for computing the action of the matrix exponential", "abstract": "The implementation of exponential integrators requires the action of the matrix exponential and related functions of a possibly large matrix. There are various methods in the literature for carrying out this task. In this paper we describe a new implementation of a method based on interpolation at Leja points. We numerically compare this method with other codes from the literature. As we are interested in applications to exponential integrators, we choose the test examples from spatial discretizations of time-dependent partial differential equations in two and three space dimensions. The test matrices thus have large eigenvalues and can be nonnormal", "keywords": ["leja interpolation", "action of matrix exponential", "krylov subspace method", "taylor series", "exponential integrators"]} {"id": "kp20k_training_775", "title": "Learning while designing", "abstract": "This paper describes how a computational system for designing can learn useful, reusable, generalized search strategy rules from its own experience of designing. It can then apply this experience to transform the design process from search based (knowledge lean) to knowledge based (knowledge rich). The domain of application is the design of spatial layouts for architectural design. The processes of designing and learning are tightly coupled", "keywords": ["design process", "learning", "situations", "strategy rules"]} {"id": "kp20k_training_776", "title": "An algebraic construction of codes for Slepian-Wolf source networks", "abstract": "This correspondence proposes an explicit construction of fixed-length codes for Slepian-Wolf (SW) source networks. The proposed code is linear, and has two-step encoding and decoding procedures similar to the concatenated code used for channel coding. Encoding and decoding of the code can be done in a polynomial order of the block length. The proposed code can achieve an arbitrarily small probability of error for ergodic sources with finite alphabets, if the pair of encoding rates is in the achievable region. Further, if the sources are memoryless, the proposed code can be modified to become universal and the probability of error vanishes exponentially as the block length tends to infinity", "keywords": ["ergodic process", "fixed-length code", "slepian-wolf coding", "source coding"]} {"id": "kp20k_training_777", "title": "ANN and ANFIS models for performance evaluation of a vertical ground source heat pump system", "abstract": "The aim of this study is to compare an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) in predicting the performance of a vertical ground source heat pump (VGSHP) system. The VGSHP system, using R-22 as refrigerant, has three single U-tube ground heat exchangers (GHEs) made of polyethylene pipe with a 40 mm outside diameter. The GHEs were placed in vertical boreholes (VBs) with depths of 30 (VB1), 60 (VB2) and 90 (VB3) m and diameters of 150 mm. The monthly mean values of COP for VB1, VB2 and VB3 are obtained to be 3.37/1.93, 3.85/2.37, and 4.33/3.03, respectively, in cooling/heating seasons. Experimental tests were performed to verify the results from the ANN and ANFIS approaches.
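For the task benchmarked in kp20k_training_774, computing exp(tA)v without forming exp(tA), SciPy's expm_multiply (the Al-Mohy-Higham code) is one publicly available alternative; the Leja-point implementation the paper describes is not in SciPy, so this sketch only shows the problem setup:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

n = 2000
# 1-D diffusion operator from a finite-difference discretisation; PDE test
# matrices like this (large eigenvalues, possibly nonnormal once advection
# is added) match the paper's choice of examples.
A = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format="csr") * n ** 2
v = np.ones(n)
w = expm_multiply(0.001 * A, v)   # action of the matrix exponential on v
print(np.linalg.norm(w))
```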
The ANN model, a multi-layered perceptron trained by back-propagation with three different learning algorithms (the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Polak-Ribiere Conjugate Gradient (CGP) algorithms), and the ANFIS model were developed using the same input variables. Finally, the statistical values are given as tables. This paper shows the appropriateness of ANFIS for the quantitative modeling of GSHP systems", "keywords": ["adaptive neuro-fuzzy inference system", "membership functions", "ground source heat pump", "vertical heat exchanger", "coefficient of performance"]} {"id": "kp20k_training_778", "title": "Effects of agent heterogeneity in the presence of a land-market: A systematic test in an agent-based laboratory", "abstract": "Representing agent heterogeneity is one of the main reasons that agent-based models have become increasingly popular in simulating the emergence of land-use, land-cover change and socioeconomic phenomena. However, the relationship between heterogeneous economic agents and the resultant landscape patterns and socioeconomic dynamics has not been systematically explored. In this paper, we present a stylized agent-based land market model, Land Use in eXurban Environments (LUXE), to study the effects of multidimensional agent heterogeneity on the spatial and socioeconomic patterns of urban land use change under various market representations. We examined two sources of agent heterogeneity: budget heterogeneity, which imposes constraints on the affordability of land, and preference heterogeneity, which determines location choice. The effects of the two dimensions of agent heterogeneity are systematically explored across different market representations by three experiments. Agent heterogeneity exhibits a complex interplay with various forms of market institutions, as indicated by macro-measures (landscape metrics, segregation index, and socioeconomic metrics). In general, budget heterogeneity has a pronounced effect on socioeconomic results, while preference heterogeneity is highly pertinent to spatial outcomes. The relationship between agent heterogeneity and macro-measures becomes more complex when more land market mechanisms are represented. In other words, appropriately simulating agent heterogeneity plays an important role in guaranteeing the fidelity of replicating empirical land use change processes", "keywords": ["agent heterogeneity", "land market", "competitive bidding", "budget constraints", "agent-based modeling"]} {"id": "kp20k_training_779", "title": "feature shaping for linear svm classifiers", "abstract": "Linear classifiers have been shown to be effective for many discrimination tasks. Irrespective of the learning algorithm itself, the final classifier has a weight to multiply by each feature. This suggests that ideally each input feature should be linearly correlated with the target variable (or anti-correlated), whereas raw features may be highly non-linear. In this paper, we attempt to re-shape each input feature so that it is appropriate to use with a linear weight and to scale the different features in proportion to their predictive value.
We demonstrate that this pre-processing is beneficial for linear SVM classifiers on a large benchmark of text classification tasks as well as UCI datasets", "keywords": ["svm", "machine learning", "feature weighting", "text classification", "feature scaling", "linear support vector machine"]} {"id": "kp20k_training_780", "title": "a service driven development process (sddp) model for ultra large scale systems", "abstract": "Achieving ultra-large-scale software systems will necessarily require new and special development processes. This position paper suggests the overall structure of a process model to develop and maintain systems of systems similar to Ultra Large Scale (ULS) systems. The proposed process model will be introduced in detail and, finally, we will evaluate it against CMMI-ACQ, which has been presented by SEI for acquirer organizations", "keywords": ["service", "software engineering", "development process", "ultra large scale systems"]} {"id": "kp20k_training_781", "title": "Delineating white matter structure in diffusion tensor MRI with anisotropy creases", "abstract": "Geometric models of white matter architecture play an increasing role in neuroscientific applications of diffusion tensor imaging, and the most popular method for building them is fiber tractography. For some analysis tasks, however, a compelling alternative may be found in the first and second derivatives of diffusion anisotropy. We extend to tensor fields the notion from classical computer vision of ridges and valleys, and define anisotropy creases as features of locally extremal tensor anisotropy. Mathematically, these are the loci where the gradient of anisotropy is orthogonal to one or more eigenvectors of its Hessian. We propose that anisotropy creases provide a basis for extracting a skeleton of the major white matter pathways, in that ridges of anisotropy coincide with interiors of fiber tracts, and valleys of anisotropy coincide with the interfaces between adjacent but distinctly oriented tracts. The crease extraction algorithm we present generates high-quality polygonal models of crease surfaces, which are further simplified by connected-component analysis. We demonstrate anisotropy creases on measured diffusion MRI data, and visualize them in combination with tractography to confirm their anatomic relevance", "keywords": ["diffusion tensor mri", "anisotropy", "ridges and valleys", "crease surface extraction", "white matter geometry"]} {"id": "kp20k_training_782", "title": "A fully parallel method for the singular eigenvalue problem", "abstract": "In this paper, a fully parallel method for finding some or all finite eigenvalues of a real symmetric matrix pencil (A, B) is presented, where A is a symmetric tridiagonal matrix and B is a diagonal matrix with b(1) > 0 and b(i) >= 0, i = 2,3,...,n. The method is based on homotopy continuation with a rank-2 perturbation. It is shown that there are exactly m disjoint, smooth homotopy paths connecting the trivial eigenvalues to the desired eigenvalues, where m is the number of finite eigenvalues of (A, B). It is also shown that the homotopy curves are monotonic and easy to follow. ", "keywords": ["eigenvalues", "eigenvalue curves", "multiprocessors", "homotopy method"]} {"id": "kp20k_training_783", "title": "Potential and Requirements of IT for Ambient Assisted Living Technologies Results of a Delphi Study", "abstract": "Objectives: Ambient Assisted Living (AAL) technologies are developed to enable the elderly to live independently and safely.
Innovative information technology (IT) can interconnect personal devices and offer suitable user interfaces. Often, dedicated solutions are developed for particular projects. The aim of our research was to identify major IT challenges for AAL in order to enable generic and sustainable solutions. Methods: Delphi survey. An online questionnaire was sent to 1800 members of the German Innovation Partnership AAL. The first round was qualitative, to collect statements. Statements were reduced to items by qualitative content analysis. Items were assessed in the following two rounds on a 5-point Likert scale. Quantitative analyses for the second and third rounds: descriptive statistics, factor analysis and ANOVA. Results: Respondents: 81 in the first, 173 in the second and 70 in the third round. All items received a rather high assessment. Medical issues were rated as having a very high potential. Items related to user-friendliness were regarded as the most important requirements. Common requirements for all AAL solutions are reliability, robustness, availability, data security, data privacy, legal issues, ethical requirements and easy configuration. The complete list of requirements can be used as a framework for customizing future AAL projects. Conclusions: A wide variety of IT issues have been assessed as important for AAL. The extensive list of requirements makes it obvious that it is not efficient to develop dedicated solutions for individual projects, but rather to provide generic methods and reusable components. Experiences and results from medical informatics research can be used to advance AAL solutions (e.g. eHealth and knowledge-based approaches)", "keywords": ["health information technologies", "delphi study", "health services for elderly", "medical informatics", "ambient assisted living"]} {"id": "kp20k_training_784", "title": "HYBRID HARMONIC CODING OF SPEECH AT LOW BIT-RATES", "abstract": "This paper presents a novel approach to sinusoidal coding of speech which avoids the use of a voicing detector. The proposed model represents the speech signal as a sum of sinusoids and bandpass random signals, and is denoted the hybrid harmonic model in this paper. The use of two different sets of basis functions increases the robustness of the model since there is no need to switch between techniques tailored to particular classes of sounds. Sinusoidal basis functions with harmonically related frequencies allow an accurate representation of the quasi-periodic structure of voiced speech but show difficulties in representing unvoiced sounds. On the other hand, the bandpass random functions are well suited for high-quality representation of unvoiced speech sounds, since their bandwidth is larger than the bandwidth of sinusoids. The amplitudes of both sets of basis functions are simultaneously estimated by a least squares algorithm, and the output speech signal is synthesized in the time domain by the superposition of all basis functions multiplied by their amplitudes. Experimental tests confirm an improved performance of the hybrid model for operation with noise-corrupted input speech, relative to classic sinusoidal models, which exhibit a strong dependency on the voicing decision. Finally, the implementation and test of a fully quantized hybrid coder at 4.8 kbit/s is described", "keywords": ["speech modeling", "sinusodal modeling", "coding"]} {"id": "kp20k_training_785", "title": "Semi-supervised learning and condition fusion for fault diagnosis", "abstract": "Manifold regularization based semi-supervised learning is introduced to fault diagnosis.
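The harmonic half of the model in kp20k_training_784, least-squares estimation of sinusoid amplitudes at multiples of f0, is easy to sketch; the bandpass random basis for unvoiced energy is omitted, and the frame length, f0 and harmonic count are illustrative:

```python
import numpy as np

def harmonic_fit(frame, f0, fs, n_harm=10):
    # Least-squares amplitudes for cos/sin basis functions at harmonics of f0.
    t = np.arange(len(frame)) / fs
    k = np.arange(1, n_harm + 1)
    B = np.hstack([np.cos(2 * np.pi * f0 * np.outer(t, k)),
                   np.sin(2 * np.pi * f0 * np.outer(t, k))])
    amps, *_ = np.linalg.lstsq(B, frame, rcond=None)
    return B @ amps            # resynthesised quasi-periodic (voiced) component

fs = 8000
t = np.arange(256) / fs
frame = 0.8 * np.sin(2 * np.pi * 200 * t) + 0.3 * np.sin(2 * np.pi * 400 * t)
print(np.allclose(harmonic_fit(frame, f0=200, fs=fs), frame, atol=1e-8))
```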
Unlabeled condition data are also utilized to enhance multi-fault detection. A new single-conditions and all-conditions labeled mode is proposed to feed SSL. This SSL approach outperforms supervised learning in both labeled modes. The manifold fundamentals of the single-conditions labeled mode are analyzed with dimensionality reduction", "keywords": ["semi-supervised learning ", "condition-based maintenance ", "fault diagnosis", "manifold regularization ", "conditions labeled mode"]} {"id": "kp20k_training_786", "title": "A deductive system for proving workflow models from operational procedures", "abstract": "Many modern business environments employ software to automate the delivery of workflows, whereas workflow design and generation remain a laborious technical task for domain specialists. Several different approaches have been proposed for deriving workflow models. Some rely on process data mining, whereas others have proposed derivations of workflow models from operational structures, domain-specific knowledge or workflow model compositions from knowledge-bases. Many approaches draw on principles from automatic planning, but are conceptual in nature and lack mathematical justification. In this paper we present a mathematical framework for deducing tasks in workflow models from plans in mechanistic or strongly controlled work environments, with a focus on automatic plan generation. In addition, we prove an associative composition operator that permits crisp hierarchical task compositions for workflow models through a set of mathematical deduction rules. The result is a logical framework that can be used to prove tasks in workflow hierarchies from operational information about work processes and machine configurations in controlled or mechanistic work environments", "keywords": ["workflow", "planning", "petri-net", "automation", "management", "modelling"]} {"id": "kp20k_training_787", "title": "Graph based signature classes for detecting polymorphic worms via content analysis", "abstract": "Malicious software such as trojans, viruses, or worms can cause serious damage to information systems by exploiting operating system and application software vulnerabilities. Worms constitute a significant proportion of overall malicious software and infect a large number of systems in very short periods. Polymorphic worms combine polymorphism techniques with the self-replicating and fast-spreading characteristics of worms. Each copy of a polymorphic worm has a different pattern, so it is not effective to use simple signature matching techniques. In this work, we propose a graph based classification framework of content based polymorphic worm signatures. This framework aims to guide researchers to propose new polymorphic worm signature schemes. We also propose a new polymorphic worm signature scheme, Conjunction of Combinational Motifs (CCM), based on the defined framework. CCM utilizes common substrings of polymorphic worm copies and also the relations between those substrings through dependency analysis. CCM is resilient to new versions of a polymorphic worm. CCM also automatically generates signatures for new versions of a polymorphic worm, triggered by partial signature matches.
Experimental results show that CCM has good flow evaluation time performance with low false positives and low false negatives", "keywords": ["polymorphic worm", "worm detection", "graph based signature"]} {"id": "kp20k_training_788", "title": "polynomial-time theory of matrix groups", "abstract": "We consider matrix groups, specified by a list of generators, over finite fields. The two most basic questions about such groups are membership in and the order of the group. Even in the case of abelian groups it is not known how to answer these questions without solving hard number theoretic problems (factoring and discrete log); in fact, constructive membership testing in the case of 1 × 1 matrices is precisely the discrete log problem. So the reasonable question is whether these problems are solvable in randomized polynomial time using number theory oracles. Building on 25 years of work, including remarkable recent developments by several groups of authors, we are now able to determine the order of a matrix group over a finite field of odd characteristic, and to perform constructive membership testing in such groups, in randomized polynomial time, using oracles for factoring and discrete log. One of the new ingredients of this result is the following. A group is called semisimple if it has no abelian normal subgroups. For matrix groups over finite fields, we show that the order of the largest semisimple quotient can be determined in randomized polynomial time (no number theory oracles required and no restriction on parity). As a by-product, we obtain a natural problem that belongs to BPP and is not known to belong either to RP or to coRP. No such problem outside the area of matrix groups appears to be known. The problem is the decision version of the above: Given a list A of nonsingular d × d matrices over a finite field and an integer N, does the group generated by A have a semisimple quotient of order > N? We also make progress in the area of constructive recognition of simple groups, with the corollary that for a large class of matrix groups, our algorithms become Las Vegas", "keywords": ["discrete log", "matrix groups", "computational group theory"]} {"id": "kp20k_training_789", "title": "Delineation of the genomics field by hybrid citation-lexical methods: interaction with experts and validation process", "abstract": "In advanced methods of delineation and mapping of scientific fields, hybrid methods open a promising path to the capitalisation of advantages of approaches based on words and citations. One way to validate the hybrid approaches is to work in cooperation with experts of the fields under scrutiny. We report here an experiment in the field of genomics, where a corpus of documents has been built by a hybrid citation-lexical method, and then clustered into research themes. Experts of the field were involved in the various stages of the process: lexical queries for building the initial set of documents, the seed; citation-based extension aiming at reducing silence; final clustering to identify noise and allow discussion on border areas. The analysis of the experts' advice shows a high level of validation of the process, which combines a high-precision and low-recall seed, obtained by journal and lexical queries, and a citation-based extension enhancing the recall.
These findings on the genomics field suggest that hybrid methods can efficiently retrieve a corpus of relevant literature, even in complex and emerging fields", "keywords": ["information retrieval", "bibliographic coupling", "genomics", "citation methods", "bibliometrics", "science mapping", "field delineation"]} {"id": "kp20k_training_790", "title": "SAMPLING CORRELATION SOURCES FOR TIMING YIELD ANALYSIS OF SEQUENTIAL CIRCUITS WITH CLOCK NETWORKS", "abstract": "Analyzing timing yield under process variations is difficult because of the presence of correlations. Reconvergent fan-out nodes (RFONs) within combinational subcircuits are a major source of topological correlation. We identify two more sources of topological correlation in clocked sequential circuits: sequential RFONs, which are nodes within a clock network where the clock paths to more than one flip-flop branch out; and sequential branch-points, which are nodes within a combinational block where combinational paths to more than one capturing flip-flop branch out. Dealing with all sources of correlation is unacceptably complicated, and we therefore show how to sample a handful of correlation sources without sacrificing significant accuracy in the yield. A further reduction in computation time can be obtained by sampling only those nodes that are likely to affect the yield. These techniques are applied to yield analysis using statistical static timing analysis based on discrete random variables and also to yield analysis based on Monte Carlo simulation; the accuracy and efficiency of both methods are assessed using example circuits. The sequential RFONs suggest that timing yield may be improved by optimizing the clock network, and we address this possibility", "keywords": ["statistical static timing analysis", "monte carlo simulation", "timing yield", "correlation", "sequential circuit", "clock network"]} {"id": "kp20k_training_791", "title": "Optimal search and one-way trading online algorithms", "abstract": "This paper is concerned with the time series search and one-way trading problems. In the (time series) search problem a player is searching for the maximum (or minimum) price in a sequence that unfolds sequentially, one price at a time. Once during this game the player can decide to accept the current price p, in which case the game ends and the player's payoff is p. In the one-way trading problem a trader is given the task of trading dollars to yen. Each day, a new exchange rate is announced and the trader must decide how many dollars to convert to yen according to the current rate. The game ends when the trader trades his entire dollar wealth to yen and his payoff is the number of yen acquired. The search and one-way trading problems are intimately related. Any (deterministic or randomized) one-way trading algorithm can be viewed as a randomized search algorithm. Using the competitive ratio as a performance measure, we determine the optimal competitive performance for several variants of these problems. In particular, we show that a simple threat-based strategy is optimal and we determine its competitive ratio which yields, for realistic values of the problem parameters, surprisingly low competitive ratios. We also consider and analyze a one-way trading game played against an adversary called Nature, where the online player knows the probability distribution of the maximum exchange rate and that distribution has been chosen by Nature.
Finally, we consider some applications for a special case of portfolio selection called two-way trading, in which the trader may trade back and forth between cash and one asset", "keywords": ["time series search", "one-way trading", "two-way trading", "portfolio selection", "online algorithms", "competitive analysis"]} {"id": "kp20k_training_792", "title": "Interpretation of complex scenes using dynamic tree-structure Bayesian networks", "abstract": "This paper addresses the problem of object detection and recognition in complex scenes, where objects are partially occluded. The approach presented herein is based on the hypothesis that a careful analysis of visible object details at various scales is critical for recognition in such settings. In general, however, computational complexity becomes prohibitive when trying to analyze multiple sub-parts of multiple objects in an image. To alleviate this problem, we propose a generative-model framework, namely, dynamic tree-structure belief networks (DTSBNs). This framework formulates object detection and recognition as inference of DTSBN structure and image-class conditional distributions, given an image. The causal (Markovian) dependencies in DTSBNs allow for the design of computationally efficient inference, as well as for interpretation of the estimated structure as follows: each root represents a whole distinct object, while children nodes down the sub-tree represent parts of that object at various scales. Therefore, within the DTSBN framework, the treatment and recognition of object parts requires no additional training, but merely a particular interpretation of the tree/subtree structure. This property leads to a strategy for recognition of objects as a whole through recognition of their visible parts. Our experimental results demonstrate that this approach remarkably outperforms strategies without explicit analysis of object parts. ", "keywords": ["generative models", "bayesian networks", "dynamic trees", "variational inference", "image segmentation", "object recognition"]} {"id": "kp20k_training_793", "title": "consistent group membership in ad hoc networks", "abstract": "The design of ad hoc mobile applications often requires the availability of a consistent view of the application state among the participating hosts. Such views are important because they simplify both the programming and verification tasks. Essential to constructing a consistent view is the ability to know what hosts are within proximity of each other, i.e., form a group in support of the particular application. In this paper we propose an algorithm that allows hosts within communication range to maintain a consistent view of the group membership despite movement and frequent disconnections. The novel features of this algorithm are its reliance on location information and a conservative notion of logical connectivity that creates the illusion of announced disconnection.
Movement patterns and delays are factored into the policy that determines which physical connections are susceptible to disconnection", "keywords": ["consistency", "communication", "applications", "policy", "movement", "proximics", "maintainability", "design", "delay", "group", "paper", "informal", "pattern", "program", "availability", "views", "locatability", "mobility", "mobile application", "verification", "support", "physical", "algorithm", "feature", "connection", "group membership", "ad hoc network", "ad-hoc"]} {"id": "kp20k_training_794", "title": "a real-time collision detection algorithm for mobile billiards game", "abstract": "Collision detection is a key technique in game design. However, some algorithms employed in PC games are not suitable for mobile games because of the low performance and small screen size of mobile devices. Taking the features of mobile devices into account, this paper proposes a quick and feasible collision detection algorithm. This algorithm makes use of multi-level collision detection and dynamic multi-resolution grid subdivision to reduce the computing time for collision detection, which greatly improves the algorithm's performance. In the collision response phase, this paper adopts a time-step binary search algorithm to ensure both computing precision and system efficiency. The mobile billiards game designed for the Bird Company indicates that this algorithm achieves good performance and real-time interaction", "keywords": ["collision detection", "binary search", "multi-resolution", "mobile game"]} {"id": "kp20k_training_795", "title": "Iterative execution-feedback model-directed GUI testing", "abstract": "Current fully automatic model-based test-case generation techniques for GUIs employ a static model. Therefore they are unable to leverage certain state-based relationships between GUI events (e.g., one enables the other, one alters the other's execution) that are revealed at run-time and non-trivial to infer statically. We present ALT, a new technique to generate GUI test cases in batches. Because of its alternating nature, ALT enhances the next batch by using GUI run-time information from the current batch. An empirical study on four fielded GUI-based applications demonstrated that ALT was able to detect new 4- and 5-way GUI interaction faults; in contrast, previous techniques, due to their requirement of too many test cases, were unable to even test 4- and 5-way GUI interactions", "keywords": ["test-case generation", "model-based testing", "gui testing", "event-driven software", "event-flow graphs"]} {"id": "kp20k_training_796", "title": "A hierarchical refinement algorithm for fully automatic gridding in spotted DNA microarray image processing", "abstract": "Gridding, the first step in spotted DNA microarray image processing, usually requires human intervention to achieve acceptable accuracy. We present a new algorithm for automatic gridding based on hierarchical refinement to improve the efficiency, robustness and reproducibility of microarray data analysis. This algorithm employs morphological reconstruction along with global and local rotation detection, non-parametric optimal thresholding and local fine-tuning without any human intervention.
Using synthetic data and real microarray images of different sizes and with different degrees of rotation of subarrays, we demonstrate that this algorithm can detect and compensate for alignment and rotation problems to obtain reliable and robust results", "keywords": ["image processing", "dna microarray image", "automatic gridding", "gene expression"]} {"id": "kp20k_training_797", "title": "Genetic algorithms applied in BOPP film scheduling problems: minimizing total absolute deviation and setup times", "abstract": "The frequent changeovers in production processes indicate the importance of setup time in many real-world manufacturing activities. The traditional approaches to dealing with setup times either omit them or merge them into the processing times so as to simplify the problem. These approaches can reduce the complexity of the problem, but often generate unrealistic outcomes because of the assumed conditions. This situation motivated us to consider sequence-dependent setup times in a real-world BOPP film scheduling problem. First, a setup time-based heuristic method was developed to generate the initial solutions for the genetic algorithms (GAs). Then, genetic algorithms with different mutation methods were applied. Extensive experimental results showed that the setup time-based heuristic method was relatively efficient. It was also found that a genetic algorithm with a variable mutation rate performed much more effectively than one with a fixed mutation rate", "keywords": ["bopp scheduling problem", "genetic algorithm", "total absolute deviation", "setup time", "variable mutation rate"]} {"id": "kp20k_training_798", "title": "Automatic grading of Scots pine (Pinus sylvestris L.) sawlogs using an industrial X-ray log scanner", "abstract": "The successful running of a sawmill is dependent on its ability to achieve the highest possible value recovery from the sawlogs, i.e. to optimize the use of the raw material. Such optimization requires information about the properties of every log. One method of measuring these properties is to use an X-ray log scanner. The objective of the present study was to determine the accuracy when grading Scots pine (Pinus sylvestris L.) sawlogs using an industrial scanner known as the X-ray LogScanner. The study was based on 150 Scots pine sawlogs from a sawmill in northern Sweden. All logs were scanned in the LogScanner at a speed of 125 m/min. The X-ray images were analyzed on-line with measures of different properties as a result (e.g. density and density variations). The logs were then sawn with a normal sawing pattern (50 × 125 mm) and the logs were graded depending on the result from the manual grading of the center boards. Finally, partial least squares (PLS) regression was used to calibrate statistical models that predict the log grade based on the properties measured by the X-ray LogScanner. The study showed that 77-83% of the logs were correctly sorted when using the scanner to sort logs into three groups according to the predicted grade of the center boards. After sawing the sorted logs, 67% of the boards had the correct grade. When scanning the same logs repeatedly, the relative standard deviation of the predicted grade was 12-20%.
The study also showed that it is possible to sort out 10 and 16%, respectively, of the material into two groups of high quality logs, without changing the grade distribution of the rest of the material to any great extent", "keywords": ["sawlogs", "scanning", "x-ray"]} {"id": "kp20k_training_799", "title": "Learning juntas in the presence of noise", "abstract": "We investigate the combination of two major challenges in computational learning: dealing with huge amounts of irrelevant information and learning from noisy data. It is shown that large classes of Boolean concepts that depend only on a small fraction of their variables - so-called juntas - can be learned efficiently from uniformly distributed examples that are corrupted by random attribute and classification noise. We present solutions to cope with the manifold problems that inhibit a straightforward generalization of the noise-free case. Additionally, we extend our methods to non-uniformly distributed examples and derive new results for monotone juntas and for parity juntas in this setting. It is assumed that the attribute noise is generated by a product distribution. Without any restrictions on the attribute noise distribution, learning in the presence of noise is in general impossible. This follows from our construction of a noise distribution P and a concept class C such that it is impossible to learn C under P-noise. ", "keywords": ["learning of boolean functions", "learning in the presence of noise", "learning in the presence of irrelevant information", "juntas", "fourier analysis"]} {"id": "kp20k_training_800", "title": "Single-allocation ordered median hub location problems", "abstract": "The discrete ordered median location model is a powerful tool for modeling classic and alternative location problems that has been applied with success to a large variety of discrete location problems. Nevertheless, although hub location models have been analyzed from the sum, maximum and coverage points of view, as far as we know, they have never been considered under an alternative unifying point of view. In this paper we consider new formulations, based on the ordered median objective function, for hub location problems with new distribution patterns induced by the different users' roles within the supply chain network. This approach introduces some penalty factors associated with the position of an allocation cost with respect to the sorted sequence of these costs. First we present basic formulations for this problem, and then develop stronger formulations by exploiting properties of the model. The performance of all these formulations is compared by means of a computational analysis. ", "keywords": ["hub location problems", "ordered median function"]} {"id": "kp20k_training_801", "title": "How measuring student performances allows for measuring blended extreme apprenticeship for learning Bash programming", "abstract": "Many small exercises and few lectures can teach all programming. Measuring student behavior in exercises assesses how they learn. The reported study logged student performances in programming exercises. Metrics were defined for assessing overall programming performances.
Data show that all students tend to learn basic programming skills", "keywords": ["blended learning", "extreme apprenticeship", "behavior", "performance", "metrics", "learner experience design and evaluation"]} {"id": "kp20k_training_802", "title": "Application of 3D-wavelet statistics to video analysis", "abstract": "Video activity analysis is used in various video applications such as human action recognition, video retrieval, and video archiving. In this paper, we propose to apply 3D wavelet transform statistics to natural video signals and employ the resulting statistical attributes for video modeling and analysis. From the 3D wavelet transform, we investigate the marginal and joint statistics as well as the Mutual Information (MI) estimates. We show that marginal histograms are approximated quite well by Generalized Gaussian Density (GGD) functions, and that the MI between coefficients decreases when the activity level increases in videos. Joint statistics attributes are applied to scene activity grouping, leading to 87.3% accurate grouping of videos. Also, marginal and joint statistics features extracted from the video are used for human action classification employing Support Vector Machine (SVM) classifiers, and 93.4% of the human activities are properly classified", "keywords": ["video analysis", "3d wavelet transform statistics", "human action recognition"]} {"id": "kp20k_training_803", "title": "Modeling electrokinetic flows in microchannels using coupled lattice Boltzmann methods", "abstract": "We present a numerical framework to solve the dynamic model for electrokinetic flows in microchannels using coupled lattice Boltzmann methods. The governing equation for each transport process is solved by a lattice Boltzmann model and the entire process is simulated through an iteration procedure. After validation, the present method is used to study the applicability of the Poisson-Boltzmann model for electrokinetic flows in microchannels. Our results show that for homogeneously charged long channels, the Poisson-Boltzmann model is applicable for a wide range of electric double layer thicknesses. For the electric potential distribution, the Poisson-Boltzmann model can provide good predictions until the electric double layers fully overlap, meaning that the thickness of the double layer equals the channel width. For the electroosmotic velocity, the Poisson-Boltzmann model is valid even when the thickness of the double layer is 10 times the channel width. For heterogeneously charged microchannels, a higher zeta potential and an enhanced velocity field may cause the Poisson-Boltzmann model to fail to provide accurate predictions. The ionic diffusion coefficients have little effect on the steady flows for either homogeneously or heterogeneously charged channels. However, the ionic valence of the solvent has a remarkable influence on both the electric potential distribution and the flow velocity even in homogeneously charged microchannels. Both theoretical analyses and numerical results indicate that the valence and the concentration of the counter-ions dominate the Debye length, the electrical potential distribution, and the ion transport.
The present results may improve the understanding of the electrokinetic transport characteristics in microchannels", "keywords": ["electrokinetic flows", "lattice boltzmann method", "multiphysical transport", "poisson-boltzmann model", "dynamic model", "microfluidics and nanofluidics"]} {"id": "kp20k_training_804", "title": "A non-iterative continuous model for switching window computation with crosstalk noise", "abstract": "Proper modeling of switching windows leads to a better estimate of the noise-induced delay variations. In this paper, we propose a new non-iterative continuous switching model. The proposed new model employs an ordering technique combined with the principle of superposition of linear circuits. The principle of superposition considers the impact of aggressors one after the other. The ordering technique avoids convergence and multiple solution issues in many practical cases. Our model surpasses the accuracy of the traditional discrete model and the speed of the fixed point iteration method", "keywords": ["deep submicron", "crosstalk noise", "switch window", "non-iterative"]} {"id": "kp20k_training_805", "title": "Vibrational analysis of curved single-walled carbon nanotube on a Pasternak elastic foundation", "abstract": "Continuum mechanics and an elastic beam model were employed in the nonlinear force vibrational analysis of an embedded, curved, single-walled carbon nanotube. The analysis considered the effects of the curvature or waviness and midplane stretching of the nanotube on the nonlinear frequency. By utilizing He's Energy Balance Method (HEBM), the relationships of the nonlinear amplitude and frequency were expressed for a curved, single-walled carbon nanotube. The amplitude-frequency response curves of the nonlinear free vibration were obtained for a curved, single-walled carbon nanotube embedded in a Pasternak elastic foundation. Finally, the influence of the amplitude of the waviness, midplane stretching nonlinearity, shear foundation modulus, surrounding elastic medium, radius, and length of the curved carbon nanotube on the amplitude-frequency response characteristics is discussed. As a result, the combined effects of waviness and stretching nonlinearity on the nonlinear frequency of the curved SWCNT with a small outer radius were larger than for the straight one", "keywords": ["midplane stretching", "energy balance method", "curved carbon nanotube", "nonlinear vibration", "elastic foundation", "pasternak foundation"]} {"id": "kp20k_training_806", "title": "Fast Bokeh effects using low-rank linear filters", "abstract": "We present a method for faster and more flexible approximation of camera defocus effects given a focused image of a virtual scene and a depth map. Our method leverages the advantages of low-rank linear filtering by reducing the problem of 2D convolution to multiple 1D convolutions, which significantly reduces the computational complexity of the filtering operation. In the case of rank 1 filters (e.g., the box filter and Gaussian filter), the kernel is described as separable since it can be implemented as a horizontal 1D convolution followed by a vertical 1D convolution. While many filter kernels which result in bokeh effects cannot be approximated closely by separable kernels, they can be effectively approximated by low-rank kernels.
We demonstrate the speed and flexibility of low-rank filters by applying them to image blurring, tilt-shift postprocessing, and depth-of-field simulation, and also analyze the approximation error for several aperture shapes", "keywords": ["bokeh", "blur", "filter", "depth-of-field"]} {"id": "kp20k_training_807", "title": "A novel method for cross-species gene expression analysis", "abstract": "Analysis of gene expression from different species is a powerful way to identify evolutionarily conserved transcriptional responses. However, due to evolutionary events such as gene duplication, there is no one-to-one correspondence between genes from different species, which makes comparison of their expression profiles complex", "keywords": ["gene expression", "evolution", "meta-analysis", "orthologs", "paralogs", "microarray", "rna-seq"]} {"id": "kp20k_training_808", "title": "MIMO radar signal design to improve the MIMO ambiguity function via maximizing its peak", "abstract": "Transmit signals are designed to maximize the peak of the ambiguity function of a WS-MIMO radar. Signal design is done for three cases: single target, multi-target, and prioritized ambiguity function. It is shown that in spite of increasing the number of antennas of a MIMO radar, signal design does not provide diversity gain. Through simulations, it is shown that better performance can be achieved by the proposed signal design to maximize the AF's peak", "keywords": ["multiple-input multiple-output radar", "ambiguity function", "waveform design", "power allocation", "waterfilling"]} {"id": "kp20k_training_809", "title": "Optimal design of radial basis function neural networks for fuzzy-rule extraction in high dimensional data", "abstract": "The design of an optimal radial basis function neural network (RBFN) is not a straightforward procedure. In this paper we take advantage of the functional equivalence between RBFN and fuzzy inference systems to propose a novel efficient approach to RBFN design for fuzzy rule extraction. The method is based on advanced fuzzy clustering techniques. Solutions to practical problems are proposed. By combining these different solutions, a general methodology is derived. The efficiency of our method is demonstrated on challenging synthetic and real world data sets", "keywords": ["radial basis function networks", "fuzzy clustering", "fuzzy rule extraction", "neuro-fuzzy models", "adaptive network based fuzzy inference systems"]} {"id": "kp20k_training_810", "title": "Polarization Properties of a Turnstile Antenna in the Vicinity of the Human Body", "abstract": "Polarization of a simple turnstile antenna situated close to the human body, for potential WBAN applications in the 2.45 GHz band, is studied in detail by the use of the electromagnetic simulator WIPL-D Pro. Circular polarization of the antenna (when isolated) is provided by adjusting the dipole impedances. A full-size, 3-dimensional, simplified homogeneous model of a human body is applied. Polarization of both far and near field is studied, with various positions of the antenna and with/without a metallic reflector. In the far field, significant degradation of the circular polarization, due to the vicinity of the body, was observed. In the near field, at points close to the surface of the torso, polarization (of vector E) was found to significantly deviate from circular.
The obtained results can be useful in designing on-body sensor networks in which circularly polarized antennas are applied, for both far field communication between sensor nodes and the gateway and near field communication between sensors", "keywords": ["circular polarization", "wban", "turnstile antenna", "on-body sensors", "full-size human model"]} {"id": "kp20k_training_811", "title": "Enforcing and defying associativity, commutativity, totality, and strong noninvertibility for worst-case one-way functions", "abstract": "Rabi and Sherman [M. Rabi, A. Sherman, An observation on associative one-way functions in complexity theory, Information Processing Letters 64 (5) (1997) 239-244; M. Rabi, A. Sherman, Associative one-way functions: A new paradigm for secret-key agreement and digital signatures, Tech. Rep. CS-TR-3183/UMIACS-TR-93-124, Department of Computer Science, University of Maryland, College Park, MD, 1993] proved that the hardness of factoring is a sufficient condition for there to exist one-way functions (i.e., p-time computable, honest, p-time noninvertible functions; this paper is in the worst-case model, not the average-case model) that are total, commutative, and associative but not strongly noninvertible. In this paper we improve the sufficient condition to P not equal NP. More generally, in this paper we completely characterize which types of one-way functions stand or fall together with (plain) one-way functions; equivalently, they stand or fall together with P not equal NP. We look at the four attributes used in Rabi and Sherman's seminal work on algebraic properties of one-way functions (see [M. Rabi, A. Sherman, An observation on associative one-way functions in complexity theory, Information Processing Letters 64 (5) (1997) 239-244; M. Rabi, A. Sherman, Associative one-way functions: A new paradigm for secret-key agreement and digital signatures, Tech. Rep. CS-TR-3183/UMIACS-TR-93-124, Department of Computer Science, University of Maryland, College Park, MD, 1993]) and subsequent papers - strongness (of noninvertibility), totality, commutativity, and associativity - and for each attribute, we allow it to be required to hold, required to fail, or \"don't care\". In this categorization there are 3^4 = 81 potential types of one-way functions. We prove that each of these 81 feature-laden types stands or falls together with the existence of (plain) one-way functions. ", "keywords": ["computational complexity", "worst-case one-way functions", "associativity", "commutativity", "strong noninvertibility"]} {"id": "kp20k_training_812", "title": "An efficient algorithm for constrained global optimization and application to mechanical engineering design: League championship algorithm (LCA)", "abstract": "The league championship algorithm (LCA) is a new algorithm originally proposed for unconstrained optimization, which tries to metaphorically model a league championship environment wherein artificial teams play in an artificial league for several weeks (iterations). Given the league schedule, a number of individuals, as sport teams, play in pairs and their game outcome is determined by the playing strength (fitness value) along with the team formation (solution). Modelling an artificial match analysis, each team devises the required changes in its formation (a new solution) for the next week's contest and the championship goes on for a number of seasons. In this paper, we adapt LCA for constrained optimization.
In particular: (1) a feasibility criterion to bias the search toward feasible regions is included besides the objective value criterion; (2) generation of multiple offspring is allowed to increase the probability that an individual generates a better solution; (3) a diversity mechanism is adopted, which allows infeasible solutions with a promising objective value to precede the feasible solutions. The performance of LCA is compared with that of comparator algorithms on benchmark problems, where the experimental results indicate that LCA is a very competitive algorithm. The performance of LCA is also evaluated on well-studied mechanical design problems and results are compared with the results of 21 constrained optimization algorithms. Computational results signify that with a smaller number of evaluations, LCA ensures finding the true optimum of these problems. These results suggest that further developments and applications of LCA would be worth investigating in future studies", "keywords": ["constrained optimization", "constraint-handling techniques", "engineering design optimization", "league championship algorithm"]} {"id": "kp20k_training_813", "title": "Statistical model training technique based on speaker clustering approach for HMM-based speech synthesis", "abstract": "We propose an average voice model training technique using a speaker class. The speaker class is obtained on the basis of speaker clustering. The average voice model is trained using the conventional contextual factors and the speaker class. In the speaker adaptation process, the target speaker's speaker class is estimated. Our proposal can synthesize speech with better similarity and naturalness", "keywords": ["hmm-based speech synthesis", "average voice model", "speaker adaptation", "speaker clustering"]} {"id": "kp20k_training_814", "title": "a study of gradual transition detection in historic film material", "abstract": "The detection of gradual transitions relies on two types of approaches: unified approaches, i.e. one detector for all gradual transition types, and approaches that use specialized detectors for each gradual transition type. We present an overview of existing methods and extend an existing unified approach for the detection of gradual transitions in historic material. In an experimental study we evaluate our approach on complex and low-quality historic material as well as on contemporary material from the TRECVid evaluation. Additionally we investigate different features, feature combinations and fusion strategies. We observe that the historic material requires the use of texture features, in contrast to the contemporary material, which in most cases requires the use of colour and luminance features", "keywords": ["cultural heritage", "shot boundary detection", "gradual transition detection"]} {"id": "kp20k_training_815", "title": "Electronic retention: what does your mobile phone reveal about you", "abstract": "The global information-rich society is increasingly dependent on mobile phone technology for daily activities. A substantial secondary market in mobile phones has developed as a result of a relatively short life-cycle and recent regulatory measures on electronics recycling. These developments are, however, a cause for concern regarding privacy, since it is unclear how much information is retained on a device when it is re-sold. The crucial question is: what, despite your best efforts, does your mobile phone reveal about you?
This research investigates the extent to which personal information continues to reside on mobile phones even when users have attempted to remove the information, hence passing the information into the secondary market. A total of 49 re-sold mobile devices were acquired from two secondary markets: a local pawn shop and an online auction site. These devices were examined using three industry standard mobile forensic toolkits. Data were extracted from the devices via both physical and logical acquisitions and the resulting information artifacts categorized by type and sensitivity. All mobile devices examined yielded some user information, and in total 11,135 artifacts were recovered. The findings confirm that substantial personal information is retained on a typical mobile device when it is re-sold. The results highlight several areas of potential future work necessary to ensure the confidentiality of personal data stored on mobile devices", "keywords": ["mobile devices", "forensics", "privacy"]} {"id": "kp20k_training_816", "title": "reasoning about digital artifacts with acl2", "abstract": "ACL2 is both a programming language in which computing systems can be modeled and a tool to help a designer prove properties of such models. ACL2 stands for ``A Computational Logic for Applicative Common Lisp'' and provides mechanized reasoning support for a first-order axiomatization of an extended subset of functional Common Lisp. Most often, ACL2 is used to produce operational semantic models of artifacts. Such models can be executed as functional Lisp programs and so have dual use as both pre-fabrication simulation engines and as analyzable mathematical models of intended (or at least designed) behavior. This project had its start 40 years ago in Edinburgh with the first Boyer-Moore Pure Lisp theorem prover and has evolved from proofs about list concatenation and reverse to proofs about industrial models. Industrial use of theorem provers to answer design questions of critical importance is so surprising to people outside of the theorem proving community that it bears emphasis. In the 1980s, the earlier Boyer-Moore theorem prover, Nqthm, was used to verify the ``Computational Logic stack'' -- a hardware/software stack starting with the NDL description of the netlist for a microprocessor and ascending through a machine code ISA, an assembler, linker, and loader, two compilers (for subsets of Pascal and Lisp), an operating system, and some simple applications. The system components were proved to compose so that properties proved of high-level software were guaranteed by the binary image produced by the composition. At around the same time, Nqthm was used to verify 21 of the 22 subroutines in the MC68020 binary machine code produced from the Berkeley C String Library by gcc -o, identifying bugs in the library as a result. Applications like these convinced us that (a) industrial-scale formal methods were practical and (b) Nqthm's Pure Lisp produced uncompetitive results compared to C when used for simulation engines. We therefore designed ACL2, which initially was Nqthm recoded to support applicative Common Lisp. The 1990s saw the first industrial application of ACL2, to verify the correspondence between a gate-level description of the Motorola CAP DSP and its microcode engine.
The Lisp model of the microcode engine was proved to be bit- and cycle-accurate but operated several times faster than the gate-level simulator in C because of the competitive execution speed of Lisp and the higher level of trusted abstraction. Furthermore, it was used to discover previously unknown microcode hazards. An executable Lisp predicate was verified to detect all hazards and subsequently used by microcode programmers to check code. This project and a subsequent one at AMD to verify the floating point division operation on the AMD K5 microprocessor demonstrated the practicality of ACL2 but also highlighted the need to develop better Lisp system programming tools wedded to formal methods, formal modeling, proof development, and ``proof maintenance'' in the face of evolution of the modeled artifacts. Much ACL2 development in the first decade of the 21st century was therefore dedicated to such tools and we have witnessed a corresponding increase in the use of ACL2 to construct and reason about commercial artifacts. ACL2 has been involved in the design of all AMD desktop microprocessors since the Athlon; specifically, ACL2 is used to verify floating-point operations on those microprocessors. Centaur Technology (chipmaker for VIA Technologies) uses ACL2 extensively in verifying its media unit and other parts of its x86 designs. Researchers at Rockwell-Collins have shown that ACL2 models of microprocessors can run at 90% of the speed of C models of those microprocessors. Rockwell-Collins has also used ACL2 to do information flow proofs to establish process separation for the AAMP7G cryptoprocessor and, on the basis of those proofs, obtained MILS certification using Formal Methods techniques as specified by EAL-7 of the Common Criteria. IBM has used ACL2 to verify floating point operations on the Power 4 and other chips. ACL2 was also used to verify key properties of the Sun Java Virtual Machine's class loader. In this talk I will sketch the 40 year history of this project, showing how the techniques and applications have grown over the years. I will demonstrate ACL2 on both some simple problems and a complicated one, and I will deal briefly with the question of how -- and with what tool -- one verifies a verifier. For scholarly details of some of how to use ACL2 and some of its industrial applications see [1, 2]. For source code, lemma libraries, and an online user's manual, see the ACL2 home page, http://www.cs.utexas.edu/users/moore/acl2", "keywords": ["hardware verification", "software stack", "operational semantics", "virtual machine verification", "jvm", "microprocessor verification", "automatic theorem proving"]} {"id": "kp20k_training_817", "title": "Deformation and fracturing using adaptive shape matching with stiffness adjustment", "abstract": "This paper presents a fast method that computes deformations with fracturing of an object using a hierarchical lattice. Our method allows numerically stable computation based on so-called shape matching. During the simulation, the deformed shape of the object and the condition of fracturing are used to determine the appropriate detail level in the hierarchy of the lattices. Our method modifies the computation of the stiffness of the object in different levels of the hierarchy so that the stiffness is maintained uniform by introducing a stiffness parameter that does not depend on the hierarchy. By merging the subdivided lattices, our method minimizes the increase of computational cost.
", "keywords": ["interactive deformation", "soft body", "shape matching", "fracturing"]} {"id": "kp20k_training_818", "title": "Using BP network for ultrasonic inspection of flip chip solder joints", "abstract": "Flip-chip technology has been used extensively in microelectronic packaging, where defect inspection for solder joints plays an extremely important role. In this paper, ultrasonic inspection, one of the non-destructive methods, was used for inspection of flip chip solder joints. The image of the flip chip was captured by scanning acoustic microscope and segmented based on the flip chip structure information. Then a back-propagation network was adopted, and the geometric features extracted from the image were fed to the network for classification and recognition. The results demonstrate the high recognition rate and feasibility of the approach. Therefore, this approach has high potentiality for solder joint defect inspection in flip chip packaging", "keywords": ["flip chip", "back-propagation network", "defect inspection", "ultrasonic inspection"]} {"id": "kp20k_training_819", "title": "augmenting reflective middleware with an aspect orientation support layer", "abstract": "Reflective middleware provides an effective way to support adaptation in distributed systems. However, as distributed systems become increasingly complex, certain drawbacks of the reflective middleware approach are becoming evident. In particular, reflective APIs are found to impose a steep learning curve, and to place too much expressive power in the hands of developers. Recently, researchers in the field of Aspect-Oriented Programming (AOP) have argued that 'dynamic aspects' show promise in alleviating these drawbacks. In this paper, we report on work that attempts to combine the reflective middleware and AOP approaches. We build an AOP support layer on top of an underlying reflective middleware substrate in such a way that it can be dynamically deployed/undeployed where and when required, and imposes no overhead when it is not used. Our AOP approach involves aspects that can be dynamically (un)weaved across a distributed system on the basis of pointcut expressions that are inherently distributed in nature, and it supports the composition of advice that is remote from the advised joinpoint. An overall goal of the work is to effectively combine reflective middleware and AOP in a way that maximises the benefits and minimises the drawbacks of each", "keywords": ["place", "aspect", "research", "aspect orientation", "adapt", "complexity", "developer", "dynamic", "middleware", "express", "components", "layer", "support", "power", "dynamic adaptation", "learning", "distributed", "paper", "effect", "compositing", "aspect-oriented programming", "distributed systems", "reflective middleware"]} {"id": "kp20k_training_820", "title": "Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition", "abstract": "In this paper we address the problem of automatic speech recognition when wireless speech communication systems are involved. In this context, three main sources of distortion should be considered: acoustic environment, speech coding and transmission errors. Whilst the first one has already received a lot of attention, the last two deserve further investigation in our opinion. We have found out that band-pass filtering of the recognition features improves ASR performance when distortions due to these particular communication systems are present. 
Furthermore, we have evaluated two alternative configurations at different bit error rates (BER) typical of these channels: both band-pass filtering of the LP-MFCC parameters and a modification of RASTA-PLP using a sharper low-pass section perform consistently better than LP-MFCC and RASTA-PLP, respectively", "keywords": ["robust speech recognition", "wireless speech recognition", "transmission errors", "modulation spectrum", "rasta-plp"]} {"id": "kp20k_training_821", "title": "MSOAR: A high-throughput ortholog assignment system based on genome rearrangement", "abstract": "The assignment of orthologous genes between a pair of genomes is a fundamental and challenging problem in comparative genomics, since many computational methods for solving various biological problems critically rely on bona fide orthologs as input. While it is usually done using sequence similarity search, we recently proposed a new combinatorial approach that combines sequence similarity and genome rearrangement. This paper continues the development of the approach and unites genome rearrangement events and (post-speciation) duplication events in a single framework under the parsimony principle. In this framework, orthologous genes are assumed to correspond to each other in the most parsimonious evolutionary scenario involving both genome rearrangement and (post-speciation) gene duplication. Besides several original algorithmic contributions, the enhanced method allows for the detection of inparalogs. Following this approach, we have implemented a high-throughput system for ortholog assignment on a genome scale, called MSOAR, and applied it to the human and mouse genomes. As the results will show, MSOAR is able to find 99 more true orthologs than the INPARANOID program did. In comparison to the iterated exemplar algorithm on simulated data, MSOAR performed favorably in terms of assignment accuracy. We also validated our predicted main ortholog pairs between human and mouse using public ortholog assignment datasets, synteny information, and gene function classification. These test results indicate that our approach is very promising for genome-wide ortholog assignment. Supplemental material and the MSOAR program are available at http://msoar.cs.ucr.edu", "keywords": ["comparative genomics", "gene duplication", "genome rearrangement", "inparalog", "ortholog"]} {"id": "kp20k_training_822", "title": "MedLDA: Maximum Margin Supervised Topic Models", "abstract": "A supervised topic model can use side information such as ratings or labels associated with documents or images to discover more predictive low dimensional topical representations of the data. However, existing supervised topic models predominantly employ likelihood-driven objective functions for learning and inference, leaving the popular and potentially powerful max-margin principle unexploited for seeking predictive representations of data and more discriminative topic bases for the corpus. In this paper, we propose the maximum entropy discrimination latent Dirichlet allocation (MedLDA) model, which integrates the mechanism behind the max-margin prediction models (e.g., SVMs) with the mechanism behind the hierarchical Bayesian topic models (e.g., LDA) under a unified constrained optimization framework, and yields latent topical representations that are more discriminative and more suitable for prediction tasks such as document classification or regression.
The principle underlying the MedLDA formalism is quite general and can be applied for jointly max-margin and maximum likelihood learning of directed or undirected topic models when supervising side information is available. Efficient variational methods for posterior inference and parameter estimation are derived, and extensive empirical studies on several real data sets are also provided. Our experimental results demonstrate qualitatively and quantitatively that MedLDA could: 1) discover sparse and highly discriminative topical representations; 2) achieve state-of-the-art prediction performance; and 3) be more efficient than existing supervised topic models, especially for classification", "keywords": ["supervised topic models", "max-margin learning", "maximum entropy discrimination", "latent dirichlet allocation", "support vector machines"]} {"id": "kp20k_training_823", "title": "LS-SVM-based image segmentation using pixel color-texture descriptors", "abstract": "Image segmentation remains an important, but hard-to-solve, problem since it appears to be application dependent with usually no a priori information available regarding the image structure. Moreover, the increasing demands of image analysis tasks in terms of segmentation quality introduce the necessity of employing multiple cues for improving image-segmentation results. In this paper, we present a least squares support vector machine (LS-SVM) based image segmentation using pixel color-texture descriptors, in which multiple cues such as edge saliency, color saliency, local maximum energy, and multiresolution texture gradient are incorporated. Firstly, the pixel-level edge saliency and color saliency are extracted based on the spatial relations between neighboring pixels in HSV color space. Secondly, the image pixels' texture features, local maximum energy and multiresolution texture gradient, are represented via the nonsubsampled contourlet transform. Then, both the pixel-level edge and color saliency and the texture features are used as input to the LS-SVM model (classifier), and the LS-SVM model (classifier) is trained by selecting the training samples with Arimoto entropy thresholding. Finally, the color image is segmented with the trained LS-SVM model (classifier). This image segmentation can fully take advantage not only of the human visual attention and local texture content of the color image, but also of the generalization ability of the LS-SVM classifier. Experimental results show that our proposed method has very promising segmentation performance compared with the state-of-the-art segmentation approaches recently proposed in the literature", "keywords": ["image segmentation", "least squares support vector machine", "human visual attention", "local texture content", "arimoto entropy thresholding"]} {"id": "kp20k_training_824", "title": "geometric verification of swirling features in flow fields", "abstract": "In this paper, we present a verification algorithm for swirling features in flow fields, based on the geometry of streamlines. The features of interest in this case are vortices. Without a formal definition, existing detection algorithms lack the ability to accurately identify these features, and the current method for verifying the accuracy of their results is by human visual inspection. Our verification algorithm addresses this issue by automating the visual inspection process. It is based on identifying the swirling streamlines that surround the candidate vortex cores.
We apply our algorithm to both numerically simulated and procedurally generated datasets to illustrate the efficacy of our approach", "keywords": ["feature verification", "flow field visualization", "vortex detection"]} {"id": "kp20k_training_825", "title": "On topology and dynamics of consensus among linear high-order agents", "abstract": "Consensus of a group of agents in a multi-agent system with and without a leader is considered. All agents are modelled by identical linear n-th order dynamical systems while the leader, when it exists, may evolve according to a different linear model of the same order. The interconnection topology between the agents is modelled as a directed weighted graph. We provide answers to the questions of whether the group converges to consensus and what consensus value the group eventually reaches. To that end, we give a detailed analysis of relevant algebraic properties of the graph Laplacian. Furthermore, we propose an LMI-based design for group consensus in the general case", "keywords": ["consensus", "multi-agent systems", "interconnection topology", "graphs"]} {"id": "kp20k_training_826", "title": "Human cognition in manual assembly: Theories and applications", "abstract": "Human cognition in production environments is analyzed with respect to various findings and theories in cognitive psychology. This theoretical overview describes effects of task complexity and attentional demands on both mental workload and task performance as well as presents experimental data on these topics. A review of two studies investigating the benefit of augmented reality and spatial cueing in an assembly task is given. Results demonstrate an improvement in task performance with attentional guidance while using contact analog highlighting. Improvements were evident in reduced performance times and eye fixations as well as in increased velocity and acceleration of reaching and grasping movements. These results have various implications for the development of an assistive system. Future directions in this line of applied research are suggested. The introduced methodology illustrates how the analysis of human information processes and psychological experiments can contribute to the evaluation of engineering applications", "keywords": ["human cognition", "information processing", "visual attention", "mental workload", "task complexity", "worker assistance"]} {"id": "kp20k_training_827", "title": "Enabling Warping on Stereoscopic Images", "abstract": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user-specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map.
Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping", "keywords": ["stereoscopic image warping", "disparity mapping"]} {"id": "kp20k_training_828", "title": "Markov chain modeling of intermittency chaos and its application to Hopfield NN", "abstract": "In this study, a modeling method of the intermittency chaos using the Markov chain is proposed. The performances of the intermittency chaos and the Markov chain model are investigated when they are injected into the Hopfield Neural Network for a quadratic assignment problem or an associative memory. Computer-simulated results show that the proposed modeling is good enough to attain performance similar to that of the intermittency chaos", "keywords": ["intermittency chaos", "burst noise", "markov chain", "neural network", "qap", "associative memory"]} {"id": "kp20k_training_829", "title": "Splitting Integrators for Nonlinear Schrodinger Equations Over Long Times", "abstract": "Conservation properties of a full discretization via a spectral semi-discretization in space and a Lie-Trotter splitting in time for cubic Schrodinger equations with small initial data (or small nonlinearity) are studied. The approximate conservation of the actions of the linear Schrodinger equation, energy, and momentum over long times is shown using modulated Fourier expansions. The results are valid in arbitrary spatial dimension", "keywords": ["nonlinear schrodinger equation", "splitting integrators", "split-step fourier method", "long-time behavior", "near-conservation of actions", "energy", "momentum", "modulated fourier expansion"]} {"id": "kp20k_training_830", "title": "Generalized scans and tridiagonal systems", "abstract": "Motivated by the analysis of known parallel techniques for the solution of linear tridiagonal systems, we introduce generalized scans, a class of recursively defined length-preserving, sequence-to-sequence transformations that generalize the well-known prefix computations (scans). Generalized scan functions are described in terms of three algorithmic phases: the reduction phase, which saves data for the third (expansion) phase and prepares data for the second phase, which is a recursive invocation of the same function on one fewer variable. Both the reduction and expansion phases operate on a bounded number of variables, a key feature for their parallelization. Generalized scans enjoy a property, called here protoassociativity, that gives rise to ordinary associativity when generalized scans are specialized to ordinary scans. We show that the solution of positive-definite block tridiagonal linear systems can be cast as a generalized scan, thereby shedding light on the underlying structure enabling known parallelization schemes for this problem. We also describe a variety of parallel algorithms including some that are well known for tridiagonal systems and some that are much better suited to distributed computation. ", "keywords": ["parallel computation", "prefix", "scan", "numerical computation", "tridiagonal linear system"]} {"id": "kp20k_training_832", "title": "Efficient normal basis multipliers in composite fields", "abstract": "It is well-known that a class of finite fields GF(2^n) using an optimal normal basis is most suitable for a hardware implementation of arithmetic in finite fields.
In this paper, we introduce composite fields with some hardware-applicable properties resulting from the normal basis representation and the optimal condition. We also present a hardware architecture of the proposed composite fields, including a bit-parallel multiplier", "keywords": ["finite field", "composite field", "optimal normal basis", "bit-parallel multiplier"]} {"id": "kp20k_training_833", "title": "A stable second-order scheme for fluid-structure interaction with strong added-mass effects", "abstract": "In this paper, we present a stable second-order time accurate scheme for solving fluid-structure interaction problems. The scheme uses a so-called Combined Field with Explicit Interface (CFEI) advancing formulation based on the Arbitrary Lagrangian-Eulerian approach with a finite element procedure. Although loosely-coupled partitioned schemes are often popular choices for simulating FSI problems, these schemes may suffer from inherent instability at low structure to fluid density ratios. We show that our second-order scheme is stable for any mass density ratio and hence is able to handle strong added-mass effects. The energy-based stability proof relies heavily on the connections among the extrapolation formula, the trapezoidal scheme for the second-order equation, and the backward difference method for the first-order equation. Numerical accuracy and stability of the scheme is assessed with the aid of two-dimensional fluid-structure interaction problems of increasing complexity. We confirm second-order temporal accuracy by numerical experiments on an elastic semi-circular cylinder problem. We verify the accuracy of coupled solutions with respect to the benchmark solutions of a cylinder-elastic bar and the Navier-Stokes flow system. To study the stability of the proposed scheme for strong added-mass effects, we present new results using the combined field formulation for flexible flapping motion of a thin-membrane structure with low mass ratio and strong added-mass effects in a uniform axial flow. Using a systematic series of fluid-structure simulations, a detailed analysis of the coupled response as a function of mass ratio for the case of very low bending rigidity has been presented", "keywords": ["fluid-structure interaction", "low mass density ratio", "combined field with explicit interface", "second order", "stability proof", "flapping dynamics", "strong added-mass"]} {"id": "kp20k_training_834", "title": "Network information flow", "abstract": "A formal model for the analysis of information flow in interconnection networks is presented. It is based on timed process algebra which can also express network properties. The information flow is based on a concept of deducibility on composition. Robustness of systems against network timing attacks is defined. A variety of different security properties which reflect different security requirements are defined and investigated", "keywords": ["security", "information flow", "timing attack", "interconnection network"]} {"id": "kp20k_training_835", "title": "Interactive reduct evolutional computation for aesthetic design", "abstract": "We propose a method of evolving designs based on the user's personal preferences. The method works through an interaction between the user and a computer system. The method's objective is to help the customer to set design parameters via a simple evaluation of displayed samples.
An important feature is that the design attributes to which the user pays more attention (favored features) are estimated using reducts in rough set theory and reflected when refining the design. New design candidates are generated by the user's evaluation of design samples generated at random. The values of attributes estimated as favored features are fixed in the refined samples, while other attributes are generated at random. This interaction continues until the samples converge to a satisfactory design. In this manner, the design process efficiently evaluates personal and subjective preferences. The method is applied to design a 3D cylinder model such as a cup or vase. The method is then compared with an Interactive GA", "keywords": ["conceptual design", "reduct", "rough set theory", "aesthetics", "kansei", "human attention", "favored feature"]} {"id": "kp20k_training_836", "title": "A multiple criteria sorting method where each category is characterized by several reference actions: The Electre Tri-nC method", "abstract": "This paper presents Electre Tri-nC, a new sorting method which takes into account several reference actions for characterizing each category. This new method gives a particular freedom to the decision maker in the co-construction decision aiding process with the analyst to characterize the set of categories, while there is no constraint for introducing only one reference action as typical of each category like in Electre Tri-C (Almeida-Dias et al., 2010). As in such a sorting method, this new sorting method is composed of two joint rules. Electre Tri-nC also fulfills a certain number of natural requirements. Additional results on the behavior of the new method are also provided in this paper, namely the ones with respect to the addition or removal of the reference actions used for characterizing a certain category. A numerical example illustrates the manner in which Electre Tri-nC can be used by a decision maker. A comparison with some related sorting procedures is presented and it allows us to conclude that the new method is appropriate to deal with sorting problems", "keywords": ["multiple criteria decision aiding", "constructive approach", "sorting", "electre tri-nc", "decision support"]} {"id": "kp20k_training_837", "title": "Estimating unique solutions of DC transistor circuits", "abstract": "For each natural n let F_n denote the collection of mappings of R^n onto itself defined by: F is an element of F_n if and only if there exist n strictly monotone increasing functions f_k mapping R onto itself such that for each x = [x_1,...,x_n]^T in R^n, F(x) = [f_1(x_1),...,f_n(x_n)]^T. The following new property of the class P_0 of matrices is proved: a real n x n matrix A belongs to P_0 if and only if for every G, H in F_n the set S_0 = {x in R^n : -G(x) <= Ax <= -H(x)} is bounded. As an illustration of this property a method of estimating the unique solution of the nonlinear equation F(x) + Ax = b describing a large class of DC transistor circuits is developed. This can improve the efficiency of known computation algorithms.
Numerical examples of transistor circuits illustrate in detail how the method works in practice", "keywords": ["electronics", "estimation", "mathematical methods", "mathematics"]} {"id": "kp20k_training_838", "title": "Multiple topic identification in human/human conversations", "abstract": "A multiple classification method for multiple theme hypothesization is proposed. Four methods, one of which is new, are initially used and separately evaluated. A new sequential decision strategy for multiple theme hypothesization is introduced. A new hypothesis refinement component is presented, based on ASR word lattices. Results show that the strategy makes it possible to obtain reliable service surveys", "keywords": ["human/human conversation analysis", "multi-topic identification", "spoken language understanding", "interpretation strategies"]} {"id": "kp20k_training_839", "title": "towards a documentation maturity model", "abstract": "This paper presents preliminary work towards a maturity model for system documentation. The Documentation Maturity Model (DMM) is specifically targeted towards assessing the quality of documentation used in aiding program understanding. Software engineers and technical writers produce such documentation during regular product development lifecycles. The documentation can also be recreated after the fact via reverse engineering. The DMM has both process and product components; this paper focuses on the product quality aspects", "keywords": ["maturity model", "reverse engineering", "quality", "documentation"]} {"id": "kp20k_training_840", "title": "Numerical representation of product transitive complete fuzzy orderings", "abstract": "Let X be a space of alternatives with a preference relation in the form of a product transitive complete fuzzy ordering R. We prove the existence of continuous utility functions for R. ", "keywords": ["fuzzy utility function", "product transitivity", "fuzzy orderings"]} {"id": "kp20k_training_841", "title": "Design of WDM RoF PON Based on OFDM and Optical Heterodyne", "abstract": "In this paper, we propose a WDM radio-over-fiber (RoF) passive optical network (PON) based on orthogonal frequency-division multiplexing (OFDM) and optical heterodyne. With OFDM and coherent receiving technology, the system achieves high, elastic bandwidth allocation and excellent transport properties. Using optical heterodyne, the network implements wireless access without adding a radio source. We evaluate the performance of the system in terms of bit error rate, coverage area, and receiving eye diagram, and find that the network offers excellent wired/wireless access properties", "keywords": ["optical heterodyne", "orthogonal frequency-division multiplexing", "passive optical network", "radio-over-fiber"]} {"id": "kp20k_training_842", "title": "Image retrieval via isotropic and anisotropic mappings", "abstract": "This paper presents an approach for content-based image retrieval via isotropic and anisotropic mappings. Isotropic mappings are defined as mappings invariant to the action of the planar Euclidean group on the image space, i.e., invariant to the translation, rotation and reflection of image data, and hence, invariant to orientation and position. Anisotropic mappings, on the other hand, are defined as those mappings that are correspondingly variant. Structure extraction (via a perceptual grouping process) and color histogram are shown to be representations of isotropic mappings.
Texture analysis using a channel energy model comprised of even-symmetric Gabor filters is considered to be a representation of anisotropic mapping. An integration framework for these mappings is developed. Results of retrieval of outdoor images by query and by classification using a nearest neighbor classifier are presented", "keywords": ["image retrieval", "euclidean group", "perceptual grouping", "structure", "texture", "color histogram", "gabor filter", "nearest neighbor classifier"]} {"id": "kp20k_training_843", "title": "Physical gestures for abstract concepts: Inclusive design with primary metaphors", "abstract": "Designers in inclusive design are challenged to create interactive products that cater for a wide range of prior experiences and cognitive abilities of their users. But suitable design guidance for this task is rare. This paper proposes the theory of primary metaphor and explores its validity as a source of design guidance. Primary metaphor theory describes how basic mental representations of physical sensorimotor experiences are extended to understand abstract domains. As primary metaphors are subconscious mental representations that are highly automated, they should be robustly available to people with differing levels of cognitive ability. Their proposed universality should make them accessible to people with differing levels of prior experience with technology. These predictions were tested for 12 primary metaphors that predict relations between spatial gestures and abstract interactive content. In an empirical study, 65 participants from two age groups (young and old) were asked to produce two-dimensional touch and three-dimensional free-form gestures in response to given abstract keywords and spatial dimensions of movements. The results show that across age groups in 92% of all cases users choose gestures that confirmed the predictions of the theory. Although the two age groups differed in their cognitive abilities and prior experience with technology, overall they did not differ in the amount of metaphor-congruent gestures they made. As predicted, only small or zero correlations of metaphor-congruent gestures with prior experience or cognitive ability could be found. The results provide a promising step toward inclusive design guidelines for gesture interaction with abstract content on mobile multitouch devices. ", "keywords": ["gesture interaction", "multi-touch interaction", "image schema", "conceptual metaphor", "inclusive design", "older adults"]} {"id": "kp20k_training_844", "title": "MBS zone configuration schemes for wireless multicast and broadcast service", "abstract": "The Multicast Broadcast Service (MBS) zone technology is proposed to provide MBS with high QoS on Mobile Communications Networks (MCNs). An MBS zone consists of a group of Base Stations (BSs) synchronized to transmit the same MBS content using the same multicasting channel, which potentially reduces the time delay for Mobile Stations (MSs) to handoff between different BSs in the same MBS zone. However, significant time delay still incurs while MSs handoff between different BSs belonging to different MBS zones (i.e., the inter-MBS zone handoff). To reduce the possibility for the inter-MBS zone handoff, we may increase the size of an MBS zone (i.e., more BSs contained in an MBS zone), which may result in poor multicasting channel utilization. 
This paper proposes the OverLapping Scheme (OLS) and the Enhanced OverLapping Scheme (EOLS) for more flexible MBS zone configuration to obtain better MBS performance in terms of QoS and radio resource utilization. We propose analytical models for the original MBS zone technology (namely the Basic scheme) and the OLS scheme, which are validated against simulation experiments. Based on the simulation results, we investigate the performance of the Basic scheme, the OLS scheme, and the EOLS scheme. ", "keywords": ["handoff delay", "multicast broadcast service", "mobile communications network"]} {"id": "kp20k_training_846", "title": "OLSR-aware channel access scheduling in wireless mesh networks", "abstract": "Wireless mesh networks (WMNs) have emerged as a key technology having various advantages, especially in providing cost-effective coverage and connectivity solutions in both rural and urban areas. WMNs are typically deployed as backbone networks, usually employing spatial TDMA (STDMA)-based access schemes which are suitable for the high traffic demands of WMNs. This paper aims to achieve higher utilization of the network capacity and thereby aims to increase the application layer throughput of STDMA-based WMNs. The central idea is to use optimized link state routing (OLSR)-specific routing layer information in link layer channel access schedule formation. This paper proposes two STDMA-based channel access scheduling schemes (one distributed, one centralized) that exploit OLSR-specific information to improve the application layer throughput without introducing any additional messaging overhead. To justify the contribution of using OLSR-specific information to the throughput, the proposed schemes are compared against one another and against their non-OLSR-aware versions via extensive ns-2 simulations. Our simulation results verify that utilizing OLSR-specific information significantly improves the overall network performance both in distributed and in centralized schemes. The simulation results further show that OLSR-aware scheduling algorithms attain higher end-to-end throughput although their non-OLSR-aware counterparts achieve higher concurrency in slot allocations. ", "keywords": ["cross-layer design", "spatial tdma", "olsr", "mac", "centralized channel access scheduling", "distributed channel access scheduling"]} {"id": "kp20k_training_847", "title": "Privacy-preserving indexing of documents on the network", "abstract": "With the ubiquitous collection of data and creation of large distributed repositories, enabling search over this data while respecting access control is critical. A related problem is that of ensuring privacy of the content owners while still maintaining an efficient index of distributed content. We address the problem of providing privacy-preserving search over distributed access-controlled content. Indexed documents can be easily reconstructed from conventional (inverted) indexes used in search. Currently, the need to avoid breaches of access-control through the index requires the index hosting site to be fully secured and trusted by all participating content providers. This level of trust is impractical in the increasingly common case where multiple competing organizations or individuals wish to selectively share content. We propose a solution that eliminates the need for such a trusted authority. The solution builds a centralized privacy-preserving index in conjunction with a distributed access-control enforcing search protocol.
Two alternative methods to build the centralized index are proposed, allowing trade-offs of efficiency and security. The new index provides strong and quantifiable privacy guarantees that hold even if the entire index is made public. Experiments on a real-life dataset validate the performance of the scheme. The appeal of our solution is twofold: (a) content providers maintain complete control in defining access groups and ensuring compliance, and (b) system implementors retain tunable knobs to balance privacy and efficiency concerns for their particular domains", "keywords": ["privacy", "indexing", "distributed search"]} {"id": "kp20k_training_848", "title": "cross layer optimization for efficient data aggregation in multi-hop wireless sensor networks", "abstract": "Wireless Sensor Networks (WSNs) are the most promising technological paradigm to support next-generation highly efficient emergency management systems. Optimal design of WSNs involves all the layers of the protocol stack: from the physical (PHY) and the medium access layer (MAC) to the application layer. The design problem is conveniently cast in this paper for linear sensor network topologies where the terminals are equidistantly placed on the line between the source and the destination and are monitoring a correlated field. This simple topology can be adopted to provide insights into the performance of multihop networks used in several applications such as monitoring systems, acoustic sensor arrays, seismic systems, etc. The paper provides an analytical tool for performance analysis that takes into account both the statistical properties of the monitored field (spatial and temporal correlation), the PHY layer transceiver design (RF power allocation and modulation) and the medium access (duty cycle, routing", "keywords": ["source coding", "wireless sensor networks", "cross-layer design", "linear network topology", "compress and forward"]} {"id": "kp20k_training_849", "title": "Rank-order polynomial subband decomposition for medical image compression", "abstract": "In this paper, the problem of progressive lossless image coding is addressed. A nonlinear decomposition for progressive lossless compression is presented. The decomposition into subbands is called rank-order polynomial decomposition (ROPD) according to the polynomial prediction models used. The decomposition method presented here is a further development and generalization of the morphological subband decomposition (MSD) introduced earlier by the same research group. It is shown that ROPD provides similar or slightly better results than the compared coding schemes such as the codec based on set partitioning in hierarchical trees (SPIHT) and the codec based on wavelet/trellis-coded quantization (WTCQ). Our proposed method greatly outperforms the standard JPEG. The proposed lossless compression scheme produces a completely embedded bit stream, which allows for data browsing. It is shown that the ROPD not only has a better lossless rate than the MSD but also a much better browsing quality when only a part of the bit stream is decompressed. Finally, the possibility of hybrid lossy/lossless compression is presented using ultrasound images.
As with other compression algorithms, considerable gain can be obtained if only the regions of interest are compressed losslessly", "keywords": ["medical image compression", "nonlinear subband decomposition", "progressive lossless image coding", "rank-order polynomial decomposition"]} {"id": "kp20k_training_850", "title": "LiNearN: A new approach to nearest neighbour density estimator", "abstract": "Reject the premise that a NN algorithm must find the NN for every instance. The first NN density estimator that has O(n) time complexity and O(1) space complexity. These complexities are achieved without using any indexing scheme. Our asymptotic analysis reveals that it trades off between bias and variance. Easily scales up to large data sets in anomaly detection and clustering tasks", "keywords": ["k-nearest neighbour", "density-based", "anomaly detection", "clustering"]} {"id": "kp20k_training_851", "title": "Beauty or realism: The dimensions of skin from cognitive sciences to computer graphics", "abstract": "As the most visible interface between the individual and the others, the skin is a key element of visually-carried inter-individual social information, since skin displays a wide array of information regarding gender, age, or health status. Adequate skin perception is central in individual identification and social interactions. This topic has elicited marked interest among artists since the first development of visual arts in Antiquity. Often performed in order to identify the biological correlates of attractiveness, psychological research on skin perception made a jump forward with the development of virtual image synthesis. Here, we investigate how advances in both computer graphics and the psychology of skin perception may be turned to use in real-time virtual worlds. We propose a model of skin perception based both on purely physical dimensions such as color, texture, and symmetry, and on dimensions carrying socially-oriented information, such as perceived youth (information regarding putative fertility), markers of sexual dimorphism (information regarding hormonal status), and level of oxygenation (information regarding health status). It appears that for almost all of the dimensions of skin, maximal attractiveness and realism are the two opposite extremities of a single perceptive continuum", "keywords": ["avatar", "human-machine interactions", "uncanny valley", "skin perception", "synthesized skin", "virtual settings"]} {"id": "kp20k_training_852", "title": "Fault diagnosis by Locality Preserving Discriminant Analysis and its kernel variation", "abstract": "Linear Discriminant Analysis (LDA) and its nonlinear kernel variation Generalized Discriminant Analysis (GDA) are the most popular supervised dimensionality reduction methods for fault diagnosis. However, we argue that they probably provide suboptimal results for fault diagnosis due to the Fisher criterion they use. This paper proposes a new supervised dimensionality reduction method named Locality Preserving Discriminant Analysis (LPDA) and its kernel variation Kernel LPDA (KLPDA) for fault diagnosis. (K)LPDA maximizes a new criterion such that local discriminant structure and local geometric structure in data are optimally preserved simultaneously in each dimension of the reduced space. The criterion directly targets minimizing local overlapping between different classes.
Extensive simulations on the Tennessee Eastman (TE) benchmark simulation process and a waste water treatment plant (WWTP) clearly demonstrate the superiority of our methods in terms of misclassification rate and making use of extra training data. ", "keywords": ["fault diagnosis", "multi-fault classification", "feature extraction", "local structure preserving", "kernel methods"]} {"id": "kp20k_training_853", "title": "Wireless distributed computing in cognitive radio networks", "abstract": "Individual cognitive radio nodes in an ad-hoc cognitive radio network (CRN) have to perform complex data processing operations for several purposes, such as situational awareness and cognitive engine (CE) decision making. In an implementation point of view, each cognitive radio (CR) may not have the computational and power resources to perform these tasks by itself. In this paper, wireless distributed computing (WDC) is presented as a technology that enables multiple resource-constrained nodes to collaborate in computing complex tasks in a distributed manner. This approach has several benefits over the traditional approach of local computing, such as reduced energy and power consumption, reduced burden on the resources of individual nodes, and improved robustness. However, the benefits are negated by the communication overhead involved in WDC. This paper demonstrates the application of WDC to CRNs with the help of an example CE processing task. In addition, the paper analyzes the impact of the wireless environment on WDC scalability in homogeneous and heterogeneous environments. The paper also proposes a workload allocation scheme that utilizes a combination of stochastic optimization and decision-tree search approaches. The results show limitations in the scalability of WDC networks, mainly due to the communication overhead involved in sharing raw data pertaining to delegated computational tasks", "keywords": ["distributed computing", "cognitive radio networks", "cognitive engine", "power and energy consumption", "workload allocation"]} {"id": "kp20k_training_854", "title": "Advertisement timeout driven bee's mating approach to maintain fair energy level in sensor networks", "abstract": "In wireless sensor network, dynamic cluster-based routing approach is widely used. Such practiced approach, quickly depletes the energy of cluster heads and induces the execution of frequent re-election algorithm. This repeated cluster head re-election algorithm increases the number of advertisement messages, which in turn depletes the energy of overall sensor network. Here, we proposed the Advertisement Timeout Driven Bee's Mating Approach (ATDBMA) that reduces the cluster set-up communication overhead and elects the standby node in advance for current cluster head, which has the capability to withstand for many rounds. Our proposed ATDBMA method uses the honeybee mating behaviour in electing the standby node for current cluster head. This approach really outperforms the other methods in achieving reduced number of re-election and maintaining fair energy nodes between the rounds", "keywords": ["wireless sensor network", "bee's mating", "advertisement timeout"]} {"id": "kp20k_training_856", "title": "Protection against soft errors in the space environment: A finite impulse response (FIR) filter case study", "abstract": "The problem of radiation is a key issue in Space applications, since it produces several negative effects on digital circuits. 
Considering the high reliability expected in these systems, many techniques have been proposed to mitigate these effects. However, traditional protection techniques against soft errors, like Triple Modular Redundancy (TMR) or EDAC codes (for example Hamming), normally result in a significant area and power overhead. In this paper we propose a specific technique to protect digital finite impulse response (FIR) filters applying the system knowledge. This means to study and use the singularities in their structure in order to provide effective protection with minimal area and power. The results obtained in the experimental process have been compared with the protection offered by TMR and Hamming codes, in order to prove the quality of the proposed solution", "keywords": ["fault tolerance", "soft errors", "radiation", "error detection and correction codes", "digital filters"]} {"id": "kp20k_training_857", "title": "Minimizing the dynamic and sub-threshold leakage power consumption using least leakage vector-assisted technology mapping", "abstract": "Power consumption due to the temperature-dependent leakage current becomes a dominant part of the total power dissipation in systems using nanometer-scale process technology. To obtain the minimum power consumption for different operating conditions, logic synthesis tools are required to take into consideration the leakage power as well as the operating characteristics during the optimization. Conventional logic synthesis flows consider dynamic power only and use an over-simplified cost function in modeling the total power consumption of the logic network. In this paper, we propose a complete model of the total power consumption of the logic network, which includes both the active and standby sub-threshold leakage power, and the operating duty cycle of the applications. We also propose a least leakage vector (LLV) assisted technology mapping algorithm to optimize the total power of the final mapped network. Instead of finding the LLV after the logic network is synthesized and mapped, we use the LLV found in the technology-decomposed network to help in obtaining the lowest total power match during technology mapping. Experimental results on MCNC benchmarks show that on average more than 30% reduction in total power consumption is obtained comparing with the conventional low power technology mapping algorithm", "keywords": ["sub-threshold leakage power reduction", "least leakage vector", "technology mapping"]} {"id": "kp20k_training_858", "title": "Chaos breeds autonomy: connectionist design between bias and baby-sitting", "abstract": "In connectionism and its offshoots, models acquire functionality through externally controlled learning schedules. This undermines the claim of these models to autonomy. Providing these models with intrinsic biases is not a solution, as it makes their function dependent on design assumptions. Between these two alternatives, there is room for approaches based on spontaneous self-organization. Structural reorganization in adaptation to spontaneous activity is a well-known phenomenon in neural development. 
It is proposed here as a way to prepare connectionist models for learning and enhance the autonomy of these models", "keywords": ["small world", "non-linear dynamics", "perception", "spontaneous activity", "complex systems", "evolving and growing neural networks", "cognitive modeling"]} {"id": "kp20k_training_859", "title": "An improvement on the complexity of factoring read-once Boolean functions", "abstract": "Read-once functions have gained recent, renewed interest in the fields of theory and algorithms of Boolean functions, computational learning theory and logic design and verification. In an earlier paper [M.C. Golumbic, A. Mintz, U. Rotics, Factoring and recognition of read-once functions using cographs and normality, and the readability of functions associated with partial k-trees, Discrete Appl. Math. 154 (2006) 1465-1677], we presented the first polynomial-time algorithm for recognizing and factoring read-once functions, based on a classical characterization theorem of Gurvich which states that a positive Boolean function is read-once if and only if it is normal and its co-occurrence graph is P4-free. In this note, we improve the complexity bound by showing that the method can be modified slightly, with two crucial observations, to obtain an O(n|f|) implementation, where |f| denotes the length of the DNF expression of a positive Boolean function f, and n is the number of variables in f. The previously stated bound was O(n^2 k), where k is the number of prime implicants of the function. In both cases, f is assumed to be given as a DNF formula consisting entirely of the prime implicants of the function", "keywords": ["read-once functions", "logic", "boolean functions", "cographs"]} {"id": "kp20k_training_860", "title": "A Dutch medical language processor: part II: evaluation", "abstract": "This paper provides a preliminary evaluation of a general Dutch medical language processor (DMLP). Four examples of different potential applications (based on different linguistic modules) are presented, each with its own evaluation method. Finally, a critical review of the evaluation methods used is offered according to the state of the art in medical language processing", "keywords": ["medical language processing", "computational linguistics", "information processing", "automated encoding"]} {"id": "kp20k_training_861", "title": "Privacy-Preserving Distributed Network Troubleshooting-Bridging the Gap between Theory and Practice", "abstract": "Today, there is a fundamental imbalance in cybersecurity. While attackers act more and more globally and in a coordinated manner, network defense is limited to examining local information only, due to privacy concerns. To overcome this privacy barrier, we use secure multiparty computation (MPC) for the problem of aggregating network data from multiple domains. We first optimize MPC comparison operations for processing high volume data in near real-time by not enforcing protocols to run in a constant number of synchronization rounds. We then implement a complete set of basic MPC primitives in the SEPIA library. For parallel invocations, SEPIA's basic operations are between 35 and several hundred times faster than those of comparable MPC frameworks. Using these operations, we develop four protocols tailored for distributed network monitoring and security applications: the entropy, distinct count, event correlation, and top-k protocols. Extensive evaluation shows that the protocols are suitable for near real-time data aggregation.
For example, our top-k protocol PPTKS accurately aggregates counts for 180,000 distributed IP addresses in only a few minutes. Finally, we use SEPIA with real traffic data from 17 customers of a backbone network to collaboratively detect, analyze, and mitigate distributed anomalies. Our work follows a path starting from theory, going to system design, performance evaluation, and ending with measurement. Along this way, it makes a first effort to bridge two very disparate worlds: MPC theory and network monitoring and security practices", "keywords": ["algorithms", "design", "experimentation", "measurement", "security", "applied cryptography", "secure multiparty computation", "collaborative network security", "anomaly detection", "network management", "root-cause analysis", "aggregation"]} {"id": "kp20k_training_862", "title": "Better GP benchmarks: community survey results and proposals", "abstract": "We present the results of a community survey regarding genetic programming benchmark practices. Analysis shows broad consensus that improvement is needed in problem selection and experimental rigor. While views expressed in the survey dissuade us from proposing a large-scale benchmark suite, we find community support for creating a blacklist of problems which are in common use but have important flaws, and whose use should therefore be discouraged. We propose a set of possible replacement problems", "keywords": ["genetic programming", "benchmarks", "community survey"]} {"id": "kp20k_training_863", "title": "A Tabular Steganography Scheme for Graphical Password Authentication", "abstract": "Authentication, authorization and auditing are the most important issues of security in data communication. In particular, authentication is an essential part of every individual's daily life. The security of user authentication depends on the strength of the user's password. A secure password is usually random, strange, very long and difficult to remember, and for most users remembering such irregular passwords is very difficult. Memorability and security are two sides of the same coin. In this paper, we propose a new graphical password authentication protocol to solve this problem. Graphical password authentication replaces the typing of characters with clicks on an image. The graphical user interface can help users easily create and remember their secure passwords. However, although a graphical password system based on images can provide an alternative to text passwords, storing many images raises a large database storage issue. In our scheme, all of this information is hidden using steganography, which solves the database storage problem. Furthermore, the tabular steganography technique in our scheme solves the problem of information eavesdropping during data transmission. Our modified graphical password system can help users memorize their passwords easily without any loss of authentication security. The user's chosen input is hidden in an image using steganography and transferred to the server securely, out of the reach of hackers.
The authentication server then needs to store only a secret key for decryption instead of a large password database", "keywords": ["graphical password authentication", "security", "steganography", "protocol"]} {"id": "kp20k_training_864", "title": "A framework for optimal correction of inconsistent linear constraints", "abstract": "The problem of inconsistency between constraints often arises in practice as the result, among others, of the complexity of real models or due to unrealistic requirements and preferences. To overcome such inconsistency two major actions may be taken: removal of constraints or changes in the coefficients of the model. This last approach, which can be generically described as "model correction", is the problem we address in this paper in the context of linear constraints over the reals. The correction of the right hand side alone, which is very close to a fuzzy constraints approach, was one of the first proposals to deal with inconsistency, as it may be mapped into a linear problem. The correction of both the matrix of coefficients and the right hand side introduces nonlinearity in the constraints. The degree of difficulty in solving the problem of the optimal correction depends on the objective function, whose purpose is to measure the closeness between the original and corrected model. Contrary to other norms, which provide corrections with quite rigid patterns, the optimization of the important Frobenius norm was still an open problem. We have analyzed the problem using the KKT conditions and derived necessary and sufficient conditions which enabled us to unequivocally characterize local optima, in terms of the solution of the Total Least Squares problem and the set of active constraints. These conditions justify a set of pruning rules, which proved, in preliminary experimental results, quite successful in a tree search procedure for determining the global minimizer", "keywords": ["infeasibility", "flexible constraints", "linear constraints", "optimal correction"]} {"id": "kp20k_training_865", "title": "User interface evaluation and empirically-based evolution of a prototype experience management tool", "abstract": "Experience management refers to the capture, structuring, analysis, synthesis, and reuse of an organization's experience in the form of documents, plans, templates, processes, data, etc. The problem of managing experience effectively is not unique to software development, but the field of software engineering has had a high-level approach to this problem for some time. The Experience Factory is an organizational infrastructure whose goal is to produce, store, and reuse experiences gained in a software development organization [6], [7], [8]. This paper describes The Q-Labs Experience Management System (Q-Labs EMS), which is based on the Experience Factory concept and was developed for use in a multinational software engineering consultancy [31]. A critical aspect of the Q-Labs EMS project is its emphasis on empirical evaluation as a major driver of its development and evolution. The initial prototype requirements were grounded in the organizational needs and vision of Q-Labs, as were the goals and evaluation criteria later used to evaluate the prototype. However, the Q-Labs EMS architecture, data model, and user interface were designed to evolve, based on evolving user needs.
This paper describes this approach, including the evaluation that was conducted of the initial prototype and its implications for the further development of systems to support software experience management", "keywords": ["experience management", "knowledge management", "experience reuse", "user interface evaluation", "empirical study"]} {"id": "kp20k_training_866", "title": "Energy-aware performance analysis methodologies for HPC architecturesAn exploratory study", "abstract": "Performance analysis is a crucial step in HPC architectures including clouds. Traditional performance analysis methodologies that were proposed, implemented, and enacted are functional with the objective of identifying bottlenecks or issues related to memory, programming languages, hardware, and virtualization aspects. However, the need for energy efficient architectures in highly scalable computing environments, such as, Grid or Cloud, has widened the research thrust on developing performance analysis methodologies that analyze the energy inefficiency of HPC applications or their associated hardware. This paper surveys the performance analysis methodologies that investigates into the available energy monitoring and energy awareness mechanisms for HPC architectures. In addition, the paper validates the existing tools in terms of overhead, portability, and user-friendly parameters by conducting experiments at HPCCLoud Research Laboratory at our premise. This research work will promote HPC application developers to select an apt monitoring mechanism and HPC tool developers to augment required energy monitoring mechanisms which fit well with their basic monitoring infrastructures", "keywords": ["hpc", "performance analysis", "tools", "energy monitoring"]} {"id": "kp20k_training_867", "title": "locating the tightest link of a network path", "abstract": "The tightest link of a network path is the link where the end-to-end available bandwidth is limited. We propose a new probe technique, called Dual Rate Periodic Streams (DRPS), for finding the location of the tightest link. A DRPS probe is a periodic stream with two rates. Initially, it goes through the path at a comparatively high rate. When arrived at a particular link, the probe shifts its rate to a lower level and keeps the rate. If proper rates are set to the probe, we can control whether the probe is congested or not by adjusting the shift time. When the point of rate shift is in front of the tightest link, the probe can go through the path without congestion, otherwise congestion occurs. Thus, we can find the location of the tightest link by congestion detection at the receiver", "keywords": ["available bandwidth", "network measurements", "dual rate periodic streams ", "tight link"]} {"id": "kp20k_training_868", "title": "Research methodology - Using online technology for secondary analysis of survey research data - \"Act globally, think locally", "abstract": "The purpose of the this article is to discuss the impact that online technologies are having and will continue to have on the way secondary analysis of survey research is performed. The authors discuss the validity of secondary analysis of survey research studies and the effect that online technology has on such analyses. Before reviewing current online public opinion sources, the authors make the argument that online services are becoming increasingly important for secondary analysis. 
Finally, the authors present a model indicating where online services can go in the future given the technology that is available today. Ultimately, it is believed that the Internet is currently underexploited for its capacity to aid secondary analysis. The authors advocate making survey data more easily available online to all potential users. This entails varying the format and depth of data so that users find sources suitable to their needs. It also entails the use of desktop technology to store and analyze survey research data and making that technology, or the applications that are developed through that technology, available to other users via computer networks, primarily via the Internet", "keywords": ["survey research", "unix", "pdf", "online", "secondary analysis", "cgi"]} {"id": "kp20k_training_869", "title": "Free vibration analysis of multiple-stepped beams by using Adomian decomposition method", "abstract": "The Adomian decomposition method (ADM) is employed in this paper to investigate the free vibrations of Euler-Bernoulli beams with multiple cross-section steps. The proposed ADM approach can be used to analyze the vibration of beams consisting of an arbitrary number of steps in a recursive way. The solution can be obtained by solving a set of algebraic equations with only three unknown parameters. Furthermore, the method can be extended to obtain an approximate solution to vibration problems of any type of non-uniform beams. Several numerical examples are presented and compared with those given in the literature. It is shown that the ADM offers an accurate and effective method of free vibration analysis of multiple-stepped beams with arbitrary boundary conditions. ", "keywords": ["adomian decomposition method", "vibration analysis", "multiple stepped beam", "natural frequency", "mode shape"]} {"id": "kp20k_training_870", "title": "Modelling and performance evaluation of mobile multimedia systems using QoS-GSPN", "abstract": "Quality of Service (QoS) measurement of multimedia applications is one of the most important issues for call handoff and call admission control in mobile networks. Based on the QoS measures, we propose a Generalized Stochastic Petri Net (GSPN) based model, called QoS-GSPN, which can express the real-time behavior of QoS measurement for mobile networks. The QoS-GSPN performance analysis methodology includes a formal expression and a performance analysis environment. It offers the promise of providing real-time behavior predictability for systems characterized by substantial stochastic behavior. With this methodology we model and analyze the call handoff and call admission control schemes in the different multimedia traffic environments of a mobile network. The results of simulation experiments are used to verify the optimal performance achievable for these schemes under the QoS constraints in the given setting of design parameters", "keywords": ["qos-gspn", "qos", "multimedia system", "mobile system"]} {"id": "kp20k_training_871", "title": "watermarking of mpeg-2 video in compressed domain using vlc mapping", "abstract": "In this work we propose a new algorithm for fragile, high capacity yet file-size preserving watermarking of MPEG-2 streams. Watermarking is done entirely in the compressed domain, with no need for full or even partial decompression. The algorithm is based on a previously developed concept of VLC mapping for compressed domain watermarking. The entropy-coded segment of the video is first parsed out and then analyzed in pairs.
It is recognized that there are VLC pairs that never appear together in any intra-coded block. The list of unused pairs is systematically generated by the intersection of "pair trees." One of the trees is generated from the main VLC table given in the ISO/IEC 13818-2:2000 standard. The other trees are dynamically generated for each intra-coded block. Forcing one VLC pair in a block to one of the unused ones generates a watermarked block. The change is done while keeping the run/level change to a minimum. At the decoder, the main pair tree is created offline using publicly available VLC tables. Through a secure key exchange, the indices to unused code pairs are communicated to the receiver. We show that the watermarked video is reasonably resistant to forgery attacks and remains secure against watermark detection attempts", "keywords": ["mpeg-2", "compressed domain", "variable length code"]} {"id": "kp20k_training_872", "title": "Implementing monads for C plus plus template metaprograms", "abstract": "C++ template metaprogramming is used in various application areas, such as expression templates, static interface checking, active libraries, etc. Its recognized similarities to pure functional programming languages - like Haskell - make the adoption of advanced functional techniques possible. Such a technique is using monads, programming structures representing computations. Using them, actions implementing domain logic can be chained together and decorated with custom code. C++ template metaprogramming could benefit from adopting monads in situations like advanced error propagation and parser construction. In this paper we present an approach for implementing monads in C++ template metaprograms. Based on this approach we have built a monadic framework for C++ template metaprogramming. As real world examples we present a generic error propagation solution for C++ template metaprograms and a technique for building compile-time parser generators. All solutions presented in this paper are implemented and available as an open source library. ", "keywords": ["c plus plus template metaprogram", "monad", "exception handling", "monoid", "typeclass"]} {"id": "kp20k_training_873", "title": "Non-uniform data distribution for communication-efficient parallel clustering", "abstract": "Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm where the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm.
The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements", "keywords": ["parallel data mining", "clustering", "k-means", "group communication", "extreme-scale computing"]} {"id": "kp20k_training_874", "title": "Cost-effective control of air quality and greenhouse gases in Europe: Modeling and policy applications", "abstract": "Environmental policies in Europe have successfully eliminated the most visible and immediate harmful effects of air pollution in the last decades. However, there is ample and robust scientific evidence that even at present rates Europe's emissions to the atmosphere pose a significant threat to human health, ecosystems and the global climate, though in a less visible and immediate way. As many of the low hanging fruits have been harvested by now, further action will place higher demands on economic resources, especially at a time when resources are strained by an economic crisis. In addition, interactions and interdependencies of the various measures could even lead to counter-productive outcomes of strategies if they are ignored. Integrated assessment models, such as the GAINS (Greenhouse gas Air pollution Interactions and Synergies) model, have been developed to identify portfolios of measures that improve air quality and reduce greenhouse gas emissions at least cost. Such models bring together scientific knowledge and quality-controlled data on future socio-economic driving forces of emissions, on the technical and economic features of the available emission control options, on the chemical transformation and dispersion of pollutants in the atmosphere, and the resulting impacts on human health and the environment. The GAINS model and its predecessor have been used to inform the key negotiations on air pollution control agreements in Europe during the last two decades. This paper describes the methodological approach of the GAINS model and its components. It presents a recent policy analysis that explores the likely future development of emissions and air quality in Europe in the absence of further policy measures, and assesses the potential and costs for further environmental improvements. To inform the forthcoming negotiations on the revision of the Gothenburg Protocol of the Convention on Long-range Transboundary Air Pollution, the paper discusses the implications of alternative formulations of environmental policy targets on a cost-effective allocation of further mitigation measures", "keywords": ["air pollution", "integrated assessment", "cost-effectiveness", "gains model", "convention on long-range transboundary air pollution", "science-policy interface", "decision support"]} {"id": "kp20k_training_875", "title": "Costs assessments of European environmental policies", "abstract": "The evolution of energy production in the European Union (EU) has been going through a big change in recent years: the share of traditional fuels is gradually diminishing in favour of renewable energy sources (RES), due to international concerns over climate change and for energy security reasons.
The aim of this paper is to construct a simulation model that identifies and estimates costs that may arise for a community of negotiating countries from the opportunistic behavior of some country when defining environmental policies. In this paper, the model is applied specifically to the new 2030 Framework for Climate and Energy Policies (COM(2014) 0015) (EC, 2014 [11]) on the promotion of RES, which commits EU governments to a common goal to increase the share of RES in final consumption to 27% by 2030. Costs faced by EU countries to achieve the RES target differ due to their endowment heterogeneity, the availability of RES, the diffusion process of cost improvements and the different instruments to support the development of RES technologies. Given the still undefined participation agreement to reach the new overall RES target by 2030, we want to assess the potential cost penalty induced by free-riding behavior. This could stem from an EU country that avoids complying with the RES Directive. Our policy simulation exercise shows that costs increase more than proportionally with the non-participating country's size, measured with GDP and CO2 emissions. Furthermore, we provide a model to analytically assess the likelihood that each EU country may behave opportunistically within the negotiation process of the new proposal on EU RES targets (COM(2014) 0015)", "keywords": ["simulation model", "renewable energy", "cost function", "opportunistic behavior"]} {"id": "kp20k_training_876", "title": "Design methodology for battery powered embedded systems - In safety critical application", "abstract": "A battery-powered embedded system can be considered a power-aware system for safety-critical applications. There is a need to save battery power in such a power-aware system so that it can be used more efficiently, particularly in safety-critical applications. The present paper describes a power optimization procedure using a real-time scheduling technique with a specific deadline, guided by the model-based optimum current discharge profile of a battery. In any power-aware system, 'energy optimization' is one of the major issues for faithful operation. ", "keywords": ["task scheduling", "energy optimization", "peukert's law", "power saving mode", "instruction based power optimization", "task mapping"]} {"id": "kp20k_training_877", "title": "Reliability measures for two-part partition of states for aggregated Markov repairable systems", "abstract": "Three models for aggregated stochastic processes based on an underlying continuous-time Markov repairable system are developed, in which a two-part partition of states is used. Several availability measures such as interval availability, instantaneous availability and steady-state availability are presented. Some of these availabilities are derived by using Laplace transforms, which yields more compact and concise expressions. Other reliability distributions for these three models are given as well", "keywords": ["two-part partition", "aggregation", "repairable systems", "availability measures", "distributions"]} {"id": "kp20k_training_878", "title": "an innovative architecture for context foraging", "abstract": "Nomadic computing is a term for describing computing environments where the nodes are mobile and have only ad hoc interactions with each other. Evidently, context-aware applications are a key ingredient in such environments. However, nomadic nodes may not always have the capability to sense their environment and infer their exact context.
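The steady-state availability in kp20k_training_877 can be illustrated with a tiny numeric sketch: for a continuous-time Markov chain with states partitioned into "up" and "down", the stationary distribution pi solves pi Q = 0 with sum(pi) = 1, and availability is the stationary mass on the up states. The two-state system and rates below are invented for illustration:

```python
import numpy as np

lam, mu = 0.02, 0.5                    # failure and repair rates (illustrative)
Q = np.array([[-lam, lam],             # generator: state 0 = up, state 1 = down
              [  mu, -mu]])

# Solve pi @ Q = 0 together with the normalisation sum(pi) = 1.
A = np.vstack([Q.T, np.ones(2)])
b = np.array([0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("steady-state availability:", pi[0])   # stationary mass on the up state
print("closed form             :", mu / (lam + mu))
```

For this two-state case the answer reduces to mu/(lam + mu); the paper's models aggregate larger chains over a two-part partition, where the same linear-algebra recipe applies with bigger Q.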
Hence, applications carried by the nodes will not be able to execute properly. In this paper, we propose an architecture for the collaborative exchange of contextual information in an ad hoc setting. This approach is called \"context foraging\" and is used for disseminating contextual information based on a publish/subscribe scheme. We present the algorithms required for such an architecture along with the dynamic event indexing techniques used by the system. The efficiency of the suggested approach is assessed through simulation results. Our proposal is investigated and implemented in the context of the ICT IPAC Project", "keywords": ["publish-subscribe", "nomadic computing", "collaborative sensing"]} {"id": "kp20k_training_879", "title": "On the estimation and correction of bias in local atrophy estimations using example atrophy simulations", "abstract": "Brain atrophy is considered an important marker of disease progression in many chronic neuro-degenerative diseases such as multiple sclerosis (MS). A great deal of attention is being paid to developing tools that manipulate magnetic resonance (MR) images for obtaining an accurate estimate of atrophy. Nevertheless, artifacts in MR images, inaccuracies of intermediate steps and inadequacies of the mathematical model representing the physical brain volume change make it rather difficult to obtain a precise and unbiased estimate. This work revolves around the nature and magnitude of bias in atrophy estimations as well as a potential way of correcting them. First, we demonstrate that for different atrophy estimation methods, bias estimates exhibit varying relations to the expected atrophy, and these bias estimates are of the order of the expected atrophies for standard algorithms, stressing the need for bias correction procedures. Next, a framework for estimating uncertainty in longitudinal brain atrophy by means of constructing confidence intervals is developed. Errors arising from MRI artifacts and bias in estimations are learned from example atrophy simulations and anatomies. Results are discussed for three popular non-rigid registration approaches with the help of simulated localized brain atrophy in real MR images", "keywords": ["mri", "brain atrophy estimation", "uncertainty", "confidence intervals", "non-rigid registration"]} {"id": "kp20k_training_880", "title": "Information technologies and intuitive expertise: a method for implementing complex organizational change among New York City Transit Authoritys Bus Maintainers", "abstract": "This paper describes an attempt to implement a complex information technology system with the New York City Transit Authority's (NYCTA) Bus Maintainers, intended to help better track and coordinate bus maintenance schedules. IT implementation is notorious for high failure rates among so-called low-level workers. We believe that many IT implementation efforts make erroneous assumptions about front-line workers' expertise, which creates tension between the IT implementation effort and the cultures of practice among the front-line workers. We designed an aggressive learning intervention to address this issue, which we called Operational Simulation. Rather than requiring the expected 12 months for implementation, the hourly staff reached independence with the new system in 2 weeks and line supervisors (who do more) managed in 6 weeks.
Additionally, the NYCTA shifted from a reactive to a proactive maintenance approach, reduced cycle times, and increased the mean distance between failures, resulting in an estimated $40 million cost savings. Implications for cognition, expertise, and training are discussed", "keywords": ["organizational change", "information technology", "intuitive expertise", "simulation-based training"]} {"id": "kp20k_training_881", "title": "Mirrored disk organization reliability analysis", "abstract": "Disk mirroring or RAID level 1 (RAID1) is a popular paradigm to achieve fault tolerance and a higher disk access bandwidth for read requests. We consider four RAID1 organizations: basic mirroring, group rotate declustering, interleaved declustering, and chained declustering, where the last three organizations attain a more balanced load than basic mirroring when disk failures occur. We first obtain the number of configurations, A(n, i), which do not result in data loss when i out of n disks have failed. The probability of no data loss in this case is A(n, i)/C(n, i), where C(n, i) denotes the binomial coefficient. The reliability of each RAID1 organization is the summation over 1 <= i <= n/2 of A(n, i) r^(n-i) (1 - r)^i, where r denotes the reliability of each disk. A closed-form expression for A(n, i) is obtained easily for the first three organizations. We present a relatively simple derivation of the expression for A(n, i) for the chained declustering method, which includes a correctness proof. We also discuss the routing of read requests to balance disk loads, especially when there are disk failures, to maximize the attainable throughput", "keywords": ["disk mirroring", "raid level 1", "reliability modeling", "interleaved declustering", "chained declustering", "group rotate declustering"]} {"id": "kp20k_training_882", "title": "Data processing in the early cosmic ray experiments in Sydney", "abstract": "The cosmic ray air shower experiment set up at the University of Sydney in the late 1950s was one of the first complex experiments in Australia to utilize the power of an electronic computer to process and analyse the experimental data. The paper provides a brief overview of the design and construction of the equipment for the experiment and the use of the computer SILLIAC in the processing and analysis of the data. The central role of Chris Wallace in this latter aspect is given special attention", "keywords": ["data processing", "cosmic ray air showers"]} {"id": "kp20k_training_883", "title": "The impact of metadata in web resources discovering", "abstract": "Purpose - To explore the impact of using metadata on the finding and ranking of web pages by search engines (as of 15 December 2005). Design/methodology/approach - The study has been divided into two phases. In phase one, the use of metadata schemes and the impact of overlapped documents have been examined by employing the usability technique. Phase two examined the impact of adding metadata elements to web pages on their original rank order, using the experimental method. This study focuses on indexing web pages using metadata and its impact on search engines' rankings. Findings - Meta tags are more widely used than Dublin Core. The overlapped pages tend to include metadata. The second phase shows that adding metadata elements to web pages raises their rank order. However, this depends on the quality of the description and the metadata schemes. The study shows no great difference in page ranking between adding meta tags and Dublin Core.
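The reliability summation in kp20k_training_881 is easy to evaluate once A(n, i) is known. The sketch below uses the basic mirroring case, for which the closed form is straightforward to derive (pick which mirrored pairs lose exactly one disk, then which disk of each pair); this derivation is mine for illustration, not quoted from the paper:

```python
from math import comb

def A_basic_mirroring(n: int, i: int) -> int:
    """Failure patterns of i disks with no data loss under basic mirroring:
    choose i of the n/2 mirrored pairs to lose one disk (comb), and which
    of the two disks fails in each chosen pair (2**i)."""
    return comb(n // 2, i) * 2 ** i

def reliability(n: int, r: float) -> float:
    # Sum of A(n, i) * r^(n-i) * (1-r)^i over survivable failure counts;
    # the i = 0 term is the all-disks-working configuration.
    return sum(A_basic_mirroring(n, i) * r ** (n - i) * (1 - r) ** i
               for i in range(n // 2 + 1))

print(reliability(8, 0.95))   # 8 disks arranged as 4 mirrored pairs
print(0.95 ** 8)              # for comparison: no failures tolerated at all
```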
Practical implications - To maximize the impact of metadata, more attention should be given to keyword and descriptive fields. Originality/value - The hypothetical relationship between overlapped pages, the inclusion of metadata, and indexing by search engines had not been previously examined", "keywords": ["search engines", "indexing", "optimization techniques", "hypertext markup language"]} {"id": "kp20k_training_884", "title": "A new density-stiffness interpolation scheme for topology optimization of continuum structures", "abstract": "In this paper, a new density-stiffness interpolation scheme for topology optimization of continuum structures is proposed. Based on this new scheme, not only can the so-called checkerboard pattern be eliminated from the final optimal topology, but the boundary-smoothing effect associated with the traditional sensitivity-averaging approach can also be overcome. A proof of the existence of the solution of the optimization problem is also given; therefore, mesh-independent optimization results can be obtained. Numerical examples illustrate the effectiveness and the advantages of the proposed interpolation scheme", "keywords": ["topology", "optimization techniques", "meshes", "filtration"]} {"id": "kp20k_training_885", "title": "PETs and their users: a critical review of the potentials and limitations of the privacy as confidentiality paradigm", "abstract": "Privacy as confidentiality has been the dominant paradigm in computer science privacy research. Privacy Enhancing Technologies (PETs) that guarantee confidentiality of personal data or anonymous communication have resulted from such research. The objective of this paper is to show that such PETs are indispensable but fall short of being the privacy solutions they sometimes claim to be under present-day circumstances. Using perspectives from surveillance studies, we will argue that the computer scientists' conception of privacy through data or communication confidentiality is techno-centric and displaces end-user perspectives and needs in surveillance societies. We will further show that the perspectives from surveillance studies also demand a critical review for their human-centric conception of information systems. Last, we rethink the position of PETs in a surveillance society and argue for the necessity of multiple paradigms for addressing privacy concerns in information systems design", "keywords": ["privacy", "confidentiality", "pets", "surveillance studies"]} {"id": "kp20k_training_886", "title": "An approach to automated decomposition of volumetric mesh", "abstract": "Mesh decomposition is critical for analyzing, understanding, editing and reusing mesh models. Although there are many methods for mesh decomposition, most utilize only triangular meshes. In this paper, we present an automated method for decomposing a volumetric mesh into semantic components. Our method consists of three parts. First, the outer surface mesh of the volumetric mesh is decomposed into semantic features by applying existing surface mesh segmentation and feature recognition techniques. Then, for each recognized feature, its outer boundary lines are identified, and the corresponding splitter element groups are set up accordingly. The inner volumetric elements of the feature are then obtained based on the established splitter element groups. Finally, each splitter element group is decomposed into two parts using the graph cut algorithm; each part completely belongs to one of the features adjacent to the splitter element group.
In our graph cut algorithm, the weights of the edges in the dual graph are calculated based on the electric field, which is generated using the vertices of the boundary lines of the features. Experiments on both tetrahedral and hexahedral meshes demonstrate the effectiveness of our method", "keywords": ["volumetric mesh", "mesh decomposition", "tetrahedral mesh", "hexahedral mesh", "electric flux"]} {"id": "kp20k_training_887", "title": "Algorithms for storytelling", "abstract": "We formulate a new data mining problem called storytelling as a generalization of redescription mining. In traditional redescription mining, we are given a set of objects and a collection of subsets defined over these objects. The goal is to view the set system as a vocabulary and identify two expressions in this vocabulary that induce the same set of objects. Storytelling, on the other hand, aims to explicitly relate object sets that are disjoint (and, hence, maximally dissimilar) by finding a chain of (approximate) redescriptions between the sets. This problem finds applications in bioinformatics, for instance, where the biologist is trying to relate a set of genes expressed in one experiment to another set, implicated in a different pathway. We outline an efficient storytelling implementation that embeds the CARTwheels redescription mining algorithm in an A* search procedure, using the former to supply next-move operators on search branches to the latter. This approach is practical and effective for mining large data sets and, at the same time, exploits the structure of partitions imposed by the given vocabulary. Three application case studies are presented: a study of word overlaps in large English dictionaries, exploring connections between gene sets in a bioinformatics data set, and relating publications in the PubMed index of abstracts", "keywords": ["data mining", "mining methods and algorithms", "retrieval models", "graph and tree search strategies"]} {"id": "kp20k_training_888", "title": "Bifurcation study of a neural field competition model with an application to perceptual switching in motion integration", "abstract": "Perceptual multistability is a phenomenon in which alternate interpretations of a fixed stimulus are perceived intermittently. Although correlates between activity in specific cortical areas and perception have been found, the complex patterns of activity and the underlying mechanisms that gate multistable perception are little understood. Here, we present a neural field competition model in which competing states are represented in a continuous feature space. Bifurcation analysis is used to describe the different types of complex spatio-temporal dynamics produced by the model in terms of several parameters and for different inputs. The dynamics of the model were then compared to human perception, investigated psychophysically during long presentations of an ambiguous, multistable motion pattern known as the barberpole illusion. In order to do this, the model is operated in a parameter range where known physiological response properties are reproduced whilst also working close to bifurcation. The model accounts for characteristic behaviour from the psychophysical experiments in terms of the type of switching observed and changes in the rate of switching with respect to contrast. In this way, the modelling study sheds light on the underlying mechanisms that drive perceptual switching in different contrast regimes.
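The two-way split of a splitter element group in kp20k_training_886 is a minimum s-t cut. A toy sketch using networkx is below; the graph, terminals and capacities are invented stand-ins (the paper derives its edge weights from an electric field, which this sketch does not model):

```python
import networkx as nx

# Toy dual graph of one splitter element group: interior nodes are volume
# elements, terminals 's'/'t' stand for the two adjacent features, and the
# capacities play the role of the field-derived edge weights.
G = nx.DiGraph()
for u, v, c in [("s", "a", 3.0), ("a", "b", 0.5), ("b", "t", 3.0),
                ("s", "c", 2.0), ("c", "b", 0.4), ("a", "c", 1.0)]:
    G.add_edge(u, v, capacity=c)
    G.add_edge(v, u, capacity=c)   # undirected adjacency encoded as two arcs

# The min cut assigns every element of the group to exactly one feature.
cut_value, (side_s, side_t) = nx.minimum_cut(G, "s", "t")
print(cut_value, sorted(side_s), sorted(side_t))
```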
The general approach presented is applicable to a broad range of perceptual competition problems in which spatial interactions play a role", "keywords": ["multistability", "competition", "perception", "neural fields", "bifurcation", "motion"]} {"id": "kp20k_training_889", "title": "Integration of fuzzy spatial relations in deformable models - Application to brain MRI segmentation", "abstract": "This paper presents a general framework for integrating a new type of constraint, based on spatial relations, into deformable models. In the proposed approach, spatial relations are represented as fuzzy subsets of the image space and incorporated into the deformable model as a new external force. Three methods to construct an external force from a fuzzy set representing a spatial relation are introduced and discussed. This framework is then used to segment brain subcortical structures in magnetic resonance images (MRI). A training step is proposed to estimate the main parameters defining the relations. The results demonstrate that the introduction of spatial relations in a deformable model can substantially improve the segmentation of structures with low contrast and ill-defined boundaries. ", "keywords": ["spatial relations", "deformable models", "fuzzy sets", "mri", "subcortical structures"]} {"id": "kp20k_training_890", "title": "An efficient animation of wrinkled cloth with approximate implicit integration", "abstract": "This paper presents an efficient method for creating animations of flexible objects. The mass-spring model was used to represent flexible objects. The easiest approach to creating animation with the mass-spring model is the explicit Euler method, but that method has a serious weakness in that it suffers from an instability problem. The implicit integration method is a possible solution, but a critical flaw of the implicit method is that it involves a large linear system. This paper presents an approximate implicit method for the mass-spring model. The proposed technique stably updates the state of n mass points in O(n) time when the total number of springs is O(n). In order to increase the efficiency of simulation or reduce the numerical errors of the proposed approximate implicit method, the number of mass points must be as small as possible. However, coarse discretization with a small number of mass points generates an unrealistic appearance for a cloth model. By introducing a wrinkled cubic spline curve, we propose a new technique that generates realistic details of the cloth model, even though a small number of mass points are used for the simulation", "keywords": ["cloth animation", "mass spring model", "implicit method", "realistic detail", "wrinkled curve"]} {"id": "kp20k_training_891", "title": "Does computer confidence relate to levels of achievement in ICT-enriched learning models", "abstract": "Employer expectations have changed: university students are expected to graduate with computer competencies appropriate for their field. Educators are also harnessing technology as a medium for learning in the belief that information and communication technologies (ICTs) can enliven and motivate learning across a wide range of disciplines. Alongside developing students' computer skills and introducing them to the use of professional software, educators are also harnessing professional and scientific packages for learning in some disciplines. As the educational use of information and communication technologies increases dramatically, questions arise about the effects on learners.
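The stability benefit of implicit integration described in kp20k_training_890 can be seen in a few lines. The sketch below takes one-dimensional implicit Euler steps for a linear chain of mass points, solving the dense linear system directly (the cost the paper reduces to O(n) with its approximation); chain size and material parameters are illustrative:

```python
import numpy as np

# Implicit Euler for x'' = -(K/m) x:
#   (I + h^2/m * K) v_new = v - h/m * K x,   x_new = x + h * v_new
n, h, k, m = 20, 0.05, 100.0, 1.0
K = np.zeros((n, n))                  # stiffness matrix of a spring chain
for i in range(n - 1):
    K[i, i] += k
    K[i + 1, i + 1] += k
    K[i, i + 1] -= k
    K[i + 1, i] -= k

rng = np.random.default_rng(1)
x = 0.01 * rng.normal(size=n)         # displacements from rest positions
v = np.zeros(n)

A = np.eye(n) + (h * h / m) * K       # solving with A is the "large linear
for _ in range(100):                  # system" the abstract refers to
    v = np.linalg.solve(A, v - (h / m) * (K @ x))
    x = x + h * v
print("max displacement after 100 stable steps:", np.abs(x).max())
```

With this step size an explicit Euler integrator of the same chain diverges, which is exactly the instability the abstract mentions.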
While the use of computers for delivery, support, and communication is generally easy and unthreatening, higher-level use may pose a barrier to learning for those who lack confidence or experience. Computer confidence may mediate how well students perform in learning environments that require interaction with computers. This paper examines the role played by computer confidence (or computer self-efficacy) in a technology-enriched science and engineering mathematics course at an Australian university. Findings revealed that careful and appropriate use of professional software did indeed enliven learning for the majority of students. However, computer confidence occupied a very different dimension from mathematics confidence and was not a predictor of achievement in the mathematics tasks, not even those requiring the use of technology. Moreover, despite careful and nurturing support for use of the software, students with low computer confidence levels felt threatened and disadvantaged by computer laboratory tasks. The educational implications of these findings are discussed with regard to teaching and assessment, in particular. The TCAT scales used to measure technology attitudes, computer confidence/self-efficacy and mathematics confidence are included in an Appendix. Well-established, reliable, and internally consistent, they may be useful to other researchers. The development of the computer confidence scale is outlined, and guidelines are offered for the design of other discipline-specific confidence/self-efficacy scales appropriate for use alongside the computer confidence scale", "keywords": ["computer attitudes", "scales", "learning", "achievement"]} {"id": "kp20k_training_892", "title": "dos protection for udp-based protocols", "abstract": "Since IP packet reassembly requires resources, a denial of service attack can be mounted by swamping a receiver with IP fragments. In this paper we argue that this attack need not affect protocols that do not rely on IP fragmentation, and that most protocols, e.g., those that run on top of TCP, can avoid the need for fragmentation. However, protocols such as IPsec's IKE protocol, which both runs on top of UDP and requires sending large packets, depend on IP packet reassembly. Photuris, an early proposal for IKE, introduced the concept of a stateless cookie, intended for DoS protection. However, the stateless cookie mechanism cannot protect against a DoS attack unless the receiver can successfully receive the cookie, which it will not be able to do if reassembly resources are exhausted. Thus, without additional design and/or implementation defenses, an attacker can successfully, through a fragmentation attack, prevent legitimate IKE handshakes from completing. Defending against this attack requires both protocol-design and implementation measures. The IKEv2 protocol was designed to make it easy to design a defensive implementation. This paper explains the defense strategy designed into the IKEv2 protocol, along with the additional needed implementation mechanisms. It also describes and contrasts several other potential strategies that could work for similar UDP-based protocols", "keywords": ["protocol design", "network security", "ipsec", "denial of service", "fragmentation", "buffer exhaustion", "ike", "dos"]} {"id": "kp20k_training_893", "title": "On the parallel efficiency and scalability of the correntropy coefficient for image analysis", "abstract": "Similarity measures have applications in many scenarios of digital image processing.
The correntropy is a robust and relatively new similarity measure that has recently been employed in various engineering applications. Despite its other competitive characteristics, its computational cost is relatively high and may impose hard-to-meet time restrictions on high-dimensional applications, including image analysis and computer vision", "keywords": ["correntropy", "similarity measures", "multi-core architecture", "parallel scalability", "parallel efficiency"]} {"id": "kp20k_training_894", "title": "Positive solution to a special singular second-order boundary value problem", "abstract": "Let lambda be a nonnegative parameter. The existence of a positive solution is studied for the semipositone second-order boundary value problem u''(t) = lambda q(t) f(t, u(t), u'(t)), alpha u(0) - beta u'(0) = d, u(1) = 0, where d > 0, alpha >= 0, beta >= 0, alpha + beta > 0, q(t) f(t, u, v) >= 0 on a suitable subset of [0, 1] x [0, +infinity) x (-infinity, +infinity), and f(t, u, v) is allowed to be singular at t = 0, t = 1 and u = 0. The proofs are based on the Leray-Schauder fixed point theorem and the localization method. ", "keywords": ["ordinary differential equation", "singular boundary value problem", "positive solution", "existence"]} {"id": "kp20k_training_895", "title": "Report of research activities in fuzzy AI and medicine at USFCSE", "abstract": "Several projects involving the use of fuzzy and neuro-fuzzy methods in medical applications, developed by members of the Department of Computer Science and Engineering, University of South Florida, Tampa, Florida, are briefly reviewed. The successful applications are emphasized. ", "keywords": ["neuro-fuzzy system", "sudden infant death syndrome", "fuzzy logic"]} {"id": "kp20k_training_896", "title": "Societally connected multimedia across cultures", "abstract": "The advance of the Internet in the past decade has radically changed the way people communicate and collaborate with each other. Physical distance is no longer a barrier in online social networks, but cultural differences (at the individual, community, as well as societal levels) still govern human-human interactions and must be considered and leveraged in the online world. The rapid deployment of high-speed Internet allows humans to interact using a rich set of multimedia data such as texts, pictures, and videos. This position paper proposes to define a new research area called 'connected multimedia': the study of a collection of research issues within the broader area of social media that have received little attention in the literature. By connected multimedia, we mean the study of the social and technical interactions among users, multimedia data, and devices across cultures, explicitly exploiting cultural differences. We justify why it is necessary to bring attention to this new research area and what benefits it may bring to the broader scientific research community and to humanity", "keywords": ["connected multimedia", "social media", "social-cultural constraint"]} {"id": "kp20k_training_897", "title": "multiple object retrieval in image databases using hierarchical segmentation tree", "abstract": "With the rapid growth of information, efficient and robust information retrieval techniques have become increasingly important. Multiple object retrieval remains challenging due to the complex nature of this problem.
The proposed research, unlike most existing works that are designed for single object retrieval or adopt heuristic multiple-object matching schemes, aims at contributing to this field through the development of an image retrieval system that adopts a hierarchical region-tree representation of the image, and enables effective and efficient multiple object retrieval and automatic discovery of the objects of interest through users' relevance feedback. We believe this is the first systematic attempt to formulate a comprehensive, intelligent, and interactive framework for multiple object retrieval in image databases that makes use of a hierarchical region-tree representation", "keywords": ["multi-object retrieval", "content-based image retrieval", "multi-resolution image segmentation", "hierarchical region-tree"]} {"id": "kp20k_training_898", "title": "Delay-dependent stability analysis for impulsive neural networks with time varying delays", "abstract": "In this paper, the global exponential stability and global asymptotic stability of neural networks with impulsive effects and time-varying delays are investigated. By using a Lyapunov-Krasovskii-type functional, the properties of negative definite matrices and the Cauchy criterion, we obtain sufficient conditions for the global exponential stability and global asymptotic stability of such models, in terms of linear matrix inequalities (LMIs), which depend on the delays. Two examples are given to illustrate the effectiveness of our theoretical results", "keywords": ["global exponential stability", "global asymptotic stability", "impulsive neural networks", "negative definite matrix", "delays", "linear matrix inequality "]} {"id": "kp20k_training_899", "title": "OLAP over uncertain and imprecise data", "abstract": "We extend the OLAP data model to represent data ambiguity, specifically imprecision and uncertainty, and introduce an allocation-based approach to the semantics of aggregation queries over such data. We identify three natural query properties and use them to shed light on alternative query semantics. While there is much work on representing and querying ambiguous data, to our knowledge this is the first paper to handle both imprecision and uncertainty in an OLAP setting", "keywords": ["aggregation", "imprecision", "uncertainty", "ambiguous"]} {"id": "kp20k_training_900", "title": "Maximum skew-symmetric flows and matchings", "abstract": "The maximum integer skew-symmetric flow problem (MSFP) generalizes both the maximum flow and maximum matching problems. It was introduced by Tutte [28] in terms of self-conjugate flows in antisymmetrical digraphs. He showed that for these objects there are natural analogs of classical theoretical results on usual network flows, such as the flow decomposition, augmenting path, and max-flow min-cut theorems. We give unified and shorter proofs for those theoretical results. We then extend to MSFP the shortest augmenting path method of Edmonds and Karp [7] and the blocking flow method of Dinits [4], obtaining algorithms with similar time bounds in the general case. Moreover, in the cases of unit arc capacities and unit \"node capacities\" our blocking skew-symmetric flow algorithm has time bounds similar to those established in [8, 21] for Dinits' algorithm. In particular, this implies an algorithm for finding a maximum matching in a nonbipartite graph in O(sqrt(n) m) time, which matches the time bound for the algorithm of Micali and Vazirani [25].
Finally, extending a clique compression technique of Feder and Motwani [9] to particular skew-symmetric graphs, we speed up the implied maximum matching algorithm to run in O(sqrt(n) m log(n^2/m)/log n) time, improving the best known bound for dense nonbipartite graphs. Other theoretical and algorithmic results on skew-symmetric flows and their applications are also presented", "keywords": ["skew-symmetric graph", "network flow", "matching", "b-matching"]} {"id": "kp20k_training_901", "title": "Efficient performance estimate for one-class support vector machine", "abstract": "This letter proposes and analyzes a method (the ξα-estimate) to estimate the generalization performance of one-class support vector machine (SVM) for novelty detection. The method is an extended version of the ξα-estimate method, which is used to estimate the generalization performance of standard SVM for classification. Our method is derived from analyzing the connection between one-class SVM and standard SVM. Without any computation-intensive re-sampling, the method is computationally much more efficient than the leave-one-out method, since it can be computed immediately from the decision function of one-class SVM. Using our method to estimate the error rate is more precise than using the fraction of support vectors or the parameter ν of one-class SVM. We also propose that the fraction of support vectors characterizes the precision of one-class SVM. A theoretical analysis and experiments on artificial data and a widely known handwritten digit recognition set (MNIST) show that our method can effectively estimate the generalization performance of one-class SVM for novelty detection", "keywords": ["performance estimate", "one-class support vector machines", "support vector machines", "novelty detection"]} {"id": "kp20k_training_902", "title": "Comparison of recent methods for inference of variable influence in neural networks", "abstract": "Neural networks (NNs) are black-box models and therefore suffer from interpretation difficulties. Four recent methods for inferring variable influence in NNs are compared in this paper. The methods assist the interpretation task during different phases of the modeling procedure. They belong to information theory (ITSS), the Bayesian framework (ARD), the analysis of the network's weights (GIM), and the sequential omission of the variables (SZW). The comparison is based upon artificial and real data sets of differing size, complexity and noise level. The influence of the neural network's size has also been considered. The results provide useful information about the agreement between the methods under different conditions. Generally, SZW and GIM differ from ARD regarding variable influence, although applied to NNs with similar modeling accuracy, even when larger data set sizes are used. ITSS produces results similar to SZW and GIM, although it suffers more from the curse of dimensionality", "keywords": ["variable influence in neural networks", "information theoretic approach", "sequential zeroing of weights", "general influence measure", "automatic relevance determination", "sensitivity analysis"]} {"id": "kp20k_training_903", "title": "Finite element modelling of reinforced concrete framed structures including catenary action", "abstract": "In this paper, a 1D discrete element is formulated for analysis of reinforced concrete frames with catenary action.
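The nonbipartite matching problem that kp20k_training_900 solves via skew-symmetric flows can be experimented with directly using networkx, whose blossom-based routine handles odd cycles; it runs in roughly O(n^3), slower than the O(sqrt(n) m) bound quoted above, but convenient for checking small instances (the graph below is invented):

```python
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 1),   # an odd cycle (a "blossom")
                  (3, 4), (4, 5), (5, 6)])

# Maximum-cardinality matching in a general (nonbipartite) graph.
matching = nx.max_weight_matching(G, maxcardinality=True)
print(sorted(tuple(sorted(e)) for e in matching))
```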
A force-based formulation is developed based on the total secant stiffness approach, and an associated direct iterative solution scheme is derived. The effect of material nonlinearity as well as softening of concrete under compression is taken into account, and a nonlocal averaging technique is employed to maintain the objectivity of displacement and force responses. Concerning geometrical nonlinearities, the fibre strains are assumed to be small; however, the effect of transverse displacement on the axial strain is large, and it is taken into account, as is the effect of shear on the axial force. Using a Simpson integration scheme, together with a piecewise interpolation of curvature, the deformed shape of the element is consistently updated. The formulation is verified with numerical examples", "keywords": ["catenary action", "force interpolation", "reinforced concrete", "secant modulus", "softening"]} {"id": "kp20k_training_904", "title": "fostering a creative interest in computer science", "abstract": "In this paper, we describe activities undertaken at our university to revise our computer science program to develop an environment and curriculum which encourage creative, hands-on learning by our students. Our main changes were the development of laboratory space, increased hands-on problem-solving activities in the introductory course, open-ended programming projects in the early courses, including a requirement of an open-ended project extension for an A grade, and the integration of a seminar into the senior project requirement. Our results suggest that these changes have improved student skill and willingness to deal with new problems and technologies. An additional surprising side-effect appears to be a dramatic increase in retention over the first two years, despite lower overall grade averages in those courses", "keywords": ["major retention", "curriculum", "hands-on activities", "learning community"]} {"id": "kp20k_training_905", "title": "Concave piecewise linear service curves and deadline calculations", "abstract": "The Internet is gradually and constantly becoming a multimedia network that needs mechanisms to provide effective quality of service (QoS) guarantees to users. The service curve (SC) is an efficient description of QoS, and the service-curve-based earliest deadline first policy (SCED) is a scheduling algorithm to guarantee SCs specified by users. In SCED, deadline calculation is the core. However, not every SC has a tractable deadline calculation; currently the only known tractable SC is the concave piecewise linear SC (CPLSC). In this paper, we propose an algorithm to translate all kinds of SCs into CPLSCs. In this way, the whole Internet can have improved performance. Moreover, a modification of the deadline calculation of the original SCED is developed to obtain neat and precise results. These results, combined with our proposed algorithm, can make the deadline calculation smooth and the multimedia Internet possible", "keywords": ["concave piecewise linear service curve", "deadline calculation", "quality of service", "sced", "differentiated services"]} {"id": "kp20k_training_906", "title": "Optimal retrial and timeout strategies for accessing network resources", "abstract": "The notion of timeout (namely, the maximal time to wait before retrying an action) turns up in many networking contexts, such as packet transmission, connection establishment, etc.
Usage of timeouts is encountered especially in large-scale networks, where negative acknowledgments (NACKs) on failures have significantly higher delays than positive acknowledgments (ACKs) and frequently are not employed at all. Selection of a proper timeout involves a tradeoff between waiting too long and loading the network needlessly by waiting too little. The common approach is to set the timeout to a large value, such that, unless the action fails, it is acknowledged within the timeout duration with high probability. This approach is conservative and leads to overly long, far-from-optimal timeouts. We take a quantitative approach with the purpose of computing and studying the optimal timeout strategy. The above tradeoff is modeled by introducing a \"cost\" per unit time (until success) and a \"cost\" per repeated attempt. The optimal timeout strategy is then defined as one that a selfish user would follow to minimize its expected cost. We discuss the various practical interpretations that these costs may have. We then derive the formulas for the optimal timeout values and study some of their fundamental properties. In particular, we identify the conditions under which making parallel attempts from the outset is worthwhile. In addition, we demonstrate a striking property of positive feedback. This motivates us to study the interaction resulting when many users selfishly apply the optimal timeout strategy; specifically, we use a noncooperative game model and show that it suffers from an inherent instability problem. Some implications of these results for network design are discussed", "keywords": ["connection establishment", "network pricing", "noncooperative games", "packet retransmission", "timeout strategy"]} {"id": "kp20k_training_907", "title": "Integrated modeling and analysis of dynamics for electric vehicle powertrains", "abstract": "This paper builds theoretical models for the entire powertrain of EVs to describe EV dynamics with both mechanical and electrical systems. A Matlab model of an EV is developed to verify the derived theoretical models for the entire powertrain. A variety of vehicle driving performance measures are analyzed and predicted as functions of electrical quantities", "keywords": ["electric vehicles", "powertrains", "analytic modeling", "dynamics of vehicles"]} {"id": "kp20k_training_908", "title": "Wavelet synopses for general error metrics", "abstract": "Several studies have demonstrated the effectiveness of the wavelet decomposition as a tool for reducing large amounts of data down to compact wavelet synopses that can be used to obtain fast, accurate approximate query answers. Conventional wavelet synopses that greedily minimize the overall root-mean-squared (i.e., L2-norm) error in the data approximation can suffer from important problems, including severe bias and wide variance in the quality of the data reconstruction, and a lack of nontrivial guarantees for individual approximate answers. Thus, probabilistic thresholding schemes have recently been proposed as a means of building wavelet synopses that try to probabilistically control maximum approximation-error metrics (e.g., maximum relative error). A key open problem is whether it is possible to design efficient deterministic wavelet-thresholding algorithms for minimizing general, non-L2 error metrics that are relevant to approximate query processing systems, such as maximum relative or maximum absolute error.
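The time-versus-attempts tradeoff described in kp20k_training_906 can be explored numerically. The Monte Carlo sketch below is a simplified stand-in for the paper's analytical model, with invented parameter values: each attempt succeeds with probability p, successes are ACKed after an exponential delay, and failures produce no NACK, so the sender pays for the full timeout before retrying:

```python
import numpy as np

rng = np.random.default_rng(42)

def mean_cost(T, p=0.8, ack_mean=1.0, c_time=1.0, c_try=2.0, trials=20_000):
    """Estimated expected cost of a fixed retry timeout T."""
    total = 0.0
    for _ in range(trials):
        t, tries = 0.0, 0
        while True:
            tries += 1
            if rng.random() < p:
                d = rng.exponential(ack_mean)
                if d <= T:            # ACK beats the timeout: stop here
                    t += d
                    break
            t += T                    # no ACK in time: pay the wait, retry
        total += c_time * t + c_try * tries
    return total / trials

for T in (0.5, 1.0, 2.0, 4.0, 8.0):   # sweeping T exposes the tradeoff
    print(f"T = {T:4}: mean cost ~ {mean_cost(T):.2f}")
```

Small T wastes attempts on ACKs that were already in flight; large T wastes waiting time after genuine failures, which is the tension the paper's optimal strategy balances.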
Obviously, such algorithms can guarantee better maximum-error wavelet synopses and avoid the pitfalls of probabilistic techniques (e.g., \"bad\" coin-flip sequences) leading to poor solutions; in addition, they can be used to directly optimize the synopsis construction process for other useful error metrics, such as the mean relative error in data-value reconstruction. In this article, we propose novel, computationally efficient schemes for deterministic wavelet thresholding with the objective of optimizing general approximation-error metrics. We first consider the problem of constructing wavelet synopses optimized for maximum error, and introduce an optimal low polynomial-time algorithm for one-dimensional wavelet thresholding; our algorithm is based on a new dynamic programming (DP) formulation, and can be employed to minimize the maximum relative or absolute error in the data reconstruction. Unfortunately, directly extending our one-dimensional DP algorithm to multidimensional wavelets results in a super-exponential increase in time complexity with the data dimensionality. Thus, we also introduce novel, polynomial-time approximation schemes (with tunable approximation guarantees) for deterministic wavelet thresholding in multiple dimensions. We then demonstrate how our optimal and approximate thresholding algorithms for maximum error can be extended to handle a broad, natural class of distributive error metrics, which includes several important error measures, such as mean weighted relative error and weighted Lp-norm error. Experimental results on real-world and synthetic data sets evaluate our novel optimization algorithms and demonstrate their effectiveness against earlier wavelet-thresholding schemes", "keywords": ["algorithms", "performance", "theory", "data synopses", "haar wavelets", "approximate query processing"]} {"id": "kp20k_training_909", "title": "On the correlation between children's performances on electronic board tasks and nonverbal intelligence test measures", "abstract": "In this study it was investigated whether a tangible electronic console (TagTiles) can in principle be used to address a range of cognitive skills, by examining the underlying basic psychometric properties of TagTiles tasks. This is a precursor to an intervention study on the impact of TagTiles on cognitive development or an instrument development study. The tasks implemented on the console consisted of abstract visual patterns, which were intended to target perception, spatial knowledge representation, eye-hand coordination, reasoning and problem solving. The results of a pilot study (N = 10, children aged 8-10) and an experiment (N = 32, children aged 8-10) are presented. Correlations between scores on TagTiles tasks on the one hand and a selection of WISC-III-NL performance subtests, Raven's progressive matrices and RAKIT's Memory Span on the other hand were calculated. The results indicate that the TagTiles tasks cover skills similar to those of the applied WISC-III-NL subtests, as demonstrated by the moderate to large correlations between performance scores on sets of TagTiles tasks and sets of WISC-III-NL tasks. The combined TagTiles task scores were also significantly correlated with the aggregated WISC-III-NL subtest scores. Significant correlations were found between the TagTiles tasks and the Raven test scores, though for the RAKIT Memory Span no significant correlation with TagTiles tasks was found.
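For context on kp20k_training_908, here is the conventional greedy L2 synopsis that the article's deterministic maximum-error algorithms improve upon: a Haar decomposition followed by retention of the B detail coefficients with the largest L2-normalised magnitude. This is the baseline only, not the paper's DP scheme; the data and B are illustrative:

```python
import numpy as np

def haar(data):
    """Unnormalised Haar decomposition of a length-2^k array."""
    a, levels = np.asarray(data, float), []
    while len(a) > 1:
        levels.append((a[0::2] - a[1::2]) / 2)   # detail coefficients
        a = (a[0::2] + a[1::2]) / 2              # running pairwise averages
    return a[0], levels[::-1]                    # overall average, coarse->fine

def inverse_haar(avg, levels):
    a = np.array([avg])
    for d in levels:
        out = np.empty(2 * len(a))
        out[0::2], out[1::2] = a + d, a - d
        a = out
    return a

data = np.array([2., 2., 0., 2., 3., 5., 4., 4.])
avg, levels = haar(data)
n, B = len(data), 3

# Rank details by |c| * sqrt(support length) (the L2-normalised magnitude)
# and zero out everything except the B largest.
ranked = sorted(((abs(c) * (n / len(lvl)) ** 0.5, j, i)
                 for j, lvl in enumerate(levels) for i, c in enumerate(lvl)),
                reverse=True)
keep = {(j, i) for _, j, i in ranked[:B]}
synopsis = [np.array([c if (j, i) in keep else 0.0 for i, c in enumerate(lvl)])
            for j, lvl in enumerate(levels)]
print(inverse_haar(avg, synopsis))   # approximate reconstruction from B terms
```

The bias and per-value error variance visible in such reconstructions are exactly the weaknesses of L2-greedy thresholding that motivate the maximum-error formulation.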
After further refinement and validation, in particular with a larger sample size, the tasks can be applied to provide an indication of children's skill levels, offering the benefits of a self-motivating testing method to children and avoiding inconsistencies in administration. As such, the tasks may become an effective tool for the training and assessment of nonverbal skills for children. ", "keywords": ["tangible electronic console", "nonverbal skills", "assessment", "children"]} {"id": "kp20k_training_910", "title": "ontology design patterns for the semantic business processes", "abstract": "This paper discusses two research paradigms: the first one is based on using the Meta-Object Facility (MOF) to support any kind of metadata, whereas the second one emphasizes the role of the Ontology Design Patterns (ODPs) to support knowledge transformation between the source and the target models. More precisely, in this paper we present the Business Process ODP, which reflects both the syntax and semantics of business process specification and brings an abstract solution to the typical problem of designing and modelling semantic business processes", "keywords": ["ontology design pattern", "ontology engineering", "semantic business process"]} {"id": "kp20k_training_911", "title": "rematerialization-based register allocation through reverse computing", "abstract": "Reversible computing aims at keeping all information on input and intermediate values available at any step of the computation. Rematerialization in register allocation is an alternative solution to spilling, where values are recomputed from available data instead of being held in registers. In this paper we present the basic ideas of our algorithm for rematerialization with reverse computing. We use the memory-demanding LQCD (Lattice Quantum ChromoDynamics) application to demonstrate that important gains of up to 33% on register pressure can be obtained. This in turn enables an increase in instruction-level parallelism or in thread-level parallelism. We demonstrate a 16.8% (statically timed) gain over a basic LQCD computation", "keywords": ["rematerialization", "spill code", "reverse computing", "register pressure"]} {"id": "kp20k_training_913", "title": "Constructing G(1) Bezier surfaces over a boundary curve network with T-junctions", "abstract": "A T-junction occurs in a boundary curve network when one boundary curve ends in the middle of another. We show how to construct G^1 Bezier surfaces over a boundary curve network with T-junctions. By treating the two micro patches which meet at the edge forming the upright of the T as a single macro patch, we reduce the problem to one of achieving continuity between this composite patch and the third patch, which has the crossbar of the T as an edge. Thus we avoid changes to the boundary network, or to any patches except those that meet at the T-junction. Also, we analyze the singularity of the G^1 continuity system with the T-junction, and give the constraint that makes the system consistent using the free variables of the weight functions. This is the first method for surfacing the T-junction. We present examples and verify continuity by drawing reflection lines and checking angles.
", "keywords": ["t-junction", "g continuity", "surface interpolation", "bezier surface", "boundary curve network"]} {"id": "kp20k_training_914", "title": "Optimal input/output reduction in production processes", "abstract": "While conventional Data Envelopment Analysis (DEA) models set targets for each operational unit, this paper considers the problem of input/output reduction in a centralized decision making environment. The purpose of this paper is to develop an approach to input/output reduction problem that typically occurs in organizations with a centralized decision-making environment. This paper shows that DEA can make an important contribution to this problem and discusses how DEA-based model can be used to determine an optimal input/output reduction plan. An application in banking sector with limitation in IT investment shows the usefulness of the proposed method", "keywords": ["data envelopment analysis", "input/output reduction", "efficiency", "multi-objective linear programming"]} {"id": "kp20k_training_915", "title": "Ambiguous grammars and the chemical transactions of life - Part II: the hierarchy of life's grammars", "abstract": "Purpose - This second part of a companion paper seeks to extend the theory proposed to apply the hierarchy of fuzzy formal language to cope with the three major phenomenon of life: replication, control and shuffling of genetic information. Design/methodology/approach - In order to cope with the proposal, three new classes of FFG are proposed: replicating grammars: to formalize proper-ties and consequences of DNA duplication; self-controlled grammars: to provide the tools to control the grammar ambiguity and to improve adaptability, and recombinant grammars: to formalize properties and consequences of the sexual reproduction to life evolution. Considering all these facts, FFG are proposed as the key instrument to formalize the basic properties of the chemical transactions supporting life. Findings - The formalism of the model provides a new way to analyze and interpret the findings of the different genome sequencing projects. Originality/value - The theoretical framework developed here provides a new perspective of understanding the code of life and evolution", "keywords": ["kybernetics", "fuzzy logic", "language", "genetics"]} {"id": "kp20k_training_916", "title": "A two-pass rate control algorithm for H.264/AVC high definition video coding", "abstract": "In this paper, we propose a novel two-pass rate control algorithm to achieve constant quality for H.264/AVC high definition video coding. With the first-pass collected rate and distortion information and the built model of scene complexity, the encoder can determine the expected distortion which could be achieved in the second-pass encoding under the target bit rate. According to the built linear distortion-quantizer (D-Q) model, before encoding one frame, the quantization parameter can be solved to realize constant quality encoding. After encoding one frame, the model parameters will be updated with linear regression method to ensure the prediction accuracy of the quantization parameter of next encoded frame with the same coding type. In order to obtain the expected distortion of each frame under the target bit rate, a GOP-level bit allocation scheme is also designed to adjust the target bit rate of each GOP based on the scene complexity of the GOP in the second-pass encoding. In addition, the effect of scene change on the updating of D-Q model is considered. 
The model is re-initialized at scene changes to minimize modeling error. The experimental results show that, compared with the latest two-pass rate control algorithm, our proposed algorithm can significantly improve the bit control accuracy with comparable coding performance in terms of constant quality and average PSNR. On average, the improvement in bit control accuracy is about 90%", "keywords": ["two-pass", "rate control", "constant quality", "high definition video coding"]} {"id": "kp20k_training_917", "title": "Verification and validation of a Work Domain Analysis with turing machine task analysis", "abstract": "Work domain models produced by Work Domain Analysis need to be validated and verified. A method based on the Turing Machine formalism was proposed. An application to two domains allowed us to highlight some required changes. Over- or under-specification, omission, or false inclusion of objects was noticed", "keywords": ["work domain analysis", "validation", "turing machine task analysis"]} {"id": "kp20k_training_918", "title": "Modelling and simulation of photosynthetic microorganism growth: random walk vs. finite difference method", "abstract": "The paper deals with photosynthetic microorganism growth modelling and simulation in a distributed parameter system. The main result concerns the development and comparison of two modeling frameworks for photo-bioreactor modelling. The first \"classical\" approach is based on a PDE (reaction-turbulent diffusion system) and the finite difference method. The alternative approach is based on a random walk model of transport by turbulent diffusion. The complications residing in the modelling of multi-scale transport and reaction phenomena in microalgae are clarified and a solution is chosen. It consists in a phenomenological state description of the microbial culture by the lumped parameter model of the photosynthetic factory (PSF model) in re-parametrized form, published recently in this journal by Papacek et al. (2010). Both approaches lead to the same simulation results; nevertheless, they provide different advantages. ", "keywords": ["multi-scale modelling", "distributed parameter system", "boundary value problem", "random walk", "photosynthetic factory"]} {"id": "kp20k_training_919", "title": "Recent advances in natural language processing for biomedical applications", "abstract": "We survey a set of recent advances in natural language processing applied to biomedical applications, which were presented at an international workshop in Geneva, Switzerland, in 2004. While text mining applied to molecular biology and biomedical literature can report several interesting achievements, we observe that studies applied to clinical content are still rare. In general, we argue that clinical corpora, including electronic patient records, must be made available to fill the gap between bioinformatics and medical informatics", "keywords": ["natural language processing", "bio-informatics community", "unified medical language system"]} {"id": "kp20k_training_920", "title": "A new probabilistic approach for distribution network reconfiguration: Applicability to real networks", "abstract": "Power loss reduction can be considered one of the main objectives for distribution system operators, especially for recent non-governmental networks. Reconfiguration is an operational process widely used for this optimization by means of changing the status of switches in a distribution network.
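The two modelling frameworks compared in kp20k_training_918 can be mimicked on the simplest transport equation. The sketch below contrasts a finite-difference solution of the 1-D diffusion equation with a histogram of Gaussian random walkers started from the same initial blob; domain, diffusivity and step sizes are invented, and no reaction term is included:

```python
import numpy as np

D, T, nx = 0.1, 0.2, 101                 # diffusivity, horizon, grid size
dx, dt = 1.0 / (nx - 1), 1e-4
x = np.linspace(0.0, 1.0, nx)

u = np.exp(-((x - 0.5) ** 2) / 0.005)    # initial concentration blob
u /= u.sum() * dx                        # normalise to unit mass
for _ in range(int(T / dt)):             # FTCS step; stable, D*dt/dx^2 = 0.1
    u[1:-1] += D * dt / dx ** 2 * (u[2:] - 2 * u[1:-1] + u[:-2])

rng = np.random.default_rng(3)
w = 0.5 + np.sqrt(0.0025) * rng.normal(size=200_000)   # same initial blob
w += np.sqrt(2 * D * T) * rng.normal(size=w.size)      # exact Gaussian jump
hist, edges = np.histogram(w, bins=nx - 1, range=(0.0, 1.0), density=True)

centres = (edges[:-1] + edges[1:]) / 2
print("max |FD - walkers|:", np.abs(np.interp(centres, x, u) - hist).max())
```

The two profiles agree up to sampling noise and boundary effects, which is the equivalence-of-results point the abstract makes before weighing the practical advantages of each framework.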
Some major points such as time-varying loads and the number of switchings, which are often neglected or not applied simultaneously in most previous studies, are the main motivation behind this study. In this paper, a new probabilistic approach is proposed to perform an optimal reconfiguration in order to reduce the total cost of operation, including the cost of switching and the benefit of loss reduction. Considering time-varying loads, the proposed method can obtain an optimal balance between the number of switchings and the power loss. The effectiveness of the suggested method is demonstrated through several experiments, and the results are compared with those of other reliable methods in several cases. ", "keywords": ["electric distribution networks", "reconfiguration", "power loss", "optimal switching", "time-varying loads"]} {"id": "kp20k_training_921", "title": "A feasibility study of the classification of Alpaca (Lama pacos) wool samples from different ages, sex and color by means of visible and near infrared reflectance spectroscopy", "abstract": "The usefulness of classifying Alpaca wool samples according to their color, sex and location is associated with their economic value in the market; hence, adequate methods for rapid classification are needed to assess the value of the wool. This study evaluated the potential of visible and near-infrared (visNIR) spectroscopy combined with multivariate statistical analysis to classify Alpaca (Lama pacos) fiber samples according to age (1- and 2-3-year-old), sex (male and female) and color (black, brown, LF and white). Samples (n=291) were scanned in reflectance mode in the wavelength range of 400-2500 nm using a monochromator instrument (FOSS NIRSystems 6500, Inc., Silver Spring, MD, USA). Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were used to classify fiber samples. Cross-validation was used to validate the classification models developed. Results showed that PLS-DA correctly classified 100% of fiber samples by age; intermediate classification rates were obtained for color, while lower classification rates were obtained for the discrimination of wool samples according to sex. The results from this study suggest that visNIR spectroscopy in combination with multivariate data analysis can be used as a rapid method to classify Alpaca fiber samples according to age, sex and color", "keywords": ["discriminant partial least squares", "principal component analysis", "spectroscopy", "visnir", "alpaca", "wool"]} {"id": "kp20k_training_922", "title": "The dissipative structure of variational multiscale methods for incompressible flows", "abstract": "In this paper, we present a precise definition of the numerical dissipation for the orthogonal projection version of the variational multiscale method for incompressible flows. We show that, only if the space of subscales is taken orthogonal to the finite element space, is this definition physically reasonable, as the coarse and fine scales are properly separated. Then we compare the diffusion introduced by the numerical discretization of the problem with the diffusion introduced by a large eddy simulation model. Results for the flow around a surface-mounted obstacle problem show that numerical dissipation is of the same order as the subgrid dissipation introduced by the Smagorinsky model. Finally, when transient subscales are considered, the model is able to predict backscatter, something that is only possible when dynamic LES closures are used.
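The PLS-DA workflow in kp20k_training_921 (regress a one-hot class indicator on the spectra, assign each sample to the class with the largest predicted indicator, validate by cross-validation) can be sketched with scikit-learn. The synthetic "spectra" below are stand-ins for real visNIR scans:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_per, n_wl, n_classes = 60, 200, 3
X = np.vstack([rng.normal(c, 1.0, size=(n_per, n_wl)) for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per)
Y = np.eye(n_classes)[y]                    # one-hot indicator matrix

pls = PLSRegression(n_components=5)
Y_hat = cross_val_predict(pls, X, Y, cv=5)  # cross-validated predictions
accuracy = (Y_hat.argmax(axis=1) == y).mean()
print(f"cross-validated classification rate: {accuracy:.2%}")
```

The number of latent components is a tuning choice; on real spectra it is typically selected by the same cross-validation loop.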
Numerical evidence supporting this point is also presented", "keywords": ["stabilized finite elements", "transient subscales", "orthogonal subscales", "energy transfer"]} {"id": "kp20k_training_923", "title": "an integer linear programming based approach for parallelizing applications in on-chip multiprocessors", "abstract": "With energy consumption becoming one of the first-class optimization parameters in computer system design, compilation techniques that consider performance and energy simultaneously are expected to play a central role. In particular, compiling a given application code under performance and energy constraints is becoming an important problem. In this paper, we focus on an on-chip multiprocessor architecture and present a parallelization strategy based on integer linear programming. Given an array-intensive application, our optimization strategy determines the number of processors to be used in executing each nest based on the objective function and additional compilation constraints provided by the user. Our initial experience with this strategy shows that it is very successful in optimizing array-intensive applications on on-chip multiprocessors under energy and performance constraints", "keywords": ["applications", "computation", "performance", "arrays", "object", "experience", "embedded systems", "paper", "constraint", "user", "processor", "role", "chip multiprocessors", "play", "strategies", "system design", "energy consumption", "optimality", "parallel", "code", "architecture", "functional", "energy", "compilation", "loop-level parallelism", "constraint-based compilation", "integer linear program", "class"]} {"id": "kp20k_training_924", "title": "Sensorless Direct Torque and Flux Controlled IPM Synchronous Machine Fed by Matrix Converter Over a Wide Speed Range", "abstract": "This paper proposes a new sensorless direct torque and flux controlled interior permanent magnet synchronous machine drive fed by a matrix converter. Closed-loop control of both torque and stator flux is achieved by using two PI controllers. The input and output voltage vectors are modulated with the indirect space vector modulation technique. Additionally, unity power factor on the power supply side of the matrix converter is achieved through closed-loop compensation of the input displacement angle created by the input filter of the matrix converter. The adaptive observer used for joint stator flux and rotor speed estimation is enhanced by an HF signal injection scheme for stable operation at low speed, including standstill. The stator resistance variation is compensated with the current estimation error. The operating range of the drive is extended into the high-speed region by incorporating field weakening. The sensorless drive exhibits high dynamic and steady-state performance over a wide speed range. The implementation of the digital control system for the proposed matrix converter drive is described in this paper. Extensive experimental results confirming the effectiveness of the proposed method are also included", "keywords": ["ac-ac power conversion", "adaptive observers", "direct torque and flux control", "field weakening"]} {"id": "kp20k_training_925", "title": "Integrated framework for assessing urban water supply security of systems with non-traditional sources under climate change", "abstract": "An integrated framework for planning an urban water supply system is proposed. Challenges of including non-traditional sources under climate change are documented. 
Desalination, stormwater and rainwater are augmentation options for a case study. Rainwater is an expensive augmentation option for little supply security gain. Reducing per capita consumption may increase robustness to future uncertainties", "keywords": ["climate change impacts", "urban water supply planning", "integrated assessment", "risk-based performance", "non-traditional water sources", "urban stormwater harvesting"]} {"id": "kp20k_training_926", "title": "An exploratory project expert system for eliciting correlation coefficient and sequential updating of duration estimation", "abstract": "This study proposes a framework for updating the estimation of project duration in project networks. The first step in building a project expert system is to elicit the correlation coefficients of activity durations from experts' knowledge and intuition. Given the correlation coefficients elicited, the linear Bayesian approach is used to update the distribution of activity duration. In particular, by reflecting the newly observed duration of completed activities, we can update the duration of upcoming activities repeatedly throughout the entire project period. This helps keep track of the constantly changing longest duration path within the networks. Finally, it is shown that all these learning and updating schemes can be relatively easily implemented on an Excel spreadsheet, so that field managers can apply the model to real projects", "keywords": ["elicitation of correlation coefficient", "sequential updating of duration estimation", "linear bayesian approach", "project expert system"]} {"id": "kp20k_training_927", "title": "Structural Effects of Biologically Relevant Rhodamines on Spectroscopy of Fluorescence Fluctuations", "abstract": "Exciton coupling in complexes between the indole ring and other π systems is known to enhance the efficiency of energy and electron transfer. Rhodamines' xanthylium rings allow the formation of weakly or nonfluorescent complexes with the amino acid tryptophan. Thus, because of the short distance of the participating electronic clouds, intrinsic electron transfer-induced fluorescence quenching occurs. In solution, the rate constant of electron transfer is known to be limited by collision interactions at the contact distance. By contrast, in protein local environments tryptophan residues can be either exposed or buried in hydrophobic regions. Herein, I report on the properties of aromatic derivatized rhodamines, among which is one with a bound phenylalanine amino acid group. Encompassed is the spectroscopic and kinetic information in bulk and at the single-molecule level, both in free solution and in the presence of human serum albumin. Spectroscopic characteristics are presented with special emphasis on enhanced fluorescence, which is addressed considering optimized geometries and electronic spectra. The importance of the probes associated with peptides and metal ions, both in the condensed phase or at interfaces and as substrates with proteins, is put into perspective", "keywords": ["rhodamines", "tryptophan", "amino acids", "proteins", "molecular aggregates", "π-π interactions", "peptide metal ion interactions", "single molecules"]} {"id": "kp20k_training_928", "title": "A novel approach for bit-serial AB(2) multiplication in finite fields GF(2(m))", "abstract": "This paper presents a new inner product AB(2) multiplication algorithm and an effective hardware architecture for exponentiation in finite fields GF(2(m)). 
Exponentiation is implemented more efficiently by applying AB(2) multiplication repeatedly rather than AB multiplication. Thus, efficient AB(2) multiplication algorithms and simple architectures are the key to implementing exponentiation. Accordingly, this paper proposes an efficient inner product multiplication algorithm based on an irreducible all one polynomial (AOP) and a simple architecture, which requires the same hardware as Fenn's AB multiplier. The proposed bit-serial multiplication algorithm and architecture are highly regular and simpler than those of previous works. ", "keywords": ["public-key cryptosystem", "exponentiation", "modular multiplication", "irreducible all one polynomial", "inner products"]} {"id": "kp20k_training_929", "title": "policy teaching through reward function learning", "abstract": "Policy teaching considers a Markov Decision Process setting in which an interested party aims to influence an agent's decisions by providing limited incentives. In this paper, we consider the specific objective of inducing a pre-specified desired policy. We examine both the case in which the agent's reward function is known to the interested party and the case in which it is unknown, presenting a linear program for the former and formulating an active, indirect elicitation method for the latter. We provide conditions for logarithmic convergence, and present a polynomial time algorithm that ensures logarithmic convergence with arbitrarily high probability. We also offer practical elicitation heuristics that can be formulated as linear programs, and demonstrate their effectiveness on a policy teaching problem in a simulated ad-network setting. We extend our methods to handle partial observations and partial target policies, and provide a game-theoretic interpretation of our methods for handling strategic agents", "keywords": ["preference elicitation", "policy teaching", "active indirect elicitation", "preference learning", "environment design"]} {"id": "kp20k_training_931", "title": "Design and simulation of manufacturing systems facing imperfectly defined information", "abstract": "Due to the constant evolution of the environment and the complexity of the needs, the specifications of a manufacturing system are often imperfectly known. The initial design data are uncertain, inaccurate and even vague. We propose to represent the quantifiable needs using fuzzy quantities. These data are propagated during the engineering activity to yield the parameters of the target system. In this context, simulation techniques based on fuzzy parameters are used to verify the exactness of the design. We choose to use a commercial discrete event simulator and Response Surface Methodology to perform fuzzy simulation", "keywords": ["manufacturing system", "fuzzy design", "fuzzy simulation"]} {"id": "kp20k_training_932", "title": "The skeptical explorer: A multiple-hypothesis approach to visual modeling and exploration", "abstract": "The primary intent of this work is to present a method for sequentially associating three-dimensional surface measurements acquired by an autonomous exploration agent with models that describe those surfaces. Traditional multiple-viewpoint registration approaches are concerned only with finding the transformation that maps data points to a chosen global frame. Given a parts-based object representation, and assuming that the view correspondence can be found, the problem of associating the registered data with the correct part models still needs to be solved. 
While traditional approaches are content to group segmented data sets that geometrically overlap one another with the same part, there are cases where this leads to ambiguity. This paper addresses the model-data association problem as it applies to three-dimensional dynamic object modeling. By tracking the state of part models across subsequent views, we wish to identify possible events that explain model-data association ambiguities and represent them in a Bayesian framework. The model-data association problem is therefore relaxed to allow multiple interpretations of the object's structure, each being assigned a probability. Rather than making a decision at every iteration about an ambiguous mapping, we look to the future for the information needed to disambiguate it. Experimental results are presented to illustrate the effectiveness of the approach", "keywords": ["sensor-based exploration", "active vision", "3d object modeling", "multiple-hypothesis tracking", "correspondence", "data association", "scene representation"]} {"id": "kp20k_training_933", "title": "mobility and stability evaluation in wireless multi-hop networks using multi-player games", "abstract": "Multi-hop networks have gained a lot of interest in recent years. Much work has been contributed in the field of protocol design and performance of multi-hop networks. It is generally accepted that mobility has a huge impact on protocol performance, even more so for multi-hop networks. Obtaining realistic measurements of mobility, however, is complex and expensive. Thus, we adopt virtual world scenarios to explore the mobility issue by using the well-known multi-player game Quake II. The advantage of the Quake II engine is that users move within virtual worlds under realistic constraints, whereas other mobility models may offer insufficient accuracy or operate under unrealistic assumptions. Moreover, it is very easy to create new virtual worlds and to adapt them to specialized needs. In this paper, we propose an analytical framework for mobility measurements in virtual worlds that could be adopted for the design of communication protocols. Our framework enables the study of the impact of mobility on the connectivity and stability of the network, giving useful insights for improving communication performance. An interesting application of our approach is the analysis of coverage extension for so-called hotspots or emergency situations, where the fixed network infrastructure is insufficient or non-existent. In these extreme cases, multi-hop networks can be used to set up communication quickly. As these situations comprise a plethora of different cases and scenarios, our model is appropriate for their analysis, due to its generality. We use our framework to investigate the performance of multi-hop networks based on IEEE 802.11a technology. In contrast to other contributions focusing only on connectivity, we also consider the multi-rate connections of the IEEE 802.11a technology. Our framework covers the evaluation of simple connectivity as well as link quality stability in the presence of mobility, a combination that has not been considered thus far. Therefore, we introduce two simple routing schemes and highlight the performance of these protocols in the presence of mobility. Furthermore, we come up with four definitions of stability and investigate protocols for multi-hop networks in terms of this parameter. 
Our other contributions are the changes to the Quake II engine and the availability of mobility trace files", "keywords": ["routing", "connectivity", "capacity", "stability", "ad hoc networks", "ieee 802.11a", "quake ii", "multi-hop networks", "multi-player", "trace data", "mobility"]} {"id": "kp20k_training_934", "title": "Efficient Method of Achieving Agreements between Individuals and Organizations about RFID Privacy", "abstract": "This work presents novel technical and legal approaches that address privacy concerns for personal data in RFID systems. In recent years, to minimize the conflict between convenience and the privacy risk of RFID systems, organizations have been requested to disclose their policies regarding RFID activities, obtain customer consent, and adopt appropriate mechanisms to enforce these policies. However, current research on RFID typically focuses on enforcement mechanisms to protect personal data stored in RFID tags and to prevent organizations from tracking user activity through information emitted by specific RFID tags. A missing piece is how organizations can obtain customers' consent efficiently and flexibly. This study recommends that organizations obtain licenses automatically or semi-automatically before collecting personal data via RFID technologies, rather than dealing with written consents. Such digitalized and standard licenses can be checked automatically to ensure that the collection and use of personal data is based on user consent. While individuals can easily control who has licenses and license content, the proposed framework provides an efficient and flexible way to overcome the deficiencies in current privacy protection technologies for RFID systems", "keywords": ["rfid privacy", "privacy enhancing technology", "rfid"]} {"id": "kp20k_training_935", "title": "people are doing it for themselves", "abstract": "To date, the objective of creating pleasurable products has concentrated on designers articulating and interpreting user needs as part of the product creation process. This paper explores approaches to enable users to adapt, modify, specify or create products to match their needs directly. Using the potential of new technologies, active consumers can now become product creators, paralleling developments in graphics, music and digital media production. Empowered users, self-builders, recreational manufacturers, web-connected silver surfers (retired individuals using the web) and punk manufacturers [1] all exemplify this new relationship between users and products, and the evolving role of designers [2]", "keywords": ["user participation", "supra-functional needs", "democratisation of design", "customisation"]} {"id": "kp20k_training_936", "title": "low-cost networks and gateways for teaching data communications", "abstract": "The growing importance of communications in computer science has resulted in many undergraduate computer science programmes offering courses in data communications. Although data communications courses can be taught in a practical manner, the cost of data communications hardware often restricts the amount of actual hands-on experience that students can gain. In this paper we describe the hardware and software requirements of several low-cost networks that can be used by students to gain experience in a wide variety of data communication topics including local area networks (such as bus networks and ring networks), wide area networks (i.e. 
store-and-forward networks), and gateways", "keywords": ["requirements", "network", "communication", "computer science", "software", "data communication", "forward", "teaching", "experience", "hardware", "paper", "practical", "cost", "wide-area network", "locality", "student"]} {"id": "kp20k_training_937", "title": "On the application of genetic programming for software engineering predictive modeling: A systematic review", "abstract": "The objective of this paper is to investigate the evidence for symbolic regression using genetic programming (GP) being an effective method for prediction and estimation in software engineering, when compared with regression/machine learning models and other comparison groups (including comparisons with different improvements over the standard GP algorithm). We performed a systematic review of literature that compared genetic programming models with comparative techniques based on different independent project variables. A total of 23 primary studies were obtained after searching different information sources in the time span 1995-2008. The results of the review show that symbolic regression using genetic programming has been applied in three domains within software engineering predictive modeling: (i) software quality classification (eight primary studies); (ii) software cost/effort/size estimation (seven primary studies); (iii) software fault prediction/software reliability growth modeling (eight primary studies). While there is evidence in support of using genetic programming for software quality classification, software fault prediction and software reliability growth modeling, the results are inconclusive for software cost/effort/size estimation. ", "keywords": ["systematic review", "genetic programming", "symbolic regression", "modeling"]} {"id": "kp20k_training_938", "title": "a unified framework for dynamic pari-mutuel information market design", "abstract": "Recently, coinciding with and perhaps driving the increased popularity of prediction markets, several novel pari-mutuel mechanisms have been developed, such as the logarithmic market scoring rule (LMSR), the cost-function formulation of market makers, and the sequential convex pari-mutuel mechanism (SCPM). In this work, we present a unified convex optimization framework which connects these seemingly unrelated models for centrally organizing contingent claims markets. The existing mechanisms can be expressed in our unified framework using classic utility functions. We also show that this framework is equivalent to a convex risk minimization model for the market maker. This facilitates a better understanding of the risk attitudes adopted by various mechanisms. The utility framework also leads to easy implementation, since we can now find the useful cost function of a market maker in polynomial time through the solution of a simple convex optimization problem. In addition to unifying and explaining the existing mechanisms, we use the generalized framework to derive necessary and sufficient conditions for many desirable properties of a prediction market mechanism, such as proper scoring, truthful bidding (in a myopic sense), efficient computation, controllable risk-measure, and guarantees on the worst-case loss. As a result, we develop the first proper, truthful, risk-controlled, loss-bounded (in number of states) mechanism; none of the previously proposed mechanisms possessed all these properties simultaneously. 
Thus, our work could provide an effective tool for designing new market mechanisms", "keywords": ["risk measures", "prediction markets", "convex optimization", "unified framework"]} {"id": "kp20k_training_939", "title": "Interaction design for supporting communication between Chinese sojourners", "abstract": "In our global village, distance is no longer a barrier to traveling. People experience new cultures and face the accompanying difficulties of living anywhere. Social support can help these sojourners to cope with difficulties, such as culture shock. In this paper, we investigate how computer-mediated communication (CMC) tools can facilitate social support when living physically separated from loved ones in different cultures. The goal is to understand the design considerations necessary to design new CMC tools. We studied the communication practices of Chinese sojourners living in the Netherlands and the use of a technology probe with a novel video communication system. These results led to recommendations which can help designers to design interactive communication tools that facilitate communication across cultures. We conclude the paper with an interactive communication device called Circadian, which was designed based on these recommendations. We found the design recommendations to be abstract enough to leave space for creativity while providing a set of clear requirements which we used to base design decisions upon", "keywords": ["human-computer interaction", "computer-mediated communication", "interaction design", "design recommendations", "cross-cultural communication", "culture shock"]} {"id": "kp20k_training_940", "title": "Backward Penalty Schemes for Monotone Inclusion Problems", "abstract": "In this paper, we are concerned with solving monotone inclusion problems expressed by the sum of a set-valued maximally monotone operator with a single-valued maximally monotone one and the normal cone to the nonempty set of zeros of another set-valued maximally monotone operator. Depending on the nature of the single-valued operator, we propose two iterative penalty schemes, both addressing the set-valued operators via backward steps. The single-valued operator is evaluated via a single forward step if it is cocoercive, and via two forward steps if it is monotone and Lipschitz continuous. The latter situation represents the starting point for dealing with complexly structured monotone inclusion problems from an algorithmic point of view", "keywords": ["backward penalty algorithm", "monotone inclusion", "maximally monotone operator", "fitzpatrick function", "convex subdifferential"]} {"id": "kp20k_training_941", "title": "3D video and free viewpoint video: from capture to display", "abstract": "This paper gives an end-to-end overview of 3D video and free viewpoint video, which can be regarded as advanced functionalities that expand the capabilities of 2D video. Free viewpoint video can be understood as the functionality to freely navigate within real-world visual scenes, as is known, for instance, from virtual worlds in computer graphics. 3D video shall be understood as the functionality that provides the user with a 3D depth impression of the observed scene, which is also known as stereo video. As functionalities, 3D video and free viewpoint video are thus not mutually exclusive but can very well be combined in a single system. Research in this area combines computer graphics, computer vision and visual communications. 
It spans the whole media processing chain from capture to display, and the design of systems has to take all parts into account; this is outlined in the different sections of this paper, which give an end-to-end view and mapping of this broad area. The conclusion is that the necessary technology, including standard media formats for 3D video and free viewpoint video, is available or will be available in the future, and that there is a clear demand from industry and users for such advanced types of visual media. As a consequence, we are witnessing these days how such technology enters our everyday life", "keywords": ["3d video", "stereo video", "free viewpoint video", "3dtv"]} {"id": "kp20k_training_944", "title": "Exertion interfaces for computer videogames using smartphones as input controllers", "abstract": "As mobile phones become smarter and include a wider and more powerful array of sensory components, the opportunity to leverage those capabilities in contexts other than telephony grows. We have in particular identified those sensory capabilities as key components for modern user interfaces that can detect movement, actions and intentions to enrich human-computer interaction in a natural way. In this work, we present research around using smartphones as input controllers in the context of exertion videogames. We propose a conceptual framework that identifies the core elements of such interfaces, regardless of the underlying technological platforms, and provides a design pattern for their integration into existing videogames without having to change the game's source code. We present a proof-of-concept implementation for the framework, with two smartphone input controllers which, using a soft button and accelerometer data, interface to a target-shooting exertion game played while exercising on a stationary bicycle. We present findings from a user experience evaluation", "keywords": ["human computer interaction", "smartphone", "exertion interface", "videogame", "framework"]} {"id": "kp20k_training_945", "title": "Local feature-based multi-object recognition scheme for surveillance", "abstract": "In this paper, we propose an efficient multi-object recognition scheme for surveillance based on interest points of objects and their feature descriptors. In this scheme, we first define a set of object types of interest and collect their sample images. For each sample image, we detect interest points and construct their feature descriptors using SURF. Next, we perform a statistical analysis of the local features to select representative points among them. Intuitively, the representative points of an object are the interest points that best characterize the object. Finally, we calculate thresholds for each object for object recognition. A user query is processed in a similar way. A given query image's local feature descriptors are extracted and then compared with the representative points of objects in the database. In particular, to reduce the number of comparisons required, we propose a method for merging the descriptors of similar representative points into a single descriptor. This descriptor differs from a typical SURF descriptor in that each element represents not a single value but a range. By using this merged descriptor, we can calculate the similarity between the input image descriptor and multiple descriptors in the database efficiently. 
In addition, since our scheme treats all the objects independently, it can recognize multiple objects simultaneously", "keywords": ["object recognition", "surf", "local feature", "feature descriptor", "surveillance"]} {"id": "kp20k_training_946", "title": "Entertainment modeling through physiology in physical play", "abstract": "This paper is an extension of previous work on capturing and modeling the affective state of entertainment (fun) grounded on children's physiological state during physical game play. The goal is to construct, using representative statistics computed from children's physiological signals, an estimator of the degree to which games provided by the playground engage the players. Previous studies have identified the difficulties of isolating elements of physical activity attributed to reported entertainment derived (solely) from heart rate (HR) recordings. In the present article, a survey experiment on a larger scale and a physical activity control experiment for surmounting those difficulties are devised. In these experiments, children's HR, blood volume pulse (BVP) and skin conductance (SC) signals, as well as their expressed preferences of how much fun particular game variants are, are obtained using games implemented on the Playware physical interactive playground. Given effective data collection, a set of numerical features is computed from these measurements of the child's physiological state. A comprehensive statistical analysis shows that children's reported entertainment preferences correlate well with specific features of the recorded signals. Preference learning techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given suitable signal features. The most accurate models are obtained through evolving artificial neural networks and are demonstrated and evaluated on a Playware game and a control task requiring physical activity. The best network is able to correctly match expressed preferences in 69.64% of cases on previously unseen data (p-value = 0.0022) and indicates two dissimilar classes of children: those who prefer constantly energetic play of low mental/emotional load, and those who report as fun a dynamic play that involves high mental/emotional load independently of physical effort. The generality of the methodology, its limitations, and its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed", "keywords": ["affective computing", "fun", "entertainment modeling", "physical games", "preference learning", "physiology", "heart rate", "blood volume pulse", "skin conductance"]} {"id": "kp20k_training_947", "title": "Discrete program-size dependent software reliability assessment: Modeling, estimation, and goodness-of-fit comparisons", "abstract": "In this paper we propose a discrete program-size dependent software reliability growth model flexibly describing the software failure-occurrence phenomenon based on a discrete Weibull distribution. We also conduct model comparisons of our discrete SRGM with existing discrete SRGMs by using actual data sets. The program size is one of the important metrics of software complexity. 
It is known that flexible discrete software reliability growth modeling is difficult due to the mathematical manipulation required under a conventional modeling framework, in which the time-dependent behavior of the cumulative number of detected faults is formulated by a difference equation. Our discrete SRGM is developed under an existing unified modeling framework based on the concept of general order-statistics, and can incorporate the effect of the program size into software reliability assessment. Further, we discuss the method of parameter estimation, and derive software reliability assessment measures of our discrete SRGM. Finally, we show numerical examples of discrete software reliability analysis based on our discrete SRGM by using actual data", "keywords": ["software reliability assessment", "modeling framework", "program size", "discrete weibull distribution", "heuristic parameter estimation algorithm", "goodness-of-fit"]} {"id": "kp20k_training_948", "title": "Implications of the fit between organizational structure and ERP: A structural contingency theory perspective", "abstract": "Despite the tremendous popularity and great potential, the field of Enterprise Resource Planning (ERP) adoption and implementation is littered with remarkable failures. Though many contributing factors have been cited in the literature, we argue that the integrated nature of ERP systems, which generally requires an organization to adopt standardized business processes reflected in the design of the software, is a key factor contributing to these failures. We submit that the integration and standardization imposed by most ERP systems may not be suitable for all types of organizations, and thus the fit between the characteristics of the adopting organization and the standardized business process designs embedded in the adopted ERP system affects the likelihood of implementation success or failure. In this paper, we use structural contingency theory to identify a set of dimensions of organizational structure and ERP system characteristics that can be used to gauge the degree of fit, thus providing some insights into successful ERP implementations. Propositions are developed based on analyses regarding the success of ERP implementations in different types of organizations. These propositions also provide directions for future research that might lead to prescriptive guidelines for managers of organizations contemplating implementing ERP systems", "keywords": ["erp", "erp implementation", "contingency theory", "organizational structure"]} {"id": "kp20k_training_949", "title": "a neuroscience-based design of intelligent tools for the elderly and disabled", "abstract": "The author has developed one basic research approach for universal accessibility over a period of 28 years. As reviewed in this paper, he and his co-researchers have designed several intelligent tools for universal accessibility, as well as obtained many basic findings concerning the neuroscience of human information processing. Some of the tools have been manufactured in Japan, and the technologies as well as the basic findings have been applied to construct human-centered computer interfaces such as virtual reality, automatic speech recognition and speech synthesis. Moreover, these newly developed computer interface technologies have led to improvements in the design of models for developing universal accessibility devices. 
Lastly, the author has emphasized that a neuroscience-based design of intelligent tools for the elderly and disabled may open a large market", "keywords": ["the disabled", "the elderly", "digital hearing aid", "universal accessibility", "tactile communication", "artificial larynx", "speech recognition", "information technology", "screen reader", "virtual reality"]} {"id": "kp20k_training_950", "title": "Designing for semantic access: A video browsing system", "abstract": "Users of browsing applications often have vague information needs which can only be described in conceptual terms. Therefore, a video browsing system must accept conceptual queries for preselection and offer mechanisms for interactive inspection of the result set by the user. In this paper, we describe an MM-DBMS that we extended with the following components: our retrieval engine calculates relevance values for the results of a conceptual query by feature aggregation at video shot granularity to offer conceptual, content-based access. To reduce startup delays within sessions, our admission control module admits only complete browsing sessions, if the required resources, which are heuristically predicted from query results, are available. In addition, our intelligent client buffer strategy employs the retrieval relevance values to enable flexible user interactions during browsing", "keywords": ["semantic browsing", "conceptual video retrieval", "content-based search", "semantic buffering", "session-based admission control"]} {"id": "kp20k_training_951", "title": "a method for analyzing reading comprehension in computer science courses", "abstract": "Reading has traditionally been seen as an essential component in learning, especially at the university level. However, many instructors in higher education, especially in technical courses, do not emphasize reading or try to evaluate it. In this abstract we present an automated system designed to measure and improve reading comprehension and describe preliminary results using the system", "keywords": ["language", "strategies", "reading"]} {"id": "kp20k_training_952", "title": "Studies on Soluble Ectodomain Proteins of Relaxin (LGR7) and Insulin 3 (LGR8) Receptors", "abstract": "The ectodomains of both the relaxin (LGR7) and the INSL3 (LGR8) receptors can be expressed on the cell surface using only a single transmembrane domain. These membrane-anchored proteins retain the ability to bind relaxin and can be cleaved from the cell surface. The subsequent LGR7 protein, 7BP, binds relaxin and can act as a functional relaxin antagonist. By contrast, the equivalent LGR8 protein, 8BP, does not bind relaxin or antagonize LGR8 activity. The 7BP protein has been successfully immobilized onto chemically derivatized surfaces for the capture of relaxin peptides and subsequent identification via SELDI-MS analysis", "keywords": ["relaxin", "lgr7", "lgr8", "7bp", "ciphergen seldi-ms"]} {"id": "kp20k_training_953", "title": "on-chip delay measurement for silicon debug", "abstract": "Efficient test and debug techniques are indispensable for performance characterization of large complex integrated circuits in deep-submicron and nanometer technologies. Performance characterization of such chips requires on-chip hardware and efficient debug schemes in order to reduce time to market and ensure shipping of chips with lower defect levels. In this paper we present an on-chip scheme for delay fault detection and performance characterization. 
The proposed technique allows for accurate measurement of the delays of speed paths for speed binning and facilitates a systematic and efficient test and debug scheme for delay faults. The area overhead associated with the proposed technique is very low", "keywords": ["delay fault testing", "design for testability", "silicon debug"]} {"id": "kp20k_training_954", "title": "Surface Mooring Network in the Kuroshio Extension", "abstract": "As a contribution to the Global Earth Observation System of Systems, the National Oceanic and Atmospheric Administration (NOAA) is developing surface moorings that carry a suite of field-proven and cost-effective sensors to monitor air-sea heat, moisture, and momentum fluxes, carbon dioxide uptake, and upper ocean temperature, salinity, and currents. In June 2004, an NOAA surface mooring, referred to as the Kuroshio Extension Observatory (KEO), was deployed in the Kuroshio Extension's (KE) southern recirculation gyre, approximately 300 nautical miles east of Japan. In 2006, a partnership between NOAA and the Japan Agency for Marine-Earth Science and Technology was formed that deployed a second mooring (referred to as JKEO) north of the KE jet in February 2007. The KE is a region of strong currents, typhoons, and winter storms. Designing and maintaining moorings in the KE is a challenging engineering task. All data are publicly available. A subset of the data are telemetered and made available in near real time through the Global Telecommunications System and web-based data distribution systems. Data from these time-series reference sites serve a wide research and operational community and are being used for assessing numerical weather prediction analyses and reanalyses and for quantifying the air-sea interaction in this dynamic region", "keywords": ["air-sea interaction", "atmospheric measurements", "climate", "global earth observation system of systems", "ocean measurements"]} {"id": "kp20k_training_955", "title": "A parallel fully coupled algebraic multilevel preconditioner applied to multiphysics PDE applications: Drift-diffusion, flow/transport/reaction, resistive MHD", "abstract": "This study considers the performance of a fully coupled algebraic multilevel preconditioner for Newton-Krylov solution methods. The performance of the preconditioner is demonstrated on a set of challenging multiphysics partial differential equation (PDE) applications: a drift-diffusion approximation for semiconductor devices; a low Mach number formulation for the simulation of coupled flow, transport and non-equilibrium chemical reactions; and a low Mach number formulation for visco-resistive magnetohydrodynamics (MHD) systems. These systems contain multiple physical mechanisms that are strongly coupled, highly nonlinear and non-symmetric, and produce solutions with multiple length- and time-scales. In the context of this study, the governing PDEs for these systems are discretized in space by a stabilized finite element (FE) method that collocates all unknowns at each node of the FE mesh. The algebraic multilevel preconditioner is based on an aggressive-coarsening graph-partitioning of the non-zero block structure of the Jacobian matrix. The performance of the algebraic multilevel preconditioner is compared with a standard variable overlap additive Schwarz domain decomposition preconditioner. Representative performance and parallel scaling results are presented for a set of direct-to-steady-state and fully implicit transient solutions. 
The performance studies include parallel weak scaling studies on up to 4096 cores and also include the solution of systems as large as two billion unknowns carried out on 24,000 cores of a Cray XT3/4. In general, the results of this study indicate that on this reasonably diverse set of challenging multiphysics applications the algebraic multilevel preconditioner performs very well. ", "keywords": ["multilevel preconditioners", "algebraic multigrid", "finite element methods", "newton-krylov", "schwarz domain decomposition preconditioners", "graph partitioning"]} {"id": "kp20k_training_956", "title": "An efficient bi-objective personnel assignment algorithm based on a hybrid particle swarm optimization model", "abstract": "A hybrid particle swarm optimization (HPSO) algorithm which utilizes a random-key (RK) encoding scheme, an individual enhancement (IE) scheme, and particle swarm optimization (PSO) for solving a bi-objective personnel assignment problem (BOPAP) is presented. The HPSO algorithm, which was proposed by Kuo et al. (2007) and Kuo et al. (2009b), is used to solve the flow-shop scheduling problem (FSSP). In the research of BOPAP, the main contribution of this work is to improve the f1_f2 heuristic algorithm which was proposed by Huang, Chiu, Yeh, and Chang (2009). The objective of the f1_f2 heuristic algorithm is to get the satisfaction level (SL) value which satisfies the bi-objective values f1 and f2 for the personnel assignment problem. In this paper, PSO is used to search for the solution of the input problem in the BOPAP space. Then, with the RK encoding scheme in the virtual space, we can exploit the global search ability of PSO thoroughly. Based on the IE scheme, we can enhance the local search ability of particles. The experimental results show that the solution quality of BOPAP based on the proposed HPSO algorithm for the first objective f1 (i.e., total score), the second objective f2 (i.e., standard deviation), the coefficient of variance (CV), and the time cost is far better than that of the f1_f2 heuristic algorithm. To the best of our knowledge, the presented algorithm is the best bi-objective algorithm known for this problem", "keywords": ["bi-objective personnel assignment problem", "particle swarm optimization", "random-key encoding scheme", "individual enhancement scheme", "hpso"]} {"id": "kp20k_training_957", "title": "Micro droplets generated on a rising bubble through an oppositely charged oil/water interface", "abstract": "The mass transfer between immiscible two-liquid phases can be greatly enhanced by bubbling gas through a reactor. Numerous micro water droplets breaking out from a ruptured water film around a rising bubble through the oil (upper phase)/water (lower phase) interface were demonstrated in the preceding paper (Uemura et al. in Europhys Lett 92:34004, 2010). In this study, we attempt to oppositely charge the oil and water layers, taking into account the findings of the preliminary study (Uemura et al. in J Vis 13:85, 2010). As a result, this study successfully produces more and finer water droplets than the preceding experiments", "keywords": ["bubble", "immiscible two-liquid interface", "ripple", "electric field", "high-speed photography"]} {"id": "kp20k_training_958", "title": "Evolving collaboration networks in Scientometrics in 1978-2010: a micro-macro analysis", "abstract": "This paper reports first results on the interplay between different levels of the science system. 
Specifically, we would like to understand if and how collaborations at the author (micro) level impact collaboration patterns among institutions (meso) and countries (macro). All 2,541 papers (articles, proceedings papers, and reviews) published in the international journal Scientometrics from 1978-2010 are analyzed and visualized across the different levels, and the evolving collaboration networks are animated over time. Studying the three levels in isolation, we gain a number of insights: (1) USA, Belgium, and England dominated the publications in Scientometrics throughout the 33-year period, while the Netherlands and Spain were the subdominant countries; (2) the number of institutions and authors increased over time, yet the average number of papers per institution grew slowly and the average number of papers per author decreased in recent years; (3) a few key institutions, including Univ Sussex, KHBO, Katholieke Univ Leuven, Hungarian Acad Sci, and Leiden Univ, have a high centrality and betweenness, acting as gatekeepers in the collaboration network; (4) early key authors (Lancaster FW, Braun T, Courtial JP, Narin F, or VanRaan AFJ) have been replaced by current prolific authors (such as Rousseau R or Moed HF). Comparing results across the three levels reveals that results from one level might propagate to the next level; e.g., top rankings of a few key single authors can not only have a major impact on the ranking of their institution but also lead to a dominance of their country at the country level; movement of prolific authors among institutions can lead to major structural changes in the institution networks. To our knowledge, this is the most comprehensive and the only multi-level study of Scientometrics conducted to date", "keywords": ["scientometrics", "evolving network", "co-author", "micro-macro analysis"]} {"id": "kp20k_training_959", "title": "testing a walkthrough methodology for theory-based design of walk-up-and-use interfaces", "abstract": "The value of theoretical analyses in user interface design has been hotly debated. All sides agree that it is difficult to apply current theoretical models within the constraints of real-world development projects. We attack this problem in the context of bringing the theoretical ideas within a model of exploratory learning [19] to bear on the evaluation of alternative interfaces for walk-up-and-use systems. We derived a cognitive walkthrough procedure for systematically evaluating features of an interface in the context of the theory. Four people independently applied this procedure to four alternative interfaces for which we have empirical usability data. Consideration of the walkthrough sheds light on the consistency with which such a procedure can be applied as well as the accuracy of the results", "keywords": ["value", "use", "accuracy", "systems", "methodology", "usability", "test", "developer", "design", "context", "data", "theory", "project", "interfaces", "consistency", "learning", "model", "feature", "constraint", "lighting", "evaluation", "attack", "user interface design"]} {"id": "kp20k_training_960", "title": "Fair flow control for ATM-ABR multipoint connections", "abstract": "Multipoint-to-multipoint communication can be implemented by combining the point-to-multipoint and multipoint-to-point connection algorithms. In an ATM multipoint-to-point connection, multiple sources send data to the same destination on a shared tree. Traffic from multiple branches is merged into a single stream after every merge point. 
It is sometimes impossible for the network to determine any source-specific characteristics, since all sources in the multipoint connection may use the same connection identifiers. The challenge is to develop a fair rate allocation algorithm without per-source accounting, as this is not equivalent to per-connection or per-flow accounting in this case. We define fairness objectives for multipoint connections, and we design and simulate an O(1) fair ATM-ABR rate allocation scheme for point-to-point and multipoint connections sharing the same links. Simulation results show that the algorithm performs well and exhibits many desirable properties. We list key modifications necessary for any ATM-ABR rate allocation scheme to fairly accommodate multiple sources. ", "keywords": ["available bit rate", "asynchronous transfer mode", "bandwidth allocation algorithms", "congestion control algorithms", "fairness", "multicasting"]} {"id": "kp20k_training_961", "title": "Geometric algorithms for automated design of rotary-platen multi-shot molds", "abstract": "This paper describes algorithms for the automated design of rotary-platen type multi-shot molds for manufacturing multi-material objects. The approach behind our algorithms works in the following manner. First, we classify the given multi-material object into several basic types based on the relationships among different components in the object. For every basic type, we find a molding sequence based on the precedence constraints resulting from accessibility and disassembly requirements. Then, starting from the last mold stage, we generate the mold pieces for every mold stage. We expect that the algorithms described in this paper will provide the necessary foundations for automating the design of rotary-platen molds", "keywords": ["geometric reasoning", "multi-shot molds", "mold design"]} {"id": "kp20k_training_962", "title": "A real-time responsiveness measurement method of linux-based mobile systems for P2P cloud systems", "abstract": "Linux-based mobile computing systems such as robots, electronic control devices, and smartphones are among the most important types of P2P cloud systems today. To improve the overall performance of networked systems, each mobile computing system requires real-time characteristics. For this reason, mobile computing system developers want to know how well real-time responsiveness is supported, and several real-time measurement tools have been proposed. However, those previous tools have their own measurement schemes, and we think that the results from those models do not show how responsive those systems are. In this paper, we propose ELRM, a new real-time measurement method that has clear measurement interval definitions and an accurate measurement method for real-time responsiveness. We evaluate ELRM on various mobile computing systems and compare it with other existing models. As a result, our method can obtain more accurate and intuitive real-time responsiveness measurement results", "keywords": ["real-time", "responsiveness", "measurement", "measurement interval", "preemption latency"]} {"id": "kp20k_training_963", "title": "Automatic lung segmentation method for MRI-based lung perfusion studies of patients with chronic obstructive pulmonary disease", "abstract": "A novel fully automatic lung segmentation method for magnetic resonance (MR) images of patients with chronic obstructive pulmonary disease (COPD) is presented. 
The main goal of this work was to ease the tedious and time-consuming task of manual lung segmentation, which is required for a region-based volumetric analysis of four-dimensional MR perfusion studies that goes beyond the analysis of small regions of interest", "keywords": ["lung segmentation", "lung perfusion", "nonlinear registration"]} {"id": "kp20k_training_964", "title": "headphones with touch control", "abstract": "The Touch Headphones are meant for portable music players and aim to present an improvement over the conventional remote control in the headphone wire, and a solution for controls on wireless in-ear type headphones. Two capacitive touch sensors per earpiece sense when the earpieces are tapped on and when they are put in or taken out", "keywords": ["headphones", "music playback", "capacitive touch control", "capacitive touch sensor", "user system interaction", "portable music players", "tapping patterns", "user interface", "mp3"]} {"id": "kp20k_training_965", "title": "An adaptive unsupervised approach toward pixel clustering and color image segmentation", "abstract": "This paper proposes an adaptive unsupervised scheme that could find diverse applications in pattern recognition as well as in computer vision, particularly in color image segmentation. The algorithm, named the Ant Colony-Fuzzy C-means Hybrid Algorithm (AFHA), adaptively clusters image pixels viewed as three-dimensional data pieces in the RGB color space. The Ant System (AS) algorithm is applied for intelligent initialization of cluster centroids, which endows clustering with adaptivity. Considering algorithmic efficiency, an ant subsampling step is performed to reduce computational complexity while keeping the clustering performance close to the original one. Experimental results have demonstrated AFHA clustering's advantage of smaller distortion and more balanced cluster centroid distribution over FCM with random and uniform initialization. Quantitative comparisons with the X-means algorithm also show that AFHA makes a better pre-segmentation scheme than X-means. We further extend its application to natural image segmentation, taking into account the spatial information and conducting merging steps in the image space. Extensive tests were conducted to examine the performance of the proposed scheme. Results indicate that, compared with classical segmentation algorithms such as mean shift and normalized cut, our method can generate reasonably good or better image partitioning, which illustrates the method's practical value", "keywords": ["ant system", "clustering", "fuzzy c-means", "image segmentation"]} {"id": "kp20k_training_966", "title": "an implementation of the acm/siggraph proposed graphics standard in a multisystem environment", "abstract": "Los Alamos Scientific Laboratory (LASL) has implemented a graphics system designed to support one user interface for all graphics devices in all operating environments at LASL. The Common Graphics System (CGS) will support Level One of the graphics standard proposed by the ACM/SIGGRAPH Graphic Standards Planning Committee. CGS is available in six operating environments of two different word lengths and supports four types of graphics devices. It can generate a pseudodevice file that may be postprocessed and edited for a particular graphics device, or it can generate device-specific graphics output directly. Program overlaying and dynamic buffer sharing are also supported. CGS is structured to isolate operating system dependencies and graphics device dependencies. 
It is written in the RATFOR (RATional FORtran) language, which supports control flow statements and macro expansion. CGS is maintained as a single source program from which each version can be extracted automatically", "keywords": ["device-independent graphics", "portability", "plan", "computer graphics", "graphics", "control flow", "buffers", "sharing", "pseudodevice", "dependencies", "standards", "structure", "macros", "systems", "environments", "device", "dynamic", "language", "types", "operating system", "implementation", "user interface", "operability", "support", "version", "laboratory"]} {"id": "kp20k_training_967", "title": "can social bookmarking enhance search in the web", "abstract": "Social bookmarking is an emerging type of Web service that helps users share, classify, and discover interesting resources. In this paper, we explore the concept of an enhanced search, in which data from social bookmarking systems is exploited for enhancing search in the Web. We propose combining the widely used link-based ranking metric with one derived from social bookmarking data. First, this increases the precision of a standard link-based search by incorporating popularity estimates from aggregated data of bookmarking users. Second, it provides an opportunity for extending the search capabilities of existing search engines. Individual contributions of bookmarking users, as well as the general statistics of their activities, are used here for a new kind of complex search that exploits contextual, temporal or sentiment-related information. We investigate the usefulness of social bookmarking systems for the purpose of enhancing Web search through a series of experiments done on datasets obtained from social bookmarking systems. Next, we show the prototype system that implements the proposed approach and present some preliminary results", "keywords": ["precise", "activation", "statistics", "use", "web", "social bookmarking", "pagerank", "metrication", "concept", "standardization", "temporal", "experience", "general", "paper", "informal", "combinational", "linking", "search", "exploration", "sharing", "web search", "systems", "capabilities", "users", "search engine", "prototype", "metadata", "social search", "data", "complexity", "bookmark", "web services", "resource", "contextual", "ranking"]} {"id": "kp20k_training_968", "title": "A survey on approaches to gridification", "abstract": "The Grid presents itself as a globally distributed computing environment, in which hardware and software resources are virtualized to transparently provide applications with vast capabilities. Just like the electrical power grid, the Grid aims at offering a powerful yet easy-to-use computing infrastructure to which applications can be easily 'plugged' and efficiently executed. Unfortunately, it is still very difficult to Grid-enable applications, since current tools force users to take into account many details when adapting applications to run on the Grid. In this paper, we survey some of the recent efforts in providing tools for easy gridification of applications and propose several taxonomies to identify the approaches followed in the materialization of such tools. We conclude this paper by describing common features among the proposed approaches, and by pointing out open issues and future directions in the research and development of gridification methods. 
", "keywords": ["grid computing", "grid development", "gridification tools"]} {"id": "kp20k_training_969", "title": "Online sequential extreme learning machine in nonstationary environments", "abstract": "System identification in nonstationary environments represents a challenging problem to solve and lots of efforts have been put by the scientific community in the last decades to provide adequate solutions on purpose. Most of them are targeted to work under the system linearity assumption, but also some have been proposed to deal with the nonlinear case study. In particular the authors have recently advanced a neural architecture, namely time-varying neural networks (TV-NN), which has shown remarkable identification properties in the presence of nonlinear and nonstationary conditions. TV-NN training is an issue due to the high number of free parameters and the extreme learning machine (ELM) approach has been successfully used on purpose. ELM is a fast learning algorithm that has recently caught much attention within the neural networks (NNs) research community. Many variants of ELM have been appeared in recent literature, specially for the stationary case study. The reference one for TV-NN training is named ELM-TV and is of batch-learning type. In this contribution an online sequential version of ELM-TV is developed, in response to the need of dealing with applications where sequential arrival or large number of training data occurs. This algorithm generalizes the corresponding counterpart working under stationary conditions. Its performances have been evaluated in some nonstationary and nonlinear system identification tasks and related results show that the advanced technique produces comparable generalization performances to ELM-TV, ensuring at the same time all benefits of an online sequential approach", "keywords": ["nonstationary and nonlinear system identification", "time-varying neural networks", "extreme learning machine", "online sequential learning"]} {"id": "kp20k_training_970", "title": "Accelerating mean time to failure computations", "abstract": "In this paper we consider the problem of numerical computation of the mean time to failure (MTTF) in Markovian dependability and/or performance models. The problem can be cast as a system of linear equations which is solved using an iterative method preserving sparsity of the Markov chain matrix. For highly dependable systems, system failure is a rare event and the above system solution can take an extremely large number of iterations. We propose to solve the problem by dividing the computation in two parts. First, by making some of the high probability states absorbing, we compute the MTTF of the modified Markov chain. In a subsequent step, by solving another system of linear equations, we are able to compute the MTTF of the original model. We prove that for a class of highly dependable systems, the resulting method can speed up computation of the MTTF by orders of magnitude. Experimental results supporting this claim are presented. 
We also obtain bounds on the convergence rate for computing the mean entrance time of a rare set of states in a class of queueing models", "keywords": ["markov chains", "mean time to failure", "numerical methods"]} {"id": "kp20k_training_971", "title": "Genetic Code: An Alternative Model of Translation", "abstract": "Our earlier studies of translation have led us to a specific numeric coding of nucleotides (A = 0, C = 1, G = 2, and U = 3), that is, a quaternary numeric system; to ordering of digrams and codons (read right to left: .yx and Z.yx) as ordinal numbers from 000 to 111; and to seeking a hypothetical transformation of mRNA to the 20 canonical amino acids. In this work, we show that amino acids match the ordinal number, that is, follow as transforms of their respective digrams and/or mRNA-codons. Sixteen digrams and their respective amino acids appear as a parallel (discrete) array. A first approximation of translation in this view is demonstrated by a twisted spiral on the side of phantom codons and by ordering amino acids in the form of a cross on the other side, whereby the transformation of digrams and/or phantom codons to amino acids appears to be one-to-one! Classification of canonical amino acids derived from our dynamic model clarifies physicochemical criteria, such as purinity, pyrimidinity, and particularly codon rules. The system implies both the rules of Siemion and Siemion and of Davidov, as well as balances of atomic and nucleon numbers within groups of amino acids. Formalization in this system offers the possibility of extrapolating backward to the initial organization of heredity", "keywords": ["modeling", "translation", "genetic code"]} {"id": "kp20k_training_972", "title": "A two-step automatic sleep stage classification method with dubious range detection", "abstract": "A two-step classifier for automatic sleep staging is proposed. The system provides two outputs: non-dubious and dubious classification. The dubious epochs are tagged and re-assigned according to a post-processing step. The system indicates to an expert physician which results need revision. The accuracy of non-dubious classification for wake and REM is around 97%", "keywords": ["automatic sleep scoring", "misclassifications detection", "subjects' variability", "dubious range", "clinical applications"]} {"id": "kp20k_training_973", "title": "Smartphone-based hierarchical crowdsourcing for weed identification", "abstract": "A novel hierarchical crowdsourcing-based system for weed identification. Combines image processing with crowdsourcing weed identification. Framework for unsupervised determination of crowd hierarchy. Prototype that supports low-cost and accurate weed identification", "keywords": ["human crowd", "amazon mechanical turk", "probabilistic framework", "weed image identification"]} {"id": "kp20k_training_974", "title": "Cooperating with free riders in unstructured P2P networks", "abstract": "Free riding is a common phenomenon in peer-to-peer (P2P) file sharing networks. Although several mechanisms have been proposed to handle free riding, mostly by excluding free riders, few of them have been adopted in practical systems. This may be attributed to the fact that the mechanisms are often nontrivial, and that completely eliminating free riders could jeopardize the sheer power of the network arising from the huge volume of its participants.
Rather than excluding free riders, we incorporate and utilize them to provide a global index service for the files shared in the network, as well as to relay messages in the search process. The simulation results indicate that our mechanism not only can shift the query processing load from non-free riders to free riders, but can also significantly boost the search efficiency of plain Gnutella. Moreover, the mechanism is quite resilient to a high free riding ratio", "keywords": ["peer-to-peer", "free riding", "unstructured overlay", "gnutella", "condensity"]} {"id": "kp20k_training_975", "title": "Sharing many secrets with computational provable security", "abstract": "Two new multi-secret sharing schemes, with computational provable security. The security proofs are in the standard model. The two schemes generalize schemes previously proposed in the literature. We compare the two schemes in terms of security, efficiency and extendability. The schemes work for general access structures", "keywords": ["multi-secret sharing schemes", "provable security", "symmetric encryption", "cryptography"]} {"id": "kp20k_training_976", "title": "Joint Packet Scheduling and Radio Resource Assignment for WiMAX Networks", "abstract": "The IEEE 802.16 standard defines the QoS signaling framework and various types of service flows, but leaves the QoS-based packet scheduling and radio resource assignment undefined. This paper proposes a novel joint packet scheduling and radio resource assignment algorithm for WiMAX networks. Our algorithms can effectively assign suitable slots to meet the QoS requirements of the different service type flows while taking throughput and fairness into consideration. The effectiveness of our algorithms has been demonstrated through extensive analysis and simulation data. The results show that our algorithms greatly improve the throughput with relatively low complexity", "keywords": ["ieee 802.16", "packet scheduling", "radio resource assignment", "qos"]} {"id": "kp20k_training_977", "title": "multi-context photo browsing on mobile devices based on tilt dynamics", "abstract": "This paper presents a photo browsing system for mobile devices that supports browsing and searching photos efficiently through tilting actions. It employs tilt dynamics and a multi-scale photo screen layout to enhance the browsing and search capabilities, respectively. The implementation uses continuous inputs from an accelerometer, and a multimodal (visual, audio and vibrotactile) display coupled with the states of this model. The dynamics are based on a simple physical model, with its characteristics shaped to enhance controllability. The multi-scale layout holds both local and global views, letting users control photos and view the surrounding context in a single framework. The experiment, run on a Samsung MITs PDA, involved seven novice users browsing a set of 100 photos. We compare a tilt-based interaction method with a button-based browser and an iPod wheel using quantitative usability criteria and subjective experience. The proposed tilt dynamics improve usability over conventional dynamics.
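The tilt-controlled dynamics described above can be pictured as a damped first-order system driven by the sensed tilt angle. A minimal sketch, with made-up gain, damping and dead-zone values (the paper shapes its dynamics differently and adds multimodal feedback):

```python
def tilt_scroll_step(pos, vel, tilt, dt=0.02,
                     gain=40.0, damping=3.0, dead_zone=0.05):
    """One integration step of a damped scroll model.

    tilt: device pitch in radians, read from the accelerometer;
    a small dead zone keeps the view still when the device is level."""
    drive = 0.0 if abs(tilt) < dead_zone else gain * tilt
    vel += (drive - damping * vel) * dt
    pos += vel * dt
    return pos, vel
```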
The iPod wheel shows mixed performance, comparing worse on some metrics than button pushing or tilt interaction, despite its commercial popularity", "keywords": ["mobile interaction", "tilt dynamics", "photo browsing", "motion-based interaction", "multi-scale view", "accelerometer"]} {"id": "kp20k_training_978", "title": "Session based access control in geographically replicated Internet services", "abstract": "Performance critical services over the Internet often rely on geographically distributed architectures of replicated servers. Content Delivery Networks (CDN) are a typical example where service is based on a distributed architecture of replica servers to guarantee resource availability and proximity to final users. In such distributed systems, network links are not dedicated, and may be subject to external traffic. This brings up the need to develop access control policies that adapt to changing network load conditions. Further, Internet services are mainly session based, thus access control support must take into account a proper differentiation of requests and perform session-based decisions while considering the dynamic availability of resources due to external traffic. In this paper we introduce a distributed architecture with access control capabilities at session-aware access points. We consider two types of services characterized by different patterns of resource consumption and priorities. We formulate a Markov Modulated Poisson Decision Process for access control that captures the heterogeneity of multimedia services and the variable availability of resources due to external traffic. The proposed model is optimized by means of stochastic analysis, showing the impact of external traffic on service quality. The structural properties of the optimal solutions are studied and considered as the basis for the formulation of heuristics. The performance of the proposed heuristics is studied by means of simulations, showing that in some typical scenarios they perform close to the optimum", "keywords": ["content delivery networks", "qos", "session based access control"]} {"id": "kp20k_training_979", "title": "Periods in partial words: An algorithm", "abstract": "Partial words are finite sequences over a finite alphabet that may contain some holes. A variant of the celebrated Fine and Wilf theorem shows the existence of a bound L = L(h, p, q) such that if a partial word of length at least L with h holes has periods p and q, then it also has period gcd(p, q). In this paper, we associate a graph with each p- and q-periodic word, and study two types of vertex connectivity on such a graph: modified degree connectivity and r-set connectivity, where r = q mod p. As a result, we give an algorithm for computing L(h, p, q) in the general case and show how to use it to derive closed formulas", "keywords": ["automata and formal languages", "combinatorics on words", "partial words", "fine and wilf's theorem", "strong periods", "graph connectivity", "optimal lengths"]} {"id": "kp20k_training_980", "title": "Unsupervised learning of word segmentation rules with genetic algorithms and inductive logic programming", "abstract": "This article presents a combination of unsupervised and supervised learning techniques for the generation of word segmentation rules from a raw list of words. First, a language bias for word segmentation is introduced and a simple genetic algorithm is used in the search for a segmentation that corresponds to the best bias value.
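The GA step just described can be sketched generically: chromosomes are bitstrings (one bit per candidate boundary position) and the fitness is whatever bias value the language model assigns. The skeleton below is a plain generational GA with single-point crossover and bit-flip mutation; the toy fitness at the end is only a placeholder, since the article's actual bias function is not given here.

```python
import random

def genetic_search(fitness, n_bits, pop_size=40, generations=60,
                   p_mut=0.02, seed=0):
    """Plain generational GA over bitstrings (1-bits mark segment cuts)."""
    rnd = random.Random(seed)
    pop = [[rnd.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)          # rank by bias value
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rnd.sample(survivors, 2)
            cut = rnd.randrange(1, n_bits)           # single-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rnd.random() < p_mut) for bit in child])
        pop = survivors + children
    return max(pop, key=fitness)

# Placeholder bias: favour segmentations with few cuts (illustration only).
best = genetic_search(lambda c: -sum(c), n_bits=12)
```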
In the second phase, the words segmented by the genetic algorithm are used as an input for the first-order decision list learner CLOG. The result is a set of first-order rules which can be used for the segmentation of unseen words. When applied to either the training data or unseen data, these rules produce segmentations that are linguistically meaningful and largely conform to the annotation provided", "keywords": ["unsupervised machine learning", "inductive logic programming", "natural language", "word segmentation"]} {"id": "kp20k_training_981", "title": "Nonexponential evolution equations and operator ordering", "abstract": "Nonexponential evolution equations can be treated using a formalism involving the evolution operator method, which, unlike the ordinary case, is not expressed in terms of exponential operators. The use of this technique requires particular care regarding operator ordering. In this paper, we present a first systematic approach to this type of problem", "keywords": ["evolutions equations", "operational ordering", "higher-order tricomi functions"]} {"id": "kp20k_training_982", "title": "Application of genetic algorithm for unknown parameter estimations in cylindrical fin", "abstract": "This article deals with the application of the genetic algorithm (GA) for optimizing an inverse problem and retrieving unknown parameters in a cylindrical fin geometry. Parameters such as the thermal conductivity and the heat transfer coefficient are estimated so as to satisfy a desired temperature field in the medium. The study is done for single-parameter and simultaneous two-parameter retrievals. The temperature field is calculated from a forward problem solved by the finite difference method using known values of the properties. These properties are ultimately retrieved by an inverse approach using the GA. The study is done for different controlling parameters such as the number of generations, measurement errors and the number of measurement locations. For simultaneous two-parameter estimation, many combinations of unknown parameters are observed to satisfy a given temperature field, and only their ratio is found to be successfully estimated. The present work is proposed to be useful for selecting the thermal properties required to satisfy a given temperature field", "keywords": ["genetic algorithm", "fin", "parameter retrieval", "heat transfer coefficient", "thermal conductivity", "forward method", "inverse method"]} {"id": "kp20k_training_983", "title": "Formulation of pedestrian movement in microscopic models with continuous space representation", "abstract": "When microscopic pedestrian models, in which pedestrian space is continuously represented, are used to simulate pedestrian movement in buildings with internal obstacles, some issues arise and need to be dealt with in detail. This paper discusses two of these issues, namely formulating the desired direction of each pedestrian in the building and determining the region around each pedestrian in which other individuals and obstacles affect his or her movement. Methods for computing the desired direction and the effect region are proposed, using algorithms for the potential of pedestrian space.
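A common way to turn a potential of pedestrian space into a desired direction, consistent with the description above, is to follow the negative gradient of the potential. A minimal sketch on a grid (central differences, interior cells only; the paper's actual formulae differ and treat borders explicitly):

```python
import numpy as np

def desired_direction(potential, i, j):
    """Unit vector at interior cell (i, j), pointing down the potential
    (higher potential = farther from the exit or closer to obstacles)."""
    gy = (potential[i + 1, j] - potential[i - 1, j]) / 2.0
    gx = (potential[i, j + 1] - potential[i, j - 1]) / 2.0
    g = np.array([-gx, -gy])          # steepest descent
    norm = np.linalg.norm(g)
    return g / norm if norm > 0 else g
```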
By numerical experiments, the performance of the three proposed formulae for the desired direction is compared, the method for the effect region is tested, and the validity of the method for computing the desired direction while considering the border effect of obstacles is verified. Numerical results indicate that the proposed methods can be used to formulate pedestrian movement, especially in buildings with internal obstacles, in microscopic models with continuous space representation", "keywords": ["pedestrian flow", "microscopic model", "desired direction", "effect region", "border effect"]} {"id": "kp20k_training_984", "title": "Extending CLIPS to support temporal representation and reasoning", "abstract": "Applications using expert systems for monitoring and control problems often require the ability to represent temporal knowledge and to apply reasoning based on that knowledge. Incorporating temporal representation and reasoning into expert systems leads to two problems in development: dealing with an implied temporal order of events using a non-procedural tool; and maintaining the large number of temporal relations that can occur among facts in the knowledge base. In this paper we explore these problems by using an expert system shell, CLIPS (C Language Integrated Production System), to create temporal relations using common knowledge-based constructs. We also build an extension to CLIPS through a user-defined function which generates the temporal relations from those facts. We use the extension to create and maintain temporal relations in a workflow application that monitors and controls an engineering design change review process. We also propose a solution to ensure truth maintenance among temporally related facts that links our temporal extension to the CLIPS facility for truth maintenance", "keywords": ["temporal expert systems", "clips temporal representation", "workflow", "truth maintenance"]} {"id": "kp20k_training_985", "title": "MPEG-21 digital items to support integration of heterogeneous multimedia content", "abstract": "The MELISA system is a distributed platform for multi-platform sports content broadcasting, providing end users with a wide range of real-time interactive services during the sport event, such as statistics, visual aids or enhancements, betting, and user- and context-specific advertisements. In this paper, we present the revamped design of the complete system and the implementation of a middleware entity utilizing concepts present in the emerging MPEG-21 framework. More specifically, all multimedia content is packaged in a self-contained "digital item", containing both the binary information (video, graphics, etc.) required for playback, as well as structured representations of the different entities that can handle this item and the actions they can perform on it. This module essentially stands between the different components of the integrated content preparation system, thereby not disrupting their original functionality at all; additional tweaks are performed on the receiver side as well, to ensure that the additional information and provisions are respected.
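Schematically, such a self-contained digital item bundles resource references with structured statements of who may do what with them. The toy structure below is only an illustration of the idea; it does not follow the actual MPEG-21 DIDL/REL schemas, and all field names are invented.

```python
# Hypothetical, flattened view of a digital item: media plus rights metadata.
digital_item = {
    "resources": {"video": "match_cam1.mp4", "overlay": "stats.svg"},
    "descriptors": {"event": "semifinal", "language": "en"},
    # Structured rights: which entity may perform which action on the item.
    "rights": [
        {"entity": "subscriber_gold", "actions": ["play", "enhance", "bet"]},
        {"entity": "subscriber_basic", "actions": ["play"]},
    ],
}

def allowed(item, entity, action):
    """Receiver-side check that an action is permitted for an entity."""
    return any(r["entity"] == entity and action in r["actions"]
               for r in item["rights"])

print(allowed(digital_item, "subscriber_basic", "bet"))  # False
```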
The outcome of this design upgrade is that IPR issues are dealt with successfully, both with respect to the content itself and to the functionality of the subscription levels; in addition, end users can be presented with personalized forms of the final content, e.g., viewing in-play virtual advertisements that match their shopping habits and preferences, thus enhancing the viewing experience and creating more revenue opportunities via targeted advertisements", "keywords": ["heterogeneous content adaptation", "metadata", "mpeg-21", "user modelling", "mpeg-4"]} {"id": "kp20k_training_986", "title": "Linguistic recognition system for identification of some possible genes mediating the development of lung adenocarcinoma", "abstract": "In the present article, we develop a linguistic recognition system for identification of some possible genes mediating the development of human lung adenocarcinoma. The methodology involves dimensionality reduction, classifying the genes through incorporation of the notion of the linguistic fuzzy sets low, medium and high, and finally selection of some possible genes obtained by a rule generation/grouping technique. The system has been successfully applied on two microarray gene expression data sets. The results are appropriately validated by some earlier investigations, gene expression profiles and t-tests. The proposed methodology has been able to find more true positives than an existing one in identifying responsible genes. Moreover, we have found some new genes that may have a role in mediating the development of lung adenocarcinoma", "keywords": ["fuzzy sets", "low", "medium", "high", "microarray", "gene expression", "p-value"]} {"id": "kp20k_training_987", "title": "The existence and upper bound for two types of restricted connectivity", "abstract": "In this paper, we study two types of restricted connectivity: κk(G) is the cardinality of a minimum vertex cut S such that every component of G − S has at least k vertices; κk′(G) is the cardinality of a minimum vertex cut S such that there are at least two components in G − S of order at least k. We give some sufficient conditions for the existence and upper bounds of κk(G) and/or κk′(G), and study some properties of these two parameters", "keywords": ["restricted connectivity"]} {"id": "kp20k_training_988", "title": "Coloring Geometric Range Spaces", "abstract": "We study several coloring problems for geometric range spaces. In addition to their theoretical interest, some of these problems arise in sensor networks. Given a set of points in R^2 or R^3, we want to color them so that every region of a certain family (e.g., every disk containing at least a certain number of points) contains points of many (say, k) different colors. In this paper, we think of the number of colors and the number of points as functions of k. Obviously, for a fixed k using k colors, it is not always possible to ensure that every region containing k points has all colors present. Thus, we introduce two types of relaxations: either we allow the number of colors used to increase to c(k), or we require that the number of points in each region increases to p(k). Symmetrically, given a finite set of regions in R^2 or R^3, we want to color them so that every point covered by a sufficiently large number of regions is contained in regions of k different colors. This requires the number of covering regions or the number of allowed colors to be greater than k.
The goal of this paper is to bound these two functions for several types of region families, such as halfplanes, halfspaces, disks, and pseudo-disks. This is related to previous results of Pach, Tardos, and Tóth on decompositions of coverings", "keywords": ["coloring", "covering", "decompositions", "geometric hypergraphs"]} {"id": "kp20k_training_989", "title": "A Methodology for Design of Scalable Architectures in Software Radio Networks: a Unified Device and Network Perspective", "abstract": "This paper proposes the Tissue Methodology (TM) as a novel methodology for the analysis, design and synthesis of networked embedded systems and the subsequent development of distributed architectural frameworks. The proposed method aims at reducing development time through the use of reconfigurable HW/SW components and the application of automatic code generation techniques. We demonstrate the usefulness of the proposed methodology in the context of mobile ad-hoc networks (MANETs) which exploit Software Radio (SR) technology for reconfigurability. Drawbacks of current design and simulation tools and advantages coming from the application of the TM are discussed in the paper", "keywords": ["software architecture", "simulation", "automatic code generation", "tissue methodology", "manets", "software defined radio"]} {"id": "kp20k_training_990", "title": "Temporal development methods for agent-based systems", "abstract": "In this paper we overview one specific approach to the formal development of multi-agent systems. This approach is based on the use of temporal logics to represent both the behaviour of individual agents, and the macro-level behaviour of multi-agent systems. We describe how formal specification, verification and refinement can all be developed using this temporal basis, and how implementation can be achieved by directly executing these formal representations. We also show how the basic framework can be extended in various ways to handle the representation and implementation of agents capable of more complex deliberation and reasoning", "keywords": ["agent-based systems", "formal methods", "temporal and modal logics"]} {"id": "kp20k_training_991", "title": "Lay persons' and professionals' nutrition-related vocabularies and their matching to a general and a specific thesaurus", "abstract": "This study examines the differences between expressions used by lay persons and professionals in nutrition-related questions and answers, and to what degree the General Finnish Ontology (GFO) and a medical thesaurus (FinMeSH) cover these expressions. Fifty question-answer pairs were collected in an electronic answering service. Nutrition-related concepts and their expressions with their semantic relations were identified. The vocabularies of lay persons and professionals were found to be quite similar. This suggests that a special consumer health vocabulary in the field of nutrition is not needed. GFO covered 32% of all expressions in questions and 37% of expressions in answers. FinMeSH covered 33% of expressions in both groups. The overlapping match of the thesauri was low, 25% in both questions and answers. GFO and FinMeSH were found to be poor tools for supporting users in expressing nutrition-related information needs.
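The coverage figures above come down to a simple proportion: the share of collected expressions that find a match in a thesaurus. A minimal sketch of that computation, assuming exact case-insensitive matching (the study's matching rules may be more nuanced):

```python
def coverage(expressions, thesaurus):
    """Share of expressions that have an exact, case-insensitive match."""
    terms = {t.lower() for t in thesaurus}
    hits = sum(1 for e in expressions if e.lower() in terms)
    return hits / len(expressions) if expressions else 0.0

# usage sketch: coverage(question_expressions, gfo_terms)
```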
GFO seemed not to form a covering bridge to FinMeSH", "keywords": ["coverage", "finmesh", "matching lay persons", "nutrition vocabulary", "ontology", "professionals", "thesaurus"]} {"id": "kp20k_training_992", "title": "Estimating forest biomass using small footprint LiDAR data: An individual tree-based approach that incorporates training data", "abstract": "A new individual tree-based algorithm for determining forest biomass using small footprint LiDAR data was developed and tested. This algorithm combines computer vision and optimization techniques to become the first training data-based algorithm specifically designed for processing forest LiDAR data. The computer vision portion of the algorithm uses generic properties of trees in small footprint LiDAR canopy height models (CHMs) to locate trees and find their crown boundaries and heights. The ways in which these generic properties are used for a specific scene and image type depend on 11 parameters, nine of which are set using training data and the Nelder-Mead simplex optimization procedure. Training data consist of small sections of the LiDAR data and corresponding ground data. After training, the biomass present in areas without ground measurements is determined by developing a regression equation between properties derived from the LiDAR data of the training stands and biomass, and then applying the equation to the new areas. A first test of this technique was performed using 25 plots (radius = 15 m) in a loblolly pine plantation in central Virginia, USA (37.42°N, 78.68°W) that was not intensively managed, together with corresponding data from a LiDAR canopy height model (resolution = 0.5 m). Results show correlations (r) between actual and predicted aboveground biomass ranging between 0.59 and 0.82, and RMSEs between 13.6 and 140.4 t/ha depending on the selection of training and testing plots, and the minimum diameter at breast height (7 or 10 cm) of trees included in the biomass estimate. Correlations between LiDAR-derived plot density estimates were low (0.22 ≤ r ≤ 0.56) but generally significant (at a 95% confidence level in most cases, based on a one-tailed test), suggesting that the program is able to properly identify trees. Based on the results it is concluded that the validation of the first training data-based algorithm for determining forest biomass using small footprint LiDAR data was a success, and future refinement and testing are merited", "keywords": ["lidar", "forestry", "computer vision", "optimization"]} {"id": "kp20k_training_993", "title": "The upper layers of the ISO/OSI reference model - (part II) (Reprinted from Computer Standards and Interfaces, vol 5, pg 65-77, 1986)", "abstract": "This review is intended as an introduction to the communication concepts and functions associated with the upper layers of the ISO/OSI Reference Model. It describes, in general terms, the requirements and the benefits of Open Communication and defines the fundamental requirements for interworking among distributed computer systems", "keywords": ["iso open systems interconnection", "osi", "application layer", "presentation layer", "interworking", "interconnection", "standards"]} {"id": "kp20k_training_994", "title": "Adjustable Chain Trees for Proteins", "abstract": "A chain tree is a data structure for changing protein conformations. It enables very fast detection of clashes and free energy potential calculations.
A modified version of chain trees that adjusts itself to the changing conformations of folding proteins is introduced. This results in much tighter bounding volume hierarchies and therefore fewer intersection checks. Computational results indicate that the efficiency of the adjustable chain trees is significantly improved compared to the traditional chain trees", "keywords": ["combinatorial optimization", "computational molecular biology", "protein folding"]} {"id": "kp20k_training_995", "title": "Real-time point-based rendering using visibility map", "abstract": "Because of its simplicity and intuitive approach, point-based rendering has been a very popular research area. Recent approaches have focused on hardware-accelerated techniques. By applying a deferred shading scheme, both high-quality images and high-performance rendering have been achieved. However, previous methods showed problems related to depth-based visibility computation. We propose an extended point-based rendering method using a visibility map. In our method we employ a distance-based visibility technique (replacing depth-based visibility), an averaged position map and an adaptive fragment processing scheme, resulting in more accurate and improved image quality, as well as improved rendering performance", "keywords": ["display algorithms", "point-based rendering", "deferred shading", "hardware accelerated graphics"]} {"id": "kp20k_training_996", "title": "Expression and Effect of Transforming Growth Factor-β and Tumor Necrosis Factor-α in Human Pheochromocytoma", "abstract": "This study observed the expression of transforming growth factor-β (TGF-β) and tumor necrosis factor-α (TNF-α) in pheochromocytoma (PHEO) tissue and examined their effects on the proliferation and apoptosis of human PHEO cells. The mRNA and protein expressions of TGF-β and TNF-α were higher in PHEO tissues than in normal adrenal medullary tissues, and their expressions varied with pathological features. TGF-β and TNF-α stimulated the proliferation of primary human PHEO cells, but had no effect on cell apoptosis. Both TGF-β and TNF-α might be involved in the pathogenesis of human PHEO. TNF-α needs to be further investigated before its use in the treatment of PHEO can be realized in clinical practice", "keywords": ["pheochromocytoma", "tgf", "tnf", "proliferation", "apoptosis", "expression"]} {"id": "kp20k_training_997", "title": "Interactive visualisation of spins and clusters in regular and small-world Ising models with CUDA on GPUs", "abstract": "Three-dimensional simulation models are hard to visualise for dense lattice systems, even with cutaways and flythrough techniques. We use multiple Graphics Processing Units (GPUs), CUDA and OpenGL to increase our understanding of computational simulation models such as the 2-D and 3-D Ising systems with small-world link rewiring by accelerating both the simulation and visualisation into interactive time.
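For reference, the CPU-side core of the kind of Ising simulation being accelerated here is compact: a Metropolis sweep over a periodic 2-D lattice of ±1 spins. The sketch below assumes coupling J = 1 and omits the small-world link rewiring and all of the CUDA/OpenGL machinery the paper is actually about.

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Metropolis sweep over a periodic 2-D Ising lattice (J = 1)."""
    n = spins.shape[0]
    for _ in range(n * n):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb      # energy change if spin flips
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(32, 32))
metropolis_sweep(spins, beta=0.44, rng=rng)   # near the 2-D critical point
```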
We show how interactive model parameter updates, visual overlaying of measurements and graticules, cluster labelling and other visual highlighting cues enhance user intuition of the model's meaning and exploit the enhanced simulation speed to handle model systems large enough to explore multi-scale phenomena", "keywords": ["visualisation", "ising model", "cuda", "gpu", "instrumentation"]} {"id": "kp20k_training_998", "title": "Efficient and Accurate Nearest Neighbor and Closest Pair Search in High-Dimensional Space", "abstract": "Nearest Neighbor (NN) search in high-dimensional space is an important problem in many applications. From the database perspective, a good solution needs to have two properties: (i) it can be easily incorporated in a relational database, and (ii) its query cost should increase sublinearly with the dataset size, regardless of the data and query distributions. Locality-Sensitive Hashing (LSH) is a well-known methodology fulfilling both requirements, but its current implementations either incur expensive space and query cost, or abandon its theoretical guarantee on the quality of query results. Motivated by this, we improve LSH by proposing an access method called the Locality-Sensitive B-tree (LSB-tree) to enable fast, accurate, high-dimensional NN search in relational databases. The combination of several LSB-trees forms an LSB-forest that has strong quality guarantees, but improves dramatically the efficiency of the previous LSH implementation having the same guarantees. In practice, the LSB-tree itself is also an effective index which consumes linear space, supports efficient updates, and provides accurate query results. In our experiments, the LSB-tree was faster than: (i) iDistance (a famous technique for exact NN search) by two orders of magnitude, and (ii) MedRank (a recent approximate method with nontrivial quality guarantees) by one order of magnitude, and meanwhile returned much better results. As a second step, we extend our LSB technique to solve another classic problem, called Closest Pair (CP) search, in high-dimensional space. The long-term challenge for this problem has been to achieve subquadratic running time at very high dimensionalities, which most of the existing solutions fail to do. We show that, using an LSB-forest, CP search can be accomplished in (worst-case) time significantly lower than the quadratic complexity, yet still ensuring very good quality. In practice, accurate answers can be found using just two LSB-trees, thus giving a substantial reduction in the space and running time. In our experiments, our technique was faster: (i) than distance browsing (a well-known method for solving the problem exactly) by several orders of magnitude, and (ii) than D-shift (an approximate approach with theoretical guarantees in low-dimensional space) by one order of magnitude, and at the same time, outputs better results", "keywords": ["theory", "algorithms", "experimentation", "locality-sensitive hashing", "nearest neighbor search", "closest pair search"]} {"id": "kp20k_training_999", "title": "a performance evaluation of a coverage compensation based algorithm for wireless sensor networks", "abstract": "In recent years, coverage has been widely investigated as one of the fundamental quality measures of wireless sensor networks. In order to maintain coverage while saving network energy, algorithms have been developed to keep a minimum cover set of sensors working and turn off the redundant sensors.
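A centralized baseline for the minimum-cover-set idea above is the classic greedy set-cover heuristic: repeatedly activate the sensor that covers the most still-uncovered targets. A minimal sketch; this is a generic heuristic for illustration, not the coverage-compensation algorithm or OCOPS discussed in this record.

```python
def greedy_cover(sensors, targets):
    """sensors: dict name -> set of target ids the sensor covers."""
    uncovered, active = set(targets), []
    while uncovered:
        best = max(sensors, key=lambda s: len(sensors[s] & uncovered))
        gained = sensors[best] & uncovered
        if not gained:
            break                     # remaining targets are uncoverable
        active.append(best)
        uncovered -= gained
    return active                     # every other sensor can be turned off

cover = greedy_cover({"s1": {1, 2}, "s2": {2, 3}, "s3": {3}}, {1, 2, 3})
```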
Generally, centralized algorithms can give a better result than distributed algorithms in terms of the number of active sensors. However, the heavy computation requirements and the message overhead for collecting geographical location data keep centralized algorithms out of most distributed scenarios. In this article, based on the idea of coverage compensation, a distributed node partition algorithm for random deployments is presented to generate a minimum cover set by using the optimal node distributions created by centralized algorithms such as GA. A genetic algorithm for coverage is also proposed to demonstrate how an optimal coverage node distribution created by GA can be used in a distributed scenario. Our algorithms are simulated on JGAP and NS2. The simulation results show that our partition algorithm based on coverage compensation can achieve the same performance as OCOPS in terms of coverage and the number of active sensors while using fewer control messages", "keywords": ["genetic algorithm", "coverage compensation", "node partition", "wireless sensor network", "coverage"]} {"id": "kp20k_training_1000", "title": "Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning", "abstract": "Despite several attempts, automated detection of microaneurysms (MAs) from digital fundus images still remains an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs from an image, and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier which can detect true MAs. The developed system is built using only a few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on the Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the-art techniques as well as the applicability of the proposed features to analyze fundus images", "keywords": ["microaneurysms", "diabetic retinopathy", "fundus image", "semi-supervised learning", "blobs", "scale-space"]} {"id": "kp20k_training_1001", "title": "some complexity questions related to distributive computing (preliminary report)", "abstract": "Let M = {0, 1, 2, ..., m − 1}, N = {0, 1, 2, ..., n − 1}, and f : M × N → {0, 1} a Boolean-valued function. We will be interested in the following problem and its related questions. Let i ∈ M, j ∈ N be integers known only to two persons P1 and P2, respectively. For P1 and P2 to determine cooperatively the value f(i, j), they send information to each other alternately, one bit at a time, according to some algorithm. The quantity of interest, which measures the information exchange necessary for computing f, is the minimum number of bits exchanged in any algorithm. For example, if f(i, j) = (i + j) mod 2, then 1 bit of information (conveying whether i is odd) sent from P1 to P2 will enable P2 to determine f(i, j), and this is clearly the best possible.
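The parity example above is easy to check exhaustively: P1 sends one bit (the parity of i) and P2 finishes the computation. A tiny verification sketch:

```python
def p1_message(i):
    return i % 2            # the single transmitted bit: parity of i

def p2_compute(j, bit):
    return (bit + j) % 2    # equals (i + j) mod 2

# Exhaustive check over a small domain confirms the one-bit protocol.
assert all(p2_compute(j, p1_message(i)) == (i + j) % 2
           for i in range(16) for j in range(16))
```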
The above problem is a variation of a model of Abelson [1] concerning information transfer in distributive computations", "keywords": ["measurement", "value", "timing", "complexity", "model", "examples", "algorithm", "informal", "variation", "functional", "computation"]} {"id": "kp20k_training_1002", "title": "Towards flattenable mesh surfaces", "abstract": "In many industries, products are constructed from assembled surface patches in R^3, where each patch is expected to have an isometric map to a corresponding region in R^2. The widely investigated developable surfaces in differential geometry show this property. However, how to model a piecewise-linear surface with this characteristic is still under research. To distinguish them from continuous developable surfaces, we name them flattenable mesh surfaces, since a polygonal mesh has the isometric mapping property if it can be flattened into a two-dimensional sheet without stretching. In this paper, a novel flattenable mesh surface (Flattenable Laplacian mesh) is introduced and the relevant modelling tool is formulated. Moreover, for a given triangular mesh which is almost flattenable, a local perturbation approach is developed to improve its flattenability. Interference between the meshes under processing and their nearby objects is prevented in this local flattenable perturbation. Both the computation of Flattenable Laplacian meshes and the flattenable perturbation are based on constrained optimization techniques", "keywords": ["flattenable", "freeform mesh surfaces", "nonlinear subdivision", "geometry processing", "constrained optimization"]} {"id": "kp20k_training_1003", "title": "Identification of critical points for the design and synthesis of flexible processes", "abstract": "Optimization problems for the design and synthesis of flexible chemical processes are often associated with highly discretized models. The ultimate goal of this work is to significantly reduce the set of uncertain parameter points used in these problems. To accomplish this task, an approach was developed for identifying the minimum set of critical points needed for flexible design. Critical points in this work represent those values of uncertain parameters that determine the optimal overdesign of a process, so that feasible operation is assured within the specified domain of uncertain parameters. The proposed approach identifies critical values of uncertain parameters a priori by the separate maximization of each design variable, together with simultaneous optimization of the economic objective function. During this procedure, uncertain parameters are transformed into continuous variables. Three alternative methods are proposed within this approach: the formulation based on Karush-Kuhn-Tucker (KKT) optimality conditions, the iterative two-level method, and the approximate one-level method. The identified critical points are then used for the discretization of infinite uncertain problems, in order to obtain a design with the optimum objective function and a flexibility index of unity. All three methods can identify vertex or even nonvertex critical points, whose total number is less than or equal to the number of design variables, which represents a significant reduction in the problem's dimensionality. Some examples are presented illustrating the applicability and efficiency of the proposed approach, as well as the role of the critical points in the optimization of design problems under uncertainty.
", "keywords": ["flexibility", "design", "synthesis", "process", "uncertain", "vertex", "nonvertex", "critical point", "two-level"]} {"id": "kp20k_training_1004", "title": "Synthesis of autosymmetric functions in a new three-level form", "abstract": "Autosymmetric functions exhibit a special type of regularity that can speed-up the minimization process. Based on this autosymmetry, we propose a three level form of logic synthesis, called ORAX (EXOR-AND-OR), to be compared with the standard minimal SOP (Sum of Products) form. First we provide a fast ORAX minimization algorithm for autosymmetric functions. The ORAX network for a function f has a first level of at most 2(n - k) EXOR gates, followed by the AND-OR levels, where n is the number of input variables and k is the \"autosymmetry degree\" of f. In general a minimal ORAX form has smaller size than a standard minimal SOP form for the same function. We show how the gain in area of ORAX over SOP can be measured without explicitly generating the latter. If preferred, a SOP expression can be directly derived from the corresponding ORAX. A set of experimental results confirms that the ORAX form is generally more compact than the SOP form, and its synthesis is much faster than classical three-level logic minimization. Indeed ORAX and SOP minimization times are often comparable, and in some cases ORAX synthesis is even faster", "keywords": ["autosymmetry", "exor factor", "sop form", "orax form", "three-level synthesis", "logical design"]} {"id": "kp20k_training_1005", "title": "A new class of multi-stable neural networks: Stability analysis and learning process", "abstract": "Recently, multi-stable Neural Networks (NN) with exponential number of attractors have been presented and analyzed theoretically; however, the learning process of the parameters of these systems while considering stability conditions and specifications of real world problems has not been studied. In this paper, a new class of multi-stable NNs using sinusoidal dynamics with exponential number of attractors is introduced. The sufficient conditions for multi-stability of the proposed system are posed using Lyapunov theorem. In comparison to the other methods in this class of multi-stable NNs, the proposed method is used as a classifier by applying a learning process with respect to the topological information of data and conditions of Lyapunov multi-stability. The proposed NN is applied on both synthetic and real world datasets with an accuracy comparable to classical classifiers", "keywords": ["multi-stable neural network", "exponential number of attractors", "lyapunov stability", "classification", "sinusoidal dynamic"]} {"id": "kp20k_training_1006", "title": "an effective approach to entity resolution problem using quasi-clique and its application to digital libraries", "abstract": "We study how to resolve entities that contain a group of related elements in them (e.g., an author entity with a list of citations or an intermediate result by GROUP BY SQL query). Such entities, named as grouped-entities , frequently occur in many applications. 
By exploiting contextual information mined from the group of elements per entity, in addition to syntactic similarity, we show that our approach, Quasi-Clique, improves precision and recall by up to 91% when used together with a variety of existing entity resolution solutions, but never worsens them", "keywords": ["entity resolution", "name disambiguation", "graph partition"]} {"id": "kp20k_training_1007", "title": "A parallel shortest path algorithm based on graph-partitioning and iterative correcting", "abstract": "In this paper, we focus on satisfying the actual demands of quickly finding shortest paths over real road networks in an intelligent transportation system. A parallel shortest path algorithm based on graph partitioning and iterative correcting is proposed. After evaluating the algorithm using three real road networks, we conclude that our graph-partitioning and iterative correcting based parallel algorithm has good performance. In addition, we carry out the evaluation on two hardware platforms: the new parallel algorithm achieves more than a 15-fold speedup on 16 processes on an IBM cluster (16 cores, 4 nodes), and about a 20-fold speedup on 16 processes on a Dawning 5000A server (16 cores, 1 node)", "keywords": ["parallel shortest path algorithm", "intelligent transportation", "parallel computing", "graph partitioning"]} {"id": "kp20k_training_1008", "title": "teaching page replacement algorithms with a java-based vm simulator", "abstract": "Computer system courses have long benefited from simulators in conveying important concepts to students. We have modified the Java source code of the MOSS virtual memory simulator to allow users to easily switch between different page replacement algorithms, including FIFO, LRU, and Optimal. The simulator clearly demonstrates the behavior of the page replacement algorithms in a virtual memory system, and provides a convenient way to illustrate page faults and their corresponding page fault costs. Equipped with a GUI for control and page table visualization, it allows the student to see visually how page tables operate and which pages the replacement algorithms evict in case of a page fault. Moreover, class projects may be assigned requiring operating system students to code new page replacement algorithms that they want to simulate and integrate into the MOSS VM simulator code files, thus enhancing the students' Java coding skills. By running various simulations, students can collect page replacement statistics and thus compare the performance of various replacement algorithms", "keywords": ["simulation", "moss virtual memory simulator", "page replacement"]} {"id": "kp20k_training_1009", "title": "Finite element modeling of multi-pass welding and shaped metal deposition processes", "abstract": "This paper describes the formulation adopted for the numerical simulation of the shaped metal deposition (SMD) process and the experimental work carried out at ITP Industry to calibrate and validate the proposed model. The SMD process is a novel manufacturing technology, similar to the multi-pass welding used for building features such as lugs and flanges on fabricated components (see Fig. 1a and b). A fully coupled thermo-mechanical solution is adopted, including phase-change phenomena defined in terms of both latent heat release and shrinkage effects.
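One standard way to model the latent heat release mentioned above is the apparent (effective) heat capacity method, which smears the latent heat over the solidus-liquidus interval. A minimal sketch; the numbers below are placeholders of plausible magnitude, not calibrated alloy 718 data, and the paper's constitutive model is far richer.

```python
def apparent_heat_capacity(T, cp=600.0, latent=2.1e5,
                           T_sol=1533.0, T_liq=1609.0):
    """Effective heat capacity (J/kg.K) with latent heat smeared over
    the mushy zone [T_sol, T_liq]; temperatures in kelvin.
    All values are illustrative placeholders, not material data."""
    if T_sol <= T <= T_liq:
        return cp + latent / (T_liq - T_sol)
    return cp
```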
Temperature evolution as well as residual stresses and distortions, due to the successive welding layers deposited, are accurately simulated by coupling the heat transfer and the mechanical analyses. The material behavior is characterized by a thermo-elasto-viscoplastic constitutive model coupled with a metallurgical model. Nickel super-alloy 718 is the target material of this work. Both heat convection and heat radiation models are introduced to dissipate heat through the boundaries of the component. In-house coupled FE software is used for the numerical simulation, and an ad-hoc activation methodology is formulated to simulate the deposition of the different layers of filler material. Difficulties and simplifying hypotheses are discussed. Thermo-mechanical results are presented in terms of both temperature evolution and distortions, and compared with the experimental data obtained at the SMD laboratory of ITP", "keywords": ["shaped metal deposition process", "multi-pass welding", "hot-cracking", "thermo-mechanical analysis", "finite element method"]} {"id": "kp20k_training_1010", "title": "A K-nearest neighbours method based on imprecise probabilities", "abstract": "K-nearest neighbours algorithms are among the most popular existing classification methods, due to their simplicity and good performance. Over the years, several extensions of the initial method have been proposed. In this paper, we propose a K-nearest neighbours approach that uses the theory of imprecise probabilities, and more specifically lower previsions. We show that the proposed approach has several assets: it can handle uncertain data in a very generic way, and decision rules developed within this theory allow us to deal with conflicting information between neighbours or with the absence of a close neighbour to the instance to classify. We show that the results of the basic k-NN and weighted k-NN methods can be retrieved by the proposed approach. We end with some experiments on classical data sets", "keywords": ["classification", "lower prevision", "nearest neighbours", "uncertain data"]} {"id": "kp20k_training_1011", "title": "Optimizing distortion for real-time data gathering in randomly deployed sensor networks", "abstract": "In several wireless sensor network applications, it is required to perform real-time reconstruction of the data field being sensed by the network. This task is generally carried out at a central location, e.g. the sink node, using a continuous data gathering phase and relying on the known correlation properties of the underlying data field. Estimating the overall spatial and temporal distortion in the reconstructed field is an important step toward deciding the number of sensors to be deployed and the data collection algorithm to be used. However, estimating distortion in arbitrary networks is a challenging task. Existing work has focused on regular network deployments such as one- and two-dimensional grids. Such deployments are deemed infeasible in a realistic environment. In this paper, we consider one- and two-dimensional random networks. For analysis purposes, we assume that the nodes are randomly deployed following a Poisson distribution. We determine the total distortion function given the correlation coefficients of the field while assuming a simple data gathering protocol. Based on this, we also determine the optimal number of nodes to be deployed in the field that will minimize distortion.
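The flavor of such a distortion-versus-density analysis can be reproduced by Monte Carlo: deploy a Poisson number of nodes in a unit square, reconstruct a random query point from its nearest sensor, and score the error through an assumed correlation model. The exponential correlation exp(-d/theta) below is an assumption for illustration, not the paper's model.

```python
import numpy as np

def expected_distortion(lam, theta=0.2, trials=2000, seed=0):
    """MC estimate of E[2(1 - exp(-d/theta))], where d is the distance
    from a random query point to its nearest sensor; Poisson(lam) nodes
    are dropped uniformly in the unit square."""
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(trials):
        n = rng.poisson(lam)
        if n == 0:
            acc += 2.0                 # no sensors: maximal distortion
            continue
        pts = rng.random((n, 2))
        q = rng.random(2)
        d = np.min(np.linalg.norm(pts - q, axis=1))
        acc += 2.0 * (1.0 - np.exp(-d / theta))
    return acc / trials

# Distortion falls as the deployment density grows.
print(expected_distortion(10), expected_distortion(100))
```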
", "keywords": ["correlated data fields", "real time data gathering", "distortion analysis", "wireless sensor networks"]} {"id": "kp20k_training_1012", "title": "Variable structure neural networks for adaptive control of nonlinear systems using the stochastic approximation", "abstract": "This paper is concerned with the adaptive control of continuous-time nonlinear dynamical systems using neural networks. Referred to as a variable structure neural network, a novel neural network architecture, is proposed and shown to be useful in approximating the unknown nonlinearities of dynamical systems. In the variable structure neural network, the number of basis functions can be either increased or decreased with time according specified design strategies so that the network will not overfit or underfit the data set. Based on the Gaussian radial basis function (GBRF) variable neural network, an adaptive control scheme is presented. The location of the centers and the determination of the widths of the GBRFs are analysed using a new method inspired from the adaptive diffuse element method combined with a pruning algorithm. In the standard problem of a feedback based control, the cost to be minimized is a function of the output derivative. When the cost function depends on the output error, the gradient method cannot be applied to adjust the neural network parameters. In this case, the stochastic approximation approach allows the computation of the cost function derivatives. The developed weight adaptive laws use a stochastic approximation algorithm. This algorithm consists of the use of the KieferWolfowitz method", "keywords": ["adaptive control", "variable structure neural network", "radial basis functions", "stochastic approximation"]} {"id": "kp20k_training_1013", "title": "Evaluation of two finite element formulations for a rapid 3D stress analysis of sandwich structures", "abstract": "For efficiently simulating the impact behavior of sandwich structures made from composite face sheets and a lightweight core a rapid and accurate 3D stress analysis is essential. For that reason, a three-layered finite element formulation based on plane stress assumptions was recently developed by Krger et al. [Krger, WA, Rolfes R, Rohwer K. A three-layered sandwich element with improved transverse shear stiffness and stress based on FSDT. Comput Struct, submitted for publication]. It has turned out, however, that under concentrated out-of-plane loads this element formulation lacks appropriate accuracy of stress results. Therefore, an improved finite element formulation is developed, which accounts for the full 3D stress state. In a post-processing routine, the transverse stresses are improved by using the Extended 2D Method, which was developed by Rolfes and Rohwer [Rolfes R, Rohwer K. Improved transverse shear stresses in composite finite elements based on first order shear deformation theory. Int J Numer Meth Eng 1997;40:5160] and extended to a three-layered sandwich structure by Krger et al. Both the finite element formulation by Krger et al. and the new formulation presented in the present article use pure displacement approaches and require only C0-continuity conditions, which simplifies integration into existing FE codes and allows combined application with other finite elements. 
Two examples demonstrate the accuracy and applicability of the two elements", "keywords": ["sandwich", "composite", "layered shell element", "3d stress analysis"]} {"id": "kp20k_training_1014", "title": "Low-Complexity Load Balancing with a Self-Organized Intelligent Distributed Antenna System", "abstract": "A high call blocking rate is a consequence of an inefficient utilization of system resources, which is often caused by a load imbalance in the network. Load imbalances are common in wireless networks with a large number of cellular users. This paper investigates a load-balancing scheme for mobile networks that optimizes cellular performance under the constraints of physical resource limits and users' quality of service demands. In order to efficiently utilize the system resources, an intelligent distributed antenna system (IDAS) fed by multiple base transceiver stations (BTSs) has the ability to distribute the system resources over a given geographic area. To enable load balancing among distributed antenna modules, we dynamically allocate the remote antenna modules to the BTSs using an intelligent algorithm. A self-optimizing network for an IDAS is formulated as an integer-based, linearly constrained optimization problem, which tries to balance the load among the BTSs. A discrete particle swarm optimization (DPSO) algorithm, as an evolutionary algorithm, is proposed to solve the optimization problem. The computational results of the DPSO algorithm demonstrate optimum performance for small-scale networks and near-optimum performance for large-scale networks. The DPSO algorithm is faster, with marginally less complexity, than an exhaustive search algorithm", "keywords": ["distributed antenna system", "load balancing", "self-optimization network", "evolutionary algorithms"]} {"id": "kp20k_training_1015", "title": "Self-assessed changes in mental health and employment status as a result of unemployment training", "abstract": "The main question addressed in this article is: What factors in an unemployment programme serve both the individual and society? Our research focuses on background variables and process variables and how these can be assumed to affect certain dependent variables in unemployment training. The current focus is on the dependent variable subjective assessment of the effect of the training on mental health, together with the more objective dependent variable of employment status after training. Self-confidence, well-being, faith in the future, level of initiative and personal development have been used as indicators of self-assessed mental health. Data were collected from an unemployment training programme in Sweden and the variables combined to create a hypothetical model. The model was statistically tested and then modified with the aid of LISREL statistics, which helps to adjust the model to statistical acceptance. The findings show that the salient factors directly related to the subjective assessment of the effect of training on mental health are gender, attitude to skills development, perceived training requirements and formal educational background. The latter relationship was negative. Of indirect importance are the level of commitment of the teacher, the satisfaction of the trainee with the process, and the level of control. The duration of previous unemployment was the only independent variable that directly affected employment status after the training, and this was in the negative direction.
Of indirect importance for this dependent variable were the training requirement, satisfaction with the process, the trainee's own level of control and attitude to skills development", "keywords": ["unemployment training", "mental health", "subjective assessment", "employment status"]} {"id": "kp20k_training_1016", "title": "Sliding mode control of quantized systems against bounded disturbances", "abstract": "This paper investigates the sliding mode control problem of quantized systems with simultaneous input and output disturbances. In a network environment, the output measurements are supposed to be quantized with a logarithmic strategy before being transmitted over the digital channels. The main difficulties in this design are as follows: (1) there exist input/output disturbances and a state time delay in the plant under consideration, such that model discretization is difficult to implement, and the design therefore has to be carried out in the continuous-time domain; (2) the quantized signals (piecewise constants) cannot be used to synthesize a continuous-time sliding mode surface; (3) traditional observer techniques are not effective in handling output disturbances. In this paper, a filtering-based technique is proposed to solve these difficulties, based on which a sliding-mode observer-based control scheme is developed to stabilize the resulting closed-loop system. Finally, the effectiveness of the proposed methodology is illustrated via a numerical example", "keywords": ["networked control system", "signal quantization", "sliding mode control", "sliding mode observer"]} {"id": "kp20k_training_1017", "title": "Synchronization of Stochastic Fuzzy Cellular Neural Networks with Leakage Delay Based on Adaptive Control", "abstract": "This paper considers the synchronization problem of coupled chaotic fuzzy cellular neural networks with stochastic noise perturbation and time delay in the leakage term by using adaptive feedback control. Motivated by the achievements on both the stability of neural networks with time delay in the leakage term and the synchronization of coupled chaotic fuzzy cellular neural networks with stochastic perturbation, Lyapunov stability theory combined with stochastic analysis approaches is employed to derive sufficient criteria ensuring that the coupled chaotic fuzzy cellular neural networks are completely synchronized. This paper also presents an illustrative example and uses its simulation results to show the feasibility and effectiveness of the proposed scheme", "keywords": ["synchronization", "fuzzy cellular neural networks", "stochastic perturbation", "adaptive control", "leakage delay"]} {"id": "kp20k_training_1018", "title": "Designing discriminative spatial filter vectors in motor imagery brain-computer interface", "abstract": "The problem of the volume conduction effect in electroencephalography is considered one of the challenging issues in the brain-computer interface (BCI) community. In this article, we propose a novel method of designing a class-discriminative spatial filter, assuming that a combination of spatial pattern vectors, irrespective of the eigenvalues of the common spatial pattern (CSP), can produce better performance in terms of classification accuracy. We select discriminative spatial filter vectors that determine features in a pairwise manner, that is, eigenvectors of the K largest and the K smallest eigenvalues.
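The logarithmic quantization strategy named in record kp20k_training_1016 is commonly modeled by the sector-bounded logarithmic quantizer with levels spaced geometrically by a density parameter rho. The sketch below implements that textbook form (not necessarily the paper's exact quantizer); u0 and rho are illustrative parameters.

```python
import numpy as np

def log_quantize(v, u0=1.0, rho=0.8):
    """Sector-bounded logarithmic quantizer with levels +/- u0 * rho**i.

    Guarantees |q(v) - v| <= delta * |v| with delta = (1 - rho) / (1 + rho),
    the standard sector bound used in quantized feedback analysis.
    """
    if v == 0.0:
        return 0.0
    delta = (1.0 - rho) / (1.0 + rho)
    # Unique integer i with u_i / (1 + delta) < |v| <= u_i / (1 - delta):
    i = int(np.floor(np.log((1.0 - delta) * abs(v) / u0) / np.log(rho)))
    return float(np.sign(v)) * u0 * rho ** i

for v in [0.03, 0.5, 1.0, -2.4]:
    q = log_quantize(v)
    print(f"v={v:+.3f}  q(v)={q:+.4f}  rel.err={(q - v) / v:+.4f}")
```

Because the level spacing is relative rather than absolute, the quantization error stays proportional to the signal magnitude, which is what makes this strategy compatible with stability analysis over digital channels.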
Although the pairing of eigenvectors of the K largest and the K smallest eigenvalues helps extract discriminative features, we believe that a different set of eigenvector pairs is more appropriate for extracting class-discriminative features. In our experimental results on the publicly available dataset of BCI Competition IV, we show that the proposed method outperformed the conventional CSP methods and a filter-bank CSP", "keywords": ["brain-computer interface", "common spatial pattern", "feature selection", "motor imagery classification", "electroencephalography"]} {"id": "kp20k_training_1019", "title": "Underspecification for a simple process algebra of recursive processes", "abstract": "This paper deals with underspecification for process algebras, which is relevant in early design stages. We consider a form of underspecification that arises from a situation where, at a certain design stage, the decision between several options of system behaviour is to be postponed until more information is available. We follow an approach of Veglioni and De Nicola (Lecture Notes in Computer Science 1466 (1998) 179), who propose to interpret the choice operator + of a simple class of finite process terms as underspecification whenever it combines two processes that have some initial action in common, as e.g. in (a.P + b.Q) + (a.R + c.S). In particular, we consider recursive processes and discuss several extensions", "keywords": ["specification", "underspecification", "semantics", "refinement"]} {"id": "kp20k_training_1020", "title": "pervasive games for education", "abstract": "The paper analyzes how pervasive games can be used for an efficient transfer of knowledge at universities. Pervasive games provide an innovative game model that combines the real world with the virtual world. In this instance, the game concept is used in conjunction with mobile phones as a means of interaction and a communication enabler to support learning. The paper presents the design of a pervasive learning game, which was compared with a conventional case study approach in an empirical study with 100 students with respect to learning efficiency and motivation to learn. The empirical results reveal that the pervasive game leads to higher energetic activation, more positive emotions, more positive attitudes towards learning content and more efficient knowledge transfer than the conventional case study approach", "keywords": ["mobile phone", "learning", "empirical study", "education", "pervasive game"]} {"id": "kp20k_training_1021", "title": "Optimal numerical parameterization of discontinuous Galerkin method applied to wave propagation problems", "abstract": "This paper deals with the high-order discontinuous Galerkin (DG) method for solving wave propagation problems. First, we develop a one-dimensional DG scheme and numerically compute dissipation and dispersion errors for various polynomial orders. An optimal combination of a time-stepping scheme with the high-order DG spatial scheme is presented. It is shown that using a time-stepping scheme with the same formal accuracy as the DG scheme is too expensive for the range of wave numbers that is relevant for practical applications. An efficient implementation of a high-order DG method in three dimensions is presented. Using 1D convergence results, we further show how to choose elementary polynomial orders adequately in order to equi-distribute the discretization error a priori. We also show a straightforward way to allow variable polynomial orders in a DG scheme.
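The conventional CSP baseline that record kp20k_training_1018 (completed above) builds on selects spatial filters in pairs: eigenvectors of the K largest and K smallest generalized eigenvalues. A minimal numpy/scipy sketch of that baseline pairwise selection, plus the standard log-variance features, is given below; trial shapes and K are illustrative, and the paper's proposed alternative pairing is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, K=3):
    """Conventional CSP. X1, X2 are (trials, channels, samples) arrays per class.

    Returns 2K spatial filters: eigenvectors of the K smallest and K largest
    generalized eigenvalues of (C1, C1 + C2), selected in a pairwise manner.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w, eigenvalues ascending.
    vals, vecs = eigh(C1, C1 + C2)
    # Pairwise selection: K filters from each end of the spectrum.
    idx = np.concatenate([np.arange(K), np.arange(len(vals) - K, len(vals))])
    return vecs[:, idx]  # (channels, 2K) filter matrix

def log_variance_features(X, W):
    """Standard CSP features: normalized log-variance of filtered trials."""
    Z = np.einsum('ck,tcs->tks', W, X)   # apply filters channel-wise per trial
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Toy usage on random data with 22 channels and 250 samples per trial.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(40, 22, 250)), rng.normal(size=(40, 22, 250))
W = csp_filters(X1, X2, K=3)
print(log_variance_features(X1, W).shape)  # (40, 6)
```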
Finally, we present some numerical examples from the field of aero-acoustics", "keywords": ["discontinuous galerkin", "aero-acoustics", "variable p", "runge-kutta", "dispersion analysis"]}
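The dissipation/dispersion computation described in record kp20k_training_1021 can be illustrated by Fourier-symbol analysis of a semi-discrete scheme. The sketch below is a deliberately simplified stand-in: it uses first-order upwind finite differences instead of a high-order DG discretization, purely to show how the real part of the symbol measures dissipation and the imaginary part measures dispersion; it is not the paper's DG analysis.

```python
import numpy as np

# Semi-discrete 1D advection u_t + a u_x = 0 with first-order upwinding:
#     du_j/dt = -(a/h) * (u_j - u_{j-1}).
# A Fourier mode u_j = exp(i*j*theta), with theta = k*h, turns the right-hand
# side into lambda(theta) * u_j with symbol
#     lambda(theta) = -(a/h) * (1 - exp(-1j*theta)),
# while the exact operator -a d/dx has symbol -1j*a*theta/h.
a, h = 1.0, 1.0
theta = np.array([0.1, 0.5, 1.0, 2.0])   # sample nondimensional wavenumbers
lam = -(a / h) * (1.0 - np.exp(-1j * theta))
exact = -1j * a * theta / h

dissipation = lam.real                 # exact transport has zero real part
dispersion = lam.imag - exact.imag     # phase (dispersion) error

for th, d1, d2 in zip(theta, dissipation, dispersion):
    print(f"theta={th:.1f}: dissipation={d1:+.4f}, dispersion error={d2:+.4f}")
```

Both errors vanish as theta goes to zero and grow toward the grid cutoff, which is exactly the behavior such an analysis quantifies when choosing polynomial orders and time-stepping schemes.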