Columns: id (string, 7 chars), title (string, 3 to 578 chars), abstract (string, 0 to 16.7k chars), keyphrases (sequence), prmu (sequence)
2iuKxXZ
On the depth distribution of linear codes
The depth distribution of a linear code was recently introduced by Etzion. In this correspondence, a number of basic and interesting properties for the depth of finite words and the depth distribution of linear codes are obtained. In addition, we study the enumeration problem of counting the number of linear subcodes with the prescribed depth constraints, and derive some explicit and interesting enumeration formulas. Furthermore, we determine the depth distribution of the Reed-Muller code RM(m, r). Finally, we show that there are exactly nine depth-equivalence classes for the ternary [11, 6, 5] Golay codes.
[ "depth", "depth distribution", "linear codes", "derivative", "reed-muller codes", "depth-equivalence classes", "ternary golay code" ]
[ "P", "P", "P", "P", "P", "P", "R" ]
4H9xUQm
Are we there yet?
Statistical approaches to Artificial Intelligence are behind most success stories of the field in the past decade. The idea of generating non-trivial behaviour by analysing vast amounts of data has enabled recommendation systems, search engines, spam filters, optical character recognition, machine translation and speech recognition, among other things. As we celebrate the spectacular achievements of this line of research, we need to assess its full potential and its limitations. What are the next steps to take towards machine intelligence?
[ "artificial intelligence", "intelligent behaviour", "cybernetics", "statistical learning theory", "data driven ai", "intelligent systems", "pattern analysis", "viterbis algorithm", "history of artificial intelligence" ]
[ "P", "R", "U", "M", "M", "R", "U", "U", "M" ]
-mp3XQE
The complexity of the matroid-greedoid partition problem
We show that the maximum matroid-greedoid partition problem is NP-hard to approximate to within 1/2 + epsilon for any epsilon > 0, which matches the trivial factor 1/2 approximation algorithm. The main tool in our hardness of approximation result is an extractor code with polynomial rate, alphabet size and list size, together with an efficient algorithm for list-decoding. We show that the recent extractor construction of Guruswami, Umans and Vadhan [V. Guruswami, C. Umans, S.P. Vadhan, Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes, in: IEEE Conference on Computational Complexity, IEEE Computer Society, 2007, pp. 96-108] can be used to obtain a code with these properties. We also show that the parameterized matroid-greedoid partition problem is fixed-parameter tractable. (C) 2008 Elsevier B.V. All rights reserved.
[ "extractor codes", "matroid", "greedoid", "matroid partition problem", "fixed-parameter complexity" ]
[ "P", "U", "U", "M", "R" ]
4a1GDxz
Exploring the ncRNA-ncRNA patterns based on bridging rules
ncRNAs play an important role in the regulation of gene expression. However, many of their functions have not yet been fully discovered. There are complicated relationships between ncRNAs in different categories. Finding these relationships can help identify ncRNA functions and properties. We extend the association rule to represent the relationship between two ncRNAs. Based on this rule, we can infer an ncRNA's function from its interactions with other ncRNAs. We propose two measures to explore the relationships between ncRNAs in different categories. Entropy theory is used to calculate how close two ncRNAs are; association rules are used to represent the interactions between ncRNAs. We use three datasets from miRBase and RNAdb. Two from miRBase are designed for finding relationships between miRNAs; the other, from RNAdb, is designed for relationships among miRNA, snoRNA and piRNA. We evaluate our measures from both biological significance and performance perspectives. All the cross-species patterns regarding miRNA that we found are proven correct using miRNAMap 2.0. In addition, we find novel cross-genome patterns such as (hsa-mir-190b → hsa-mir-153-2). According to the patterns we find, we can (1) infer one ncRNA's function from another with known function and (2) speculate on the functions of both based on their relationship even when we understand neither of them. Our method's merits also include: (1) it is suitable for any ncRNA dataset and (2) it is not sensitive to the parameters.
[ "ncrnas", "bridging rules", "entropy", "mirna", "joint entropy", "mutual information" ]
[ "P", "P", "P", "P", "M", "U" ]
-ASp&v1
Gaussian mixture modelling to detect random walks in capital markets
In this paper, Gaussian mixture modelling is used to detect random walks in capital markets with the Kolmogorov-Smirnov test. The main idea is to use Gaussian mixture modelling to fit asset return distributions and then use the Kolmogorov-Smirnov test to determine the number of components. Several quantities are used to characterize Gaussian mixture models and ascertain whether random walks exist in capital markets. Empirical studies on China securities markets and Forex markets are used to demonstrate the proposed procedure. (C) 2003 Elsevier Ltd. All rights reserved.
[ "gaussian mixture modelling", "the kolmogorov-smirnov test", "asset return distributions", "the random walks hypothesis", "em algorithm" ]
[ "P", "P", "P", "M", "U" ]
-hJ:wmQ
Scientific design rationale
Design rationale should be regarded both as a tool for the practice of design, and as a method to enable the science of design. Design rationale answers questions about why a given design takes the form that it does. Answers to these why questions represent a significant portion of the knowledge generated from design research. This knowledge, along with that from empirical studies of designs in use, contributes to what Simon called the sciences of the artificial. Most research on the nature and use of design rationale has been analytic or theoretical. In this article, we describe an empirical study of the roles that design rationale can play in the conduct of design research. We report results from an interview study with 16 design researchers investigating how they construe and carry out design as research. The results include an integrated framework of the affordances design rationale can contribute to design research. The framework and supporting qualitative data provide insight into how design rationale might be more effectively leveraged as a first-class methodology for research into the creation and use of artifacts.
[ "design rationale", "design research", "affordances", "design research methodology" ]
[ "P", "P", "P", "R" ]
15h1Peu
High flowability monomer resists for thermal nanoimprint lithography
In this paper, we use polymer and thermally curable monomer resists in a full 8 in. wafer thermal nanoimprint lithography process. Using exactly the same imprinting conditions, we observed that a monomer solution provides a much larger resist redistribution than a polymer resist. Imprinting Fresnel zone plates, composed of micro- and nano-meter features, was possible only with the monomer resist. In order to reduce the shrinkage ratio of the monomer resists, acrylate-silsesquioxane materials were synthesised. With a simple diffusion-like model, we could extract a mean free path of 1.1 mm for the monomer resist, while a polymer flows only over distances below 10 µm under the same conditions.
[ "monomer resists", "nanoimprint lithography", "flow properties", "polyhedral silsesquioxane" ]
[ "P", "P", "M", "U" ]
-NSapoQ
Binarized Support Vector Machines
The widely used support vector machine (SVM) method has been shown to yield very good results in supervised classification problems. Other methods, such as classification trees, have become more popular among practitioners than SVM thanks to their interpretability, which is an important issue in data mining. In this work, we propose an SVM-based method that automatically detects the most important predictor variables and the role they play in the classifier. In particular, the proposed method is able to detect those values and intervals that are critical for the classification. The method involves the optimization of a linear programming problem in the spirit of the Lasso method with a large number of decision variables. The numerical experience reported shows that a rather direct use of the standard column generation strategy leads to a classification method that, in terms of classification ability, is competitive against the standard linear SVM and classification trees. Moreover, the proposed method is robust; i.e., it is stable in the presence of outliers and invariant to changes of scale or measurement units of the predictor variables. When the complexity of the classifier is an important issue, a wrapper feature selection method is applied, yielding simpler but still competitive classifiers.
[ "binarization", "support vector machines", "supervised classification", "column generation" ]
[ "P", "P", "P", "P" ]
4vNCmpG
Ambrosio-Tortorelli Segmentation of Stochastic Images: Model Extensions, Theoretical Investigations and Numerical Methods
We discuss an extension of the Ambrosio-Tortorelli approximation of the Mumford-Shah functional for the segmentation of images with uncertain gray values resulting from measurement errors and noise. Our approach yields a reliable precision estimate for the segmentation result, and it allows us to quantify the robustness of edges in noisy images and under gray value uncertainty. We develop an ansatz space for such images by identifying gray values with random variables. The use of these stochastic images in the minimization of energies of Ambrosio-Tortorelli type leads to stochastic partial differential equations for a stochastic smoothed version of the original image and a stochastic phase field for the edge set. For the discretization of these equations we utilize the generalized polynomial chaos expansion and the generalized spectral decomposition (GSD) method. In contrast to the simple classical sampling technique, this approach allows for an efficient determination of the stochastic properties of the output image and edge set by computations on an optimally small set of random variables. Also, we use an adaptive grid approach for the spatial dimensions to further improve the performance, and we extend an edge linking method for the classical Ambrosio-Tortorelli model for use with our stochastic model. The performance of the method is demonstrated on artificial data and a data set from a digital camera as well as real medical ultrasound data. A comparison of the intrusive GSD discretization with a stochastic collocation and a Monte Carlo sampling is shown.
[ "segmentation", "stochastic images", "uncertainty", "stochastic partial differential equations", "polynomial chaos", "generalized spectral decomposition", "adaptive grid", "edge linking", "ambrosio-tortorelli model", "image processing" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
Qw5W3:D
A provably convergent heuristic for stochastic bicriteria integer programming
We propose a general-purpose algorithm APS (Adaptive Pareto-Sampling) for determining the set of Pareto-optimal solutions of bicriteria combinatorial optimization (CO) problems under uncertainty, where the objective functions are expectations of random variables depending on a decision from a finite feasible set. APS is iterative and population-based and combines random sampling with the solution of corresponding deterministic bicriteria CO problem instances. Special attention is given to the case where the corresponding deterministic bicriteria CO problem can be formulated as a bicriteria integer linear program (ILP). In this case, well-known solution techniques such as the algorithm by Chalmet et al. can be applied for solving the deterministic subproblem. If the execution of APS is terminated after a given number of iterations, only an approximate solution is obtained in general, such that APS must be considered a metaheuristic. Nevertheless, a strict mathematical result is shown that ensures, under rather mild conditions, convergence of the current solution set to the set of Pareto-optimal solutions. A modification replacing or supporting the bicriteria ILP solver by some metaheuristic for multicriteria CO problems is discussed. As an illustration, we outline the application of the method to stochastic bicriteria knapsack problems by specializing the general framework to this particular case and by providing computational examples.
[ "integer programming", "combinatorial optimization", "metaheuristics", "convergence proof", "stochastic optimization" ]
[ "P", "P", "P", "M", "R" ]
-kYGPpK
The impact of a simulation game on operations management education
This study presents a new simulation game and analyzes its impact on operations management education. The proposed simulation was empirically tested by comparing the number of mistakes during the first and second halves of the game. Data were gathered from 100 teams of four or five undergraduate students in business administration, taking their first course in operations management. To assess learning, instead of relying solely on an overall performance measurement, as is usually done in the skill-based learning literature, we analyzed the evolution of different types of mistakes that were made by students in successive rounds of play. Our results show that although simple decision-making skills can be acquired with traditional teaching methods, simulation games are more effective when students have to develop decision-making abilities for managing complex and dynamic situations. (C) 2011 Elsevier Ltd. All rights reserved.
[ "simulations", "interactive learning environment", "applications in operations management", "post-secondary education" ]
[ "P", "M", "M", "M" ]
4cWNVFy
Covering a set of points in a plane using two parallel rectangles
In this paper we consider the problem of finding two parallel rectangles in arbitrary orientation for covering a given set of n points in a plane, such that the area of the larger rectangle is minimized. We propose an algorithm that solves the problem in O(n^3) time using O(n^2) space. Without altering the complexity, our approach can be used to solve another optimization problem, namely, minimizing the sum of the areas of two arbitrarily oriented parallel rectangles covering a given set of points in a plane. (C) 2009 Elsevier B.V. All rights reserved.
[ "covering", "rectangles", "algorithms", "optimization", "computational geometry" ]
[ "P", "P", "P", "P", "U" ]
4gC-K-R
Investigating the extreme programming system - An empirical study
In this paper we discuss our empirical study about the advantages and difficulties 15 Greek software companies experienced applying Extreme Programming (XP) as a holistic system in software development. Based on a generic XP system including feedback influences and using a cause-effect model including social-technical affecting factors, as our research tool, the study statistically evaluates the application of XP practices in the software companies being studied. Data were collected from 30 managers and developers, using the sample survey technique with questionnaires and interviews, in a time period of six months. Practices were analysed individually, using Descriptive Statistics (DS), and as a whole by building up different models using stepwise Discriminant Analysis (DA). The results have shown that companies, facing various problems with common code ownership, on-site customer, 40-hour week and metaphor, prefer to develop their own tailored XP method and way of working-practices that met their requirements. Pair programming and test-driven development were found to be the most significant success factors. Interactions and hidden dependencies for the majority of the practices as well as communication and synergy between skilled personnel were found to be other significant success factors. The contribution of this preliminary research work is to provide some evidence that may assist companies in evaluating whether the XP system as a holistic framework would suit their current situation.
[ "extreme programming system", "empirical study", "cause-effect model", "stepwise discriminant analysis", "common code ownership", "on-site customer", "metaphor", "pair programming", "test-driven development", "agile methods", "feedback model", "developer perception", "manager perception", "planning game", "refactoring", "simple design", "continuous integration", "short release cycles", "40-hour-week", "coding standards" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "M", "R", "M", "M", "U", "U", "U", "U", "U", "U", "M" ]
4L32b&B
An optimal GTS scheduling algorithm for time-sensitive transactions in IEEE 802.15.4 networks
IEEE 802.15.4 is a new enabling standard for low-rate wireless personal area networks and has been widely accepted as a de facto standard for wireless sensor networking. While the primary motivations behind 802.15.4 are low-power and low-cost wireless communications, the standard also supports time- and rate-sensitive applications because of its ability to operate in TDMA access modes. The TDMA mode of operation is supported via the Guaranteed Time Slot (GTS) feature of the standard. In a beacon-enabled network topology, the Personal Area Network (PAN) coordinator reserves and assigns the GTS to applications on a first-come-first-served (FCFS) basis in response to requests from wireless sensor nodes. This fixed FCFS scheduling service offered by the standard may not satisfy the time constraints of time-sensitive transactions with delay deadlines. Such operating scenarios often arise in wireless video surveillance and target detection applications running on sensor networks. In this paper, we design an optimal work-conserving scheduling algorithm for meeting the delay constraints of time-sensitive transactions and show that the proposed algorithm outperforms the existing scheduling model specified in IEEE 802.15.4.
[ "gts", "scheduling", "scheduling", "lr-wpan", "schedulability", "edf" ]
[ "P", "P", "P", "U", "P", "U" ]
GMVLxTy
CONTROLLED DENSE CODING WITH CLUSTER STATE
Two schemes for controlled dense coding with a one-dimensional four-particle cluster state are investigated. In this protocol, the supervisor (Cliff) can control the channel and the average amount of information transmitted from the sender (Alice) to the receiver (Bob) by adjusting the local measurement angle theta. It is shown that the two schemes yield different average amounts of information.
[ "controlled dense coding", "cluster state", "average amount of information", "povm" ]
[ "P", "P", "P", "U" ]
2KiQqk7
Slope stability analysis using the limit equilibrium method and two finite element methods
In this paper, the factors of safety and critical slip surfaces obtained by the limit equilibrium method (LEM) and two finite element methods (the enhanced limit strength method (ELSM) and strength reduction method (SRM)) are compared. Several representative two-dimensional slope examples are analysed. When the associated flow rule is used, the two finite element methods are generally in good agreement, and the LEM yields a slightly lower factor of safety than the two finite element methods do. Moreover, a key condition regarding the stress field is shown to be necessary for ELSM analysis.
[ "lem limit equilibrium method", "srm strength reduction method", "elsm enhanced limit strength method", "fos factor of safety", "srf strength reduction factor", "pso particle swarm optimisation" ]
[ "R", "R", "R", "M", "M", "U" ]
4QEGEa9
Deformation invariant attribute vector for deformable registration of longitudinal brain MR images
This paper presents a novel approach to define deformation invariant attribute vector (DIAV) for each voxel in 3D brain image for the purpose of anatomic correspondence detection. The DIAV method is validated by using synthesized deformation in 3D brain MRI images. Both theoretic analysis and experimental studies demonstrate that the proposed DIAV is invariant to general nonlinear deformation. Moreover, our experimental results show that the DIAV is able to capture rich anatomic information around the voxels and exhibit strong discriminative ability. The DIAV has been integrated into a deformable registration algorithm for longitudinal brain MR images, and the results on both simulated and real brain images are provided to demonstrate the good performance of the proposed registration algorithm based on matching of DIAVs.
[ "deformation invariant attribute vector", "deformable registration", "brain mri", "longitudinal imaging" ]
[ "P", "P", "P", "R" ]
2VaK2pZ
Carbapenem-resistant Enterobacteriaceae: biology, epidemiology, and management
Introduced in the 1980s, carbapenem antibiotics have served as the last line of defense against multidrug-resistant Gram-negative organisms. Over the last decade, carbapenem-resistant Enterobacteriaceae (CRE) have emerged as a significant public health threat. This review summarizes the molecular genetics, natural history, and epidemiology of CRE and discusses approaches to prevention and treatment.
[ "carbapenem-resistant enterobacteriaceae", "molecular genetics", "treatment", "antimicrobial resistance", "carbapenemases", "infection control" ]
[ "P", "P", "P", "U", "U", "U" ]
1vjU4k:
hypergraph-based inductive learning for generating implicit key phrases
This paper presents a novel approach to generating implicit key phrases, which were ignored in previous research. Recent studies prefer to extract key phrases with semi-supervised transductive learning methods, which avoid the problem of training data. In this paper, based on a transductive learning method, we formulate the phrases in the document as a hypergraph and expand the hypergraph to include implicit phrases, which are ranked by an inductive learning approach. The highest ranked phrases are taken as implicit key phrases, and experimental results demonstrate the satisfactory performance of this approach.
[ "hypergraph", "implicit key phrase", "transductive learning", "inductive semi-supervised learning", "key phrase generation" ]
[ "P", "P", "P", "R", "R" ]
1tQ3AEr
Strategic commitment to price to stimulate downstream innovation in a supply chain
It is generally in a firm's interest for its supply chain partners to invest in innovations. To the extent that these innovations either reduce the partners' variable costs or stimulate demand for the end product, they will tend to lead to higher levels of output for all of the firms in the chain. However, in response to the innovations of its partners, a firm may have an incentive to opportunistically increase its own prices. The possibility of such opportunistic behavior creates a hold-up problem that leads supply chain partners to underinvest in innovation. Clearly, this hold-up problem could be eliminated by a pre-commitment to price. However, by making an advance commitment to price, a firm sacrifices an important means of responding to demand uncertainty. In this paper we examine the trade-off that is faced when a firm's channel partner has opportunities to invest in either cost reduction or quality improvement, i.e. demand enhancement. Should it commit to a price in order to encourage innovation, or should it remain flexible in order to respond to demand uncertainty? We discuss several simple wholesale pricing mechanisms with respect to this trade-off.
[ "channel coordination", "channels of distribution", "industrial organization", "cost reducing r&d" ]
[ "M", "M", "U", "M" ]
3-f6wRr
mutation-based software testing using program schemata
Mutation analysis is a powerful technique for assessing the quality of test data used in unit testing software. Unfortunately, current automated mutation analysis systems suffer from severe performance problems. In this paper the principles of mutation analysis are reviewed, current automation approaches are described, and a new method of performing mutation analysis is outlined. Performance improvements of over 300% are reported and other advantages of this new method are highlighted.
[ "mutation", "software testing", "software", "test", "program schemata", "mutation analysis", "analysis", "quality", "data", "unit test", "automation", "systems", "performance", "paper", "method", "fault-based testing" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
4ZNg6pQ
Bamboo: A Data-Centric, Object-Oriented Approach to Many-core Software
Traditional data-oriented programming languages such as dataflow languages and stream languages provide a natural abstraction for parallel programming. In these languages, a developer focuses on the flow of data through the computation, and these systems free the developer from the complexities of low-level, thread-oriented concurrency primitives. This simplification comes at a cost: traditional data-oriented approaches restrict the mutation of state and, in practice, the types of data structures a program can effectively use. Bamboo borrows from work in typestate and software transactions to relax the traditional restrictions of data-oriented programming models and to support mutation of arbitrary data structures. We have implemented a compiler for Bamboo which generates code for the TILEPro64 many-core processor. We have evaluated this implementation on six benchmarks: Tracking, a feature tracking algorithm from computer vision; KMeans, a K-means clustering algorithm; MonteCarlo, a Monte Carlo simulation; FilterBank, a multi-channel filter bank; Fractal, a Mandelbrot set computation; and Series, a Fourier series computation. We found that our compiler generated implementations that obtained speedups ranging from 26.2x to 61.6x when executed on 62 cores.
[ "languages", "algorithms", "many-core programming", "data-centric languages" ]
[ "P", "P", "R", "M" ]
1BV&XvV
Performance optimization problem in speculative prefetching
Speculative prefetching has been proposed to improve the response time of network access. Previous studies in speculative prefetching focus on building and evaluating access models for the purpose of access prediction. This paper investigates a complementary area which has been largely ignored, that of performance modeling. We analyze the performance of a prefetcher that has uncertain knowledge about future accesses. Our performance metric is the improvement in access time, for which we derive a formula in terms of resource parameters (time available and time required for prefetching) and speculative parameters (probabilities for next access). We develop a prefetch algorithm to maximize the improvement in access time. The algorithm is based on finding the best solution to a stretch knapsack problem, using theoretically proven apparatus to reduce the search space. An integration between speculative prefetching and caching is also investigated.
[ "speculative prefetching", "caching" ]
[ "P", "P" ]
:LhuyKP
inspiring collaboration through the use of videoconferencing technology
At the beginning of 2007 the University of Washington opened the Odegaard Videoconference Studio, which allowed groups on campus to communicate with colleagues who were physically in different locations. The opening of this facility inspired all sorts of collaboration on a more frequent basis, as traveling, and more importantly the time and expense involved with traveling, was no longer as necessary in order to have a meeting. Many boundaries to collaboration were removed through the use of different types of technology that allowed for video and audio conferencing, and data and application sharing. This provided a way to share ideas in more detail, make decisions, and receive feedback more quickly, making the overall process more efficient, more personal, and more effective.
[ "videoconferencing", "collaboration technologies" ]
[ "P", "R" ]
4YznaDH
expanders, sorting in rounds and superconcentrators of limited depth
Expanding graphs and superconcentrators are relevant to theoretical computer science in several ways. Here we use finite geometries to construct explicitly highly expanding graphs with essentially the smallest possible number of edges. Our graphs enable us to improve significantly previous results on a parallel sorting problem, by describing an explicit algorithm to sort n elements in k time units using O(n^(α_k)) processors, where, e.g., α_2 = 7/4. Using our graphs we can also construct efficient n-superconcentrators of limited depth. For example, we construct an n-superconcentrator of depth 3 with O(n^(4/3)) edges, better than the previously known results.
[ "sorting", "graph", "relevance", "computer science", "use", "parallel", "algorithm", "timing", "processor", "efficiency", "examples" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P" ]
3-pSTnb
Synchrony and frequency regulation by synaptic delay in networks of self-inhibiting neurons
We show that a pair of mutually coupled self-inhibitory neurons can display stable synchronous oscillations provided only that the delay to the onset of inhibition is sufficiently long. The frequency of these oscillations is determined either entirely by the length of the synaptic delay, or by the synaptic delay and intrinsic time constants. We also show how cells can exhibit transient synchronous oscillations where the length of the transients is determined by the synaptic delay, but where the frequency is largely independent of the delay.
[ "synaptic delay", "inhibition", "synchronous oscillations" ]
[ "P", "P", "P" ]
-NgMVFd
minimizing power dissipation during write operation to register files
This paper presents a power reduction mechanism for the write operation in register files (RegFiles), which adds a conditional charge-sharing structure to the pair of complementary bit-lines in each column of the RegFile. Because the read and write ports for the RegFile are separately implemented, it is possible to avoid pre-charging the bit-line pair for consecutive writes. More precisely, when writing same values to some cells in the same column of the RegFile, it is possible to eliminate energy consumption due to precharging of the bit-line pair. At the same time, when writing opposite values to some cells in the same column of the RegFile, it is possible to reduce energy consumed in charging the bit-line pair thanks to charge-sharing. Motivated by these observations, we modify the bit-line structure of the write ports in the RegFile such that i) we remove per-cycle bit-line pre-charging and ii) we employ conditional data-dependent charge-sharing. Experimental results on a set of SPEC2000INT / MediaBench benchmarks show an average of 61.5% energy savings with 5.1% area overhead and 16.2% increase in write access delay.
[ "power", "write operation", "register file" ]
[ "P", "P", "P" ]
MW8HEWK
A decision support framework for metrics selection in goal-based measurement programs: GQM-DSFMS
Complex GQM-based measurement programs lead to the need for decision support in metric selection. We provide a decision support framework for choosing an optimal set of metrics to maximize measurement goal achievement for a given budget. The framework was evaluated by comparison with expert opinion in a CMMI Level 3 company. The extent to which information needs were addressed under a fixed budget was higher when selecting metrics using the framework.
[ "decision support", "optimization", "software measurement program", "goal based measurement", "goal question metric", "gqm", "prioritization" ]
[ "P", "P", "M", "M", "M", "U", "U" ]
18X7pK2
energy/area/delay trade-offs in the physical design of on-chip segmented bus architecture
The increasing gap between design productivity and chip complexity, and the emerging Systems-On-Chip (SOC) architectural template have led to the wide utilization of reusable Intellectual Property (IP) cores. The physical design implementation of the macro cells (IP blocks or pre-designed blocks) in general needs to find a well balanced solution among chip area, on-chip interconnect energy and critical path delay. We are especially interested in the entire trade-off curve among these three criteria at the floorplanning stage. We show this concept for a real communication scheme based on segmented bus, rather than just an extreme solution. A fast exploration design flow from the memory organization to the final layout is introduced to explore the design space.
[ "energy", "delay", "trade-offs", "physical design", "design", "segmented bus", "architecture", "product", "complexity", "template", "reusability", "intellectual property", "implementation", "macros", "general", "interconnect", "critic", "floorplanning", "concept", "communication", "scheme", "exploration", "flow", "memorialized", "organization", "layout", "design space", "system on-chip" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "M" ]
Ng3SB8H
System integration of a miniature rotorcraft for aerial tele-operation research
This paper describes the development and integration of the systems required for research into human interaction with a tele-operated miniature rotorcraft. Because of the focus on vehicles capable of operating indoors, the size of the vehicle was limited to 35 cm, and therefore the hardware had to be carefully chosen to meet the ensuing size and weight constraints, while providing sufficient flight endurance. The components described in this work include the flight hardware, electronics, sensors, and software necessary to conduct tele-operation experiments. The integration tasks fall into three main areas. First, the paper discusses the choice of rotorcraft platform best suited for indoor operation addressing the issues of size, payload capabilities, and power consumption. The second task was to determine what electronics and sensing could be integrated into a rotorcraft with significant payload limitations. Finally, the third task involved characterizing the various components both individually and as a complete system. The paper concludes with an overview of ongoing tele-operation research performed with the embedded rotorcraft platform. (C) 2010 Elsevier Ltd. All rights reserved.
[ "miniature rotorcraft", "embedded systems", "indoor navigation" ]
[ "P", "R", "M" ]
3T&QbwJ
Rank inclusion in criteria hierarchies
This paper presents a method called Rank Inclusion in Criteria Hierarchies (RICH) for the analysis of incomplete preference information in hierarchical weighting models. In RICH, the decision maker is allowed to specify subsets of attributes which contain the most important attribute or, more generally, to associate a set of rankings with a given set of attributes. Such preference statements lead to possibly non-convex sets of feasible attribute weights, allowing decision recommendations to be obtained through the computation of dominance relations and decision rules. An illustrative example on the selection of a subcontractor is presented, and the computational properties of RICH are considered.
[ "incomplete preference information", "hierarchical weighting models", "multiple criteria analysis", "decision analysis" ]
[ "P", "P", "M", "R" ]
-&hTgLb
Automatic relative orientation of large scale imagery over urban areas using Modified Iterated Hough Transform
The automation of relative orientation (RO) has been the major focus of the photogrammetric research community in the last decade. Despite the reported progress, there is no reliable (robust) approach that can perform automatic relative orientation (ARO) using large-scale imagery over urban areas. A reliable and general method for solving matching problems in various photogrammetric activities has been developed at The Ohio State University. This approach has been used to solve single photo resection using free-form linear features, surface matching and relative orientation. The approach estimates the parameters of a mathematical model relating the entities of two datasets when the correspondence of the involved entities is unknown. When applied to relative orientation, the coplanarity model is used to relate extracted edge pixels and/or feature points from a stereo-pair. In its execution, the relative orientation parameters are solved sequentially, using the coplanarity model to evaluate all possible pairings of the input primitives and choosing the most probable solution. As a result of this technique, the matched entities that correspond to the parameter solution are implicitly determined. Experiments using real data conclude that this is a robust method for relative orientation for both urban and rural scenes.
[ "automatic relative orientation", "hough transform", "matching", "robust parameter estimation" ]
[ "P", "P", "P", "R" ]
-mZyMcB
Emergency railway wagon scheduling by hybrid biogeography-based optimization
Railway transportation plays an important role in many disaster relief and other emergency supply chains. Based on the analysis of several recent disaster rescue operations in China, the paper proposes a mathematical model for emergency railway wagon scheduling, which considers multiple target stations requiring relief supplies, source stations for providing supplies, and central stations for allocating railway wagons. Under the emergency environment, the aim of the problem is to minimize the weighted time for delivering all the required supplies to the targets. For efficiently solving the problem, we develop a new hybrid biogeography-based optimization (BBO) algorithm, which uses a local ring topology of population to avoid premature convergence, includes the differential evolution (DE) mutation operator to perform effective exploration, and takes some problem-specific mechanisms for fine-tuning the search process and handling the constraints. Computational experiments show that our algorithm is robust and scalable, and outperforms some state-of-the-art heuristic algorithms on a set of problem instances.
[ "railway wagon scheduling", "biogeography-based optimization (bbo)", "ring topology", "differential evolution (de)", "emergency relief supply" ]
[ "P", "P", "P", "P", "R" ]
-7:p3cG
Bias-variance analysis in estimating true query model for information retrieval
We study the retrieval effectiveness-stability tradeoff in query model estimation. This tradeoff is investigated through a novel angle, i.e., the bias-variance tradeoff. We formulate the performance bias-variance and the estimation bias-variance. We investigate various query estimation methods using bias-variance analysis. Experiments have been conducted to verify hypotheses on the bias-variance analysis.
[ "biasvariance", "information retrieval", "query language model" ]
[ "P", "P", "M" ]
1TAaF3V
Qualitative constraint satisfaction problems: An extended framework with landmarks
Dealing with spatial and temporal knowledge is an indispensable part of almost all aspects of human activity. The qualitative approach to spatial and temporal reasoning, known as Qualitative Spatial and Temporal Reasoning (QSTR), typically represents spatial/temporal knowledge in terms of qualitative relations (e.g., to the east of, after), and reasons with spatial/temporal knowledge by solving qualitative constraints. When formulating qualitative constraint satisfaction problems (CSPs), it is usually assumed that each variable could be "here, there and everywhere". Practical applications such as urban planning, however, often require a variable to take its value from a certain finite domain, i.e. it is required to be 'here or there, but not everywhere'. Entities in such a finite domain often act as reference objects and are called "landmarks" in this paper. The paper extends the classical framework of qualitative CSPs by allowing variables to take values from finite domains. The computational complexity of the consistency problem in this extended framework is examined for the five most important qualitative calculi, viz. Point Algebra, Interval Algebra, Cardinal Relation Algebra, RCC5, and RCC8. We show that all these consistency problems remain in NP and provide, under practical assumptions, efficient algorithms for solving basic constraints involving landmarks for all these calculi. (c) 2013 Elsevier B.V. All rights reserved.
[ "constraint satisfaction", "landmarks", "qualitative spatial and temporal reasoning", "qualitative calculi" ]
[ "P", "P", "P", "P" ]
9Q26Bmz
the interaction of software prefetching with ilp processors in shared-memory systems
Current microprocessors aggressively exploit instruction-level parallelism (ILP) through techniques such as multiple issue, dynamic scheduling, and non-blocking reads. Recent work has shown that memory latency remains a significant performance bottleneck for shared-memory multiprocessor systems built of such processors. This paper provides the first study of the effectiveness of software-controlled non-binding prefetching in shared memory multiprocessors built of state-of-the-art ILP-based processors. We find that software prefetching results in significant reductions in execution time (12% to 31%) for three out of five applications on an ILP system. However, compared to previous-generation systems, software prefetching is significantly less effective in reducing the memory stall component of execution time on an ILP system. Consequently, even after adding software prefetching, memory stall time accounts for over 30% of the total execution time in four out of five applications on our ILP system. This paper also investigates the interaction of software prefetching with memory consistency models on ILP-based multiprocessors. In particular, we seek to determine whether software prefetching can equalize the performance of sequential consistency (SC) and release consistency (RC). We find that even with software prefetching, for three out of five applications, RC provides a significant reduction in execution time (15% to 40%) compared to SC.
[ "interaction", "software", "prefetching", "ilp", "processor", "memorialized", "systems", "exploit", "instruction-level parallelism", "dynamic scheduling", "latency", "performance", "paper", "effect", "shared memory", "shared memory multiprocessors", "reduction", "timing", "applications", "component", "account", "consistency", "model", "sequential consistency", "generation", "art", "binding" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "U", "U", "U" ]
2WN7sSp
A scalable and extensible framework for query answering over RDF
The Semantic Web is gaining increasing interest for fulfilling the need to share, retrieve, and reuse information. In this context, the Resource Description Framework (RDF) has been conceived to provide an easy way to represent any kind of data and metadata, according to a lightweight model and syntaxes for serialization (RDF/XML, N3, etc.). Although RDF has the advantage of being general and simple, it cannot be used as a storage model as it is, since it can easily be shown that even simple management operations involve serious performance limitations. In this paper we present a framework which provides a flexible and persistent layer relying on a novel storage model that guarantees good scalability and performance of query evaluation. The approach is based on the notion of construct, which represents a concept of the domain of interest. This makes the approach easily extensible and independent of the specific knowledge representation language. Based on this representation, reasoning capabilities are supported by a rule-based engine. Finally, we present experimental results over real world scenarios to demonstrate the feasibility of the approach.
[ "query answering", "rdf", "rdf", "rdfs", "metamodel" ]
[ "P", "P", "P", "P", "U" ]
56L7oCR
Using interactive 3-D visualization for public consultation
3-D models are often developed to aid the design and development of indoor and outdoor environments. This study explores the use of interactive 3-D visualization for public consultation for outdoor environments. Two visualization techniques (interactive 3-D visualization and static visualization) were compared using the method of individual testing. Visualization technique had no effect on the perception of the represented outdoor environment, but there was a preference for using interactive 3-D. Previously established mechanisms for a preference for interactive 3-D visualization in other domains were confirmed in the perceived strengths and weaknesses of visualization techniques. In focus-group discussion, major preferences included provision of more information through interactive 3-D visualization and wider access to information for public consultation. From a users' perspective, the findings confirm the strong potential of interactive 3-D visualization for public consultation. (C) 2010 Elsevier B.V. All rights reserved.
[ "visualization", "public consultation", "outdoor environment", "virtual reality", "e-government" ]
[ "P", "P", "P", "U", "U" ]
3ESj6G5
Polymorphic nodal elements and their application in discontinuous Galerkin methods
In this work, we discuss two different but related aspects of the development of efficient discontinuous Galerkin methods on hybrid element grids for the computational modeling of gas dynamics in complex geometries or with adapted grids. In the first part, a recursive construction of different nodal sets for hp finite elements is presented. They share the property that the nodes along the sides of the two-dimensional elements and along the edges of the three-dimensional elements are the Legendre-Gauss-Lobatto points. The different nodal elements are evaluated by computing the Lebesgue constants of the corresponding Vandermonde matrix. In the second part, these nodal elements are applied within the modal discontinuous Galerkin framework. We still use a modal based formulation, but introduce a nodal based integration technique to reduce computational cost in the spirit of pseudospectral methods. We illustrate the performance of the scheme on several large scale applications and discuss its use in a recently developed space-time expansion discontinuous Galerkin scheme.
[ "nodal", "discontinuous galerkin", "hp finite elements", "lebesgue constants", "modal", "polynomial interpolation", "quadrature free", "unstructured", "triangle", "quadrilateral", "polygonal", "tetrahedron", "hexahedron", "prism", "pentahedron", "pyramid" ]
[ "P", "P", "P", "P", "P", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U", "U" ]
2oMQEpq
Deployment-Based Solution for Prolonging Lifetime in Sensor Networks with Multiple Mobile Sinks
Enhancing sensor network lifetime is an important research topic for wireless sensor networks. Solutions based on linear programming, clustering, controlled non-uniform node distributions and mobility are presented separately in the literature. Even so, the problem is still open and not fully solved, and drawbacks exist for all the above solutions when considered separately. A solution able to provide the composite benefits of several of them could better solve the problem. In this paper, we introduce a solution for prolonging the lifetime of sensor networks. The proposed solution is based on a deployment strategy of multiple mobile sinks. In our proposal, data traffic is directed away from the network center toward the network peripheral, where sinks are initially deployed. Sinks stay stationary while collecting the data reports that travel over the network perimeter toward them. Eventually, perimeter nodes are exposed to a peeling phenomenon which results in partitioning one or more sinks from their one-hop neighbors. The partitioned sinks move in discrete steps following the direction of the progressive peeling toward the network center. The mechanism maintains the network connectivity and delays the occurrence of partitioning. Moreover, it balances the load among nodes and reduces the energy consumption. The performance of the proposed protocol is evaluated using intensive simulations. The results show the efficiency (in terms of both reliability and connectivity) of our deployment strategy with the associated data collection protocol.
[ "deployment", "sensor networks", "mobile sinks", "data collection" ]
[ "P", "P", "P", "P" ]
4QY&frD
design and applications of an algorithm benchmark system in a computational problem solving environment
Benchmark tests are often used to evaluate the quality of products by a set of common criteria. In this paper we describe a computational problem solving environment based on open source code and an algorithm benchmark system, which is embedded in the environment as a plug-in. The algorithm benchmark system can be used to compare the performance of various algorithms or to evaluate the behavior of an algorithm with different input instances. The current implementation allows users to compare or evaluate algorithms written in C/C++. Some examples of the algorithm benchmark system that evaluate memory utilization, time complexity and the output of algorithms are also presented. The algorithm benchmark reinforces the learning effect: students can not only comprehend the performance of the respective algorithms but also write their own programs to challenge the best known results.
[ "benchmark", "problem-solving environment", "algorithm visualization", "knowledge portal" ]
[ "P", "M", "M", "U" ]
-T5QxmM
automated performance tuning
This tutorial presents automated techniques for implementing and optimizing numeric and symbolic libraries on modern computing platforms including SSE, multicore, and GPU. Obtaining high performance requires effective use of the memory hierarchy, short vector instructions, and multiple cores. Highly tuned implementations are difficult to obtain and are platform dependent. For example, the Intel Core i7 980 XE has a peak floating point performance of over 100 GFLOPS and the NVIDIA Tesla C870 has a peak floating point performance of over 500 GFLOPS; however, achieving close to peak performance on such platforms is extremely difficult. Consequently, automated techniques are now being used to tune and adapt high performance libraries such as ATLAS (math-atlas.sourceforge.net), PLASMA (icl.cs.utk.edu/plasma) and MAGMA (icl.cs.utk.edu/magma) for dense linear algebra, OSKI (bebop.cs.berkeley.edu/oski) for sparse linear algebra, FFTW (www.fftw.org) for the fast Fourier transform (FFT), and SPIRAL (www.spiral.net) for a wide class of digital signal processing (DSP) algorithms. Intel currently uses SPIRAL to generate parts of their MKL and IPP libraries.
[ "vectorization", "autotuning", "high-performance computing", "code generation and optimization", "parallelism" ]
[ "P", "U", "M", "M", "U" ]
3DUxU95
Explicit solutions for a class indirect pharmacodynamic response models
Explicit solutions for four ordinary differential equation (ODE)-based types of indirect response models are presented. These response models were introduced by Dayneka et al. in 1993 [J. Pharmacokinet. Biopharm. 21 (1993) 457] to describe pharmacodynamic responses utilizing inhibitory or stimulatory E_max-type functions. The explicit solutions are expressed in terms of Gauss hypergeometric 2F1 functions and their analytical continuations. A practical application is demonstrated for modeling the kinetics of drug action for ibandronate, a potent bisphosphonate that suppresses bone turnover, resulting in a reduction in the markers of bone turnover. Model evaluation times ten times shorter with the explicit solution than with the differential equation implementation may benefit situations where a large number of model evaluations are needed, such as clinical trial simulations and parameter estimation. (C) 2004 Elsevier Ireland Ltd. All rights reserved.
[ "explicit solution", "indirect response model", "hypergeometric function f-2(1)", "nonmem" ]
[ "P", "P", "R", "U" ]
PVZ5nf3
a web-based consumer-oriented intelligent decision support system for personalized e-services
Due to the rapid advancement of electronic commerce and web technologies in recent years, the concepts and applications of decision support systems have been significantly extended. One quickly emerging research topic is the consumer-oriented decision support system, which provides functional support to consumers for efficiently and effectively making personalized decisions. In this paper we present an integrated framework for developing web-based consumer-oriented intelligent decision support systems to facilitate all phases of the consumer decision-making process in business-to-consumer e-services applications. Major application functional modules in the system framework include consumer and personalization management, navigation and search, evaluation and selection, planning and design, community and collaboration management, auction and negotiation, transactions and payments, quality and feedback control, as well as communications and information distribution. System design and implementation methods are illustrated using an example. Also explored are various potential e-services application domains, including e-tourism and e-investment.
[ "intelligent decision support system", "personalization", "e-services", "decision making process" ]
[ "P", "P", "P", "R" ]
11Ux:ka
Efficient segment-based video transcoding proxy for mobile multimedia services
To support various bandwidth requirements for mobile multimedia services for future heterogeneous mobile environments, such as portable notebooks, personal digital assistants (PDAs), and 3G cellular phones, a transcoding video proxy is usually necessary to provide mobile clients with adapted video streams by not only transcoding videos to meet different needs on demand, but also caching them for later use. Traditional proxy technology is not applicable to a video proxy because it is less cost-effective to cache the complete videos to fit all kinds of clients in the proxy. Since transcoded video objects have inheritance dependency between different bit-rate versions, we can use this property to amortize the retransmission overhead from transcoding other objects cached in the proxy. In this paper, we propose the object relation graph (ORG) to manage the static relationships between video versions and an efficient replacement algorithm to dynamically manage video segments cached in the proxy. Specifically, we formulate a transcoding time constrained profit function to evaluate the profit from caching each version of an object. The profit function considers not only the sum of the costs of caching individual versions of an object, but also the transcoding relationship among these versions. In addition, an effective data structure, cached object relation tree (CORT), is designed to facilitate the management of multiple versions of different objects cached in the transcoding proxy. Experimental results show that the proposed algorithm outperforms companion schemes in terms of the byte-hit ratios and the startup latency.
[ "transcoding", "multimedia", "segment caching", "mobile network" ]
[ "P", "P", "P", "M" ]
4yA9j5p
Automated process planning method to machine A B-Spline free-form feature on a mill-turn center
In this paper, we present a methodology for automating the process planning and NC code generation for a widely encountered class of free-form features that can be machined on a 3-axis mill-turn center. The free-form feature family that is considered is that of extruded protrusions whose cross-section is a closed, periodic B-Spline curve. In this methodology, for machining a part with a B-Spline protrusion located at the free end, the part is first rough turned to the maximum profile diameter of the B-Spline, followed by rough profile cutting and finish profiling with axially mounted end mill tools. The identification and sequencing of machining volumes is completely automated, as is the generation of actual NC code. The approach supports both convex and non-convex profiles. In the case of non-convex profiles, the process planning algorithm ensures that there is no gouging of the work piece by the tool. The algorithm also identifies when sections of the tool path lie outside the work piece and utilizes rapid traverses in these regions to reduce cutting time. This methodology provides integrated turn-mill process planning whereby the process is fully automated from design onward with no user intervention, making the overall process planning efficient. The algorithm was tested on several examples, and the unmodified NC code obtained from the implementation was run on a Moriseiki mill-turn center to machine test parts. The parts that were produced met the dimensional specifications of the desired part. (C) 2008 Elsevier Ltd. All rights reserved.
[ "computer-aided process planning", "feature-based design", "computer-aided manufacturing" ]
[ "M", "M", "U" ]
1HM8Kk3
Stability results for two classes of linear time-delay and hybrid systems
The stability of linear time-delay systems with point internal delays is difficult to deal with in practice because their characteristic equation is usually of transcendental rather than polynomial type. This feature usually causes the system to possess an infinite number of poles. In this paper, stability tests for this class of systems are obtained either by extending classical tests applicable to delay-free systems or through approaches within the framework of two-dimensional digital filters. Some of these two-dimensional stability tests are also proved to be useful for stability testing of a common class of linear hybrid systems which involve coupled continuous and digital substates, after a slight "ad hoc" adaptation of the tests to that situation.
[ "stability (control theory)", "man-machine systems", "time series analysis" ]
[ "M", "M", "U" ]
3-6dHNh
A pseudo-nearest-neighbor approach for missing data recovery on Gaussian random data sets
Missing data handling is an important preparation step for most data discrimination or mining tasks. Inappropriate treatment of missing data may cause large errors or false results. In this paper, we study the effect of a missing data recovery method, namely the pseudo-nearest-neighbor substitution approach, on Gaussian distributed data sets that represent typical cases in data discrimination and data mining applications. The error rate of the proposed recovery method is evaluated by comparing the clustering results of the recovered data sets to the clustering results obtained on the originally complete data sets. The results are also compared with those obtained by applying two other missing data handling methods, the constant default value substitution and the missing data ignorance (non-substitution) methods. The experimental results provide valuable insight into improving the accuracy of data discrimination and knowledge discovery on large data sets containing missing values.
[ "missing data", "missing data recovery", "data mining", "data imputation", "data clustering", "gaussian data distribution" ]
[ "P", "P", "P", "M", "R", "R" ]
-1pLk17
A Lagrangian relaxation approach to the edge-weighted clique problem
The b-clique polytope CP_n^b is the convex hull of the node and edge incidence vectors of all subcliques of size at most b of a complete graph on n nodes. Including the Boolean quadric polytope QP_n = CP_n^n as a special case and being closely related to the quadratic knapsack polytope, it has received considerable attention in the literature. In particular, the max-cut problem is equivalent to optimizing a linear function over CP_n^n. The problem of optimizing linear functions over CP_n^b has so far been approached via heuristic combinatorial algorithms and cutting-plane methods. We study the structure of CP_n^b in further detail and present a new computational approach to the linear optimization problem based on the idea of integrating cutting planes into a Lagrangian relaxation of an integer programming problem that Balas and Christofides had suggested for the traveling salesman problem. In particular, we show that the separation problem for tree inequalities becomes polynomial in our Lagrangian framework. Finally, computational results are presented.
[ "lagrangian relaxation", "boolean quadric polytope", "quadratic knapsack polytope", "cutting plane", "mathematical programming", "clique polytope", "cut polytope" ]
[ "P", "P", "P", "P", "M", "R", "R" ]
2PXEKFG
resource aware programming in the pixie os
This paper presents Pixie, a new sensor node operating system designed to support the needs of data-intensive applications. These applications, which include high-resolution monitoring of acoustic, seismic, acceleration, and other signals, involve high data rates and extensive in-network processing. Given the fundamentally resource-limited nature of sensor networks, a pressing concern for such applications is their ability to receive feedback on, and adapt their behavior to, fluctuations in both resource availability and load. The Pixie OS is based on a dataflow programming model based on the concept of resource tickets, a core abstraction for representing resource availability and reservations. By giving the system visibility and fine-grained control over resource management, a broad range of policies can be implemented. To shield application programmers from the burden of managing these details, Pixie provides a suite of resource brokers, which mediate between low-level physical resources and higher-level application demands. Pixie is implemented in NesC and supports limited backwards compatibility with TinyOS. We describe Pixie in the context of two applications: limb motion analysis for patients undergoing treatment for motion disorders, and acoustic target detection using a network of microphones. We present a range of experiments demonstrating Pixie's ability to accurately account for resource availability at runtime and enable a range of both generic and application-specific adaptations.
[ "resource", "program", "paper", "sensor", "operating system", "systems", "support", "data", "applications", "monitor", "acceleration", "signaling", "network", "process", "sensor networks", "feedback", "behavior", "availability", "dataflow", "program modelling", "concept", "core", "abstraction", "visibility", "control", "resource management", "management", "policy", "programmer", "physical", "compatibility", "tinyos", "context", "motion analysis", "motion", "detection", "experience", "account", "runtime", "generic", "resource reservations", "resource-aware programming", "wireless sensor networks" ]
[ "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "P", "R", "M", "M" ]
4UgW3NC
Highly Undersampled Magnetic Resonance Image Reconstruction via Homotopic l(0)-Minimization
In clinical magnetic resonance imaging (MRI), any reduction in scan time offers a number of potential benefits ranging from high-temporal-rate observation of physiological processes to improvements in patient comfort. Following recent developments in compressive sensing (CS) theory, several authors have demonstrated that certain classes of MR images which possess sparse representations in some transform domain can be accurately reconstructed from very highly undersampled K-space data by solving a convex l(1)-minimization problem. Although l(1)-based techniques are extremely powerful, they inherently require a degree of oversampling above the theoretical minimum sampling rate to guarantee that exact reconstruction can be achieved. In this paper, we propose a generalization of the CS paradigm based on homotopic approximation of the l(0) quasi-norm and show how MR image reconstruction can be pushed even further below the Nyquist limit and significantly closer to the theoretical bound. Following a brief review of standard CS methods and the developed theoretical extensions, several example MRI reconstructions from highly undersampled K-space data are presented.
[ "image reconstruction", "magnetic resonance imaging (mri)", "compressed sensing", "compressive sensing (cs)", "nonconvex optimization" ]
[ "P", "P", "P", "P", "U" ]
-mW-Mcg
An incremental verification algorithm for real-time systems
We present an incremental algorithm for model checking real-time systems against requirements specified in the real-time extension of the modal mu-calculus. Using this algorithm, we avoid the repeated construction and analysis of the whole state-space as the system evolves over time. We use a finite representation of the system, like most other algorithms on real-time systems. We construct and update a graph (called TSG) that is derived from the region graph and the formula. This allows us to halt the construction of this graph when enough nodes have been explored to determine the truth of the formula. The TSG is minimal in the sense of partitioning the infinite state space into regions, and it expresses a relation on the set of regions of the partition. We use the structure of the formula to derive this partition. When a change is applied to the timed automaton of the system, we find a new partition from the current partition and the TSG with minimum cost.
[ "model-checking", "timed mu-calculus", "timed automata", "requirements specification", "labeled transition systems" ]
[ "U", "M", "M", "M", "M" ]
-fFWDe8
A Survey on Transport Protocols for Wireless Multimedia Sensor Networks
Wireless networks composed of multimedia-enabled resource-constrained sensor nodes have enriched a large set of monitoring sensing applications. In such communication scenario, however, new challenges in data transmission and energy-efficiency have arisen due to the stringent requirements of those sensor networks. Generally, congested nodes may deplete the energy of the active congested paths toward the sink and incur in undesired communication delay and packet dropping, while bit errors during transmission may negatively impact the end-to-end quality of the received data. Many approaches have been proposed to face congestion and provide reliable communications in wireless sensor networks, usually employing some transport protocol that address one or both of these issues. Nevertheless, due to the unique characteristics of multimedia-based wireless sensor networks, notably minimum bandwidth demand, bounded delay and reduced energy consumption requirement, communication protocols from traditional scalar wireless sensor networks are not suitable for multimedia sensor networks. In the last decade, such requirements have fostered research in adapting existing protocols or proposing new protocols from scratch. We survey the state of the art of transport protocols for wireless multimedia sensor networks, addressing the recent developments and proposed strategies for congestion control and loss recovery. Future research directions are also discussed, outlining the remaining challenges and promising investigation areas.
[ "survey", "transport protocols", "wireless multimedia sensor networks", "congestion control", "loss recovery" ]
[ "P", "P", "P", "P", "P" ]
4q:X9Xq
Two integrable couplings of the Tu hierarchy and their Hamiltonian structures
The double integrable couplings of the Tu hierarchy are worked out by use of the vector loop algebras G_6 and G_9, respectively. The Hamiltonian structures of the obtained systems are also given by the quadratic-form identity.
[ "integrable couplings", "tu hierarchy", "hamiltonian structure", "vector loop algebra", "quadratic-form identity" ]
[ "P", "P", "P", "P", "P" ]
4H9EUcX
Dynamic simulation of bioreactor systems using orthogonal collocation on finite elements
The dynamics of continuous biological processes is addressed in this paper. Numerical simulation of a conventional activated sludge process shows that, despite the large differences in the dynamics of the species investigated, the orthogonal collocation on finite elements technique with three internal collocation points and four elements (OCFE-34) gives excellent numerical results for bioreactor models up to a Peclet number of 50. It is shown that there is little improvement in numerical accuracy when a much larger number of internal collocation points is introduced. Above a Peclet number of 50, considered large for this process, simulation with the global orthogonal collocation (GOC) technique is infeasible. Due to the banded nature of its structural matrix, the method of lines (MOL) technique requires the lowest computing time, typically four times less than that required by the OCFE-34. Validation of the hydraulics of an existing pilot-scale subsurface flow (SSF) constructed wetland process using the aforementioned numerical techniques suggested that the OCFE is superior to the MOL and GOC in terms of numerical stability. (C) 1999 Elsevier Science Ltd. All rights reserved.
[ "orthogonal collocation on finite element", "activated sludge", "peclet number", "global orthogonal collocation", "method of lines", "ssf constructed wetland" ]
[ "P", "P", "P", "P", "P", "R" ]
2ucDCMH
Detecting regularities on grammar-compressed strings
We address the problems of detecting and counting various forms of regularities in a string represented as a straight-line program (SLP), which is essentially a context-free grammar in the Chomsky normal form. Given an SLP of size n that represents a string s of length N, our algorithm computes all runs and squares in s in O(n^3 h) time and O(n^2) space, where h is the height of the derivation tree of the SLP. We also show an algorithm to compute all gapped palindromes in O(n^3 h + g n h log N) time and O(n^2) space, where g is the length of the gap. As one of the main components of the above solution, we propose a new technique called approximate doubling, which seems to be a useful tool for a wide range of algorithms on SLPs. Indeed, we show that the technique can be used to compute the periods and covers of the string in O(n^2 h) time and O(n h (n + log^2 N)) time, respectively.
[ "straight-line programs (slps)", "runs", "squares", "gapped palindromes", "compressed string processing algorithms" ]
[ "P", "P", "P", "M", "M" ]
2j9X2NB
Achieving reusability and composability with a simulation conceptual model
Reusability and composability (R&C) are two important quality characteristics that have been very difficult to achieve in the Modelling and Simulation (M&S) discipline. Reuse provides many technical and economical benefits. Composability has been increasingly crucial for M&S of a system of systems, in which disparate systems are composed with each other. The purpose of this paper is to describe how R&C can be achieved by using a simulation conceptual model (CM) in a community of interest (COI). We address R&C in a multifaceted manner covering many M&S areas (types). M&S is commonly employed where R&C are very much needed by many COIs. We present how a CM developed for a COI can assist in R&C for the design of any type of large-scale complex M&S application in that COI. A CM becomes an asset for a COI and offers significant economic benefits through its broader applicability and more effective utilization.
[ "reusability", "composability", "simulation", "conceptual model", "simulation model development" ]
[ "P", "P", "P", "P", "R" ]
-tZ-6tv
Wavelength decomposition approach for computing blocking probabilities in multicast WDM optical networks
We present an approximate analytical method to evaluate the blocking probabilities in multicast Wavelength Division Multiplexing (WDM) networks without wavelength converters. Our approach is based on wavelength decomposition, in which the WDM network is divided into layers (colors) and the moment matching method is used to characterize the overflow traffic from one layer to another. The analysis of blocking probabilities for unicast and multicast calls in each layer of the network is derived from an exact approach. We assume static routing with either the First-Fit or the random wavelength assignment algorithm. Results are presented which indicate the accuracy of our method.
[ "blocking probability", "wdm", "mutlicast routing" ]
[ "P", "P", "M" ]
-KuvaJe
A new local meshless method for steady-state heat conduction in heterogeneous materials
In this paper a truly meshless method based on the integral form of the energy equation is presented to study steady-state heat conduction in anisotropic and heterogeneous materials. The presented meshless method is based on the satisfaction of the integral form of the energy balance equation for each sub-particle (sub-domain) inside the material. Moving least squares (MLS) approximation is used to approximate the field variable over randomly located nodes inside the domain. In the absence of heat generation, the domain integration is eliminated from the formulation of the presented method and the computational effort is reduced substantially with respect to the conventional MLPG method. A direct method is presented for the treatment of material discontinuity in heterogeneous materials. As a practical problem, heat conduction in fibrous composite material is studied, and steady-state heat conduction in unidirectional fiber-matrix composites is investigated. The solution domain includes a small area of the composite system called the representative volume element (RVE). Comparison of numerical results shows that the presented meshless method is a simple, effective, accurate and less costly method for micromechanical analysis of heat conduction in heterogeneous materials.
[ "heterogeneous material", "truly meshless method", "micromechanical analysis", "heat conduction problem", "fiber reinforced composite" ]
[ "P", "P", "P", "R", "M" ]
4Q&2DCm
A Territory Defining Multiobjective Evolutionary Algorithms and Preference Incorporation
We have developed a steady-state elitist evolutionary algorithm to approximate the Pareto-optimal frontiers of multiobjective decision making problems. The algorithm defines a territory around each individual to prevent crowding in any region. This maintains diversity while facilitating fast execution of the algorithm. We conducted extensive experiments on a variety of test problems and demonstrated that our algorithm performs well against the leading multiobjective evolutionary algorithms. We also developed a mechanism to incorporate preference information in order to focus on the regions that are appealing to the decision maker. Our experiments show that the algorithm approximates the Pareto-optimal solutions in the desired region very well when we incorporate the preference information.
[ "evolutionary algorithms", "preference incorporation", "crowding prevention", "guidance", "multiobjective optimization" ]
[ "P", "P", "R", "U", "M" ]
1awMtwb
A holistic frame-of-reference for modelling social systems
Purpose - To outline a philosophical system of inquiry that may be used as a frame-of-reference for modelling social systems. Design/methodology/approach - The paper draws on insights from cognitive science, autopoiesis, management cybernetics and non-linear dynamics. Findings - The outcome of this paper is an outline of a frame-of-reference to be used as a starting point (or a frame of orientation) for any problem solving/modelling intent or act. The framework highlights the importance of epistemological reflection and the need to avoid any separation of the process of knowing from that of modelling. It also emphasises the importance of inquiry into the assumptions that underpin the methods, tools and techniques that we employ, and into the tacit beliefs of the human actors who use them. Research limitations/implications - The presented frame-of-reference should be regarded as an evolving system of inquiry, one that seeks to incorporate contemporary human insight. Practical implications - Exactly how the frame-of-reference presented in this paper should be exploited within an organisational or educational context is a question to which there is no single "correct" answer. What is primarily important, however, is that it should be used to raise the profile of, and disseminate the benefits that accrue from, inquiry which goes beyond the simple application of tools and methods. Originality/value - This paper proposes a new frame-of-reference for modelling social systems that draws on insights from cognitive science, autopoiesis, management cybernetics and non-linear dynamics.
[ "modelling", "cybernetics", "social dynamics" ]
[ "P", "P", "R" ]
1i:&CLF
A source-synchronous double-data-rate parallel optical transceiver IC
Source-synchronous double-data-rate (DDR) signaling is widely used in electrical interconnects to eliminate clock recovery and to double communication bandwidth. This paper describes the design of a parallel optical transceiver integrated circuit (IC) that uses source-synchronous DDR optical signaling. On the transmit side, two 8-b electrical inputs are multiplexed, encoded, and sent over two high-speed optical links. On the receive side, the procedure is reversed to produce two 8-b electrical outputs. The proposed IC integrates analog vertical-cavity surface-emitting lasers (VCSELs), drivers and optical receivers with digital DDR multiplexing, serialization, and deserialization circuits. It was fabricated in a 0.5-?m silicon-on-sapphire (SOS) complementary metal-oxide-semiconductor (CMOS) process. Linear arrays of quad VCSELs and photodetectors were attached to the proposed transceiver IC using flip-chip bonding. A free-space optical link system was constructed to demonstrate correct IC functionality. The test results show successful transceiver operation at a data rate of 500 Mb/s with a 250-MHz DDR clock, achieving a gigabit of aggregate bandwidth. While the proposed DDR scheme is well suited for low-skew fiber-ribbon, free-space, and waveguide optical links, it can also be extended to links with higher skew with the addition of skew-compensation circuitry. To the authors' knowledge, this is the first demonstration of parallel optical transceivers that use source-synchronous DDR signaling.
[ "flip-chip", "high-speed-interconnect", "optical interconnects", "optoelectronic-integrated circuits", "source-synchronous signaling" ]
[ "U", "U", "R", "M", "R" ]
XYauYe2
FPCODE: AN EFFICIENT APPROACH FOR MULTI-MODAL BIOMETRICS
Although face recognition technology has progressed substantially, its performance is still not satisfactory due to the challenges of great variations in illumination, expression and occlusion. This paper aims to improve the accuracy of personal identification, when only a few samples are registered as templates, by integrating multiple biometric modalities, i.e. face and palmprint. We develop in this paper a feature code, namely FPCode, to represent the features of both face and palmprint. Though feature codes have been used for palmprint recognition in the literature, this is the first application to face recognition and multi-modal biometrics. As the same feature representation is used for both modalities, fusion is much easier. Experimental results show that both feature-level and decision-level fusion strategies achieve much better performance than single-modal biometrics. The proposed approach uses a fixed-length 1/0 bit coding scheme that is very efficient in matching, and at the same time achieves higher accuracy than other fusion methods available in the literature.
[ "face recognition", "palmprint recognition", "gabor feature", "fusion code", "feature fusion" ]
[ "P", "P", "M", "R", "R" ]
auUTcY&
Kinetics and energetics during uphill and downhill carrying of different weights
During physically heavy work tasks the musculoskeletal tissues are exposed to both mechanical and metabolic loading. The aim of the present study was to test a biomechanical model for prediction of whole-body energy turnover from kinematic and anthropometric data during load carrying. Total loads of 0, 10 and 20 kg were carried symmetrically or asymmetrically in the hands, while walking on a treadmill (4.5 km h^-1) horizontally, uphill, or downhill, the slopes being 8%. Mean values for the directly measured oxygen uptake ranged over all trials from 0.5 to 2.1 l O2 min^-1, and analysis of variance showed significant differences regarding slope, load carried, and symmetry. The calculated values of oxygen uptake based on the biomechanical model correlated significantly with the directly measured values, fitting the line Y = 0.990X + 0.144, where Y is the estimated and X the measured oxygen uptake in l min^-1. The close relationship between energy turnover rate measured directly and estimated from the biomechanical model justifies the assessment of metabolic load from kinematic data.
[ "biomechanics", "manual material handling" ]
[ "P", "U" ]
1honskc
Granular prototyping in fuzzy clustering
We introduce a logic-driven clustering in which prototypes are formed and evaluated in a sequential manner. Structure in the data is revealed by maximizing a certain performance index (objective function) that takes into consideration an overall level of matching (to be maximized) and a similarity level between the prototypes (the component to be minimized). The prototypes identified in the process come with an optimal weight vector that serves to indicate the significance of the individual features (coordinates) in the data grouping represented by the prototype. Since the topologies of these groupings are in general quite diverse, the optimal weight vectors reflect the anisotropy of the feature space, i.e., they show some local ranking of features in the data space. Having found the prototypes, we consider an inverse similarity problem and show how the relevance of the prototypes translates into their granularity.
[ "granular prototypes", "direct and inverse matching problem", "information granulation", "logic-based clustering", "similarity index", "t- and s-norms" ]
[ "P", "M", "U", "M", "R", "M" ]
1NL7fZ1
empirical evaluation of latency-sensitive application performance in the cloud
Cloud computing platforms enable users to rent computing and storage resources on demand to run their networked applications, and employ virtualization to multiplex virtual servers belonging to different customers on a shared set of servers. In this paper, we empirically evaluate the efficacy of cloud platforms for running latency-sensitive multimedia applications. Since multiple virtual machines running disparate applications from independent users may share a physical server, our study focuses on whether dynamically varying background load from such applications can interfere with the performance seen by latency-sensitive tasks. We first conduct a series of experiments on Amazon's EC2 system to quantify the CPU, disk, and network jitter and throughput fluctuations seen over a period of several days. We then turn to a laboratory-based cloud and systematically introduce different levels of background load and study the ability to isolate applications under different settings of the underlying resource control mechanisms. We use a combination of micro-benchmarks and two real-world applications, the Doom 3 game server and Apple's Darwin Streaming Server, for our experimental evaluation. Our results reveal that the jitter and the throughput seen by a latency-sensitive application can indeed degrade due to background load from other virtual machines. The degree of interference varies from resource to resource and is the most pronounced for disk-bound latency-sensitive tasks, which can degrade by nearly 75% under sustained background load. We also find that careful configuration of the resource control mechanisms within the virtualization layer can mitigate, but not eliminate, this interference.
[ "cloud computing", "virtualization", "multimedia", "resource isolation" ]
[ "P", "P", "P", "R" ]
3PYYjZ:
The role of Chinese-American scientists in China-US scientific collaboration: a study in nanotechnology
In this paper, we use bibliometric methods and social network analysis to analyze the pattern of China-US scientific collaboration at the individual level in nanotechnology. Results show that Chinese-American scientists have been playing an important role in China-US scientific collaboration. We find that China-US collaboration in nanotechnology mainly occurs between Chinese and Chinese-American scientists. In the co-authorship network, Chinese-American scientists tend to have higher betweenness centrality. Moreover, the series of policies implemented by the Chinese government to recruit overseas experts seems to contribute substantially to China-US scientific collaboration.
[ "chineseamerican", "scientific collaboration", "nanotechnology", "collaboration network" ]
[ "P", "P", "P", "R" ]
3YFqC6x
Localization of spherical fruits for robotic harvesting
The orange picking robot (OPR) is a project for developing a robot that is able to harvest oranges automatically. One of the key tasks in this robotic application is to identify the fruit and to measure its location in three dimensions. This should be performed using image processing techniques which must be sufficiently robust to cope with variations in lighting conditions and a changing environment. This paper describes the image processing system developed so far to guide automatic harvesting of oranges, which has been integrated into the first complete full-scale OPR prototype.
[ "fruit harvesting", "color clustering", "stereo matching", "visual tracking" ]
[ "R", "U", "U", "U" ]
-ATAqaC
game based learning for computer science education
Today, learners increasingly demand innovative and motivating learning scenarios that respond strongly to their habits of using media. One of the many possible solutions to this demand is the use of computer games to support the acquisition of knowledge. This paper reports on the opportunities and challenges of applying a game-based learning scenario for the acquisition of IT knowledge, as realized by the German BMBF project SpITKom. After briefly describing the learning potential of multiplayer browser games as well as the educational objectives and target group of the SpITKom project, we present the main results of a study that was carried out in the first phase of the project to guide the game design. In the course of the study, data were collected regarding (a) the computer game preferences of the target group and (b) the target group's competencies in playing computer games. We then introduce recommendations that were deduced from the study's findings and that outline the concept and the prototype of the game.
[ "game based learning", "it knowledge", "game design", "learners difficult to reach" ]
[ "P", "P", "P", "M" ]
1-rvAL8
Efficient evaluation functions for evolving coordination
This paper presents fitness evaluation functions that efficiently evolve coordination in large multi-component systems. In particular, we focus on evolving distributed control policies that are applicable to dynamic and stochastic environments. While it is appealing to evolve such policies directly for an entire system, in most cases the search space is prohibitively large for such an approach to provide satisfactory results. Instead, we present an approach based on evolving system components individually, where each component aims to maximize its own fitness function. Though this approach sidesteps the exploding state space concern, it introduces two new issues: (1) how to create component evaluation functions that are aligned with the global evaluation function; and (2) how to create component evaluation functions that are sensitive to the fitness changes of that component, while remaining relatively insensitive to the fitness changes of other components in the system. If the first issue is not addressed, the resulting system becomes uncoordinated; if the second issue is not addressed, the evolutionary process becomes either slow to converge or, worse, incapable of converging to good solutions. This paper shows how to construct evaluation functions that promote coordination by satisfying these two properties. We apply these evaluation functions to the distributed control problem of coordinating multiple rovers to maximize the aggregate information collected. We focus on environments that are highly dynamic (changing points of interest), noisy (sensor and actuator faults), and communication limited (both for observation of other rovers and of points of interest), forcing the rovers to evolve generalized solutions. On this difficult coordination problem, the control policy evolved using aligned and component-sensitive evaluation functions outperforms global evaluation functions by up to 400%. More notably, the performance improvements increase as the problems become more difficult (larger, noisier, less communication). In addition, we provide an analysis of the results by quantifying the two characteristics (alignment and sensitivity, discussed above), leading to a systematic study of the presented fitness functions.
[ "fitness evaluation", "distributed control", "evolution strategies" ]
[ "P", "P", "U" ]
36z5xHf
Contention-free communication scheduling for array redistribution
Array redistribution is often required in programs on distributed memory parallel computers. It is essential to use efficient algorithms for redistribution; otherwise the performance of the programs may degrade considerably. The redistribution overheads consist of two parts: index computation and interprocessor communication. If there is no communication scheduling in a redistribution algorithm, communication contention may occur, which increases the communication waiting time. To solve this problem, in this paper we propose a technique to schedule the communication so that it becomes contention-free. Our approach first generates a communication table to represent the communication relations among sending and receiving nodes. From the communication table, we then generate another table, named the communication scheduling table. Each column of the communication scheduling table is a permutation of receiving node numbers in each communication step. Thus the communications in our redistribution algorithm are contention-free. Our approach can deal with multi-dimensional, shape-changing redistribution.
[ "communication scheduling", "array redistribution", "parallelizing compilers", "hpf", "distributed memory machines" ]
[ "P", "P", "M", "U", "M" ]
3A3JC8g
Quadratic weighted median filters for edge enhancement of noisy images
Quadratic Volterra filters are effective in image sharpening applications. The linear combination of polynomial terms, however, yields poor performance in noisy environments. Weighted median (WM) filters, in contrast, are well known for their outlier suppression and detail preservation properties. The WM sample selection methodology is naturally extended to the quadratic sample case, yielding a filter structure referred to as quadratic weighted median (QWM) that exploits the higher order statistics of the observed samples while simultaneously being robust to outliers arising in the higher order statistics of environment noise. Through statistical analysis of higher order samples, it is shown that, although the parent Gaussian distribution is light tailed, the higher order terms exhibit heavy-tailed distributions. The optimal combination of terms contributing to a quadratic system, i.e., cross and square, is approached from a maximum likelihood perspective which yields the WM processing of these terms. The proposed QWM filter structure is analyzed through determination of the output variance and breakdown probability. The studies show that the QWM exhibits lower variance and breakdown probability indicating the robustness of the proposed structure. The performance of the QWM filter is tested on constant regions, edges and real images, and compared to its weighted-sum dual, the quadratic Volterra filter. The simulation results show that the proposed method simultaneously suppresses the noise and enhances image details. Compared with the quadratic Volterra sharpener, the QWM filter exhibits superior qualitative and quantitative performance in noisy image sharpening.
[ "volterra filtering", "weighted median (wm) filtering", "asymptotic tail mass", "maximum likelihood estimation", "robust image sharpening", "unsharp masking" ]
[ "P", "P", "M", "M", "R", "U" ]
57aUhVv
ELRA - European Language Resources Association - background, recent developments and future perspectives
The European Language Resources Association (ELRA) was founded in 1995 with the mission of providing language resources (LR) to European research institutions and companies. In this paper we describe the background, the mission and the major activities since then.
[ "language resources", "evaluation", "production", "standards", "validation" ]
[ "P", "U", "U", "U", "U" ]
2NARQSx
MULTIPLE CONCURRENCE OF MULTI-PARTITE QUANTUM SYSTEM
We propose a new way of describing the global entanglement properties of a multi-partite pure-state quantum system. Based on the idea of bipartite concurrence, the multi-partite quantum system is divided into two subsystems, and a combination of all the bipartite concurrences of the system is used to describe its entanglement properties. We derive analytical results for the GHZ state, the W state with an arbitrary number of qubits, and cluster states with no more than six particles.
[ "multiple concurrence of multi-partite quantum system", "entanglement", "w-state", "cluster state", "ghz-state" ]
[ "P", "P", "P", "P", "U" ]
1McJuuz
Tolerant information retrieval with backpropagation networks
Neural networks can learn from human decisions and preferences. Especially in human-computer interaction, adaptation to the behaviour and expectations of the user is necessary. In information retrieval, an important area within human-computer interaction, expectations are difficult to meet. The inherently vague nature of information retrieval has led to the application of vague processing techniques. Neural networks seem to have great potential to model the cognitive processes involved more appropriately. Current models based on neural networks and their implications for human-computer interaction are analysed. COSIMIR (Cognitive Similarity Learning in Information Retrieval), an innovative model integrating human knowledge into the core of the retrieval process, is presented. It applies backpropagation to information retrieval, integrating human-centred and soft and tolerant computing into the core of the retrieval process. A further backpropagation model, the transformation network for heterogeneous data sources, is discussed. Empirical evaluations have provided promising results.
[ "information retrieval", "backpropagation", "neural networks", "human-computer interaction", "similarity", "spreading activation" ]
[ "P", "P", "P", "P", "P", "U" ]
3&cQEYy
Approximation of mean time between failure when a system has periodic maintenance
This paper describes a simple technique for estimating the mean time between failure (MTBF) of a system that has periodic maintenance at regular intervals. This type of maintenance is typically found in high reliability, mission-oriented applications where it is convenient to perform maintenance after the completion of the mission. This approximation technique can greatly simplify the MTBF analysis for large systems. The motivation for this analysis was to understand the nature of the error in the approximation and to develop a means for quantifying that error. This paper provides the derivation of the equations that bound the error that can result when using this approximation method. It shows that, for most applications, the MTBF calculations can be greatly simplified with only a very small sacrifice in accuracy.
[ "periodic maintenance", "mean time between failure (mtbf)", "reliability modeling" ]
[ "P", "P", "M" ]
18p6pve
Supplying Web 2.0: An empirical investigation of the drivers of consumer transmutation of culture-oriented digital information goods
This paper describes an empirical study of behaviors associated with consumers' creative modification of digital information goods found in Web 2.0 and elsewhere. They are products of culture such as digital images, music, video, news and computer games. We will refer to them as "digital culture products". How do consumers who transmute such products differ from those who do not, and from each other? This study develops and tests a theory of consumer behavior in transmuting digital culture products, separating consumers into different groups based on how and why they transmute. With our theory, we posit these groups as having differences of motivation, as measured by product involvement and innovativeness, and of ability as measured by computer skills. A survey instrument to collect data from Internet-capable computer users on the relevant constructs, and on their transmutation activities, is developed and distributed using a web-based survey hosting service. The data are used to test hypotheses that consumers' enduring involvement and innovativeness are positively related to transmutation behaviors, and that computer self-efficacy moderates those relationships. The empirical results support the hypotheses that enduring involvement and innovativeness do motivate transmutation behavior. The data analysis also supports the existence of a moderating relationship of computer self-efficacy with respect to enduring involvement, but not of computer self-efficacy with respect to innovativeness. The findings further indicate that transmutation activities should be expected to impact Web 2.0-oriented companies, both incumbents and start-ups, as they make decisions about how to incorporate consumers into their business models not only as recipients of content, but also as its producers. (C) 2010 Elsevier B. V. All rights reserved.
[ "information goods", "creativity", "culture products", "consumer behavior", "digital entertainment", "digital mashup", "remix", "media products" ]
[ "P", "P", "P", "P", "M", "M", "U", "M" ]
1JZVmgH
Polynomial cost for solving IVP for high-index DAE
We show that the cost of solving initial value problems for high-index differential algebraic equations is polynomial in the number of digits of accuracy requested. The algorithm analyzed is built on a Taylor series method developed by Pryce for solving a general class of differential algebraic equations. The problem may be fully implicit, of arbitrarily high fixed index and contain derivatives of any order. We give estimates of the residual which are needed to design practical error control algorithms for differential algebraic equations. We show that adaptive meshes are always more efficient than non-adaptive meshes. Finally, we construct sufficiently smooth interpolants of the discrete solution.
[ "initial value problem", "differential algebraic equations", "taylor series", "adaptive step-size control", "structural analysis", "automatic differentiation", "holder mean" ]
[ "P", "P", "P", "M", "U", "M", "U" ]
14GeGJL
A Novel Wavelength Hopping Passive Optical Network (WH-PON) for Provision of Enhanced Physical Security
A novel secure wavelength hopping passive optical network (WH-PON) is presented in which physical layer security is introduced to the access network. The WH-PON design uses a pair of matched tunable lasers in the optical line terminal to create a time division multiplexed signal in which each data frame is transmitted at a unique wavelength. Transmission results for a 32-channel WH-PON operating at a data rate of 2.5 Gb/s are presented in this paper. The inherent security of the WH-PON design is verified through a cross-channel eavesdropping attempt at an optical network unit. The results presented verify that the WH-PON provides secure broadband service in the access network.
[ "wavelength hopping", "passive optical network", "access network", "tunable laser", "broadband", "fiber-to-the-x" ]
[ "P", "P", "P", "P", "P", "U" ]
2LyAhmB
On the Information Flow Required for Tracking Control in Networks of Mobile Sensing Agents
We design controllers that permit mobile agents with distributed or networked sensing capabilities to track (follow) desired trajectories, identify what trajectory information must be distributed to each agent for tracking, and develop methods to minimize the communication needed for the trajectory information distribution.
[ "tracking", "cooperative control", "dynamical networks" ]
[ "P", "M", "M" ]
2bDUP81
Analysis of timing-based mutual exclusion with random times
Various timing-based mutual exclusion algorithms have been proposed that guarantee mutual exclusion if certain timing assumptions hold. In this paper, we examine how these algorithms behave when the time for the basic operations is governed by probability distributions. In particular, we are concerned with how often such algorithms succeed in allowing a processor to obtain a critical region and how this success rate depends on the random variables involved. We explore this question in the case where operation times are governed by exponential and gamma distributions, using both theoretical analysis and simulations.
[ "mutual exclusion", "timed mutual exclusion", "markov chains", "locks" ]
[ "P", "R", "U", "U" ]
1zk&i&3
Modeling virtual worlds in databases
A method of modeling virtual worlds in databases is presented. The virtual world model is conceptually divided into several distinct elements, which are separately represented in a database. The model permits virtual scenes to be generated dynamically. (C) 2003 Published by Elsevier B.V.
[ "modeling", "databases", "data structures", "virtual reality" ]
[ "P", "P", "U", "M" ]
2bu8ee2
An efficient scheduling algorithm for scalable video streaming over P2P networks ?
During recent years, the Internet has witnessed rapid advancement in peer-to-peer (P2P) media streaming. In these applications, an important issue has been the block scheduling problem, which deals with how each node requests media data blocks from its neighbors. In most streaming systems, peers are likely to have heterogeneous upload/download bandwidths, so that different peers may perceive different streaming quality. Layered (or scalable) streaming in P2P networks has recently been proposed to address the heterogeneity of the network environment. In this paper, we propose a novel block scheduling scheme aimed at P2P layered video streaming. We define a soft priority function for each block to be requested by a node in accordance with the block's significance for video playback. The priority function is unique in that it strikes a good balance between different factors, which makes the priority of a block well represent the relative importance of the block over a wide variation of block sizes between different layers. The block scheduling problem is then transformed into an optimization problem that maximizes the priority sum of the delivered video blocks. We develop both centralized and distributed scheduling algorithms for the problem. Simulations of two popular scalability types have been conducted to evaluate the performance of the algorithms. The simulation results show that the proposed algorithm is effective in terms of bandwidth utilization and video quality.
[ "p2p streaming", "scalable video coding", "block scheduling algorithm" ]
[ "R", "M", "R" ]
4mxJFTq
A Threshold for a Polynomial Solution of #2SAT
The #SAT problem is a classical #P-complete problem even for monotone, Horn and two-conjunctive formulas (the last case known as #2SAT). We present a novel branch and bound algorithm to solve the #2SAT problem exactly. Our procedure establishes a new threshold under which #2SAT can be computed in polynomial time. We show that for any 2-CF formula F with n variables such that #2SAT(F) <= p(n) for some polynomial p, #2SAT(F) is computed in polynomial time. This provides a new way to measure the degree of difficulty of solving #2SAT and, according to this measure, our algorithm makes it possible to determine a boundary between 'hard' and 'easy' instances of the #2SAT problem.
[ "#2sat problem", "branch-bound algorithm", "polynomial thresholds", "efficient counting" ]
[ "P", "M", "R", "U" ]
1PRvUYd
Bagging and Boosting statistical machine translation systems
In this article we address the issue of generating diversified translation systems from a single Statistical Machine Translation (SMT) engine for system combination. Unlike traditional approaches, we do not resort to multiple structurally different SMT systems, but instead directly learn a strong SMT system from a single translation engine in a principled way. Our approach is based on Bagging and Boosting which are two instances of the general framework of ensemble learning. The basic idea is that we first generate an ensemble of weak translation systems using a base learning algorithm, and then learn a strong translation system from the ensemble. One of the advantages of our approach is that it can work with any of current SMT systems and make them stronger almost "for free". Beyond this, most system combination methods are directly applicable to the proposed framework for generating the final translation system from the ensemble of weak systems. We evaluate our approach on Chinese-English translation in three state-of-the-art SMT systems, including a phrase-based system, a hierarchical phrase-based system and a syntax-based system. Experimental results on the NIST MT evaluation corpora show that our approach leads to significant improvements in translation accuracy over the baselines. More interestingly, it is observed that our approach is able to improve the existing system combination systems. The biggest improvements are obtained by generating weak systems using Bagging/Boosting, and learning the strong system using a state-of-the-art system combination method. (C) 2012 Elsevier B.V. All rights reserved.
[ "statistical machine translation", "system combination", "ensemble learning" ]
[ "P", "P", "P" ]
uEkKoK3
Functional dimensioning and tolerancing software for concurrent engineering applications
This paper describes the development of a prototype software package for solving functional dimensioning and tolerancing (FD&T) problems in a Concurrent Engineering environment. It provides a systematic way of converting functional requirements of a product into dimensional specifications by means of the following steps: first, the relationships necessary for solving FD&T problems are represented in a matrix form, known as the functional requirements/dimensions (FR/D) matrix. Second, the values of dimensions and tolerances are determined by satisfying all the relationships represented in the FR/D matrix, applying a comprehensive strategy that includes tolerance allocation strategies for different types of FD&T problems and a procedure for determining an optimum solution order for coupled functional equations. The prototype software was evaluated by its potential users, and the results indicate that it can be an effective computer-based tool for solving FD&T problems in a CE environment. (C) 2003 Elsevier B.V. All rights reserved.
[ "functional dimensioning and tolerancing", "concurrent engineering", "tolerance allocation" ]
[ "P", "P", "P" ]
1mX8VtX
Parametric Model-Checking of Stopwatch Petri Nets
At the border between control and verification, parametric verification can be used to synthesize constraints on the parameters to ensure that a system verifies given specifications. In this paper we propose a new framework for the parametric verification of time Petri nets with stopwatches. We first introduce a parametric extension of time Petri nets with inhibitor arcs (ITPNs) with temporal parameters and we define a symbolic representation of the parametric state-space based on the classical state-class graph method. Then, we propose semi-algorithms for the parametric model-checking of a subset of parametric TCTL formulae on ITPNs. These results have been implemented in the tool ROMEO and we illustrate them in a case-study based on a scheduling problem.
[ "parameters", "model-checking", "stopwatches", "time petri nets", "state-class graph" ]
[ "P", "P", "P", "P", "P" ]
oYG34ai
Modelling the interaction of catecholamines with the alpha(1A) Adrenoceptor towards a ligand-induced receptor structure
Adrenoceptors are members of the important G protein coupled receptor family for which the detailed mechanism of activation remains unclear. In this study, we have combined docking and molecular dynamics simulations to model the ligand-induced effect on a homology-derived human alpha(1A) adrenoceptor. Analysis of agonist/alpha(1A) adrenoceptor complex interactions focused on the role of the charged amine group, the aromatic ring, the N-methyl group of adrenaline, the beta hydroxyl group and the catechol meta and para hydroxyl groups of the catecholamines. The most critical interactions for the binding of the agonists are consistent with many earlier reports, and our study suggests new residues possibly involved in the agonist-binding site, namely Thr-174 and Cys-176. We further observe a number of structural changes that occur upon agonist binding, including a movement of TM-V away from TM-III and a change in the interactions of Asp-123 of the conserved DRY motif. This may cause Arg-124 to move out of the TM helical bundle and change the orientation of residues in IC-II and IC-III, allowing for increased affinity of coupling to the G-protein.
[ "molecular dynamics", "agonists", "alpha(1a)-adrenoceptor", "molecular docking", "receptor activation" ]
[ "P", "P", "U", "R", "R" ]
4gM16UJ
probabilistic string similarity joins
Edit distance based string similarity join is a fundamental operator in string databases. Increasingly, many applications in data cleaning, data integration, and scientific computing have to deal with fuzzy information in string attributes. Despite the intensive efforts devoted in processing (deterministic) string joins and managing probabilistic data respectively, modeling and processing probabilistic strings is still a largely unexplored territory. This work studies the string join problem in probabilistic string databases, using the expected edit distance (EED) as the similarity measure. We first discuss two probabilistic string models to capture the fuzziness in string values in real-world applications. The string-level model is complete, but may be expensive to represent and process. The character-level model has a much more succinct representation when uncertainty in strings only exists at certain positions. Since computing the EED between two probabilistic strings is prohibitively expensive, we have designed efficient and effective pruning techniques that can be easily implemented in existing relational database engines for both models. Extensive experiments on real data have demonstrated order-of-magnitude improvements of our approaches over the baseline.
[ "probabilistic strings", "string joins", "approximate string queries" ]
[ "P", "P", "M" ]
-5-TojV
A high performance simulator of the immune response
The application of concepts and methods of statistical mechanics to biological problems is one of the most promising frontiers of computational physics. For instance, Cellular Automata (CA), i.e. fully discrete dynamical systems evolving according to Boolean laws, appear to be extremely well suited to the simulation of the immune system dynamics. A prominent example of immunological CA is represented by the Celada-Seiden automaton, which has proven capable of providing several new insights into the dynamics of the immune system response. In the present paper we describe a parallel version of the Celada-Seiden automaton. Details on the parallel implementation as well as performance data on the IBM SP2 parallel platform are presented and commented on.
[ "immune response", "cellular automata (ca)", "parallel virtual machine (pvm)", "memory management" ]
[ "P", "P", "M", "U" ]
1WGoaq1
Speaker adaptation of language and prosodic models for automatic dialog act segmentation of speech
Speaker-dependent modeling has a long history in speech recognition, but has received less attention in speech understanding. This study explores speaker-specific modeling for the task of automatic segmentation of speech into dialog acts (DAs), using a linear combination of speaker-dependent and speaker-independent language and prosodic models. Data come from 20 frequent speakers in the ICSI meeting corpus; adaptation data per speaker ranges from 5k to 115k words. We compare performance for both reference transcripts and automatic speech recognition output. We find that: (1) speaker adaptation in this domain results both in a significant overall improvement and in improvements for many individual speakers, (2) the magnitude of improvement for individual speakers does not depend on the amount of adaptation data, and (3) language and prosodic models differ both in degree of improvement, and in relative benefit for specific DA classes. These results suggest important future directions for speaker-specific modeling in spoken language understanding tasks.
[ "speaker adaptation", "dialog act segmentation", "spoken language understanding", "prosody modeling", "language modeling" ]
[ "P", "P", "P", "M", "R" ]
4CBUTsK
An efficient method for electromagnetic scattering analysis
We present a novel method to solve the magnetic field integral equation (MFIE) using the method of moments (MoM) efficiently. This method employs a linear combination of the divergence-conforming Rao-Wilton-Glisson (RWG) function and the curl-conforming n×RWG function to test the MFIE in MoM. The discretization process and the relationship of this new testing function with the previously employed RWG and n×RWG testing functions are presented. Numerical results of radar cross section (RCS) data for objects with sharp edges and corners show that the accuracy of the MFIE can be improved significantly through the use of the new testing functions. At the same time, only the commonly used RWG basis functions are needed for this method.
[ "electromagnetic scattering", "magnetic field integral equation (mfie)", "method of moments (mom)", "combined raowiltonglisson function (crwg)" ]
[ "P", "P", "P", "M" ]
1hdqF6H
Empirical mode decomposition synthesis of fractional processes in 1D- and 2D-space
We report here on image texture analysis and on numerical simulation of fractional Brownian textures based on the newly emerged Empirical Mode Decomposition (EMD). EMD, introduced by N.E. Huang et al., is a promising tool for non-stationary signal representation as a sum of zero-mean AM-FM components called Intrinsic Mode Functions (IMF). Recent works published by P. Flandrin et al. relate that, in the case of fractional Gaussian noise (fGn), EMD acts essentially as a dyadic filter bank that can be compared to wavelet decompositions. Moreover, in the context of fGn identification, P. Flandrin et al. show that variance progression across IMFs is related to the Hurst exponent H through a scaling law. Starting with these recent results, we propose a new algorithm to generate fGn, and fractional Brownian motion (fBm) of Hurst exponent H, from IMFs obtained from EMD of a white noise, i.e. ordinary Gaussian noise (fGn with H = 1/2). (c) 2005 Elsevier B.V. All rights reserved.
[ "empirical mode decomposition", "fractional processes synthesis", "gaussian and brownian texture images" ]
[ "P", "R", "R" ]
4yroxwt
Flow topology in a steady three-dimensional lid-driven cavity
We present in this paper a thorough investigation of three-dimensional flow in a cubical cavity, subject to a constant velocity lid on its roof. In this steady-state analysis, we adopt the mixed formulation on tri-quadratic elements to preserve mass conservation. To resolve difficulties in the asymmetric and indefinite large-size matrix equations, we apply the BiCGSTAB solution solver. To achieve stability, weighting functions are designed in favor of variables on the upstream side. To achieve accuracy, the weighting functions are properly chosen so that false diffusion errors can be largely suppressed by the equipped streamline operator. Our aim is to gain some physical insight into the vortical flow using a theoretically rigorous topological theory. To broaden our understanding of the vortex dynamics in the cavity, we also study in detail the longitudinal spiralling motion in the flow interior. (C) 2002 Elsevier Science Ltd. All rights reserved.
[ "three-dimensional", "bicgstab solution solver", "topological theory" ]
[ "P", "P", "P" ]
1zB-&Uy
Image object classification using saccadic search, spatio-temporal pattern encoding and self-organisation
A method for extracting features from photographic images is investigated. The input image is divided, through a saccadic search algorithm, into a set of sub-images, which are segmented and coded by a spatio-temporal encoding engine. The input image is thus represented by a set of characteristic pattern signatures, well suited for classification by an unsupervised neural network. A strategy using multiple self-organising feature maps (SOM) in a hierarchical manner is used. With this approach, and a certain degree of user selection, a database of sub-images is grouped according to similarities in signature space.
[ "segmentation", "signatures", "saccadic eye movement", "foveation", "pcnn time-series", "hierarchical som" ]
[ "P", "P", "M", "U", "U", "R" ]
-b4QjR-
Theoretical properties of LFSRs for built-in self test
Linear Feedback Shift-Registers have been studied for a long time as interesting solutions for error detection and correction techniques in transmissions. In the test domain, and principally in Built-In Self Test applications, they are often used as generators of pseudo-random test sequences. Conversely, their potential to generate prescribed deterministic test sequences has been dealt with in more recent works and nowadays allows the investigation of efficient testing with a pseudo-deterministic BIST technique. Pseudo-deterministic test sequences are composed of both deterministic and pseudo-random test patterns and offer high fault coverage with a tradeoff between test length and hardware cost. In this paper, synthesis techniques for LFSRs that embed such sequences are described.
[ "built-in self test", "linear feedback shift register", "hardware test pattern generator" ]
[ "P", "M", "R" ]
3gw64RB
Two fixed-parameter algorithms for vertex covering by paths on trees
VERTEX COVERING BY PATHS ON TREES, with applications in machine translation, is the task of covering all vertices of a tree T = (V, E) by choosing a minimum-weight subset of given paths in the tree. The problem is NP-hard and has recently been solved by an exact algorithm running in O(4^C · |V|^2) time, where C denotes the maximum number of paths covering a tree vertex. We improve this running time to O(2^C · C · |V|). On the route to this, we introduce the problem TREE-LIKE WEIGHTED HITTING SET, which might be of independent interest. In addition, for the unweighted case of VERTEX COVERING BY PATHS ON TREES, we present an exact algorithm using a search tree of size O(2^k · k!), where k denotes the number of chosen covering paths. Finally, we briefly discuss the existence of a problem kernel of size O(k^2). (C) 2007 Elsevier B.V. All rights reserved.
[ "exact algorithms", "graph algorithms", "combinatorial problems", "fixed-parameter tractability" ]
[ "P", "M", "M", "M" ]
4EzYd2E
Graphical dynamic linear models: specification, use and graphical transformations
In this work, we propose a dynamic graphical model as a tool for Bayesian inference and forecasting in dynamic systems described by a series which is dependent on a state vector evolving according to a Markovian law. We build sequential algorithms for the propagation of probabilities. This sequentiality turns out to be represented by the dynamic graphical structure after carrying out several goal-oriented sequential graphical transformations. (C) 2000 Elsevier Science Inc. All rights reserved. MSC: 60J99; 68T30; 62H99.
[ "graphical transformations", "graphical models", "dynamic models", "markovian dynamic systems", "learning and forecasting algorithms" ]
[ "P", "P", "R", "R", "M" ]
1KZsLN:
Alignment with non-overlapping inversions and translocations on two strings
Inversions and translocations are important operations in biological sequence analysis and motivate researchers to consider the sequence alignment problem using these operations. Based on inversion and translocation, we introduce a new alignment problem with non-overlapping inversions and translocations: given two strings x and y, find an alignment with non-overlapping inversions and translocations for x and y. This problem has an interesting application in finding a common sequence from two mutated sequences. We, in particular, consider the alignment problem when non-overlapping inversions and translocations are allowed for both x and y. We design an efficient algorithm that determines the existence of such an alignment and retrieves one, if it exists.
[ "non-overlapping inversion", "translocation", "sequence alignment" ]
[ "P", "P", "P" ]
4GRkgKn
A twist to partial least squares regression
A modification of the PLS1 algorithm is presented. Stepwise optimization over a set of candidate loading weights, obtained by taking powers of the y-X correlations and X standard deviations, generalizes the classical PLS1 based on y-X covariances and hence adds flexibility to the modelling. When good linear predictions can be obtained, the suggested approach often finds models with fewer and more interpretable components. Good performance is demonstrated when compared with the classical PLS1 on calibration benchmark data sets. An important part of the comparisons is managed by a novel model selection strategy. The selection is based on choosing the simplest model among those with a cross-validation error smaller than the pre-specified significance limit of a chi-squared statistic. Copyright (C) 2005 John Wiley & Sons, Ltd.
[ "pls1", "model selection", "cross-validation", "powers of correlations and standard deviations", "model interpretation" ]
[ "P", "P", "P", "R", "R" ]