200,971
ontology as a source for rule generation
This paper explores the potential of OWL (Web Ontology Language) ontologies for rule generation. Its main purpose is to identify new types of rules that may be generated from OWL ontologies. Rules generated from OWL ontologies are necessary for the functioning of the Semantic Web Expert System (SWES). It is expected that the SWES will be able to process ontologies from the Web in order to supplement or even develop its knowledge base.
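As a minimal illustration of the kind of rule generation the abstract describes, the sketch below reads rdfs:subClassOf axioms with Python's rdflib and emits IF-THEN rules; the example ontology, namespace and rule syntax are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: derive simple IF-THEN rules from subclass axioms.
# The ontology content and the rule syntax are assumptions, not the paper's.
from rdflib import Graph, RDFS, URIRef

g = Graph()
ex = "http://example.org/onto#"
g.add((URIRef(ex + "Student"), RDFS.subClassOf, URIRef(ex + "Person")))
g.add((URIRef(ex + "Professor"), RDFS.subClassOf, URIRef(ex + "Person")))

# Each axiom "C rdfs:subClassOf D" yields: IF x is a C THEN x is a D.
for sub, _, sup in g.triples((None, RDFS.subClassOf, None)):
    print(f"IF ?x rdf:type <{sub}> THEN ?x rdf:type <{sup}>")
```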
549,074
a novel methodology for thermal analysis a 3 dimensional memory integration
The semiconductor industry is reaching a fascinating confluence in several evolutionary trends that will likely lead to a number of revolutionary changes in the design, implementation, scaling, and use of computer systems. However, Moore's law has recently come to a standstill, since device scaling beyond 65 nm is not practical. 2D integration suffers from problems such as memory latency, power dissipation, and a large footprint. 3D technology comes as a solution to the problems posed by 2D integration, but its utilization is limited by thermal problems. It is therefore important to develop an accurate power-profile extraction methodology for designing 3D structures. In this paper, the design of 3D memory integration is considered: the static power dissipation of the memory cell is analysed at the transistor level and used to accurately model the inter-layer thermal effects of a 3D memory stack. Subsequently, the packaging of the chip is considered and modelled using an architecture-level simulator. This modelling is intended to analyse the thermal effects of 3D memory, its reliability, and the lifetime of the chip with greater accuracy.
630,234
spreadsheets on the move an evaluation of mobile spreadsheets
The power of mobile devices has increased dramatically in the last few years. These devices are becoming more sophisticated, allowing users to accomplish a wide variety of tasks while on the move. The increasingly mobile nature of business has meant that more users will need access to spreadsheets while away from their desktop and laptop computers. Existing mobile applications suffer from a number of usability issues that make using spreadsheets in this way more difficult. This work represents the first evaluation of mobile spreadsheet applications. Through a pilot survey, the needs and experiences of experienced spreadsheet users were examined. The range of spreadsheet apps available for the iOS platform was also evaluated in light of these users' needs.
803,423
multi view metric learning for multi view video summarization
Traditional methods for video summarization are designed to generate summaries for single-view video records, and thus they cannot fully exploit the redundancy in multi-view video records. In this paper, we present a multi-view metric learning framework for multi-view video summarization that combines the advantages of maximum margin clustering with the disagreement minimization criterion. The learning framework thus has the ability to find a metric that best separates the data, and meanwhile to force the learned metric to maintain the original intrinsic information between data points, such as geometric information. Facilitated by such a framework, a systematic solution to the multi-view video summarization problem is developed. To the best of our knowledge, this is the first work to address multi-view video summarization from the viewpoint of metric learning. The effectiveness of the proposed method is demonstrated by experiments.
1,102,481
big data analytics in future internet of things
Current research on the Internet of Things (IoT) mainly focuses on how to enable general objects to see, hear, and smell the physical world for themselves, and how to connect them so that they can share their observations. In this paper, we argue that being connected is not enough: beyond that, general objects should have the capability to learn, think, and understand the physical world by themselves. On the other hand, the future IoT will be highly populated by large numbers of heterogeneous networked embedded devices, which are generating massive or big data in an explosive fashion. Although there is a consensus among almost everyone on the great importance of big data analytics in IoT, to date, limited results, especially on the mathematical foundations, have been obtained. These practical needs impel us to propose a systematic tutorial on the development of effective algorithms for big data analytics in the future IoT, which are grouped into four classes: 1) heterogeneous data processing, 2) nonlinear data processing, 3) high-dimensional data processing, and 4) distributed and parallel data processing. We envision that the presented research is offered as a mere baby step in a potentially fruitful research direction. We hope that this article, with its interdisciplinary perspectives, will stimulate more interest in the research and development of practical and effective algorithms for specific IoT applications, to enable smart resource allocation, automatic network operation, and intelligent service provisioning.
1,532,644
machine learner for automated reasoning 0 4 and 0 5
Machine Learner for Automated Reasoning (MaLARea) is a learning and reasoning system for proving in large formal libraries, where thousands of theorems are available when attacking a new conjecture, and a large number of related problems and proofs can be used to learn specific theorem-proving knowledge. The latest version of the system won the 2013 CASC LTB competition by a large margin. This paper describes the motivation behind the methods used in MaLARea, discusses the general approach and the issues arising in the evaluation of such a system, and describes the Mizar@Turing100 and CASC'24 versions of MaLARea.
1,649,614
guarded variable automata over infinite alphabets
We define guarded variable automata (GVAs), a simple extension of finite automata over infinite alphabets. In this model the transitions are labelled by letters or variables ranging over an infinite alphabet and guarded by conjunctions of equalities and disequalities. GVAs are well-suited for modeling component-based applications such as web services. They are closed under intersection, union, concatenation and the Kleene operator, and their nonemptiness problem is PSPACE-complete. We show that the simulation preorder of GVAs is decidable. Our proof relies on the characterization of simulation by means of games and strategies. This result can be applied to service composition synthesis.
1,703,066
asynchronous cellular operations on gray images extracting topographic shape features and their relations
A variety of operations of cellular automata on gray images is presented. All operations are of a wave-front nature, finishing in a stable state. They are used to extract shape descriptors of gray objects that are robust to a variety of pattern distortions. Topographic terms are used: "lakes", "dales", "dales of dales". It is shown how mutual object relations like "above" can be expressed in terms of gray image analysis, and how this can be used for character classification and for gray pattern decomposition. The algorithms can be realized with a parallel asynchronous architecture. Keywords: Pattern Recognition, Mathematical Morphology, Cellular Automata, Wave-front Algorithms, Gray Image Analysis, Topographical Shape Descriptors, Asynchronous Parallel Processors, Holes, Cavities, Concavities, Graphs.
1,810,480
cryptographic hardening of d sequences
This paper shows how a one-way mapping using majority information on adjacent bits will improve the randomness of d-sequences. Supporting experimental results are presented. It is shown that the behavior of d-sequences is different from that of other RNG sequences.
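As a rough illustration of the idea (the 3-bit window and the wrap-around indexing below are assumptions, since the mapping is not specified here), each output bit can be taken as the majority of a bit and its two neighbours:

```python
# Illustrative sketch of one plausible "majority of adjacent bits" mapping.
# Window size (3) and circular indexing are assumptions, not from the paper.
def majority_map(bits):
    n = len(bits)
    out = []
    for i in range(n):
        window = (bits[(i - 1) % n], bits[i], bits[(i + 1) % n])
        out.append(1 if sum(window) >= 2 else 0)  # majority of three bits
    return out

print(majority_map([1, 0, 1, 1, 0, 0, 1, 0]))
```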
2,127,868
off grid doa estimation based on analysis of the convexity of maximum likelihood function
Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the large-array condition, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays, and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
2,131,697
gesture based continuous authentication for wearable devices the google glass case
We study the feasibility of touch gesture behavioural biometrics for implicit authentication of users on a smartglass (Google Glass) by proposing a continuous authentication system using two classifiers: SVM with an RBF kernel, and a new classifier based on Chebyshev's concentration inequality. Based on data collected from 30 volunteers, we show that such authentication is feasible both in terms of classification accuracy and computational load on smartglasses. We achieve a classification accuracy of up to 99% with only 75 training samples, using behavioural biometric data from four different types of touch gestures. To show that our system can be generalized, we test its performance on touch data from smartphones and find the accuracy to be similar to that on smartglasses. Finally, our experiments on the permanence of gestures show that the negative impact of changing user behaviour over time on classification accuracy can best be alleviated by periodically replacing older training samples with new randomly chosen samples.
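The paper's Chebyshev-based classifier is not detailed here; purely to illustrate the concentration inequality it builds on, the sketch below bounds how unusual a fresh gesture feature vector is given per-user feature statistics (the statistics, the min-aggregation and the threshold are all assumptions):

```python
# Illustration of Chebyshev's inequality as an anomaly score; this is NOT
# the paper's classifier, only the concentration bound it builds on.
import numpy as np

def chebyshev_score(x, mu, sigma):
    """Upper bound on P(|X - mu| >= |x - mu|) from Chebyshev's inequality."""
    k = np.abs(x - mu) / sigma   # distance in standard deviations
    k = np.maximum(k, 1.0)       # the bound is vacuous (>1) for k < 1
    return 1.0 / k**2            # P(|X - mu| >= k*sigma) <= 1/k^2

# Hypothetical per-user statistics and a fresh gesture feature vector:
mu, sigma = np.array([0.4, 1.2]), np.array([0.1, 0.3])
sample = np.array([0.43, 2.4])   # second feature is 4 sigma away
score = chebyshev_score(sample, mu, sigma).min()  # most anomalous feature
print("accept" if score > 0.1 else "reject")      # threshold is an assumption
```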
2,224,865
adaptability checking in multi level complex systems
A hierarchical model for multi-level adaptive systems is built on two basic levels: a lower behavioural level B, accounting for the actual behaviour of the system, and an upper structural level S, describing the adaptation dynamics of the system. The behavioural level is modelled as a state machine and the structural level as a higher-order system whose states have associated logical formulas (constraints) over observables of the behavioural level. S is used to capture the global and stable features of B by defining a set of allowed behaviours. The adaptation semantics is such that the upper S level imposes constraints on the lower B level, which has to adapt whenever it can no longer satisfy them. In this context, we introduce weak and strong adaptability, i.e. the ability of a system to adapt for some evolution paths or for all possible evolutions, respectively. We provide a relational characterisation for these two notions and we show that adaptability checking, i.e. deciding if a system is weakly or strongly adaptable, can be reduced to a CTL model checking problem. We apply the model and the theoretical results to the case study of motion control of autonomous transport vehicles.
2,277,080
is twitter a public sphere for online conflicts a cross ideological and cross hierarchical look
The rise in popularity of Twitter has led to a debate on its impact on public opinions. The optimists foresee an increase in online participation and democratization due to social media's personal and interactive nature. Cyber-pessimists, on the other hand, explain how social media can lead to selective exposure and can be used as a disguise for those in power to disseminate biased information. To investigate this debate empirically, we evaluate Twitter as a public sphere using four metrics: equality, diversity, reciprocity and quality. Using these measurements, we analyze the communication patterns between individuals of different hierarchical levels and ideologies. We do this within the context of three diverse conflicts: Israel-Palestine, US Democrats-Republicans, and FC Barcelona-Real Madrid. In all cases, we collect data around a central pair of Twitter accounts representing the two main parties. Our results show in a quantitative manner that Twitter is not an ideal public sphere for democratic conversations and that hierarchical effects are part of the reason why it is not.
2,567,775
new mechanism of combination crossover operators in genetic algorithm for solving the traveling salesman problem
The traveling salesman problem (TSP) is a well-known problem in the computing field, and much research has aimed at improving genetic algorithms for solving it. In this paper, we propose two new crossover operators and a new mechanism for combining crossover operators in a genetic algorithm for solving the TSP. We experimented on TSP instances from TSPLIB and compared the results of the proposed algorithm with a genetic algorithm (GA) using MSCX. Experimental results show that our proposed algorithm outperforms the GA using MSCX on the minimum and mean cost values.
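The two proposed operators and the MSCX baseline are not spelled out here; as a generic reference point, the classic order crossover (OX) for permutation-encoded tours looks like this (a standard textbook operator, not the ones proposed above):

```python
# Classic order crossover (OX) for TSP tours; a generic baseline,
# not the crossover operators proposed in the paper.
import random

def order_crossover(p1, p2):
    n = len(p1)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j + 1] = p1[i:j + 1]              # copy a slice from parent 1
    fill = [c for c in p2 if c not in child]  # remaining cities, in p2's order
    for k in range(n):
        if child[k] is None:
            child[k] = fill.pop(0)
    return child

random.seed(0)
print(order_crossover([0, 1, 2, 3, 4, 5], [3, 5, 0, 2, 4, 1]))
```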
2,649,379
an all around near optimal solution for the classic bin packing problem
In this paper we present the first algorithm with optimal average-case and close-to-best known worst-case performance for the classic on-line problem of bin packing. It has long been observed that known bin packing algorithms with optimal average-case performance were not optimal in the worst-case sense. In particular, First Fit and Best Fit had an optimal average-case ratio of 1 but a worst-case competitive ratio of 1.7. The wasted space of First Fit and Best Fit for a uniform random sequence of length $n$ is expected to be $\Theta(n^{2/3})$ and $\Theta(\sqrt{n} \log ^{3/4} n)$, respectively. The competitive ratio can be improved to 1.691 using the Harmonic algorithm; further variations of this algorithm can push the competitive ratio down to 1.588. However, Harmonic and its variations have poor performance on average; in particular, Harmonic has an average-case ratio of around 1.27. In this paper, we first introduce a simple algorithm which we term Harmonic Match. This algorithm performs as well as Best Fit on average, i.e., it has an average-case ratio of 1 and expected wasted space of $\Theta(\sqrt{n} \log ^{3/4} n)$. Moreover, the competitive ratio of the algorithm is as good as Harmonic's, i.e., it converges to $1.691$, which is an improvement over the 1.7 of Best Fit and First Fit. We also introduce a different algorithm, termed Refined Harmonic Match, which achieves an improved competitive ratio of $1.636$ while maintaining the good average-case performance of Harmonic Match and Best Fit. Finally, our extensive experimental evaluation of the studied bin packing algorithms shows that our proposed algorithms have comparable average-case performance with Best Fit and First Fit, and this holds also for sequences that follow distributions other than the uniform distribution.
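For readers who want the baselines pinned down, here is a compact sketch of First Fit and Best Fit for online bin packing (standard algorithms, with bin capacity normalized to 1.0):

```python
# Standard First Fit and Best Fit for online bin packing (capacity 1.0).
def first_fit(items):
    bins = []                       # remaining free space per open bin
    for x in items:
        for i, free in enumerate(bins):
            if x <= free + 1e-12:   # first bin with enough room
                bins[i] = free - x
                break
        else:
            bins.append(1.0 - x)    # no bin fits: open a new one
    return len(bins)

def best_fit(items):
    bins = []
    for x in items:
        fits = [i for i, free in enumerate(bins) if x <= free + 1e-12]
        if fits:
            i = min(fits, key=lambda i: bins[i])  # tightest-fitting bin
            bins[i] -= x
        else:
            bins.append(1.0 - x)
    return len(bins)

items = [0.4, 0.7, 0.2, 0.5, 0.3]
print(first_fit(items), best_fit(items))  # both pack these into 3 bins
```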
2,755,347
reaching approximate byzantine consensus in partially connected mobile networks
We consider the problem of approximate consensus in mobile networks containing Byzantine nodes. We assume that each correct node can communicate only with its neighbors and has no knowledge of the global topology. As all nodes have the ability to move, the topology is dynamic. The number of Byzantine nodes is bounded by f and known by all correct nodes. We first introduce an approximate Byzantine consensus protocol based on the linear iteration method. As nodes are allowed to collect information during several consecutive rounds, moving gives them the opportunity to gather more values. We propose a novel sufficient and necessary condition to guarantee the final convergence of the consensus protocol. The requirement expressed by our condition is not "universal": in each phase it affects only a single correct node. More precisely, at least one correct node among those that propose either the minimum or the maximum value present in the network has to receive enough messages (quantity constraint) with either higher or lower values (quality constraint). Of course, nodes' motion should not prevent this requirement from being fulfilled. Our conclusion shows that the proposed condition can be satisfied if the total number of nodes is greater than 3f+1.
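The paper's exact update rule is not given here; a common building block in this literature, shown only as a generic illustration, is for each correct node to discard the f largest and f smallest values it received and average the rest:

```python
# Generic f-trimmed averaging step used in many approximate Byzantine
# consensus protocols; an illustration, not this paper's exact update rule.
def trimmed_update(own_value, received, f):
    vals = sorted(received + [own_value])
    trimmed = vals[f:len(vals) - f]     # drop the f lowest and f highest
    return sum(trimmed) / len(trimmed)

# One round at a node with f = 1 and four neighbour reports (one Byzantine):
print(trimmed_update(0.5, [0.4, 0.6, 0.55, 99.0], f=1))  # -> 0.55
```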
2,907,876
formal security analysis of registration protocols for interactive systems a methodology and a case of study
In this work we present and formally analyze CHAT-SRP (CHAos based Tickets-Secure Registration Protocol), a protocol to provide interactive and collaborative platforms with a cryptographically robust solution to classical security issues. Namely, we focus on the secrecy and authenticity properties while keeping high usability. Indeed, most interactive platforms currently base their security properties almost exclusively on the correct implementation and configuration of the systems. In this sense, users are forced to blindly trust the system administrators and developers. Moreover, as far as we know, there is a lack of formal methodologies for the verification of security properties of interactive applications. We propose here a methodology to fill this gap, i.e., to analyse both the security of the proposed protocol and the pertinence of the underlying premises. To this end, we propose the definition and formal evaluation of a protocol for the distribution of digital identities. Once distributed, these identities can be used to verify the integrity and source of information. We base our security analysis on tools for automatic verification of security protocols widely accepted by the scientific community, and on the principles they are based upon. In addition, perfect cryptographic primitives are assumed in order to focus the analysis on the exchange of protocol messages. The main property of our protocol is the incorporation of tickets, created using digests of chaos based nonces (numbers used only once) and users' personal data. Combined with a multichannel authentication scheme with some previous knowledge, these tickets provide security during the whole protocol by linking each user univocally with a single request. This way, we prevent impersonation and man-in-the-middle attacks, which are the main security problems in registration protocols for interactive platforms. As a proof of concept, we also
3,076,088
numerically representing a stochastic process algebra
The syntactic nature and compositionality of stochastic process algebras make models easy for human beings to understand, but not convenient for machines, or indeed people, to directly carry out mathematical analysis and stochastic simulation on. This paper presents a numerical representation schema for the stochastic process algebra PEPA, which can provide a platform to directly and conveniently employ a variety of computational approaches to both qualitatively and quantitatively analyse the models. Moreover, the approaches developed on the basis of the schema are demonstrated and discussed. In particular, algorithms for automatically deriving the schema from a general PEPA model, and for simulating the model based on the derived schema to derive performance measures, are presented.
3,277,280
big spectrum data the new resource for cognitive wireless networking
The era of Big Data is here now, bringing both unprecedented opportunities and critical challenges. In this article, from the perspective of cognitive wireless networking, we start with a definition of Big Spectrum Data by analyzing its characteristics in terms of six V's, i.e., volume, variety, velocity, veracity, viability, and value. We then present a high-level tutorial on research frontiers in Big Spectrum Data analytics to guide the development of practical algorithms. We also highlight Big Spectrum Data as the new resource for cognitive wireless networking by presenting emerging use cases. We now live in an era of data deluge. We live in a world that has more than a billion transistors per human; a world with more than 4 billion mobile phone subscribers and about 30 billion radio-frequency identification tags produced globally within the last two years [1]. All these sensors generate data. Sadly, much of this data is simply thrown away, because of the lack of efficient mechanisms to derive value from it. This fact motivates the worldwide increasing interest in Big Data. Although there are still debates on whether Big Data is a big opportunity or a big bubble, Big Data is here now and is going to transform how we gain insights and how we make decisions in the future [2]. To leverage the Big Data opportunities and challenges, many governments, organizations and academic institutions have come forward to take initiatives. For example, the US
3,333,725
robust multirobot coordination using priority encoded homotopic constraints
We study the problem of coordinating multiple robots along fixed geometric paths. Our contribution is threefold. First, we formalize the intuitive concept of priorities as a binary relation induced by a feasible coordination solution, without excluding the case of robots following each other on the same geometric path. Then we prove that two paths in the coordination space are continuously deformable into each other if and only if they induce the \emph{same priority graph}, that is, the priority graph uniquely encodes homotopy classes of coordination solutions. Finally, we give a simple control law allowing robots to safely navigate within homotopy classes \emph{under kinodynamic constraints}, even in the presence of unexpected events such as a sudden robot deceleration without notice. It appears that the freedom within homotopy classes allows large deviations from any pre-planned trajectory without ever colliding or having to re-plan the assigned priorities.
3,365,555
ultrametric component analysis with application to analysis of text and of emotion
We review the theory and practice of determining what parts of a data set are ultrametric. It is assumed that the data set, to begin with, is endowed with a metric, and we include discussion of how this can be brought about if only a dissimilarity holds. The basis for part of the metric-endowed data set being ultrametric is to consider triplets of the observables (vectors). We develop a novel consensus of hierarchical clusterings. We do this in order to have a framework (including visualization and supporting interpretation) for the parts of the data that are determined to be ultrametric. Furthermore, a major objective is to determine locally ultrametric relationships as opposed to non-local ultrametric relationships. As part of this work, we also study a particular property of our ultrametricity coefficient, namely that it is a function of the difference between the base angles of the isosceles triangle. This work is completed by a review of related work on consensus hierarchies, and by a major new application, namely quantifying and interpreting the emotional content of narrative.
3,765,441
efficient mixed norm regularization algorithms and safe screening methods
Sparse learning has recently received increasing attention in many areas including machine learning, statistics, and applied mathematics. The mixed-norm regularization based on the l1/lq norm with q > 1 is attractive in many applications of regression and classification in that it facilitates group sparsity in the model. The resulting optimization problem is, however, challenging to solve due to the inherent structure of the l1/lq-regularization. Existing work deals with special cases including q = 2 and q = ∞, and cannot be easily extended to the general case. In this paper, we propose an efficient algorithm based on the accelerated gradient method for solving the l1/lq-regularized problem, which is applicable for all values of q larger than 1, thus significantly extending existing work. One key building block of the proposed algorithm is the l1/lq-regularized Euclidean projection (EP1q). Our theoretical analysis reveals the key properties of EP1q and illustrates why EP1q for general q is significantly more challenging to solve than the special cases. Based on our theoretical analysis, we develop an efficient algorithm for EP1q by solving two zero-finding problems. To further improve the efficiency of solving large dimensional l1/lq-regularized problems, we propose an efficient and effective "screening" method which is able to quickly identify the inactive groups, i.e., groups that have 0 components in the solution. This may lead to a substantial reduction in the number of groups to be entered into the optimization. An appealing feature of our screening method is that the data set needs to be scanned only once to run the screening. Compared to that of solving the l1/lq-regularized problems, the computational cost of our screening test is negligible. The key to the proposed screening method is an accurate sensitivity analysis of the dual optimal solution when the regularization parameter varies. Experimental results demonstrate the efficiency of the proposed algorithm.
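For concreteness, the regularized problem and the EP1q subproblem discussed above have the following generic forms (a standard formulation consistent with the abstract; the loss term and the group notation w_g are generic assumptions):

```latex
% Generic l1/lq-regularized learning problem over G non-overlapping groups:
\min_{w}\; \mathrm{loss}(w) \;+\; \lambda \sum_{g=1}^{G} \|w_g\|_q , \qquad q > 1,
% and the associated l1/lq-regularized Euclidean projection (EP1q):
\min_{w}\; \tfrac{1}{2}\,\|w - v\|_2^2 \;+\; \lambda \sum_{g=1}^{G} \|w_g\|_q .
```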
4,177,294
qualitative propagation and scenario based explanation of probabilistic reasoning
Comprehensible explanations of probabilistic reasoning are a prerequisite for wider acceptance of Bayesian methods in expert systems and decision support systems. A study of human reasoning under uncertainty suggests two different strategies for explaining probabilistic reasoning. The first, qualitative belief propagation, traces the qualitative effect of evidence through a belief network from one variable to the next. This propagation algorithm is an alternative to the graph reduction algorithms of Wellman (1988) for inference in qualitative probabilistic networks. It is based on a qualitative analysis of intercausal reasoning, which is a generalization of Pearl's "explaining away", and an alternative to Wellman's definition of qualitative synergy. The second, scenario-based reasoning, involves the generation of alternative causal "stories" accounting for the evidence. Comparing a few of the most probable scenarios provides an approximate way to explain the results of probabilistic reasoning. Both schemes employ causal as well as probabilistic knowledge. Probabilities may be presented as phrases and/or numbers. Users can control the style, abstraction and completeness of explanations.
4,318,847
on the μ parameters of the petersen graph
For an undirected, simple, finite, connected graph $G$, we denote by $V(G)$ and $E(G)$ the sets of its vertices and edges, respectively. A function $\varphi:E(G)\rightarrow \{1,...,t\}$ is called a proper edge $t$-coloring of a graph $G$, if adjacent edges are colored differently and each of $t$ colors is used. The least value of $t$ for which there exists a proper edge $t$-coloring of a graph $G$ is denoted by $\chi'(G)$. For any graph $G$, and for any integer $t$ satisfying the inequality $\chi'(G)\leq t\leq |E(G)|$, we denote by $\alpha(G,t)$ the set of all proper edge $t$-colorings of $G$. Let us also define a set $\alpha(G)$ of all proper edge colorings of a graph $G$: $$ \alpha(G)\equiv\bigcup_{t=\chi'(G)}^{|E(G)|}\alpha(G,t). $$

An arbitrary nonempty finite subset of consecutive integers is called an interval. If $\varphi\in\alpha(G)$ and $x\in V(G)$, then the set of colors of edges of $G$ which are incident with $x$ is denoted by $S_G(x,\varphi)$ and is called a spectrum of the vertex $x$ of the graph $G$ at the proper edge coloring $\varphi$. If $G$ is a graph and $\varphi\in\alpha(G)$, then define $f_G(\varphi)\equiv|\{x\in V(G)/S_G(x,\varphi)\ \textrm{is an interval}\}|$.

For a graph $G$ and any integer $t$, satisfying the inequality $\chi'(G)\leq t\leq |E(G)|$, we define: $$ \mu_1(G,t)\equiv\min_{\varphi\in\alpha(G,t)}f_G(\varphi),\qquad \mu_2(G,t)\equiv\max_{\varphi\in\alpha(G,t)}f_G(\varphi). $$

For any graph $G$, we set: $$ \mu_{11}(G)\equiv\min_{\chi'(G)\leq t\leq|E(G)|}\mu_1(G,t),\qquad \mu_{12}(G)\equiv\max_{\chi'(G)\leq t\leq|E(G)|}\mu_1(G,t), $$ $$ \mu_{21}(G)\equiv\min_{\chi'(G)\leq t\leq|E(G)|}\mu_2(G,t),\qquad \mu_{22}(G)\equiv\max_{\chi'(G)\leq t\leq|E(G)|}\mu_2(G,t). $$

For the Petersen graph, the exact values of the parameters $\mu_{11}$, $\mu_{12}$, $\mu_{21}$ and $\mu_{22}$ are found.
4,403,717
quantitative testing semantics for non interleaving
This paper presents a non-interleaving denotational semantics for the π-calculus. The basic idea is to define a notion of test where the outcome is not only whether a given process passes a given test, but also in how many different ways it can pass it. More abstractly, the set of possible outcomes for tests forms a semiring, and the set of process interpretations appears as a module over this semiring, in which basic syntactic constructs are affine operators. This notion of test leads to a trace semantics in which traces are partial orders, in the style of Mazurkiewicz traces, extended with readiness information. Our construction has standard may- and must-testing as special cases.
4,407,728
visual noise from natural scene statistics reveals human scene category representations
Our perceptions are guided both by the bottom-up information entering our eyes and by our top-down expectations of what we will see. Although bottom-up visual processing has been extensively studied, comparatively little is known about top-down signals. Here, we describe REVEAL (Representations Envisioned Via Evolutionary ALgorithm), a method for visualizing an observer's internal representation of a complex, real-world scene, allowing us, for the first time, to visualize the top-down information in an observer's mind. REVEAL rests on two innovations for solving this high-dimensional problem: visual noise that samples from natural image statistics, and a computer algorithm that collaborates with human observers to efficiently obtain a solution. In this work, we visualize observers' internal representations of a visual scene category (street) using an experiment in which the observer views the naturalistic visual noise and collaborates with the algorithm to externalize his internal representation. As no scene information was presented, observers had to use their internal knowledge of the target, matching it with the visual features in the noise. We matched reconstructed images with images of real-world street scenes to enhance visualization. Critically, we show that the visualized mental images can be used to predict rapid scene detection performance, as each observer had faster and more accurate responses when detecting the real-world images that were most similar to his reconstructed street templates. These results show that it is possible to visualize previously unobservable mental representations of real-world stimuli. More broadly, REVEAL provides a general method for objectively examining the content of previously private, subjective mental experiences.
4,412,340
is matching pursuit solving convex problems
Sparse recovery ({\tt SR}) has emerged as a very powerful tool for signal processing, data mining and pattern recognition. To solve {\tt SR}, many efficient matching pursuit (\texttt{MP}) algorithms have been proposed. However, it is still not clear whether {\tt SR} can be formulated as a convex problem that is solvable using \texttt{MP} algorithms. To answer this, in this paper, a novel convex relaxation model is presented, which is solved by a general matching pursuit (\texttt{GMP}) algorithm under the convex programming framework. {\tt GMP} has several advantages over existing methods. First, it solves a convex problem and is guaranteed to converge to an optimum. In addition, with $\ell_1$-regularization, it can recover any $k$-sparse signal if the restricted isometry constant $\sigma_k\leq 0.307-\nu$, where $\nu$ can be arbitrarily close to 0. Finally, when dealing with a batch of signals, the computation burden can be much reduced using a batch-mode \texttt{GMP}. Comprehensive numerical experiments show that \texttt{GMP} achieves better performance than other methods in terms of sparse recovery ability and efficiency. We also apply \texttt{GMP} to face recognition tasks on two well-known face databases, namely, \emph{{Extended using}} and \emph{AR}. Experimental results demonstrate that {\tt GMP} can achieve better recognition performance than the considered state-of-the-art methods within acceptable time. {Particularly, the batch-mode {\tt GMP} can be up to 500 times faster than the considered $\ell_1$ methods.}
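As background on the matching-pursuit family discussed above, here is a compact sketch of classic orthogonal matching pursuit (OMP); this is the textbook greedy algorithm, not the paper's GMP:

```python
# Classic orthogonal matching pursuit (OMP); background illustration only,
# not the GMP algorithm proposed in the paper.
import numpy as np

def omp(A, y, k):
    """Greedily select k columns of A to approximate y."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))  # most correlated atom
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s          # refit on the support
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
x_true = np.zeros(40)
x_true[[3, 17]] = [1.0, -2.0]
print(np.nonzero(omp(A, A @ x_true, k=2))[0])  # recovers indices {3, 17}
```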
4,481,033
on mobile bluetooth tags
This paper presents a new approach for hyper-local data sharing and delivery based on discoverable Bluetooth nodes. Our approach allows customers to associate user-defined data with network nodes and to use a special mobile application (a context-aware browser) to present this information to mobile users in proximity. Alternatively, mobile services can request and share local data in M2M applications relying on network proximity. Bluetooth nodes in cars are among the best candidates for the role of the bearing nodes.
4,728,653
manyclaw slicing and dicing riemann solvers for next generation highly parallel architectures
Next generation computer architectures will include an order of magnitude more intra-node parallelism; however, many application programmers have a difficult time keeping their codes current with the state-of-the-art machines. In this context, we analyze hyperbolic PDE solvers, which are used in the solution of many important applications in science and engineering. We present ManyClaw, a project intended to explore the exploitation of intra-node parallelism in hyperbolic PDE solvers via the Clawpack software package for solving hyperbolic PDEs. Our goal is to separate the low-level parallelism from the physical equations, thus providing users the capability to leverage intra-node parallelism without explicitly writing code to take advantage of newer architectures.
4,913,547
fast generation of dynamic complex networks with underlying hyperbolic geometry
Complex networks have become increasingly popular for modeling real-world phenomena, ranging from web hyperlinks to interactions between people. Realistic generative network models are important in this context as they avoid privacy concerns of real data and simplify complex network research regarding data sharing, reproducibility, and scalability studies. We study a geometric model creating unit-disk graphs in hyperbolic space. Previous work provided empirical and theoretical evidence that this model creates networks with a hierarchical structure and other realistic features. However, the investigated networks were small, possibly due to a quadratic running time of a straightforward implementation. We provide a faster generator for a representative subset of these networks. Our experiments indicate a time complexity of $O((n^{3/2}+m) \log n)$ for our implementation and thus confirm our theoretical considerations. To our knowledge our implementation is the first one with subquadratic running time. The acceleration stems primarily from the reduction of pairwise distance computations through a polar quadtree newly adapted to hyperbolic space. We also extend the generator to an alternative dynamic model which preserves graph properties in expectation. Finally, we generate and evaluate the largest networks of this model published so far. Our implementation computes networks with billions of edges in less than an hour. A comprehensive network analysis shows that important features of complex networks, such as a low diameter, power-law degree distribution and a high clustering coefficient, are retained over different graph sizes and densities.
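To make the underlying model concrete, here is a naive quadratic-time sketch of the generation task (parameters follow the usual random hyperbolic graph setup; this is the straightforward O(n^2) generator, not the paper's subquadratic quadtree-based one):

```python
# Naive O(n^2) random hyperbolic graph sketch: sample polar coordinates,
# connect pairs within hyperbolic distance R. Illustrates the model only;
# the paper's contribution is a subquadratic quadtree-based generator.
import numpy as np

def hyperbolic_graph(n, R, alpha, seed=0):
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, 2 * np.pi, n)
    u = rng.uniform(0, 1, n)
    # Radii with density ~ sinh(alpha * r), via inverse-CDF sampling:
    r = np.arccosh(1 + u * (np.cosh(alpha * R) - 1)) / alpha
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            dt = np.pi - abs(np.pi - abs(theta[i] - theta[j]))
            cosh_d = (np.cosh(r[i]) * np.cosh(r[j])
                      - np.sinh(r[i]) * np.sinh(r[j]) * np.cos(dt))
            if np.arccosh(max(cosh_d, 1.0)) <= R:  # hyperbolic law of cosines
                edges.append((i, j))
    return edges

print(len(hyperbolic_graph(n=200, R=5.0, alpha=0.75)))
```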
5,159,100
agent oriented approach for detecting and managing risks in emergency situations
This paper presents an agent-oriented approach to building a decision support system aimed at helping emergency managers to detect and manage risks. We stress the flexibility and adaptivity characteristics that are crucial to building a robust and efficient system able to resolve complex problems. The system should be as independent as possible from the subject of study. Thereby, an original approach based on a mechanism of perception, representation, characterisation and assessment is proposed. The work described here is applied to the RoboCupRescue application. Experiments and results are provided.
5,319,776
constructing strategy of online learning in higher education transaction cost economy
Online learning tools and their management, also known as Learning Management Systems (LMS), have been adopted by higher education institutions as they allow convenience and flexibility in the learning process between students and instructors or tutors at minimal cost. The adoption of online learning tools in universities allows users (students and instructors) to interact, share and discuss anytime, anywhere. Many students nowadays rely on online resources accessed through their mobile devices, substituting traditional learning interactions. Universities need a strategy to sustain intensive interactions and to spread word of mouth about good services through online learning tools, by focusing on niche markets and creating close relationships with their stakeholders. The study presented in this paper analyses how universities design best practices in adopting an LMS and evaluates its current state for future improvement. In fact, with proper LMS strategies, universities have opportunities to sustain their business by offering interesting packages and improving their services through intensive interactions with their users. In this study, we deploy Transaction Cost Economics (TCE) to understand the changing business environment and to construct a model for higher-education institutions to regulate their online learning strategies in a fast-changing and threatening business environment.
5,896,900
effective spectral unmixing via robust representation and learning based sparsity
Hyperspectral unmixing (HU) plays a fundamental role in a wide range of hyperspectral applications. It remains challenging due to the common presence of outlier channels and the large solution space. To address these two issues, we propose a novel model emphasizing both robust representation and learning-based sparsity. Specifically, we apply the $\ell_{2,1}$-norm to measure the representation error, preventing outlier channels from dominating our objective. In this way, the side effects of outlier channels are greatly relieved. Besides, we observe that the mixed level of each pixel varies over image grids. Based on this observation, we exploit a learning-based sparsity method to simultaneously learn the HU results and a sparse guidance map. Via this guidance map, the sparsity constraint in the $\ell_p\ (0 < p \leq 1)$-norm is adaptively imposed according to the learnt mixed level of each pixel. Compared with state-of-the-art methods, our model is better suited to the real situation and is thus expected to achieve better HU results. The resulting objective is highly non-convex and non-smooth, and so it is hard to optimize. As a profound theoretical contribution, we propose an efficient algorithm to solve it. Meanwhile, the convergence proof and the computational complexity analysis are systematically provided. Extensive evaluations verify that our method is highly promising for the HU task: it achieves very accurate guidance maps and much better HU results compared with state-of-the-art methods.
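Abstractly, the described model pairs an $\ell_{2,1}$ data-fit with a pixel-wise learned sparsity exponent; one plausible generic form consistent with the abstract (the endmember matrix $E$, abundance matrix $A$ and per-pixel exponent $p_j$ below are conventional HU notation, assumed rather than quoted from the paper) is:

```latex
% Generic robust-unmixing objective consistent with the abstract:
% Y = hyperspectral data, E = endmembers, A = abundances, a_j = abundance
% vector of pixel j, p_j = exponent set by the learnt guidance map.
\min_{A \geq 0}\; \|Y - EA\|_{2,1} \;+\; \lambda \sum_{j} \|a_j\|_{p_j}^{p_j},
\qquad 0 < p_j \leq 1 .
```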
6,657,555
optimistic rates for learning with a smooth loss
We establish an excess risk bound of $O(H R_n^2 + R_n \sqrt{H L^*})$ for empirical risk minimization with an $H$-smooth loss function and a hypothesis class with Rademacher complexity $R_n$, where $L^*$ is the best risk achievable by the hypothesis class. For typical hypothesis classes where $R_n = \sqrt{R/n}$, this translates to a learning rate of $O(RH/n)$ in the separable ($L^* = 0$) case and $O(RH/n + \sqrt{L^* RH/n})$ more generally. We also provide similar guarantees for online and stochastic convex optimization with a smooth non-negative objective.
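The claimed specialization is a one-line substitution; writing it out using only the quantities defined above:

```latex
% Substituting R_n = \sqrt{R/n} into O(H R_n^2 + R_n \sqrt{H L^*}):
O\!\left(H\,\frac{R}{n} \;+\; \sqrt{\frac{R}{n}}\,\sqrt{H L^*}\right)
  \;=\; O\!\left(\frac{RH}{n} \;+\; \sqrt{\frac{L^* R H}{n}}\right),
% which reduces to O(RH/n) in the separable case L^* = 0.
```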
6,908,809
adadelta an adaptive learning rate method
We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.
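The per-dimension update is simple enough to sketch in a few lines (the decay rate rho, epsilon and the toy objective below are typical illustrative choices, not prescriptions from the paper):

```python
# Sketch of the ADADELTA per-dimension update rule. rho and eps are common
# illustrative values; the objective here is a toy quadratic.
import numpy as np

def adadelta(grad, x0, steps=2000, rho=0.95, eps=1e-6):
    x = x0.astype(float)
    Eg2 = np.zeros_like(x)    # running average of squared gradients
    Edx2 = np.zeros_like(x)   # running average of squared updates
    for _ in range(steps):
        g = grad(x)
        Eg2 = rho * Eg2 + (1 - rho) * g**2
        dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * g  # unit-corrected step
        Edx2 = rho * Edx2 + (1 - rho) * dx**2
        x += dx
    return x

# Minimize f(x) = ||x - 3||^2, whose gradient is 2(x - 3); x approaches [3, 3].
print(adadelta(lambda x: 2 * (x - 3), np.zeros(2)))
```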
7,087,849
on channels with asynchronous side information
Several channels with asynchronous side information are introduced. We first consider state-dependent channels with asynchronous side information at the transmitter. It is assumed that the state information sequence is a possibly delayed version of the state sequence, and that the encoder and the decoder are aware of the fact that the state information might be delayed. It is additionally assumed that an upper bound on the delay is known to both encoder and decoder, but other than that, they are ignorant of the actual delay. We consider both the causal and the noncausal cases, and present achievable rates for these channels along with the corresponding coding schemes. We find the capacity of the asynchronous Gel'fand-Pinsker channel with feedback. We further consider a memoryless state-dependent channel with asynchronous side information at both the transmitter and receiver, and establish a single-letter expression for its capacity. Finally, we introduce the asynchronous cognitive multiple-access channel with an uninformed encoder and an informed one, where the informed encoder knows in advance the message of the uninformed encoder. We present a multi-letter expression for the capacity region of this channel, and state single-letter outer and inner bounds on its capacity region. We also study the binary as well as the Gaussian asynchronous multiple-access channels with an uninformed encoder.
7,271,355
sockpuppet detection in wikipedia a corpus of real world deceptive writing for linking identities
This paper describes the corpus of sockpuppet cases we gathered from Wikipedia. A sockpuppet is an online user account created with a fake identity for the purpose of covering abusive behavior and/or subverting the editing regulation process. We used a semi-automated method for crawling and curating a dataset of real sockpuppet investigation cases. To the best of our knowledge, this is the first corpus available on real-world deceptive writing. We describe the process for crawling the data and some preliminary results that can be used as baseline for benchmarking research. The dataset will be released under a Creative Commons license from our project website: this http URL
7,399,832
autonomous reconfiguration procedures for ejb based enterprise applications
Enterprise Applications (EAs) are complex software systems for supporting the business of companies. Evolution of an EA should not affect its availability; if, for example, the system is temporarily shut down, business operations may be affected. One possibility to address this problem is the seamless reconfiguration of the affected EA, i.e., applying the relevant changes while the system is running. Our approach to seamless reconfiguration focuses on component-oriented EAs. It is based on the Autonomic Computing infrastructure mKernel, which enables the management of EAs that are realized using Enterprise Java Beans (EJB) 3.0 technology. In contrast to other approaches that provide no or only limited reconfiguration facilities, our approach consists of a comprehensive set of steps that perform fine-grained reconfiguration tasks. These steps can be combined into generic and autonomous reconfiguration procedures for EJB-based EAs. The procedures are not limited to a certain reconfiguration strategy. Instead, our approach provides several reusable strategies and is extensible w.r.t. the opportunity to integrate new ones.
7,610,491
a cf based randomness measure for sequences
This note examines the question of randomness in a sequence based on the continued fraction (CF) representation of its corresponding representation as a number, or as a D sequence. We propose a randomness measure that is directly equal to the number of components of the CF representation. This provides a means of quantifying the randomness of the popular PN sequences as well. A comparison is made between the representation as a fraction and as a continued fraction.
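Since the proposed measure is just the number of terms in the continued fraction expansion, it can be computed for a rational p/q with the Euclidean algorithm (a standard construction; the example fraction is arbitrary):

```python
# Continued fraction expansion of p/q via the Euclidean algorithm; the
# proposed randomness measure is the number of terms it produces.
def cf_expansion(p, q):
    terms = []
    while q:
        terms.append(p // q)
        p, q = q, p % q
    return terms

terms = cf_expansion(649, 200)  # 649/200 = [3; 4, 12, 4]
print(terms, "randomness measure =", len(terms))
```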
7,728,809
optimal auction mechanism for spectrum allocation in cognitive radio networks under uncertain spectrum availability
In this paper, we consider the problem of dynamic spectrum allocation in cognitive radio (CR) networks and propose a new sealed-bid auction framework to address the spectrum allocation problem when the spectrum is not available with certainty. In our model, we assume that the moderator plays the dual role of being a fusion center (FC) for spectrum sensing and an auctioneer for spectrum allocation where it attempts to maximize its utility. We also consider the cost of collisions with the primary user (PU) and assign this cost to the FC, making it completely responsible for its allocation decision. With the help of CRs participating in the network, the FC makes a global inference on the availability of the spectrum followed by spectrum allocation. We investigate the optimal auction-based framework for spectrum allocation and also investigate the conditions under which such an auction is feasible. Note that the optimal auction forces the moderator to compensate the CRs for the sensing cost that they incur. Some numerical examples are presented for illustration.
8,696,858
extended core and choosability of a graph
A graph G is (a,b)-choosable if, for any color list of size a associated with each vertex, one can choose a subset of b colors such that adjacent vertices are colored with disjoint color sets. This paper shows an equivalence between the (a,b)-choosability of a graph and the (a,b)-choosability of one of its subgraphs called the extended core. As an application, this result allows us to prove the (5,2)-choosability and (7,3)-colorability of triangle-free induced subgraphs of the triangular lattice.
8,796,604
conditional model checking
Software model checking, as an undecidable problem, has three possible outcomes: (1) the program satisfies the specification, (2) the program does not satisfy the specification, and (3) the model checker fails. The third outcome usually manifests itself in a space-out, time-out, or one component of the verification tool giving up; in all of these failing cases, significant computation is performed by the verification tool before the failure, but no result is reported. We propose to reformulate the model-checking problem as follows, in order to have the verification tool report a summary of the performed work even in case of failure: given a program and a specification, the model checker returns a condition P ---usually a state predicate--- such that the program satisfies the specification under the condition P ---that is, as long as the program does not leave states in which P is satisfied. We are of course interested in model checkers that return conditions P that are as weak as possible. Instead of outcome (1), the model checker will return P = true; instead of (2), the condition P will return the part of the state space that satisfies the specification; and in case (3), the condition P can summarize the work that has been performed by the model checker before space-out, time-out, or giving up. If complete verification is necessary, then a different verification method or tool may be used to focus on the states that violate the condition. We give such conditions as input to a conditional model checker, such that the verification problem is restricted to the part of the state space that satisfies the condition. Our experiments show that repeated application of conditional model checkers, using different conditions, can significantly improve the verification results, state-space coverage, and performance.
9,255,041
service provisioning and profit maximization in network assisted adaptive http streaming
Adaptive HTTP streaming with centralized consideration of multiple streams has gained increasing interest. It poses the special challenge that the interests of both the content provider and the network operator need to be deliberately balanced. More importantly, the adaptation strategy is required to be flexible enough to be ported to various systems that work under different network environments, QoE levels, and economic objectives. To address these challenges, we propose a Markov Decision Process (MDP) based network-assisted adaptation framework, wherein the cost of buffering, significant playback variation, bandwidth management and the income of playback are jointly investigated. We then demonstrate its promising service provisioning and maximal profit for a mobile network in which fair or differentiated service is required.
9,420,056
statistical patterns in written language
Quantitative linguistics has been allowed, in the last few decades, within the admittedly blurry boundaries of the field of complex systems. A growing host of applied mathematicians and statistical physicists devote their efforts to disclose regularities, correlations, patterns, and structural properties of language streams, using techniques borrowed from statistics and information theory. Overall, results can still be categorized as modest, but the prospects are promising: medium- and long-range features in the organization of human language -which are beyond the scope of traditional linguistics- have already emerged from this kind of analysis and continue to be reported, contributing a new perspective to our understanding of this most complex communication system. This short book is intended to review some of these recent contributions.
9,657,784
evasion attacks against machine learning at test time
In security-sensitive applications, the success of machine learning depends on a thorough vetting of the learned models' resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
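The gradient-based idea reduces to descending the classifier's discriminant function until the sample crosses the decision boundary; the minimal sketch below uses a linear discriminant, where the gradient is just the weight vector (a toy stand-in for the paper's setting, with no feature constraints):

```python
# Minimal gradient-descent evasion sketch against a linear discriminant
# g(x) = w.x + b; a toy stand-in, not the paper's full attack or constraints.
import numpy as np

def evade(x, w, b, step=0.1, max_iter=200):
    x = x.astype(float)
    for _ in range(max_iter):
        if w @ x + b < 0:                      # classified benign: done
            break
        x -= step * w / np.linalg.norm(w)      # descend the discriminant
    return x

w, b = np.array([1.0, 2.0]), -1.0
x_malicious = np.array([2.0, 1.5])             # g(x) = 4.0 > 0: detected
x_adv = evade(x_malicious, w, b)
print(x_adv, float(w @ x_adv + b))             # g(x) is now below 0
```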
9,822,165
source unfoldings of convex polyhedra via certain closed curves
We extend the notion of a source unfolding of a convex polyhedron P to be based on a closed polygonal curve Q in a particular class rather than based on a point. The class requires that Q "lives on a cone" to both sides; it includes simple, closed quasigeodesics. Cutting a particular subset of the cut locus of Q (in P) leads to a non-overlapping unfolding of the polyhedron. This gives a new general method to unfold the surface of any convex polyhedron to a simple, planar polygon.
10,023,423
science and ethnicity how ethnicities shape the evolution of computer science research community
Globalization and the World Wide Web have resulted in academia and science being an international and multicultural community forged by researchers and scientists of different ethnicities. How ethnicity shapes the evolution of membership, status and interactions of the scientific community, however, is not well understood, owing to the difficulty of ethnicity identification at large scale. We use name ethnicity classification as an indicator of ethnicity. Based on automatic name ethnicity classification of 1.7+ million authors gathered from the Web, the name ethnicity of computer science scholars is investigated by population size, publication contribution and collaboration strength. By tracing the evolution of name ethnicity from 1936 to 2010, we discover that ethnicity diversity has increased significantly over time and that different research communities in certain publication venues have different ethnicity compositions. We notice a clear rise in the number of Asian name ethnicities in papers: their fraction of publication contribution increases from approximately 10% to near 50% from 1970 to 2010. We also find that name ethnicity acts as a homophily factor on coauthor networks, shaping the formation of coauthorship as well as the evolution of research communities.
10,088,590
wireless sensor networks localization methods multidimensional scaling vs semidefinite programming approach
With the recent development of technology, wireless sensor networks are becoming an important part of many applications, such as health and medical applications, military applications, agriculture monitoring, home and office applications, and environmental monitoring. Knowing the location of a sensor is important, but GPS receivers and sophisticated sensors are too expensive and require processing power. Therefore, the wireless sensor network localization problem is a growing field of interest. The aim of this paper is to give a comparison of wireless sensor network localization methods; multidimensional scaling and semidefinite programming are chosen for this research. Multidimensional scaling is a widely discussed, simple mathematical technique that solves the wireless sensor network localization problem. In contrast, semidefinite programming is a relatively new field of optimization with growing use, although it is more complex. In this paper, using extensive simulations, a detailed overview of these two approaches is given, covering different network topologies, various network parameters and performance issues. The performance of both techniques is highly satisfactory and estimation errors are minimal.
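For reference, the classical MDS step at the heart of MDS-based localization recovers node coordinates from squared inter-node distances by double centering (a standard textbook construction; anchor alignment and the refinement stages of practical schemes are omitted):

```python
# Classical MDS: recover 2-D node coordinates (up to rotation/translation/
# reflection) from squared pairwise distances. Anchor alignment and the
# refinement stages of practical localization schemes are omitted.
import numpy as np

def classical_mds(D2, dim=2):
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    B = -0.5 * J @ D2 @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]    # top eigenpairs
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(6, 2))                  # true sensor positions
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared distances
print(np.round(classical_mds(D2), 2))  # congruent to X up to a rigid motion
```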
10,346,559
comment on robustness and regularization of support vector machines by h xu et al journal of machine learning research vol 10 pp 1485 1510 2009 arxiv 0803 3490
This paper comments on published work dealing with robustness and regularization of support vector machines (Journal of Machine Learning Research, vol. 10, pp. 1485-1510, 2009) [arXiv:0803.3490] by H. Xu et al. The authors proposed a theorem to show that it is possible to relate robustness in the feature space and robustness in the sample space directly. In this paper, we propose a counterexample that rejects their theorem.
10,542,216
cryptanalyzing an image encryption scheme based on logistic map
Recently, an image encryption scheme based on the logistic map was proposed. It has been reported by a research group that its equivalent secret key can be reconstructed with only one pair of known plaintext and the corresponding ciphertext. Utilizing the stable distribution of the chaotic states generated by iterating the logistic map, this paper further demonstrates that much more information about the secret key can be derived under the same condition.

Keywords: cryptanalysis · chosen-plaintext attack · image encryption · logistic map

1 Introduction

With the rapid development of information transmission technology and the popularization of multimedia capture devices, multimedia data are transmitted over all kinds of wired/wireless networks more and more frequently. Consequently, the security of multimedia data becomes a serious concern of many people. However, traditional text encryption schemes cannot protect multimedia data efficiently, mainly due to the big dif-
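For context, the logistic map that drives the attacked scheme is the one-line iteration below (the parameter value 3.99 is just a typical chaotic choice, not the scheme's key):

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n), the chaotic iteration
# underlying the attacked scheme. r = 3.99 is a typical chaotic parameter.
def logistic_orbit(x0, r=3.99, n=8):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

print([round(x, 4) for x in logistic_orbit(0.3141)])
```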
11,415,338
genetic algorithm to make persistent security and quality of image in steganography from rs analysis
Retention of secrecy is one of the significant features of communication activity. Steganography is a popular method for achieving secret communication between sender and receiver by hiding a message in any form of cover media, such as audio, video, text, or images. Least-significant-bit encoding is the simplest encoding method used by many steganography programs to hide a secret message in 24-bit and 8-bit colour images and grayscale images. Steganalysis is a method of detecting a secret message hidden in cover media using steganography. RS steganalysis is one of the most reliable steganalysis techniques; it performs statistical analysis of the pixels to successfully detect the hidden message in an image. However, existing steganography methods protect the information against RS steganalysis only in grayscale images. This paper presents a steganography method using a genetic algorithm to protect against the RS attack in colour images. The stego image is divided into a number of blocks; subsequently, applying natural evolution to the stego image using a genetic algorithm makes it possible to achieve optimized security and image quality.
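As background, the least-significant-bit embedding the abstract refers to can be sketched in a few lines; this is the generic textbook operation, not the paper's GA-optimized variant, and the array names are illustrative. The paper's genetic algorithm then perturbs blocks of the stego image so that RS statistics no longer reveal the embedding; that optimization loop is not shown here.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Embed a bit sequence into the least significant bits of a
    flattened uint8 image."""
    stego = cover.flatten().copy()
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & 0xFE) | b   # clear the LSB, set the message bit
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the message back from the first n_bits pixels' LSBs."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]

cover = np.random.randint(0, 256, (8, 8), dtype=np.uint8)
msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert lsb_extract(lsb_embed(cover, msg), len(msg)) == msg
```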
11,799,683
energy aware lease scheduling in virtualized data centers
Energy efficiency has become an important measure of scheduling algorithms in virtualized data centers. One of the challenges for energy-efficient scheduling algorithms, however, is the trade-off between minimizing energy consumption and satisfying quality of service (e.g. performance, resource availability on time for reservation requests). We consider resource needs in the context of the virtualized data centers of a private cloud system that provides resource leases in terms of virtual machines (VMs) for user applications. In this paper, we propose heuristics for scheduling VMs that address the above challenge. In our performance evaluation, simulation results show that our proposed algorithms achieve a significant reduction in total energy consumption compared with an existing First-Come-First-Serve (FCFS) scheduling algorithm while fulfilling the same performance requirements. We also discuss the improvement in energy saving obtained when migration policies are additionally applied to the above algorithms.
11,937,046
metric learning across heterogeneous domains by respectively aligning both priors and posteriors
In this paper, we attempt to learn a single metric across two heterogeneous domains where the source domain is fully labeled and has many samples while the target domain has only a few labeled samples but abundant unlabeled samples. To the best of our knowledge, this task is seldom touched. The proposed learning model has a simple underlying motivation: all the samples in both the source and the target domains are mapped into a common space, where both their priors P(sample) and their posteriors P(label|sample) are forced to be respectively aligned as much as possible. We show that the two mappings, from both the source domain and the target domain to the common space, can be reparameterized into a single positive semi-definite (PSD) matrix. Then we develop an efficient Bregman Projection algorithm to optimize the PSD matrix, over which a LogDet function is used as a regularizer. Furthermore, we show that this model can be easily kernelized, and we verify its effectiveness on a cross-language retrieval task and a cross-domain object recognition task.
12,544,235
reliable deniable and hidable communication over parallel link networks
We consider the scenario wherein Alice wants to (potentially) communicate to the intended receiver Bob over a network consisting of multiple parallel links in the presence of a passive eavesdropper Willie, who observes an unknown subset of links. A primary goal of our communication protocol is to make the communication "deniable", {\it i.e.}, Willie should not be able to {\it reliably} estimate whether or not Alice is transmitting any {\it covert} information to Bob. Moreover, if Alice is indeed actively communicating, her covert messages should be information-theoretically "hidable" in the sense that Willie's observations should not {\it leak any information} about Alice's (potential) message to him -- our notion of hidability is slightly stronger than the notion of information-theoretic strong secrecy well-studied in the literature, and may be of independent interest. It can be shown that deniability does not imply either hidability or (weak or strong) information-theoretic secrecy; nor does any form of information-theoretic secrecy imply deniability. We present matching inner and outer bounds on the capacity for deniable and hidable communication over {\it parallel-link networks}.
12,599,010
high throughput genome wide association analysis for single and multiple phenotypes
The variance component tests used in genome-wide association studies of thousands of individuals become computationally exhaustive when multiple traits are analysed in the context of omics studies. We introduce two high-throughput algorithms -- CLAK-CHOL and CLAK-EIG -- for single and multiple phenotype genome-wide association studies (GWAS). The algorithms, generated with the help of an expert system, reduce the computational complexity to the point that thousands of traits can be analyzed for association with millions of polymorphisms in the course of days on a standard workstation. By taking advantage of problem-specific knowledge, CLAK-CHOL and CLAK-EIG significantly outperform the current state-of-the-art tools in both single and multiple trait analysis.
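The core computational trick in this family of GWAS algorithms is to factor the shared trait covariance once and reuse it across all SNPs. Below is a simplified single-covariate sketch of that idea, not the actual CLAK-CHOL/CLAK-EIG implementations; all sizes, the toy covariance, and the variable names are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_triangular

rng = np.random.default_rng(0)
n, n_snps = 200, 1000
Sigma = np.eye(n) + 0.1 * np.ones((n, n))   # toy trait covariance
L = cholesky(Sigma, lower=True)             # factor ONCE for all SNPs

y = rng.normal(size=n)                                    # phenotype
X = rng.integers(0, 3, size=(n, n_snps)).astype(float)    # genotypes

# Whiten the phenotype and every SNP column with the single factor;
# per-SNP generalized least squares then collapses to ordinary
# least squares on the whitened data.
y_w = solve_triangular(L, y, lower=True)
X_w = solve_triangular(L, X, lower=True)
betas = (X_w * y_w[:, None]).sum(axis=0) / (X_w ** 2).sum(axis=0)
print(betas[:5])
```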
12,653,049
on the design and analysis of multiple view descriptors
We propose an extension of popular descriptors based on gradient orientation histograms (HOG, computed in a single image) to multiple views. It hinges on interpreting HOG as a conditional density in the space of sampled images, where the effects of nuisance factors such as viewpoint and illumination are marginalized. However, such marginalization is performed with respect to a very coarse approximation of the underlying distribution. Our extension leverages the fact that multiple views of the same scene allow separating intrinsic from nuisance variability, and thus afford better marginalization of the latter. The result is a descriptor that has the same complexity as single-view HOG, and can be compared in the same manner, but exploits multiple views to better trade off insensitivity to nuisance variability against specificity to intrinsic variability. We also introduce a novel multi-view wide-baseline matching dataset, consisting of a mixture of real and synthetic objects with ground-truthed camera motion and dense three-dimensional geometry.
12,686,061
why early stage software startups fail a behavioral framework
Software startups are newly created companies with little operating history that are oriented towards producing cutting-edge products. As their time and resources are extremely scarce, and one failed project can put them out of business, startups need effective practices to face these unique challenges. However, only a few scientific studies attempt to characterize failure, especially during the early stage. With this study we aim to improve our understanding of why early-stage software startup companies fail. This state-of-practice investigation was performed using a literature review followed by a multiple-case study approach. The results show, by means of a behavioral framework, how inconsistency between managerial strategies and execution can lead to failure. Although the stated strategies reveal that understanding the problem/solution fit should come first, actual execution prioritizes developing the product and launching it on the market as quickly as possible to verify product/market fit, neglecting the necessary learning process.
12,798,267
an optical image encryption scheme based on depth conversion integral imaging and chaotic maps
Integral imaging-based cryptographic algorithms provide a new way to design secure and robust image encryption schemes. In this paper, we introduce a performance-enhanced image encryption scheme based on depth-conversion integral imaging and chaotic maps, aiming to meet the requirements of secure image transmission. First, the input image is decomposed into an elemental image array (EIA) by utilizing a pinhole array. Then, the obtained images are encrypted by combining the use of cellular automata and chaotic logistic maps. In the image reconstruction process, the conventional computational integral imaging reconstruction (CIIR) technique is a pixel-superposition technique, and the resolution of the reconstructed image is dramatically degraded due to the large magnification in the superposition process as the pickup distance increases. A smart mapping technique is introduced to mitigate this problem of CIIR. A novel property of the proposed scheme is its depth-conversion ability, which converts original elemental images recorded at a long distance to ones recorded near the pinhole array and consequently reduces the magnification factor. The results of numerical simulations demonstrate the effectiveness and security of the proposed scheme.
12,803,156
survey on combinatorial register allocation and instruction scheduling
Register allocation (mapping variables to processor registers or memory) and instruction scheduling (reordering instructions to increase instruction-level parallelism) are essential tasks for generating efficient assembly code in a compiler. In the past three decades, combinatorial optimization has emerged as an alternative to traditional, heuristic algorithms for these two tasks. Combinatorial optimization approaches can deliver optimal solutions according to a model, can precisely capture trade-offs between conflicting decisions, and are more flexible, at the expense of increased compilation time. This article provides an exhaustive literature review and a classification of combinatorial optimization approaches to register allocation and instruction scheduling, with a focus on the techniques that are most applied in this context: integer programming, constraint programming, partitioned Boolean quadratic programming, and enumeration. Researchers in compilers and combinatorial optimization can benefit from identifying developments, trends, and challenges in the area; compiler practitioners may discern opportunities and grasp the potential benefit of applying combinatorial optimization.
12,847,816
unweighted stochastic local search can be effective for random csp benchmarks
We present ULSA, a novel stochastic local search algorithm for random binary constraint satisfaction problems (CSP). ULSA is many times faster than the prior state of the art on a widely-studied suite of random CSP benchmarks. Unlike the best previous methods for these benchmarks, ULSA is a simple unweighted method that does not require dynamic adaptation of weights or penalties. ULSA obtains new record best solutions satisfying 99 of 100 variables in the challenging frb100-40 benchmark instance.
12,848,949
tracking the frequency moments at all times
The traditional requirement for a randomized streaming algorithm is just {\em one-shot}, i.e., the algorithm should be correct (within the stated $\varepsilon$-error bound) at the end of the stream. In this paper, we study the {\em tracking} problem, where the output should be correct at all times. The standard approach for solving the tracking problem is to run $O(\log m)$ independent instances of the one-shot algorithm and apply the union bound over all $m$ time instances. In this paper, we study whether this standard approach can be improved for the classical frequency moment problem. We show that for the $F_p$ problem for any $1 < p \le 2$, we actually only need $O(\log \log m + \log n)$ copies to achieve the tracking guarantee in the cash register model, where $n$ is the universe size. Meanwhile, we present a lower bound of $\Omega(\log m \log\log m)$ bits for all linear sketches achieving this guarantee. This shows that our upper bound is tight when $n=(\log m)^{O(1)}$. We also present an $\Omega(\log^2 m)$ lower bound in the turnstile model, showing that the standard approach using the union bound is essentially optimal there.
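For reference, the one-shot building block that such tracking results strengthen is the classical AMS sketch for F2. A simplified version is sketched below (a median of independent estimates rather than the usual median-of-means, and explicit sign tables rather than hash functions); parameters are illustrative.

```python
import random

class AMSSketch:
    """Basic AMS sketch for the second frequency moment F2 in the
    cash-register model: each counter is an inner product of the
    frequency vector with random +/-1 signs."""
    def __init__(self, n_counters, universe, seed=0):
        rng = random.Random(seed)
        self.signs = [[rng.choice((-1, 1)) for _ in range(universe)]
                      for _ in range(n_counters)]
        self.counters = [0] * n_counters

    def update(self, item, count=1):
        for j, signs in enumerate(self.signs):
            self.counters[j] += signs[item] * count

    def estimate(self):
        squares = sorted(c * c for c in self.counters)
        return squares[len(squares) // 2]    # median of squared counters

sk = AMSSketch(n_counters=9, universe=100)
for item in [3, 3, 7, 42, 3, 7]:
    sk.update(item)
print(sk.estimate())   # true F2 = 3^2 + 2^2 + 1^2 = 14
```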
13,053,856
granular association rule mining through parametric rough sets for cold start recommendation
Granular association rules reveal patterns hidden in the many-to-many relationships that are common in relational databases. In recommender systems, these rules are appropriate for cold-start recommendation, where a customer or a product has just entered the system. An example of such a rule might be "40% of men like at least 30% of the kinds of alcohol; 45% of customers are men and 6% of products are alcohol." Mining such rules is a challenging problem due to pattern explosion. In this paper, we propose a new type of parametric rough sets on two universes to study this problem. The model is deliberately defined such that the parameter corresponds to one threshold of the rules. With the lower approximation operator of the new parametric rough sets, a backward algorithm is designed for the rule mining problem. Experiments on two real-world data sets show that the new algorithm is significantly faster than the existing sandwich algorithm. This study indicates a new application area, namely recommender systems, for relational data mining, granular computing and rough sets.
13,404,512
probabilistic verifiers for asymmetric debates
We examine the power of silent constant-space probabilistic verifiers that watch asymmetric debates (where one side is unable to see some of the messages of the other) between two deterministic provers, and try to determine who is right. We prove that probabilistic verifiers outperform their deterministic counterparts as asymmetric debate checkers. It is shown that the membership problem for every language in NSPACE(s(n)) has a 2^{s(n)}-time debate where one prover is completely blind to the other one, for polynomially bounded space-constructible s(n). When partial information is allowed to be seen by the handicapped prover, the class of languages debatable in 2^{s(n)} time contains TIME(2^{s(n)}), so a probabilistic finite automaton can solve any decision problem in P with small error in polynomial time with the aid of such a debate. We also compare our systems with those with a single prover, and with competing-prover interactive proof systems.
13,864,758
lattice polytopes in coding theory
In this paper we discuss combinatorial questions about lattice polytopes motivated by recent results on minimum distance estimation for toric codes. We also include a new inductive bound for the minimum distance of generalized toric codes. As an application, we give new formulas for the minimum distance of generalized toric codes for special lattice point configurations.
13,891,724
efficient computation of mean truncated hitting times on very large graphs
Previous work has shown the effectiveness of random walk hitting times as a measure of dissimilarity in a variety of graph-based learning problems such as collaborative filtering, query suggestion or finding paraphrases. However, application of hitting times has been limited to small datasets because of computational restrictions. This paper develops a new approximation algorithm with which hitting times can be computed on very large, disk-resident graphs, making their application possible to problems which were previously out of reach. This will potentially benefit a range of large-scale problems.
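The quantity being approximated has a simple exact recursion on small graphs, which the sketch below implements directly. This is the standard truncated-hitting-time recursion, not the paper's disk-based approximation algorithm, and the example chain is illustrative.

```python
import numpy as np

def truncated_hitting_times(P, target, T):
    """Mean T-truncated hitting time to `target` for the random walk
    with transition matrix P, via the recursion
    h_t(i) = 1 + sum_k P[i, k] * h_{t-1}(k), h_t(target) = 0, h_0 = 0
    (walks that have not arrived after T steps count as length T)."""
    h = np.zeros(P.shape[0])
    for _ in range(T):
        h = 1.0 + P @ h
        h[target] = 0.0
    return h

# Simple three-state chain; state 2 is the target.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
print(truncated_hitting_times(P, target=2, T=10))
```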
13,980,395
tight approximation of image matching
In this work we consider the image matching problem for two grayscale n x n images, M1 and M2 (where pixel values range from 0 to 1). Our goal is to find an affine transformation T that maps pixels from M1 to pixels in M2 so that the difference over pixels p between M1(p) and M2(T(p)) is minimized. Our focus here is on sublinear algorithms that give an approximate result for this problem; that is, we wish to perform this task while querying as few pixels from both images as possible, and give a transformation that comes close to minimizing the difference. We give an algorithm for the image matching problem that returns a transformation T which minimizes the sum of differences
14,178,847
benchmarking motion planning algorithms an extensible infrastructure for analysis and visualization
Motion planning is a key problem in robotics that is concerned with finding a path that satisfies a goal specification subject to constraints. In its simplest form, the solution to this problem consists of finding a path connecting two states, and the only constraint is to avoid collisions. Even for this version of the motion planning problem, there is no efficient solution for the general case [1]. The addition of differential constraints on robot motion or more general goal specifications makes motion planning even harder. Given its complexity, most planning algorithms forego completeness and optimality for slightly weaker notions such as resolution completeness, probabilistic completeness [2], and asymptotic optimality.
14,230,115
lagrangian duality based algorithms in online scheduling
We consider Lagrangian duality based approaches to design and analyze algorithms for online energy-efficient scheduling. First, we present a primal-dual framework. Our approach makes use of Lagrangian weak duality and convexity to derive dual programs for problems that can be formulated as convex assignment problems. The duals have intuitive structures like those in linear programming. The constraints of the duals explicitly indicate the online decisions and naturally lead to competitive algorithms. Second, we use a dual-fitting approach, also based on weak duality, to study problems that are unlikely to admit convex relaxations. Through the analysis, we observe an interesting feature: primal-dual gives ideas for designing algorithms while the analysis is done by dual-fitting. We illustrate the advantages and flexibility of the approaches through problems in different settings: from single machine to unrelated machine environments, from typical competitive analysis to analysis with resource augmentation, and from convex relaxations to non-convex relaxations.
14,398,335
uw spf the university of washington semantic parsing framework
The University of Washington Semantic Parsing Framework (SPF) is a learning and inference framework for mapping natural language to a formal representation of its meaning.
14,456,953
recognition of handwritten bangla basic characters and digits using convex hull based feature set
In dealing with the problem of recognizing handwritten character patterns of varying shapes and sizes, selecting a proper feature set is important for achieving high recognition performance. The current research aims to evaluate the performance of a convex hull based feature set, i.e. 125 features in all, computed over different bay attributes of the convex hull of a pattern, for effective recognition of isolated handwritten Bangla basic characters and digits. In experiments with a database of 10000 samples, a maximum recognition rate of 76.86% is observed for handwritten Bangla characters. For Bangla numerals, a maximum success rate of 99.45% is achieved on a database of 12000 samples. The current work validates the usefulness of a new kind of feature set for recognition of handwritten Bangla basic characters and numerals.
14,609,079
gardner s minichess variant is solved
A 5x5 board is the smallest board on which one can set up all kinds of chess pieces in a start position. We consider Gardner's minichess variant, in which all pieces are set as on a standard chessboard (from Rook to King). This game has roughly 9x10^{18} legal positions and is comparable in this respect with checkers. We weakly solve this game, that is, we prove its game-theoretic value and give a strategy to draw against best play for both the White and Black sides. Our approach requires surprisingly little computing power. We give a human-readable proof. The way the result is obtained is generic and could be generalized to bigger chess settings or to other games.
14,769,209
decentralised multi agent reinforcement learning for dynamic and uncertain environments
Multi-Agent Reinforcement Learning (MARL) is a widely used technique for optimization in decentralised control problems. However, most applications of MARL are in static environments, and they are not suitable when agent behaviour and environment conditions are dynamic and uncertain. Addressing uncertainty in such environments remains a challenging problem for MARL-based systems. The dynamic nature of the environment causes previous knowledge of how agents interact to become outdated. Advance knowledge of potential changes through prediction significantly helps agents converge to near-optimal control solutions. In this paper we propose P-MARL, a decentralised MARL algorithm enhanced by a prediction mechanism that provides accurate information regarding upcoming changes in the environment. This prediction is achieved by employing an Artificial Neural Network combined with a Self-Organising Map that detects and matches changes in the environment. The proposed algorithm is validated in a realistic smart-grid scenario, and provides a 92% Pareto-efficient solution to an electric vehicle charging problem.
15,233,957
an epistemic strategy logic
This paper presents an extension of temporal epistemic logic with operators that quantify over agent strategies. Unlike previous work on alternating temporal epistemic logic, the semantics works with systems whose states explicitly encode the strategy being used by each of the agents. This provides a natural way to express what agents would know were they to be aware of some of the strategies being used by other agents. A number of examples that rely upon the ability to express an agent's knowledge about the strategies being used by other agents are presented to motivate the framework, including reasoning about game theoretic equilibria, knowledge-based programs, and information theoretic computer security policies. Relationships to several variants of alternating temporal epistemic logic are discussed. The computational complexity of model checking the logic and several of its fragments are also characterized.
15,652,975
incremental view maintenance for nested relational databases
Incremental view maintenance is an essential tool for speeding up the processing of large, locally changing workloads. Its fundamental challenge is to ensure that changes are propagated from input to output more efficiently than via recomputation. We formalize this requirement for positive nested relational algebra (NRA+) on bags and we propose a transformation deriving deltas for any expression in the language. The main difficulty in maintaining nested queries lies in the inability to express within NRA+ the efficient updating of inner bags, i.e., without completely replacing the tuples that contain them. To address this problem, we first show how to efficiently incrementalize IncNRA+, a large fragment of NRA+ whose deltas never generate inner bag updates. We then provide a semantics-preserving transformation that takes any nested query into a collection of IncNRA+ queries. This constitutes the first static solution for the efficient incremental processing of languages with nested collections. Furthermore, we show that the state-of-the-art technique of recursive IVM, originally developed for positive relational algebra with aggregation, also extends to nested queries. Finally, we generalize our static approach for the efficient incrementalization of NRA+ to a family of simply-typed lambda calculi, given that its primitives are themselves efficiently incrementalizable.
15,740,608
systems for near real time analysis of large scale dynamic graphs
Graphs are widespread data structures used to model a wide variety of problems. The sheer amount of data to be processed has prompted the creation of a myriad of systems that help us cope with massive-scale graphs. The pressure to deliver fast responses to queries on the graph is higher than ever before, as demanded by many applications (e.g. online recommendations, auctions, terrorism protection, etc.). In addition, graphs change continuously (as do the real-world entities that they typically represent). Systems must be ready for both: near real-time and dynamic massive graphs. We survey systems with their scalability, real-time potential and capability to support dynamic changes to the graph as driving guidelines. The main techniques and limitations are distilled and categorised. The algorithms that run on top of graph systems are not ready for prime-time dynamism either; therefore, a short overview of dynamic graph algorithms has also been included.
16,124,843
analyzing incentives for protocol compliance in complex domains a case study of introduction based routing
Formal analyses of incentives for compliance with network protocols often appeal to game-theoretic models and concepts. Applications of game-theoretic analysis to network security have generally been limited to highly stylized models, where simplified environments enable tractable study of key strategic variables. We propose a simulation-based approach to game-theoretic analysis of protocol compliance, for scenarios with large populations of agents and large policy spaces. We define a general procedure for systematically exploring a structured policy space, directed expressly at resolving the qualitative classification of equilibrium behavior as compliant or non-compliant. The techniques are illustrated and exercised through an extensive case study analyzing compliance incentives for introduction-based routing. We find that the benefits of complying with the protocol are particularly strong for nodes subject to attack, and that the overall compliance level achieved in equilibrium, while not universal, is sufficient to support the desired security goals of the protocol.
16,289,923
waterfowl a compact self indexed rdf store with inference enabled dictionaries
In this paper, we present a novel approach -- called WaterFowl -- for the storage of RDF triples that addresses some key issues in the contexts of big data and the Semantic Web. The architecture of our prototype, largely based on the use of succinct data structures, enables the representation of triples in a self-indexed, compact manner without requiring decompression at query answering time. Moreover, it is adapted to efficiently support RDF and RDFS entailment regimes thanks to an optimized encoding of ontology concepts and properties that does not require a complete inference materialization or extensive query rewriting algorithms. This approach implies making a distinction between the terminological and the assertional components of the knowledge base early in the process of data preparation, i.e., preprocessing the data before storing it in our structures. The paper describes the complete architecture of this system and presents some preliminary results obtained from evaluations conducted on our first prototype.
16,627,279
polynomial systems solving by fast linear algebra
Polynomial system solving is a classical problem in mathematics with a wide range of applications, which makes its complexity a fundamental problem in computer science. Depending on the context, solving has different meanings. In order to stick to the most general case, we consider a representation of the solutions from which one can easily recover the exact solutions or a certified approximation of them. Under generic assumptions, such a representation is given by the lexicographical Grobner basis of the system and consists of a set of univariate polynomials. The best known algorithm for computing the lexicographical Grobner basis requires $\widetilde{O}(d^{3n})$ arithmetic operations, where $n$ is the number of variables and $d$ is the maximal degree of the equations in the input system. The notation $\widetilde{O}$ means that we neglect polynomial factors in $n$. We show that this complexity can be decreased to $\widetilde{O}(d^{\omega n})$, where $2 \leq \omega < 2.3727$ is the exponent in the complexity of multiplying two dense matrices. Consequently, when the input polynomial system is either generic or reaches the Bezout bound, the complexity of solving a polynomial system is decreased from $\widetilde{O}(D^3)$ to $\widetilde{O}(D^\omega)$, where $D$ is the number of solutions of the system. To achieve this result we propose new algorithms which rely on fast linear algebra. When the degrees of the equations are bounded uniformly by a constant we propose a deterministic algorithm. In the unbounded case we present a Las Vegas algorithm.
16,868,154
factors influencing the quality of the user experience in ubiquitous recommender systems
The use of mobile devices and the rapid growth of the internet and networking infrastructure have brought about the necessity of ubiquitous recommender systems. However, on mobile devices, different factors need to be considered in order to obtain more useful recommendations and increase the quality of the user experience. This paper gives an overview of the factors related to that quality and proposes a new hybrid recommendation model.
16,874,441
a novel modified apriori approach for web document clustering
The traditional apriori algorithm can be used for clustering web documents based on the association technique of data mining. But this algorithm has several limitations due to repeated database scans and its weak association rule analysis. In the modern world of large databases, the efficiency of the traditional apriori algorithm would drop manifold. In this paper, we propose a new modified apriori approach that cuts down on repeated database scans and improves the association analysis of the traditional apriori algorithm to cluster web documents. We further improve those clusters by applying Fuzzy C-Means (FCM), K-Means and Vector Space Model (VSM) techniques separately. We use the Classic3 and Classic4 datasets of Cornell University, comprising more than 10,000 documents, and run both the traditional apriori algorithm and our modified apriori approach on them. Experimental results show that our approach outperforms the traditional apriori algorithm in terms of database scans and improved association analysis.
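One standard way to cut the repeated database scans of textbook apriori, in the spirit of the modification described above, is to keep vertical tid-lists and count candidate supports by set intersection. The sketch below shows that generic idea, not the paper's exact modification; the toy database is illustrative.

```python
from itertools import combinations

def apriori_tidlists(transactions, min_support):
    """Frequent itemset mining with vertical tid-lists: the database is
    scanned once, and supports of larger candidates are computed by
    intersecting tid-sets instead of rescanning."""
    tids = {}
    for tid, items in enumerate(transactions):          # single scan
        for item in items:
            tids.setdefault(frozenset([item]), set()).add(tid)
    level = {s: t for s, t in tids.items() if len(t) >= min_support}
    result = dict(level)
    while level:
        size = len(next(iter(level))) + 1
        next_level = {}
        for a, b in combinations(level, 2):
            cand = a | b
            if len(cand) == size and cand not in next_level:
                t = level[a] & level[b]        # exact support by intersection
                if len(t) >= min_support:
                    next_level[cand] = t
        result.update(next_level)
        level = next_level
    return {tuple(sorted(s)): len(t) for s, t in result.items()}

db = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori_tidlists(db, min_support=3))
```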
17,147,848
dynamic optimization for heterogeneous powered wireless multimedia sensor networks with correlated sources and network coding
The energy consumption in wireless multimedia sensor networks (WMSN) is much greater than that in traditional wireless sensor networks. Thus, it is a huge challenge to maintain perpetual operation of a WMSN. In this paper, we propose a new heterogeneous energy supply model for WMSN based on the coexistence of renewable energy and the electricity grid. We address cross-layer optimization for multiple multicast with distributed source coding and intra-session network coding in heterogeneous-powered wireless multimedia sensor networks (HPWMSN) with correlated sources. The aim is to achieve the optimal reconstruction distortion at the sinks and the minimal cost of purchasing electricity from the grid. Based on the Lyapunov drift-plus-penalty-with-perturbation technique and the dual decomposition technique, we propose a fully distributed dynamic cross-layer algorithm, comprising multicast routing, source rate control, network coding, session scheduling and energy management, that requires only knowledge of the instantaneous system state. The explicit trade-off between the optimization objective and queue backlog is theoretically proven. Finally, simulation results verify the theoretical claims.
17,233,403
solving linear equations using a jacobi based time variant adaptive hybrid evolutionary algorithm
For large sets of linear equations, especially those with sparse and structured coefficient matrices, solution using classical methods becomes arduous, while evolutionary algorithms have mostly been used to solve various optimization and learning problems. Recently, hybridizations of classical methods (the Jacobi method and the Gauss-Seidel method) with evolutionary computation techniques have been successfully applied to solving linear equations. In both of these hybrid evolutionary methods, uniform adaptation (UA) techniques are used to adapt the relaxation factor. In this paper, a new Jacobi-Based Time-Variant Adaptive (JBTVA) hybrid evolutionary algorithm is proposed. In this algorithm, a Time-Variant Adaptive (TVA) technique for the relaxation factor is introduced, aiming both at improving fine local tuning and at reducing the disadvantages of uniform adaptation of relaxation factors. The algorithm integrates the Jacobi-based SR method with a time-variant adaptive evolutionary algorithm. The convergence theorems of the proposed algorithm are proved theoretically, and its performance is compared experimentally with the JBUA hybrid evolutionary algorithm and classical methods. The proposed algorithm outperforms both the JBUA hybrid algorithm and the classical methods in terms of convergence speed and effectiveness.
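A minimal sketch of a Jacobi-based relaxation step with a time-variant relaxation factor is given below; the linear decay schedule stands in for the paper's TVA/evolutionary adaptation rule, which is not reproduced, and the test system is illustrative.

```python
import numpy as np

def jacobi_sr_tva(A, b, iters=200, w_max=1.2, w_min=0.6):
    """Jacobi-based successive relaxation in which the relaxation factor
    decays over time: large early steps for exploration, smaller late
    steps for fine local tuning. The decay schedule here is an
    illustrative stand-in for the paper's TVA rule."""
    D = np.diag(A)                             # diagonal entries
    R = A - np.diag(D)                         # off-diagonal part
    x = np.zeros_like(b, dtype=float)
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / max(iters - 1, 1)
        x_jac = (b - R @ x) / D                # plain Jacobi update
        x = (1 - w) * x + w * x_jac            # relaxed combination
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])         # diagonally dominant system
b = np.array([1.0, 2.0])
print(jacobi_sr_tva(A, b), np.linalg.solve(A, b))
```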
17,842,384
a betweenness structure entropy of complex networks
Structure entropy is an important index for illuminating the structural properties of a complex network. Most existing structure entropies are based on the degree distribution of the network, but entropy based on degree cannot capture the structural properties of weighted networks. In order to study the structural properties of weighted networks, a new structure entropy of complex networks based on betweenness is proposed in this paper. Compared with existing structure entropies, the proposed method is more reasonable for describing the structural properties of complex weighted networks.
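A direct reading of the proposal is: compute node betweenness, normalize it into a distribution, and take its Shannon entropy. The sketch below implements that reading with networkx; the paper's exact normalization may differ from this one.

```python
import math
import networkx as nx

def betweenness_entropy(G, weight=None):
    """Structure entropy based on betweenness: normalize node
    betweenness values into a distribution and take its Shannon
    entropy (nodes with zero betweenness contribute nothing)."""
    bc = nx.betweenness_centrality(G, weight=weight)
    total = sum(bc.values())
    if total == 0:
        return 0.0
    return -sum((v / total) * math.log(v / total)
                for v in bc.values() if v > 0)

G = nx.karate_club_graph()
print(betweenness_entropy(G))          # unweighted
# For a weighted network, pass weight="weight" to use edge weights.
```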
17,860,788
intelligent paging strategy for multi carrier cdma system
Subscriber satisfaction and maximum radio resource utilization are the pivotal criteria in communication system design. In a multi-carrier CDMA system, different paging algorithms are used for locating a user within the shortest possible time and with the best possible utilization of radio resources. Different paging algorithms employ different techniques for different purposes. However, the low servicing time of sequential search and the better radio resource utilization of concurrent search can be exploited simultaneously by swapping between the algorithms. In this paper, an intelligent mechanism has been developed for dynamic algorithm assignment based on time-varying traffic demand, which is predicted by a radial basis neural network; its performance has been analyzed based on the prediction efficiency for different types of data. High prediction efficiency is observed, with a good correlation coefficient (0.99), and consequently better performance is achieved by dynamic paging algorithm assignment. This claim is substantiated by the results of the proposed intelligent paging strategy.
17,876,968
design and evaluation of mechanisms for a multicomputer object store
Multicomputers have traditionally been viewed as powerful compute engines, and it is from this perspective that they have been applied to various problems in order to achieve significant performance gains. There are many applications for which this compute-intensive approach is only a partial solution. CAD, virtual reality, simulation, and document management and analysis all require timely access to large amounts of data. This thesis investigates the use of the object store paradigm to harness the large distributed memories found on multicomputers. The design, implementation, and evaluation of a distributed object server on the Fujitsu AP1000 is described. The performance of the distributed object server under example applications, mainly physical simulation problems, is used to evaluate solutions to the problems of client space recovery, object migration, and coherence maintenance. The distributed object server follows the client-server model, allows object replication, and uses binary semaphores as a concurrency control measure. Instrumentation of the server under these applications supports several conclusions: client space recovery should be dynamically controlled by the application; predictively prefetching object replicas yields benefits in restricted circumstances; object migration by storage unit (segment) is not generally suitable where there are many objects per storage unit; and binary semaphores are an expensive concurrency control measure in this environment.
18,471,624
a graph based perspective to total carbon footprint assessment of non marginal technology driven projects use case of ott iptv
Life Cycle Assessment (LCA) of green and sustainable projects has been found to be a necessary analysis in order to include all upstream, downstream, and indirect impacts. Because of the complexity of interactions, the differential impacts with respect to a baseline, i.e., a business-as-usual (BAU) scenario, are commonly considered in order to compare various projects. However, as the degree of penetration of a project in the baseline increases, the popular marginal assumption no longer holds, and the differential impacts may become inconsistent. Although various methodologies have been successfully proposed and used to contain such a side effect, their bottom-up nature, which initiates the assessment from the project itself and ultimately widens the scope, could easily fail to acknowledge critical modifications to the baseline. This is highly relevant for ICT's disruptive and dynamic technologies, which push the baseline to become a marginal legacy. In this work, an analytic formalism is presented to provide a means of comparison of such technologies and projects. The core idea behind the proposed methodology is a magnitude-insensitive graph-based distance function to differentially compare a project with a baseline. The applicability of the proposed methodology is then evaluated in a use case of OTT/IPTV online media distribution services.
18,595,695
global bandits with holder continuity
Standard Multi-Armed Bandit (MAB) problems assume that the arms are independent. However, in many application scenarios, the information obtained by playing an arm provides information about the remainder of the arms. Hence, in such applications, this informativeness can and should be exploited to enable faster convergence to the optimal solution. In this paper, we introduce and formalize the Global MAB (GMAB), in which arms are globally informative through a global parameter, i.e., choosing an arm reveals information about all the arms. We propose a greedy policy for the GMAB which always selects the arm with the highest estimated expected reward, and prove that it achieves bounded parameter-dependent regret. Hence, this policy selects suboptimal arms only finitely many times, and after a finite number of initial time steps, the optimal arm is selected in all of the remaining time steps with probability one. In addition, we also study how the informativeness of the arms about each other's rewards affects the speed of learning. Specifically, we prove that the parameter-free (worst-case) regret is sublinear in time, and decreases with the informativeness of the arms. We also prove a sublinear in time Bayesian risk bound for the GMAB which reduces to the well-known Bayesian risk bound for linearly parameterized bandits when the arms are fully informative. GMABs have applications ranging from drug and treatment discovery to dynamic pricing.
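A toy instance of the greedy GMAB policy can be sketched as follows: two arms whose mean rewards are monotone functions of one shared parameter, an estimate of that parameter folded in after every pull, and the arm with the highest estimated mean always chosen. The reward curves and the inversion step below are assumptions made for illustration, not the paper's model.

```python
import random

# Global bandit: both arms' mean rewards depend on one parameter theta,
# so every pull is informative about every arm (illustrative curves).
true_theta = 0.7
means = [lambda t: t, lambda t: 1.2 * (1 - t)]

theta_est, n = 0.5, 0
for step in range(1000):
    arm = max((0, 1), key=lambda a: means[a](theta_est))   # greedy choice
    reward = means[arm](true_theta) + random.gauss(0, 0.1)
    # Invert the pulled arm's (monotone) curve to get a theta sample,
    # then fold it into a running average.
    est = reward if arm == 0 else 1 - reward / 1.2
    n += 1
    theta_est += (min(max(est, 0.0), 1.0) - theta_est) / n

print(round(theta_est, 2))   # converges near 0.7; greedy settles on arm 0
```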
19,654,316
efficient and distributed secret sharing in general networks
Shamir's (n, k) threshold secret sharing is an important component of several cryptographic protocols, such as those for secure multiparty-computation, key management, and Byzantine agreement. These protocols typically assume the presence of direct communication links from the dealer to all participants, in which case, the dealer can directly pass the shares of the secret to each participant. In this paper, we consider the problem of secret sharing when the dealer does not have direct links to all the participants, and instead, the dealer and the participants form a general network. We present an efficient and distributed algorithm for secret sharing over general networks that satisfy what we call the k-propagating-dealer condition. We derive information-theoretic lower bounds on the communication complexity of secret sharing over any network, which may also be of independent interest. We show that for networks satisfying the k-propagating-dealer condition, the communication complexity of our algorithm is {\Theta}(n), and furthermore, is a constant factor away from the lower bounds. We also show that, in contrast, the existing solution entails a communication-complexity that is super-linear for a wide class of networks, and is {\Theta}(n^2) in the worst case. Moreover, the amount of randomness required under our algorithm is a constant, while that required under the existing solution increases with n for a large class of networks, and in particular, is {\Theta}(n) whenever the degree of the dealer is bounded. Finally, while the existing solution requires considerable coordination in the network and knowledge of the global topology, our algorithm is completely distributed and requires each node to know only the identities of its neighbours. Our algorithm thus allows for efficient generalization of several cryptographic protocols to a large class of general networks.
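The centralized primitive that the paper generalizes is standard Shamir sharing: shares are evaluations of a random polynomial, and any k of them reconstruct the secret by Lagrange interpolation at zero. A minimal sketch over a prime field follows; the distributed propagation over general networks, the paper's actual contribution, is not reproduced here.

```python
import random

P = 2**127 - 1   # a Mersenne prime used as the field modulus

def make_shares(secret, n, k):
    """Shamir (n, k) sharing: evaluate a random degree-(k-1)
    polynomial with constant term `secret` at points 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
```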
19,690,682
the dynamics of offensive messages in the world of social media the control of cyberbullying on twitter
The 21st century has redefined the way we communicate, our concept of individual and group privacy, and the dynamics of acceptable behavioral norms. The messaging dynamics on Twitter, an internet social network, has opened new ways/modes of spreading information. As a result cyberbullying or in general, the spread of offensive messages, is a prevalent problem. The aim of this report is to identify and evaluate conditions that would dampen the role of cyberbullying dynamics on Twitter. We present a discrete-time non-linear compartmental model to explore how the introduction of a Quarantine class may help to hinder the spread of offensive messages. We based the parameters of this model on recent Twitter data related to a topic that communities would deem most offensive, and found that for Twitter a level of quarantine can always be achieved that will immediately suppress the spread of offensive messages, and that this level of quarantine is independent of the number of offenders spreading the message. We hope that the analysis of this dynamic model will shed some insights into the viability of new models of methods for reducing cyberbullying in public social networks.
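The flavor of such a discrete-time compartmental model with a quarantine class can be sketched in a few lines; the compartment flows and parameter values below are illustrative placeholders, not the paper's fitted Twitter values.

```python
# S: susceptible users, O: offenders spreading messages, Q: quarantined.
# beta: spreading rate, q: quarantine rate, r: return rate from quarantine.
def simulate(S, O, Q, beta=0.3, q=0.2, r=0.05, steps=50):
    N = S + O + Q
    history = []
    for _ in range(steps):
        new_off = beta * S * O / N     # contacts that start offending
        quarantined = q * O            # offenders removed from circulation
        returned = r * Q               # quarantined users rejoining
        S = S - new_off + returned
        O = O + new_off - quarantined
        Q = Q + quarantined - returned
        history.append((S, O, Q))
    return history

for t, (S, O, Q) in enumerate(simulate(9900, 100, 0)[:5]):
    print(t, round(S), round(O), round(Q))
# With a sufficiently high quarantine rate q, the offender class O
# shrinks from the first step on, mirroring the paper's finding.
```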
19,777,160
an implementation of sub cad in maple
Cylindrical algebraic decomposition (CAD) is an important tool for the investigation of semi-algebraic sets, with applications in algebraic geometry and beyond. We have previously reported on an implementation of CAD in Maple which offers the original projection and lifting algorithm of Collins along with subsequent improvements. Here we report on new functionality: specifically the ability to build cylindrical algebraic sub-decompositions (sub-CADs) where only certain cells are returned. We have implemented algorithms to return cells of a prescribed dimension or higher (layered sub-CADs), and an algorithm to return only those cells on which given polynomials are zero (variety sub-CADs). These offer substantial savings in output size and computation time. The code described and an introductory Maple worksheet / pdf demonstrating the full functionality of the package should accompany this report. This work is supported by EPSRC grant EP/J003247/1. 1 Introduction. This report concerns ProjectionCAD: a Maple package for cylindrical algebraic decomposition (CAD) developed at the University of Bath. The extended abstract [18] at ICMS 2014 describes how this package utilises recent CAD work in the RegularChains Library of Maple, while still following the classical projection and lifting framework for CAD construction. The present report is to accompany the release of ProjectionCAD version 3, describing the new functionality this introduced. The report should be accompanied by the code described and an introductory Maple worksheet / pdf demonstrating the full functionality of the package. The previous two versions of ProjectionCAD are hosted alongside similar reports documenting their functionality [16, 17]. Version 3 introduces functionality for cylindrical algebraic sub-decompositions (sub-CADs): subsets of CADs sufficient to describe the solutions of given formulae. Two distinct types are provided, whose theory was developed in [27]. The first type contains only those cells of a certain dimension and higher, reducing both the output size and computational time by giving only output of the required generality. We have implemented both a direct and a recursive algorithm to build these layered sub-CADs. The second type contains only those cells on which given equations are satisfied (lie on a prescribed variety). When building a CAD for a formula with an equational constraint, only these cells can contain the solution set. These variety sub-CADs clearly reduce the output, and can also reduce computation time depending on the rank of the variety relative to the variable ordering.
20,142,263
novelty search in competitive coevolution
One of the main motivations for the use of competitive coevolution systems is their ability to capitalise on arms races between competing species to evolve increasingly sophisticated solutions. Such arms races can, however, be hard to sustain, and it has been shown that the competing species often converge prematurely to certain classes of behaviours. In this paper, we investigate if and how novelty search, an evolutionary technique driven by behavioural novelty, can overcome convergence in coevolution. We propose three methods for applying novelty search to coevolutionary systems with two species: (i) score both populations according to behavioural novelty; (ii) score one population according to novelty, and the other according to fitness; and (iii) score both populations with a combination of novelty and fitness. We evaluate the methods in a predator-prey pursuit task. Our results show that novelty-based approaches can evolve a significantly more diverse set of solutions, when compared to traditional fitness-based coevolution.
21,181,113
a framework of constructions of minimum storage regenerating codes with the optimal update access property for distributed storage systems based on invariant subspace technique
In this paper, we present a generic framework for constructing systematic minimum storage regenerating codes with two parity nodes based on an invariant subspace technique. Codes constructed in our framework not only contain some of the best known codes as special cases, but also include new codes with good properties such as the optimal access property and the optimal update property. In addition, to the best of our knowledge, two of the new codes have the largest number of systematic nodes with the optimal update property for a given storage capacity of an individual node.
21,492,475
do starting and ending effects in fixed price group buying differ
With the growing popularity of group-buying websites, a plethora of group-buying options is available to consumers. Given this range of choices, information diffusion in group-buying can greatly influence consumers' purchase decisions. Our study uses large-scale datasets from the top two group-buying websites in China to explore the diffusion process and examine mass media communication (MMC) and interpersonal communication (IPC) during different periods of the buying process. The analysis results indicate that MMC and IPC at the start of the process can positively affect sales, while they lead to fewer sales during the ending period of fixed-price group-buying, which contradicts the results of previous studies. To the best of our knowledge, this is the first study to explore information diffusion in group-buying. This study provides a number of theoretical insights into group-buying from a new perspective, as well as practical management implications.
21,537,352
the vectorial λ calculus
We describe a type system for the linear-algebraic λ-calculus. The type system accounts for the linear-algebraic aspects of this extension of λ-calculus: it is able to statically describe the linear combinations of terms that will be obtained when reducing the programs. This gives rise to an original type theory where types, in the same way as terms, can be superposed into linear combinations. We prove that the resulting typed λ-calculus is strongly normalising and features weak subject reduction. Finally, we show how to naturally encode matrices and vectors in this typed calculus.
21,732,078
collaborative p2p streaming of interactive live free viewpoint video
We study an interactive live streaming scenario where multiple peers pull streams of the same free-viewpoint video that are synchronized in time but not necessarily in view. In free-viewpoint video, each user can periodically select a virtual view between two anchor camera views for display. The virtual view is synthesized using texture and depth videos of the anchor views via depth-image-based rendering (DIBR). In general, the distortion of the virtual view increases with the distance to the anchor views, and hence it is beneficial for a peer to select the closest anchor views for synthesis. On the other hand, if peers interested in different virtual views are willing to tolerate larger distortion by using more distant anchor views, they can collectively share the access cost of common anchor views. Given the anchor view access cost and the synthesized distortion of virtual views between anchor views, we study the optimization of anchor view allocation for collaborative peers. We first show that, if the network reconfiguration costs due to view-switching are negligible, the problem can be optimally solved in polynomial time using dynamic programming. We then consider the case of non-negligible reconfiguration costs (e.g., large or frequent view-switching leading to anchor-view changes). In this case, the view allocation problem becomes NP-hard. We thus present a locally optimal, centralized allocation algorithm inspired by Lloyd's algorithm in non-uniform scalar quantization. We also propose a distributed algorithm with guaranteed convergence in which each peer group independently makes merge-and-split decisions according to a well-defined fairness criterion. The results show that, depending on the problem settings, our proposed algorithms achieve optimal and close-to-optimal performance respectively in terms of total cost, and outperform a P2P scheme without collaborative anchor selection.
21,814,889
diameters of permutation groups on graphs and linear time feasibility test of pebble motion problems
Let $G$ be an $n$-vertex connected, undirected, simple graph. The vertices of $G$ are populated with $n$ uniquely labeled pebbles, one on each vertex. Allowing pebbles on cycles of $G$ to rotate (synchronous rotations along multiple disjoint cycles are permitted), the resulting pebble permutations form a group $\G$ uniquely determined by $G$. Let the diameter of $\G$ (denoted $diam(\G)$) represent the length of the longest product of generators (cyclic pebble rotations) required to reach an element of $\G$, we show that $diam(\G) = O(n^2)$. #R##N#Extending the formulation to allow $p \le n$ pebbles on an $n$-vertex graph, we obtain a variation of the (classic) pebble motion problem (first fully described in Kornhauser, Miller, and Spirakis, 1984) that also allows rotations of pebbles along a fully occupied cycle. For our formulation as well as the (classic) pebble motion problem, given any start and goal pebble configurations, we provide a linear time algorithm that decides whether the goal configuration is reachable from the start configuration. This gives a positive answer to an open problem raised by (Auletta et al., 1999)
21,938,781
enhancing navigation on wikipedia with social tags
Social tagging has become an interesting approach to improving search and navigation on today's Web, since it aggregates, in a collaborative way, the tags added by different users to the same resource. The result is a list of weighted tags describing each resource. Combined with a classical taxonomic classification system such as Wikipedia's, social tags can enhance document navigation and search. On the one hand, social tags suggest alternative navigation paths, including pivot browsing, popularity-driven navigation, and filtering. On the other hand, they provide new metadata, sometimes not covered by a document's content, that can substantially improve document search. In this work, the inclusion of an interface for adding user-defined tags describing Wikipedia articles is proposed as a way to improve article navigation and retrieval. A prototype applying tags to Wikipedia is presented in order to evaluate its effectiveness.
22,313,492
controlling a software defined network via distributed controllers
In this paper, we propose a distributed OpenFlow controller and an associated coordination framework that achieves scalability and reliability even under heavy data center loads. The proposed framework, which is designed to work with all existing OpenFlow controllers with minimal or no required changes, provides support for dynamic addition and removal of controllers to the cluster without any interruption to the network operation. We demonstrate performance results of the proposed framework implemented over an experimental testbed that uses controllers running Beacon.
22,413,998
extracting herbrand trees from coq
Software certification aims at proving the correctness of programs but in many cases, the use of external libraries allows only a conditional proof: it depends on the assumption that the libraries meet their specifications. In particular, a bug in these libraries might still impact the certified program. In this case, the difficulty that arises is to isolate the defective library function and provide a counter-example. In this paper, we show that this problem can be logically formalized as the construction of a Herbrand tree for a contradictory universal theory and address it. The solution we propose is based on a proof of Herbrand's theorem in the proof assistant Coq. Classical program extraction using Krivine's classical realizability then translates this proof into a certified program that computes Herbrand trees. Using this tree and calls to the library functions, we are able to determine which function is defective and explicitly produce a counter-example to its specification.
22,895,787
animation of virtual mannequins robot like simulation or motion captures
In order to optimize the cost and time of designing new products while improving their quality, concurrent engineering is based on the digital model of these products, the numerical mock-up. However, to be able to do away with the physical model, the old mainstay of design, without loss of information, new tools must be available. In particular, a tool is needed that makes it possible to check, simply and quickly, the maintainability of complex mechanical assemblies using the numerical model. For a decade, our team has worked on tools for the generation and analysis of trajectories of virtual mannequins. The simulation of human tasks can be carried out either by robot-like simulation or by simulation based on motion capture. This paper presents some results for both methods. The first method is based on a multi-agent system and digital mock-up technology to provide an efficient path planner for a manikin or a robot for access and visibility tasks, taking into account ergonomic constraints or joint and mechanical limits. In order to solve this problem, the human operator is integrated into the optimization process to contribute to a global perception of the environment. This operator cooperates, in real time, with several automatic local elementary agents. For the second approach, we worked with the CEA and EADS/CCR to solve the constraints related to the evolution of the virtual human in its environment on the basis of data coming from a motion capture system. An approach using virtual guides was developed to allow the user to carry out a precise trajectory in the absence of force feedback. This work is validated through the digital mock-up; it can be applied to simulate maintainability and mountability tasks.
22,981,764
tractable strong outlier identification
In knowledge bases expressed in default logic, outliers are sets of literals, or observations, that feature unexpected properties. This paper introduces the notion of strong outliers and studies the complexity problems related to outlier recognition in the fragment of acyclic normal unary theories and the related one of mixed unary theories. We show that recognizing strong outliers in acyclic normal unary theories can be done in polynomial time. Moreover, we show that this result is sharp, since switching to either general outliers, cyclic theories or acyclic mixed unary theories makes the problem intractable. This is the only fragment of default theories known so far for which the general outlier recognition problem is tractable. Based on these results, we have designed a polynomial time algorithm for enumerating all strong outliers of bounded size in an acyclic normal unary default theory. These tractability results rely on the Incremental Lemma, an interesting result on its own, that provides conditions under which a mixed unary default theory displays a monotonic reasoning behavior.