The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed because of a cast error
Error code:   DatasetGenerationCastError
Exception:    DatasetGenerationCastError
Message:      An error occurred while generating the dataset

All the data files must have the same columns, but at some point there are 3 new columns ({'node_id', 'neighbour', 'label'}) and 5 missing columns ({'ID', 'label_id', 'abstract', 'mag_id', 'title'}).

This happened while the csv dataset builder was generating data using

hf://datasets/Sherirto/CSTAG/Children/Children.csv (at revision 7e59eaee59c806a0a1119f6c432437e85cf53d0b)

Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              category: string
              text: string
              label: int64
              node_id: int64
              neighbour: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 823
              to
              {'ID': Value(dtype='int64', id=None), 'mag_id': Value(dtype='int64', id=None), 'title': Value(dtype='string', id=None), 'abstract': Value(dtype='string', id=None), 'label_id': Value(dtype='int64', id=None), 'category': Value(dtype='string', id=None), 'text': Value(dtype='string', id=None)}
              because column names don't match
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1321, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 935, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
                  raise DatasetGenerationCastError.from_cast_error(
              datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
              
              All the data files must have the same columns, but at some point there are 3 new columns ({'node_id', 'neighbour', 'label'}) and 5 missing columns ({'ID', 'label_id', 'abstract', 'mag_id', 'title'}).
              
              This happened while the csv dataset builder was generating data using
              
              hf://datasets/Sherirto/CSTAG/Children/Children.csv (at revision 7e59eaee59c806a0a1119f6c432437e85cf53d0b)
              
              Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
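
As a concrete sketch of the second suggested fix, the snippet below loads each schema as its own dataset instead of letting the csv builder glob every file under Children/ into one configuration. Only Children.csv is named in the error, so the paper-level file name here is a placeholder; on the Hub itself, the same separation can be declared with the configs field in the dataset card's YAML front matter, as described in the linked docs.

from datasets import load_dataset

# Children.csv is the file named in the cast error; per the dump above it
# carries the graph-style columns (node_id, neighbour, label, category, text).
graph = load_dataset(
    "csv",
    data_files="hf://datasets/Sherirto/CSTAG/Children/Children.csv",
)

# Placeholder name for the file holding the paper-level columns
# (ID, mag_id, title, abstract, label_id, category, text); the error does
# not say which file the builder read first.
papers = load_dataset(
    "csv",
    data_files="hf://datasets/Sherirto/CSTAG/Children/Children_papers.csv",
)

Loaded this way, each dataset keeps its own schema and no cast between the two column sets is attempted.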


Preview columns (name: type):

ID: int64 | mag_id: int64 | title: string | abstract: string | label_id: int64 | category: string | text: string

Note: in every preview row, the text value is the verbatim concatenation of the title and abstract values ("<title>. <abstract>", both of which already carry their "Title:" / "Abstract:" prefixes), so the rows below give each row's text implicitly through those two fields rather than repeating it.
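Read back as code, the target schema from the cast error corresponds to the following datasets.Features declaration; a sketch for reference only, mirroring the dump above.

from datasets import Features, Value

# Target schema the builder inferred from the first data file it read,
# exactly as printed in the cast error above.
features = Features({
    "ID": Value("int64"),
    "mag_id": Value("int64"),
    "title": Value("string"),
    "abstract": Value("string"),
    "label_id": Value("int64"),
    "category": Value("string"),
    "text": Value("string"),
})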
ID: 0 | mag_id: 9657784
Title: evasion attacks against machine learning at test time
Abstract: In security-sensitive applications, the success of machine learning depends on a thorough vetting of their resistance to adversarial data. In one pertinent, well-motivated attack scenario, an adversary may attempt to evade a deployed system at test time by carefully manipulating attack samples. In this work, we present a simple but effective gradient-based approach that can be exploited to systematically assess the security of several, widely-used classification algorithms against evasion attacks. Following a recently proposed framework for security evaluation, we simulate attack scenarios that exhibit different risk levels for the classifier by increasing the attacker's knowledge of the system and her ability to manipulate attack samples. This gives the classifier designer a better picture of the classifier performance under evasion attacks, and allows him to perform a more informed model selection (or parameter setting). We evaluate our approach on the relevant security task of malware detection in PDF files, and show that such systems can be easily evaded. We also sketch some countermeasures suggested by our analysis.
label_id: 4 | category: arxiv cs cr

ID: 1 | mag_id: 39886162
Title: how hard is computing parity with noisy communications
Abstract: We show a tight lower bound of $\Omega(N \log\log N)$ on the number of transmissions required to compute the parity of $N$ input bits with constant error in a noisy communication network of $N$ randomly placed sensors, each having one input bit and communicating with others using local transmissions with power near the connectivity threshold. This result settles the lower bound question left open by Ying, Srikant and Dullerud (WiOpt 06), who showed how the sum of all the $N$ bits can be computed using $O(N \log\log N)$ transmissions. The same lower bound has been shown to hold for a host of other functions including majority by Dutta and Radhakrishnan (FOCS 2008). #R##N#Most works on lower bounds for communication networks considered mostly the full broadcast model without using the fact that the communication in real networks is local, determined by the power of the transmitters. In fact, in full broadcast networks computing parity needs $\theta(N)$ transmissions. To obtain our lower bound we employ techniques developed by Goyal, Kindler and Saks (FOCS 05), who showed lower bounds in the full broadcast model by reducing the problem to a model of noisy decision trees. However, in order to capture the limited range of transmissions in real sensor networks, we adapt their definition of noisy decision trees and allow each node of the tree access to only a limited part of the input. Our lower bound is obtained by exploiting special properties of parity computations in such noisy decision trees.
label_id: 5 | category: arxiv cs dc

ID: 2 | mag_id: 116214155
Title: on the absence of the rip in real world applications of compressed sensing and the rip in levels
Abstract: The purpose of this paper is twofold. The first is to point out that the Restricted Isometry Property (RIP) does not hold in many applications where compressed sensing is successfully used. This includes fields like Magnetic Resonance Imaging (MRI), Computerized Tomography, Electron Microscopy, Radio Interferometry and Fluorescence Microscopy. We demonstrate that for natural compressed sensing matrices involving a level based reconstruction basis (e.g. wavelets), the number of measurements required to recover all $s$-sparse signals for reasonable $s$ is excessive. In particular, uniform recovery of all $s$-sparse signals is quite unrealistic. This realisation shows that the RIP is insufficient for explaining the success of compressed sensing in various practical applications. The second purpose of the paper is to introduce a new framework based on a generalised RIP-like definition that fits the applications where compressed sensing is used. We show that the shortcomings that show that uniform recovery is unreasonable no longer apply if we instead ask for structured recovery that is uniform only within each of the levels. To examine this phenomenon, a new tool, termed the 'Restricted Isometry Property in Levels' is described and analysed. Furthermore, we show that with certain conditions on the Restricted Isometry Property in Levels, a form of uniform recovery within each level is possible. Finally, we conclude the paper by providing examples that demonstrate the optimality of the results obtained.
label_id: 28 | category: arxiv cs it

ID: 3 | mag_id: 121432379
Title: a promise theory perspective on data networks
Abstract: Networking is undergoing a transformation throughout our industry. The shift from hardware driven products with ad hoc control to Software Defined Networks is now well underway. In this paper, we adopt the perspective of the Promise Theory to examine the current state of networking technologies so that we might see beyond specific technologies to principles for building flexible and scalable networks. Today's applications are increasingly distributed planet-wide in cloud-like hosting environments. Promise Theory's bottom-up modelling has been applied to server management for many years and lends itself to principles of self-healing, scalability and robustness.
label_id: 8 | category: arxiv cs ni

ID: 4 | mag_id: 231147053
Title: analysis of asymptotically optimal sampling based motion planning algorithms for lipschitz continuous dynamical systems
Abstract: Over the last 20 years significant effort has been dedicated to the development of sampling-based motion planning algorithms such as the Rapidly-exploring Random Trees (RRT) and its asymptotically optimal version (e.g. RRT*). However, asymptotic optimality for RRT* only holds for linear and fully actuated systems or for a small number of non-linear systems (e.g. Dubin's car) for which a steering function is available. The purpose of this paper is to show that asymptotically optimal motion planning for dynamical systems with differential constraints can be achieved without the use of a steering function. We develop a novel analysis on sampling-based planning algorithms that sample the control space. This analysis demonstrated that asymptotically optimal path planning for any Lipschitz continuous dynamical system can be achieved by sampling the control space directly. We also determine theoretical bounds on the convergence rates for this class of algorithms. As the number of iterations increases, the trajectory generated by these algorithms, approaches the optimal control trajectory, with probability one. Simulation results are promising.
label_id: 27 | category: arxiv cs ro

ID: 5 | mag_id: 1196386999
Title: the edge group coloring problem with applications to multicast switching
Abstract: This paper introduces a natural generalization of the classical edge coloring problem in graphs that provides a useful abstraction for two well-known problems in multicast switching. We show that the problem is NP-hard and evaluate the performance of several approximation algorithms, both analytically and experimentally. We find that for random $\chi$-colorable graphs, the number of colors used by the best algorithms falls within a small constant factor of $\chi$, where the constant factor is mainly a function of the ratio of the number of outputs to inputs. When this ratio is less than 10, the best algorithms produces solutions that use fewer than $2\chi$ colors. In addition, one of the algorithms studied finds high quality approximate solutions for any graph with high probability, where the probability of a low quality solution is a function only of the random choices made by the algorithm.
label_id: 34 | category: arxiv cs ds

ID: 6 | mag_id: 1444859417
Title: webvrgis based city bigdata 3d visualization and analysis
Abstract: This paper shows the WEBVRGIS platform overlying multiple types of data about Shenzhen over a 3d globe. The amount of information that can be visualized with this platform is overwhelming, and the GIS-based navigational scheme allows to have great flexibility to access the different available data sources. For example,visualising historical and forecasted passenger volume at stations could be very helpful when overlaid with other social data.
label_id: 6 | category: arxiv cs hc

ID: 7 | mag_id: 1483430697
Title: information theoretic authentication and secrecy codes in the splitting model
Abstract: In the splitting model, information theoretic authentication codes allow non-deterministic encoding, that is, several messages can be used to communicate a particular plaintext. Certain applications require that the aspect of secrecy should hold simultaneously. Ogata-Kurosawa-Stinson-Saido (2004) have constructed optimal splitting authentication codes achieving perfect secrecy for the special case when the number of keys equals the number of messages. In this paper, we establish a construction method for optimal splitting authentication codes with perfect secrecy in the more general case when the number of keys may differ from the number of messages. To the best knowledge, this is the first result of this type.
label_id: 4 | category: arxiv cs cr

ID: 8 | mag_id: 1486601621
Title: whealth transforming telehealth services
Abstract: A worldwide increase in proportions of older people in the population poses the challenge of managing their increasing healthcare needs within limited resources. To achieve this many countries are interested in adopting telehealth technology. Several shortcomings of state-of-the-art telehealth technology constrain widespread adoption of telehealth services. We present an ensemble-sensing framework - wHealth (short form of wireless health) for effective delivery of telehealth services. It extracts personal health information using sensors embedded in everyday devices and allows effective and seamless communication between patients and clinicians. Due to the non-stigmatizing design, ease of maintenance, simplistic interaction and seamless intervention, our wHealth platform has the potential to enable widespread adoption of telehealth services for managing elderly healthcare. We discuss the key barriers and potential solutions to make the wHealth platform a reality.
label_id: 3 | category: arxiv cs cy

ID: 9 | mag_id: 1495847259
Title: nonparametric decentralized sequential detection via universal source coding
Abstract: We consider nonparametric or universal sequential hypothesis testing problem when the distribution under the null hypothesis is fully known but the alternate hypothesis corresponds to some other unknown distribution. These algorithms are primarily motivated from spectrum sensing in Cognitive Radios and intruder detection in wireless sensor networks. We use easily implementable universal lossless source codes to propose simple algorithms for such a setup. The algorithms are first proposed for discrete alphabet. Their performance and asymptotic properties are studied theoretically. Later these are extended to continuous alphabets. Their performance with two well known universal source codes, Lempel-Ziv code and Krichevsky-Trofimov estimator with Arithmetic Encoder are compared. These algorithms are also compared with the tests using various other nonparametric estimators. Finally a decentralized version utilizing spatial diversity is also proposed. Its performance is analysed and asymptotic properties are proved.
label_id: 28 | category: arxiv cs it

ID: 10 | mag_id: 1500126713
Title: online learning in decentralized multiuser resource sharing problems
Abstract: In this paper, we consider the general scenario of resource sharing in a decentralized system when the resource rewards/qualities are time-varying and unknown to the users, and using the same resource by multiple users leads to reduced quality due to resource sharing. Firstly, we consider a user-independent reward model with no communication between the users, where a user gets feedback about the congestion level in the resource it uses. Secondly, we consider user-specific rewards and allow costly communication between the users. The users have a cooperative goal of achieving the highest system utility. There are multiple obstacles in achieving this goal such as the decentralized nature of the system, unknown resource qualities, communication, computation and switching costs. We propose distributed learning algorithms with logarithmic regret with respect to the optimal allocation. Our logarithmic regret result holds under both i.i.d. and Markovian reward models, as well as under communication, computation and switching costs.
label_id: 24 | category: arxiv cs lg

ID: 11 | mag_id: 1512190626
Title: truthful secretaries with budgets
Abstract: We study online auction settings in which agents arrive and depart dynamically in a random (secretary) order, and each agent's private type consists of the agent's arrival and departure times, value and budget. We consider multi-unit auctions with additive agents for the allocation of both divisible and indivisible items. For both settings, we devise truthful mechanisms that give a constant approximation with respect to the auctioneer's revenue, under a large market assumption. For divisible items, we devise in addition a truthful mechanism that gives a constant approximation with respect to the liquid welfare --- a natural efficiency measure for budgeted settings introduced by Dobzinski and Paes Leme [ICALP'14]. Our techniques provide high-level principles for transforming offline truthful mechanisms into online ones, with or without budget constraints. To the best of our knowledge, this is the first work that addresses the non-trivial challenge of combining online settings with budgeted agents.
label_id: 36 | category: arxiv cs gt

ID: 12 | mag_id: 1524631297
Title: improving the bound on the rip constant in generalized orthogonal matching pursuit
Abstract: The generalized Orthogonal Matching Pursuit (gOMP) is a recently proposed compressive sensing greedy recovery algorithm which generalizes the OMP algorithm by selecting N (≥ 1) atoms in each iteration. In this letter, we demonstrate that the gOMP can successfully reconstruct a K-sparse signal from a compressed measurement y=Φx by a maximum of K iterations if the sensing matrix Φ satisfies the Restricted Isometry Property (RIP) of order NK, with the RIP constant δNK satisfying δNK < √N/(√K+2√N). The proposed bound is an improvement over the existing bound on δNK. We also show that by increasing the RIP order just by one (i.e., NK+1 from NK), it is possible to refine the bound further to δNK+1 < √N/(√K+√N), which is consistent (for N=1) with the near optimal bound on δK+1 in OMP.
label_id: 28 | category: arxiv cs it

ID: 13 | mag_id: 1525384803
Title: a system for reflection in c
Abstract: Object-oriented programming languages such as Java and Objective C have become popular for implementing agent-based and other object-based simulations since objects in those languages can {\em reflect} (i.e. make runtime queries of an object's structure). This allows, for example, a fairly trivial {\em serialisation} routine (conversion of an object into a binary representation that can be stored or passed over a network) to be written. However C++ does not offer this ability, as type information is thrown away at compile time. Yet C++ is often a preferred development environment, whether for performance reasons or for its expressive features such as operator overloading. In this paper, we present the {\em Classdesc} system which brings many of the benefits of object reflection to C++.
label_id: 22 | category: arxiv cs pl

ID: 14 | mag_id: 1528301850
Title: a bi level view of inpainting based image compression
Abstract: Inpainting based image compression approaches, especially linear and non-linear diffusion models, are an active research topic for lossy image compression. The major challenge in these compression models is to find a small set of descriptive supporting points, which allow for an accurate reconstruction of the original image. It turns out in practice that this is a challenging problem even for the simplest Laplacian interpolation model. In this paper, we revisit the Laplacian interpolation compression model and introduce two fast algorithms, namely successive preconditioning primal dual algorithm and the recently proposed iPiano algorithm, to solve this problem efficiently. Furthermore, we extend the Laplacian interpolation based compression model to a more general form, which is based on principles from bi-level optimization. We investigate two different variants of the Laplacian model, namely biharmonic interpolation and smoothed Total Variation regularization. Our numerical results show that significant improvements can be obtained from the biharmonic interpolation model, and it can recover an image with very high quality from only 5% pixels.
label_id: 16 | category: arxiv cs cv

ID: 15 | mag_id: 1537465387
Title: distributed graph automata
Abstract: Combining ideas from distributed algorithms and alternating automata, we introduce a new class of finite graph automata that recognize precisely the languages of finite graphs definable in monadic second-order logic. By restricting transitions to be nondeterministic or deterministic, we also obtain two strictly weaker variants of our automata for which the emptiness problem is decidable.
label_id: 33 | category: arxiv cs fl

ID: 16 | mag_id: 1539916885
Title: randomness efficient rumor spreading
Abstract: We study the classical rumor spreading problem, which is used to spread information in an unknown network with $n$ nodes. We present the first protocol for any expander graph $G$ with $n$ nodes and minimum degree $\Theta(n)$ such that, the protocol informs every node in $O(\log n)$ rounds with high probability, and uses $O(\log n\log\log n)$ random bits in total. The runtime of our protocol is tight, and the randomness requirement of $O(\log n\log\log n)$ random bits almost matches the lower bound of $\Omega(\log n)$ random bits. We further study rumor spreading protocols for more general graphs, and for several graph topologies our protocols are as fast as the classical protocol and use $\tilde{O}(\log n)$ random bits in total, in contrast to $O(n\log^2n)$ random bits used in the well-known rumor spreading push protocol. These results together give us almost full understanding of the randomness requirement for this basic epidemic process. Our protocols rely on a novel reduction between rumor spreading processes and branching programs, and this reduction provides a general framework to derandomize these complex and distributed epidemic processes. Interestingly, one cannot simply apply PRGs for branching programs as rumor spreading process is not characterized by small-space computation. Our protocols require the composition of several pseudorandom objects, e.g. pseudorandom generators, and pairwise independent generators. Besides designing rumor spreading protocols, the techniques developed here may have applications in studying the randomness complexity of distributed algorithms.
label_id: 34 | category: arxiv cs ds

ID: 17 | mag_id: 1542788159
Title: back to the past source identification in diffusion networks from partially observed cascades
Abstract: When a piece of malicious information becomes rampant in an information diffusion network, can we identify the source node that originally introduced the piece into the network and infer the time when it initiated this? Being able to do so is critical for curtailing the spread of malicious information, and reducing the potential losses incurred. This is a very challenging problem since typically only incomplete traces are observed and we need to unroll the incomplete traces into the past in order to pinpoint the source. In this paper, we tackle this problem by developing a two-stage framework, which first learns a continuous-time diffusion network model based on historical diffusion traces and then identifies the source of an incomplete diffusion trace by maximizing the likelihood of the trace under the learned model. Experiments on both large synthetic and real-world data show that our framework can effectively go back to the past, and pinpoint the source node and its initiation time significantly more accurately than previous state-of-the-arts.
label_id: 26 | category: arxiv cs si

ID: 18 | mag_id: 1544145018
Title: bayesian two sample tests
Abstract: In this paper, we present two classes of Bayesian approaches to the two-sample problem. Our first class of methods extends the Bayesian t-test to include all parametric models in the exponential family and their conjugate priors. Our second class of methods uses Dirichlet process mixtures (DPM) of such conjugate-exponential distributions as flexible nonparametric priors over the unknown distributions.
label_id: 24 | category: arxiv cs lg

ID: 19 | mag_id: 1546946208
Title: electrical structure based pmu placement in electric power systems
Abstract: Recent work on complex networks compared the topological and electrical structures of the power grid, taking into account the underlying physical laws that govern the electrical connectivity between various components in the network. A distance metric, namely, resistance distance was introduced to provide a more comprehensive description of interconnections in power systems compared with the topological structure, which is based only on geographic connections between network components. Motivated by these studies, in this paper we revisit the phasor measurement unit (PMU) placement problem by deriving the connectivity matrix of the network using resistance distances between buses in the grid, and use it in the integer program formulations for several standard IEEE bus systems. The main result of this paper is rather discouraging: more number of PMUs are required, compared with those obtained using the topological structure, to meet the desired objective of complete network observability without zero injection measurements. However, in light of recent advances in the electrical structure of the grid, our study provides a more realistic perspective of PMU placement in power systems. By further exploring the connectivity matrix derived using the electrical structure, we devise a procedure to solve the placement problem without resorting to linear programming.
label_id: 19 | category: arxiv cs sy

ID: 20 | mag_id: 1550373401
Title: on state dependent broadcast channels with cooperation
Abstract: In this paper, we investigate problems of communication over physically degraded, state-dependent broadcast channels (BCs) with cooperating decoders. Two different setups are considered and their capacity regions are characterized. First, we study a setting in which one decoder can use a finite capacity link to send the other decoder information regarding the messages or the channel states. In this scenario we analyze two cases: one where noncausal state information is available to the encoder and the strong decoder and the other where state information is available only to the encoder in a causal manner. Second, we examine a setting in which the cooperation between the decoders is limited to taking place before the outputs of the channel are given. In this case, one decoder, which is informed of the state sequence noncausally, can cooperate only to send the other decoder rate-limited information about the state sequence. The proofs of the capacity regions introduce a new method of coding for channels with cooperation between different users, where we exploit the link between the decoders for multiple-binning. Finally, we discuss the optimality of using rate splitting techniques when coding for cooperative BCs. In particular, we show that rate splitting is not necessarily optimal when coding for cooperative BCs by solving an example in which our method of coding outperforms rate splitting.
label_id: 28 | category: arxiv cs it

ID: 21 | mag_id: 1551937652
Title: detecting simultaneous integer relations for several real vectors
Abstract: An algorithm which either finds a nonzero integer vector m for given t real n-dimensional vectors $x_1, \ldots, x_t$ such that $x^T m = 0$ or proves that no such integer vector with norm less than a given bound exists is presented in this paper. The cost of the algorithm is at most $O(n^4 + n^3 \log \lambda(X))$ exact arithmetic operations in dimension n and the least Euclidean norm $\lambda(X)$ of such integer vectors. It matches the best complexity upper bound known for this problem. Experimental data show that the algorithm is better than an already existing algorithm in the literature. In application, the algorithm is used to get a complete method for finding the minimal polynomial of an unknown complex algebraic number from its approximation, which runs even faster than the corresponding Maple built-in function.
label_id: 14 | category: arxiv cs sc

ID: 22 | mag_id: 1553895888
Title: shannon meets carnot mutual information via thermodynamics
Abstract: In this contribution, the Gaussian channel is represented as an equivalent thermal system allowing to express its input-output mutual information in terms of thermodynamic quantities. This thermodynamic description of the mutual information is based upon a generalization of the $2^{nd}$ thermodynamic law and provides an alternative proof to the Guo-Shamai-Verd\'{u} theorem, giving an intriguing connection between this remarkable theorem and the most fundamental laws of nature - the laws of thermodynamics.
label_id: 28 | category: arxiv cs it

ID: 23 | mag_id: 1555565700
Title: on list decodability of random rank metric codes
Abstract: In the present paper, we consider list decoding for both random rank metric codes and random linear rank metric codes. Firstly, we show that, for arbitrary $0 0$ ($\epsilon$ and $R$ are independent), if $0 0$ and any $0<\rho<1$, with high probability a random $F_q$-linear rank metric codes with rate $R=(1-\rho)(1-b\rho)-\epsilon$ can be list decoded up to a fraction $\rho$ of rank errors with constant list size $L$ satisfying $L\leq O(\exp(1/\epsilon))$.
label_id: 28 | category: arxiv cs it

ID: 24 | mag_id: 1556595261
Title: dealing with run time variability in service robotics towards a dsl for non functional properties
Abstract: Service robots act in open-ended, natural environments. Therefore, due to combinatorial explosion of potential situations, it is not possible to foresee all eventualities in advance during robot design. In addition, due to limited resources on a mobile robot, it is not feasible to plan any action on demand. Hence, it is necessary to provide a mechanism to express variability at design-time that can be efficiently resolved on the robot at run-time based on the then available information. In this paper, we introduce a DSL to express run-time variability focused on the execution quality of the robot (in terms of non-functional properties like safety and task efficiency) under changing situations and limited resources. We underpin the applicability of our approach by an example integrated into an overall robotics architecture.
label_id: 27 | category: arxiv cs ro

25
1,561,232,457
Title: a characterisation of context sensitive languages by consensus games
Abstract: We propose a game for recognising formal languages, in which two players with imperfect information need to coordinate on a common decision, given private input information. The players have a joint objective to avoid an inadmissible decision, in spite of the uncertainty induced by the input. We show that this model of consensus acceptor games characterises context-sensitive languages, and conversely, that winning strategies in such games can be described by context-sensitive languages. This implies that it is undecidable whether a consensus game admits a winning strategy, and, even if so, it is PSPACE-hard to execute one. On the positive side, we show that whenever a winning strategy exists, there exists one that can be implemented by a linear bounded automaton.
33
arxiv cs fl
Title: a characterisation of context sensitive languages by consensus games. Abstract: We propose a game for recognising formal languages, in which two players with imperfect information need to coordinate on a common decision, given private input information. The players have a joint objective to avoid an inadmissible decision, in spite of the uncertainty induced by the input. We show that this model of consensus acceptor games characterises context-sensitive languages, and conversely, that winning strategies in such games can be described by context-sensitive languages. This implies that it is undecidable whether a consensus game admits a winning strategy, and, even if so, it is PSPACE-hard to execute one. On the positive side, we show that whenever a winning strategy exists, there exists one that can be implemented by a linear bounded automaton.
26
1,561,890,487
Title: data structures for approximate range counting
Abstract: We present new data structures for approximately counting the number of points in an orthogonal range. There is a deterministic linear space data structure that supports updates in O(1) time and approximates the number of elements in a 1-D range up to an additive term $k^{1/c}$ in $O(\log \log U\cdot\log \log n)$ time, where $k$ is the number of elements in the answer, $U$ is the size of the universe and $c$ is an arbitrary fixed constant. We can estimate the number of points in a two-dimensional orthogonal range up to an additive term $k^{\rho}$ in $O(\log \log U+ (1/\rho)\log\log n)$ time for any $\rho>0$. We can estimate the number of points in a three-dimensional orthogonal range up to an additive term $k^{\rho}$ in $O(\log \log U + (\log\log n)^3+ (3^v)\log\log n)$ time for $v=\log\frac{1}{\rho}/\log\frac{3}{2}+2$.
34
arxiv cs ds
Title: data structures for approximate range counting. Abstract: We present new data structures for approximately counting the number of points in an orthogonal range. There is a deterministic linear space data structure that supports updates in O(1) time and approximates the number of elements in a 1-D range up to an additive term $k^{1/c}$ in $O(\log \log U\cdot\log \log n)$ time, where $k$ is the number of elements in the answer, $U$ is the size of the universe and $c$ is an arbitrary fixed constant. We can estimate the number of points in a two-dimensional orthogonal range up to an additive term $k^{\rho}$ in $O(\log \log U+ (1/\rho)\log\log n)$ time for any $\rho>0$. We can estimate the number of points in a three-dimensional orthogonal range up to an additive term $k^{\rho}$ in $O(\log \log U + (\log\log n)^3+ (3^v)\log\log n)$ time for $v=\log\frac{1}{\rho}/\log\frac{3}{2}+2$.
27
1,566,387,761
Title: holographic transformation for quantum factor graphs
Abstract: Recently, a general tool called a holographic transformation, which transforms an expression of the partition function into another form, has been used for polynomial-time algorithms and for the improvement and understanding of belief propagation. In this work, the holographic transformation is generalized to quantum factor graphs.
28
arxiv cs it
Title: holographic transformation for quantum factor graphs. Abstract: Recently, a general tool called a holographic transformation, which transforms an expression of the partition function into another form, has been used for polynomial-time algorithms and for the improvement and understanding of belief propagation. In this work, the holographic transformation is generalized to quantum factor graphs.
28
1,573,599,372
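A holographic transformation can be seen concretely on the smallest possible classical factor graph: two factors sharing one variable. The sketch below is our own toy, not the paper's quantum construction; it changes basis with an invertible $T$ on one factor and $T^{-\top}$ on the other and confirms the partition function is unchanged.

```python
# Minimal classical holographic transformation: two factors f and g share one
# binary variable, so Z = sum_x f(x) g(x) = f . g.  Replacing f by T f and g by
# T^{-T} g for any invertible T leaves Z unchanged, since
# (T f) . (T^{-T} g) = f^T T^T T^{-T} g = f . g.
import numpy as np

rng = np.random.default_rng(0)
f = rng.random(2)                      # factor values f(0), f(1)
g = rng.random(2)                      # factor values g(0), g(1)
T = rng.random((2, 2)) + np.eye(2)     # a (generically) invertible basis change

Z = f @ g
Z_transformed = (T @ f) @ (np.linalg.inv(T).T @ g)
print(Z, Z_transformed)                # equal up to floating-point error
```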
Title: rooted trees with probabilities revisited
Abstract: Rooted trees with probabilities are convenient to represent a class of random processes with memory. They make it possible to describe and analyze variable length codes for data compression and distribution matching. In this work, the Leaf-Average Node-Sum Interchange Theorem (LANSIT) and the well-known applications to path length and leaf entropy are re-stated. The LANSIT is then applied to informational divergence. Next, the differential LANSIT is derived, which allows normalized functionals of leaf distributions to be written as an average of functionals of branching distributions. Joint distributions of random variables and the corresponding conditional distributions are special cases of leaf distributions and branching distributions. Using the differential LANSIT, Pinsker’s inequality is formulated for rooted trees with probabilities, with an application to the approximation of product distributions. In particular, it is shown that if the normalized informational divergence of a distribution and a product distribution approaches zero, then the entropy rate approaches the entropy rate of the product distribution.
28
arxiv cs it
Title: rooted trees with probabilities revisited. Abstract: Rooted trees with probabilities are convenient to represent a class of random processes with memory. They make it possible to describe and analyze variable length codes for data compression and distribution matching. In this work, the Leaf-Average Node-Sum Interchange Theorem (LANSIT) and the well-known applications to path length and leaf entropy are re-stated. The LANSIT is then applied to informational divergence. Next, the differential LANSIT is derived, which allows normalized functionals of leaf distributions to be written as an average of functionals of branching distributions. Joint distributions of random variables and the corresponding conditional distributions are special cases of leaf distributions and branching distributions. Using the differential LANSIT, Pinsker’s inequality is formulated for rooted trees with probabilities, with an application to the approximation of product distributions. In particular, it is shown that if the normalized informational divergence of a distribution and a product distribution approaches zero, then the entropy rate approaches the entropy rate of the product distribution.
29
1,578,902,217
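The leaf-entropy identity that the LANSIT generalizes can be checked directly on a small tree: the entropy of the leaf distribution equals the sum, over internal nodes, of the node's reaching probability times the entropy of its branching distribution. The tree below is a hand-picked example of ours.

```python
# Leaf-entropy check for a rooted tree with probabilities: the root branches
# (0.3, 0.7), and the 0.3-child branches again (0.5, 0.5).  The entropy of the
# resulting leaf distribution must equal the probability-weighted node sum.
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                       # convention 0 log 0 = 0
    return -np.sum(p * np.log(p))

leaves = [0.3 * 0.5, 0.3 * 0.5, 0.7]
node_sum = 1.0 * entropy([0.3, 0.7]) + 0.3 * entropy([0.5, 0.5])
print(entropy(leaves), node_sum)       # both ~0.8188 nats
```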
Title: time critical social mobilization
Abstract: The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web’s ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.
3
arxiv cs cy
Title: time critical social mobilization. Abstract: The World Wide Web is commonly seen as a platform that can harness the collective abilities of large numbers of people to accomplish tasks with unprecedented speed, accuracy, and scale. To explore the Web’s ability for social mobilization, the Defense Advanced Research Projects Agency (DARPA) held the DARPA Network Challenge, in which competing teams were asked to locate 10 red weather balloons placed at locations around the continental United States. Using a recursive incentive mechanism that both spread information about the task and incentivized individuals to act, our team was able to find all 10 balloons in less than 9 hours, thus winning the Challenge. We analyzed the theoretical and practical properties of this mechanism and compared it with other approaches.
30
1,581,827,225
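The recursive incentive mechanism reported for the winning team halves the reward at each step up the referral chain (the balloon finder received $2000, their recruiter $1000, and so on), so the payout per balloon is a geometric series bounded by $4000 regardless of chain length. A sketch of that arithmetic, with the dollar amounts taken from the published account of the mechanism:

```python
# Total payout per balloon under the recursive incentive scheme: the finder
# gets the full reward, and each ancestor in the referral chain gets half of
# what the person below them got.  The sum is bounded by twice the finder
# reward, which keeps the mechanism affordable for arbitrarily long chains.
def chain_payout(chain_length: int, finder_reward: float = 2000.0) -> float:
    return sum(finder_reward / 2**k for k in range(chain_length))

for n in (1, 2, 5, 20):
    print(n, chain_payout(n))          # approaches, never exceeds, 4000
```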
Title: homomorphic encryption theory and application
Abstract: The goal of this chapter is to present a survey of homomorphic encryption techniques and their applications. After a detailed discussion on the introduction and motivation of the chapter, we present some basic concepts of cryptography. The fundamental theories of homomorphic encryption are then discussed with suitable examples. The chapter then provides a survey of some of the classical homomorphic encryption schemes existing in the current literature. Various applications and salient properties of homomorphic encryption schemes are then discussed in detail. The chapter then introduces the most important and recent research direction in the field: fully homomorphic encryption. A significant number of propositions on fully homomorphic encryption are then discussed. Finally, the chapter concludes by outlining some emerging research trends in this exciting field of cryptography.
4
arxiv cs cr
Title: homomorphic encryption theory and application. Abstract: The goal of this chapter is to present a survey of homomorphic encryption techniques and their applications. After a detailed discussion on the introduction and motivation of the chapter, we present some basic concepts of cryptography. The fundamental theories of homomorphic encryption are then discussed with suitable examples. The chapter then provides a survey of some of the classical homomorphic encryption schemes existing in the current literature. Various applications and salient properties of homomorphic encryption schemes are then discussed in detail. The chapter then introduces the most important and recent research direction in the field: fully homomorphic encryption. A significant number of propositions on fully homomorphic encryption are then discussed. Finally, the chapter concludes by outlining some emerging research trends in this exciting field of cryptography.
31
1,585,744,708
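One classical scheme such a survey covers is textbook (unpadded) RSA, which is multiplicatively homomorphic: the product of two ciphertexts decrypts to the product of the plaintexts. A toy sketch with deliberately insecure parameters:

```python
# Multiplicative homomorphism of textbook RSA: Enc(m1)*Enc(m2) mod n decrypts
# to m1*m2 mod n.  Tiny parameters, for illustration only (requires Python 3.8+
# for the three-argument pow with a negative exponent).
p, q, e = 61, 53, 17
n = p * q                              # 3233
phi = (p - 1) * (q - 1)                # 3120
d = pow(e, -1, phi)                    # private exponent, inverse of e mod phi

enc = lambda m: pow(m, e, n)
dec = lambda c: pow(c, d, n)

m1, m2 = 42, 55
product_of_ciphertexts = (enc(m1) * enc(m2)) % n
print(dec(product_of_ciphertexts), (m1 * m2) % n)   # both 2310
```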
Title: learning transformations for clustering and classification
Abstract: A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within subspaces, and increase separation between subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification.
16
arxiv cs cv
Title: learning transformations for clustering and classification. Abstract: A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a maximally separated structure for data from different subspaces. In this way, we reduce variations within subspaces, and increase separation between subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the low-rank structures of the transformed subspaces, we further introduce a fast subspace clustering technique, which efficiently combines robust PCA with sparse modeling. When class labels are present at the training stage, we show this low-rank transformation framework also significantly enhances classification performance. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art methods for subspace clustering and classification.
32
1,586,330,215
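The optimization criterion above, the nuclear norm, is the sum of singular values and serves as the convex surrogate for matrix rank. A small sketch of why that proxy is informative, on synthetic matrices of our own choosing:

```python
# Rank vs nuclear norm on a noiseless rank-3 matrix and a noisy copy: the
# nuclear norm (sum of singular values) varies smoothly under perturbation,
# which is what makes it usable as a convex surrogate for the rank.
import numpy as np

rng = np.random.default_rng(1)
U, V = rng.standard_normal((50, 3)), rng.standard_normal((3, 40))
low_rank = U @ V                        # rank-3 by construction
noisy = low_rank + 0.01 * rng.standard_normal((50, 40))

for name, M in (("low_rank", low_rank), ("noisy", noisy)):
    s = np.linalg.svd(M, compute_uv=False)
    print(name, "numerical rank:", int(np.sum(s > 1e-8)),
          "nuclear norm:", round(float(s.sum()), 2))
```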
Title: methods for integrating knowledge with the three weight optimization algorithm for hybrid cognitive processing
Abstract: In this paper we consider optimization as an approach for quickly and flexibly developing hybrid cognitive capabilities that are efficient, scalable, and can exploit knowledge to improve solution speed and quality. In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems. We propose novel methods by which to integrate knowledge with this algorithm to improve expressiveness, efficiency, and scaling, and demonstrate these techniques on two example problems (Sudoku and circle packing).
10
arxiv cs ai
Title: methods for integrating knowledge with the three weight optimization algorithm for hybrid cognitive processing. Abstract: In this paper we consider optimization as an approach for quickly and flexibly developing hybrid cognitive capabilities that are efficient, scalable, and can exploit knowledge to improve solution speed and quality. In this context, we focus on the Three-Weight Algorithm, which aims to solve general optimization problems. We propose novel methods by which to integrate knowledge with this algorithm to improve expressiveness, efficiency, and scaling, and demonstrate these techniques on two example problems (Sudoku and circle packing).
33
1,591,405,962
Title: csma local area networking under dynamic altruism
Abstract: In this paper, we consider medium access control of local area networks (LANs) under limited-information conditions as befits a distributed system. Rather than assuming “by rule” conformance to a protocol designed to regulate packet-flow rates (e.g., CSMA windowing), we begin with a noncooperative game framework and build a dynamic altruism term into the net utility. The effects of altruism are analyzed at Nash equilibrium for both the ALOHA and CSMA frameworks in the quasistationary (fictitious play) regime. We consider either power or throughput based costs of networking, and the cases of identical or heterogeneous (independent) users/players. In a numerical study we consider diverse players, and we see that the effects of altruism for similar players can be beneficial in the presence of significant congestion, but excessive altruism may lead to underuse of the channel when demand is low.
8
arxiv cs ni
Title: csma local area networking under dynamic altruism. Abstract: In this paper, we consider medium access control of local area networks (LANs) under limited-information conditions as befits a distributed system. Rather than assuming “by rule” conformance to a protocol designed to regulate packet-flow rates (e.g., CSMA windowing), we begin with a noncooperative game framework and build a dynamic altruism term into the net utility. The effects of altruism are analyzed at Nash equilibrium for both the ALOHA and CSMA frameworks in the quasistationary (fictitious play) regime. We consider either power or throughput based costs of networking, and the cases of identical or heterogeneous (independent) users/players. In a numerical study we consider diverse players, and we see that the effects of altruism for similar players can be beneficial in the presence of significant congestion, but excessive altruism may lead to underuse of the channel when demand is low.
34
1,595,098,738
Title: face frontalization for alignment and recognition
Abstract: Recently, it was shown that excellent results can be achieved in both face landmark localization and pose-invariant face recognition. These breakthroughs are attributed to the efforts of the community to manually annotate facial images in many different poses and to collect 3D face data. In this paper, we propose a novel method for joint face landmark localization and frontal face reconstruction (pose correction) using a small set of frontal images only. By observing that the frontal facial image is the one with the minimum rank among all different poses, we formulate an appropriate model which is able to jointly recover the facial landmarks as well as the frontalized version of the face. To this end, a suitable optimization problem, involving the minimization of the nuclear norm and the matrix $\ell_1$ norm, is solved. The proposed method is assessed in frontal face reconstruction (pose correction), face landmark localization, and pose-invariant face recognition and verification by conducting experiments on $6$ facial image databases. The experimental results demonstrate the effectiveness of the proposed method.
16
arxiv cs cv
Title: face frontalization for alignment and recognition. Abstract: Recently, it was shown that excellent results can be achieved in both face landmark localization and pose-invariant face recognition. These breakthroughs are attributed to the efforts of the community to manually annotate facial images in many different poses and to collect 3D face data. In this paper, we propose a novel method for joint face landmark localization and frontal face reconstruction (pose correction) using a small set of frontal images only. By observing that the frontal facial image is the one with the minimum rank among all different poses, we formulate an appropriate model which is able to jointly recover the facial landmarks as well as the frontalized version of the face. To this end, a suitable optimization problem, involving the minimization of the nuclear norm and the matrix $\ell_1$ norm, is solved. The proposed method is assessed in frontal face reconstruction (pose correction), face landmark localization, and pose-invariant face recognition and verification by conducting experiments on $6$ facial image databases. The experimental results demonstrate the effectiveness of the proposed method.
35
1,596,723,206
Title: from bounded affine types to automatic timing analysis
Abstract: Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this paper we introduce an affine bounded linear typing discipline on a general notion of resource which can be modeled in a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure of the semiring equational theory, and a (coherent) categorical semantics. This is a very useful type-theoretic and denotational framework for many applications to resource-sensitive compilation, and it represents a generalization of several existing type systems. As a non-trivial instance, motivated by our ongoing work on hardware compilation, we present a complex new application to calculating and controlling timing of execution in a (recursion-free) higher-order functional programming language with local store.
22
arxiv cs pl
Title: from bounded affine types to automatic timing analysis. Abstract: Bounded linear types have proved to be useful for automated resource analysis and control in functional programming languages. In this paper we introduce an affine bounded linear typing discipline on a general notion of resource which can be modeled in a semiring. For this type system we provide both a general type-inference procedure, parameterized by the decision procedure of the semiring equational theory, and a (coherent) categorical semantics. This is a very useful type-theoretic and denotational framework for many applications to resource-sensitive compilation, and it represents a generalization of several existing type systems. As a non-trivial instance, motivated by our ongoing work on hardware compilation, we present a complex new application to calculating and controlling timing of execution in a (recursion-free) higher-order functional programming language with local store.
36
1,601,434,380
Title: an efficient way to perform the assembly of finite element matrices in matlab and octave
Abstract: We describe different optimization techniques to perform the assembly of finite element matrices in Matlab and Octave, from the standard approach to recent vectorized ones, without resorting to any low-level language. We finally obtain a simple and efficient vectorized algorithm able to compete in performance with dedicated software such as FreeFEM++. The principle of this assembly algorithm is general; we present it for different matrices in the P1 finite element case and in linear elasticity. We present numerical results which illustrate the computational costs of the different approaches.
0
arxiv cs na
Title: an efficient way to perform the assembly of finite element matrices in matlab and octave. Abstract: We describe different optimization techniques to perform the assembly of finite element matrices in Matlab and Octave, from the standard approach to recent vectorized ones, without resorting to any low-level language. We finally obtain a simple and efficient vectorized algorithm able to compete in performance with dedicated software such as FreeFEM++. The principle of this assembly algorithm is general; we present it for different matrices in the P1 finite element case and in linear elasticity. We present numerical results which illustrate the computational costs of the different approaches.
37
1,606,458,877
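The same vectorization idea carries over directly to NumPy/SciPy (the paper itself works in Matlab/Octave): compute all local element matrices at once, then let the sparse COO constructor sum duplicate (row, col) entries, which is exactly the scatter-add step of assembly. A sketch for the P1 mass matrix on a two-triangle toy mesh of our own; the local matrix formula is the standard (area/12)*[[2,1,1],[1,2,1],[1,1,2]].

```python
# Vectorized P1 mass-matrix assembly: no loop over elements; duplicate
# (row, col) pairs are summed by the COO constructor.
import numpy as np
from scipy.sparse import coo_matrix

nodes = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], dtype=float)
tris = np.array([[0, 1, 2], [1, 3, 2]])     # two triangles on the unit square

# Triangle areas via the 2-D cross product, computed for all elements at once.
d1 = nodes[tris[:, 1]] - nodes[tris[:, 0]]
d2 = nodes[tris[:, 2]] - nodes[tris[:, 0]]
area = 0.5 * np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0])

ref = np.array([[2, 1, 1], [1, 2, 1], [1, 1, 2]]) / 12.0
vals = (area[:, None, None] * ref).ravel()
rows = np.repeat(tris, 3, axis=1).ravel()   # row index of each local entry
cols = np.tile(tris, (1, 3)).ravel()        # column index of each local entry

M = coo_matrix((vals, (rows, cols)), shape=(4, 4)).tocsr()
print(M.toarray())        # entries sum to the total mesh area (1.0)
```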
Title: how auto encoders could provide credit assignment in deep networks via target propagation
Abstract: We propose to exploit {\em reconstruction} as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation playing a role similar to back-propagation but helping to reduce the reliance on derivatives in order to perform credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of input and target (or of any side information) in input, then its reconstruction of the input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier to model by them, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder mediated target propagation could play in brains the role of credit assignment through many non-linear, noisy and discrete transformations.
24
arxiv cs lg
Title: how auto encoders could provide credit assignment in deep networks via target propagation. Abstract: We propose to exploit {\em reconstruction} as a layer-local training signal for deep learning. Reconstructions can be propagated in a form of target propagation playing a role similar to back-propagation but helping to reduce the reliance on derivatives in order to perform credit assignment across many levels of possibly strong non-linearities (which is difficult for back-propagation). A regularized auto-encoder tends to produce a reconstruction that is a more likely version of its input, i.e., a small move in the direction of higher likelihood. By generalizing gradients, target propagation may also make it possible to train deep networks with discrete hidden units. If the auto-encoder takes both a representation of input and target (or of any side information) in input, then its reconstruction of the input representation provides a target towards a representation that is more likely, conditioned on all the side information. A deep auto-encoder decoding path generalizes gradient propagation in a learned way that could thus handle not just infinitesimal changes but larger, discrete changes, hopefully allowing credit assignment through a long chain of non-linear operations. In addition to each layer being a good auto-encoder, the encoder also learns to please the upper layers by transforming the data into a space where it is easier to model by them, flattening manifolds and disentangling factors. The motivations and theoretical justifications for this approach are laid down in this paper, along with conjectures that will have to be verified either mathematically or experimentally, including a hypothesis stating that such auto-encoder mediated target propagation could play in brains the role of credit assignment through many non-linear, noisy and discrete transformations.
38
1,607,460,189
Title: constrained parametric proposals and pooling methods for semantic segmentation in rgb d images
Abstract: We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many instances from many visual categories. Our approach is based on a parametric figure-ground intensity and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image, followed by a sequential inference algorithm that integrates the proposals into a complete scene estimate. Our contributions can be summarized as proposing the following: (1) a generalization of parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time, (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels, (3) an inference procedure that can resolve conflicts in overlapping spatial partitions, and handles scenes with a large number of object category instances, of very different scales, (4) extensive evaluation of the impact of depth, as well as the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state of the art results in the challenging NYU Depth v2 dataset, extended for the RMRC 2013 Indoor Segmentation Challenge, where currently the proposed model ranks first, with an average score of 24.61%, winning 39 of the classes. Moreover, we show that by combining second-order and deep learning features, over 15% relative accuracy improvements can be additionally achieved. In a scene classification benchmark, our methodology further improves the state of the art by 24%.
16
arxiv cs cv
Title: constrained parametric proposals and pooling methods for semantic segmentation in rgb d images. Abstract: We focus on the problem of semantic segmentation based on RGB-D data, with emphasis on analyzing cluttered indoor scenes containing many instances from many visual categories. Our approach is based on a parametric figure-ground intensity and depth-constrained proposal process that generates spatial layout hypotheses at multiple locations and scales in the image, followed by a sequential inference algorithm that integrates the proposals into a complete scene estimate. Our contributions can be summarized as proposing the following: (1) a generalization of parametric max flow figure-ground proposal methodology to take advantage of intensity and depth information, in order to systematically and efficiently generate the breakpoints of an underlying spatial model in polynomial time, (2) new region description methods based on second-order pooling over multiple features constructed using both intensity and depth channels, (3) an inference procedure that can resolve conflicts in overlapping spatial partitions, and handles scenes with a large number of object category instances, of very different scales, (4) extensive evaluation of the impact of depth, as well as the effectiveness of a large number of descriptors, both pre-designed and automatically obtained using deep learning, in a difficult RGB-D semantic segmentation problem with 92 classes. We report state of the art results in the challenging NYU Depth v2 dataset, extended for the RMRC 2013 Indoor Segmentation Challenge, where currently the proposed model ranks first, with an average score of 24.61%, winning 39 of the classes. Moreover, we show that by combining second-order and deep learning features, over 15% relative accuracy improvements can be additionally achieved. In a scene classification benchmark, our methodology further improves the state of the art by 24%.
39
1,615,334,113
Title: cooperative game theoretic solution concepts for top k problems
Abstract: The problem of finding the $k$ most critical nodes, referred to as the $top\text{-}k$ problem, is a very important one in several contexts such as information diffusion and preference aggregation in social networks, clustering of data points, etc. It has been observed in the literature that the value allotted to a node by most of the popular cooperative game theoretic solution concepts acts as a good measure of the appropriateness of that node (or a data point) to be included in the $top\text{-}k$ set, by itself. However, in general, nodes having the highest $k$ values are not the desirable $top\text{-}k$ nodes, because the appropriateness of a node to be a part of the $top\text{-}k$ set depends on other nodes in the set. As this is not explicitly captured by cooperative game theoretic solution concepts, it is necessary to post-process the obtained values in order to output the suitable $top\text{-}k$ nodes. In this paper, we propose several such post-processing methods and give the reasoning behind each of them, and also propose a standalone algorithm that combines cooperative game theoretic solution concepts with the popular greedy hill-climbing algorithm.
26
arxiv cs si
Title: cooperative game theoretic solution concepts for top k problems. Abstract: The problem of finding the $k$ most critical nodes, referred to as the $top\text{-}k$ problem, is a very important one in several contexts such as information diffusion and preference aggregation in social networks, clustering of data points, etc. It has been observed in the literature that the value allotted to a node by most of the popular cooperative game theoretic solution concepts acts as a good measure of the appropriateness of that node (or a data point) to be included in the $top\text{-}k$ set, by itself. However, in general, nodes having the highest $k$ values are not the desirable $top\text{-}k$ nodes, because the appropriateness of a node to be a part of the $top\text{-}k$ set depends on other nodes in the set. As this is not explicitly captured by cooperative game theoretic solution concepts, it is necessary to post-process the obtained values in order to output the suitable $top\text{-}k$ nodes. In this paper, we propose several such post-processing methods and give the reasoning behind each of them, and also propose a standalone algorithm that combines cooperative game theoretic solution concepts with the popular greedy hill-climbing algorithm.
40
1,618,900,328
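The core observation above, that individually high-value nodes need not form a good set, can be made concrete with a toy coverage game: Monte Carlo Shapley values split credit between redundant twins, while a greedy hill-climb over the set function picks complementary nodes directly. The game and all names below are our own illustration, not the paper's algorithm.

```python
# Nodes 0 and 1 cover the same items, so their Shapley values split the credit
# (~2.0 each) while node 2 earns ~3.0 outright.  Greedy hill-climbing over the
# coverage function selects complementary nodes in one pass.
import random

cover = {0: {1, 2, 3, 4}, 1: {1, 2, 3, 4}, 2: {5, 6, 7}, 3: {8}}

def value(coalition):
    return len(set().union(*(cover[i] for i in coalition))) if coalition else 0

def shapley(n_samples=2000, rng=random.Random(0)):
    nodes, phi = list(cover), dict.fromkeys(cover, 0.0)
    for _ in range(n_samples):
        rng.shuffle(nodes)
        chosen = []
        for i in nodes:                       # marginal contribution of i
            phi[i] += value(chosen + [i]) - value(chosen)
            chosen.append(i)
    return {i: v / n_samples for i, v in phi.items()}

def greedy_top_k(k):
    chosen = []
    for _ in range(k):
        best = max((i for i in cover if i not in chosen),
                   key=lambda i: value(chosen + [i]))
        chosen.append(best)
    return chosen

print(shapley())          # ~{0: 2.0, 1: 2.0, 2: 3.0, 3: 1.0}
print(greedy_top_k(2))    # [0, 2]: complementary coverage, value 7
```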
Title: regulation and the integrity of spreadsheets in the information supply chain
Abstract: Spreadsheets provide many of the key links between information systems, closing the gap between business needs and the capability of central systems. Recent regulations have brought these vulnerable parts of information supply chains into focus. The risk they present to the organisation depends on the role that they fulfil, with generic differences between their use as modeling tools and as operational applications. Four sections of the Sarbanes-Oxley Act (SOX) are particularly relevant to the use of spreadsheets. Compliance with each of these sections is dependent on maintaining the integrity of those spreadsheets acting as operational applications. This can be achieved manually but at high cost. There are a range of commercially available off-the-shelf solutions that can reduce this cost. These may be divided into those that assist in the debugging of logic and more recently the arrival of solutions that monitor the change and user activity taking place in business-critical spreadsheets. ClusterSeven provides one of these monitoring solutions, highlighting areas of operational risk whilst also establishing a database of information to deliver new business intelligence.
3
arxiv cs cy
Title: regulation and the integrity of spreadsheets in the information supply chain. Abstract: Spreadsheets provide many of the key links between information systems, closing the gap between business needs and the capability of central systems. Recent regulations have brought these vulnerable parts of information supply chains into focus. The risk they present to the organisation depends on the role that they fulfil, with generic differences between their use as modeling tools and as operational applications. Four sections of the Sarbanes-Oxley Act (SOX) are particularly relevant to the use of spreadsheets. Compliance with each of these sections is dependent on maintaining the integrity of those spreadsheets acting as operational applications. This can be achieved manually but at high cost. There are a range of commercially available off-the-shelf solutions that can reduce this cost. These may be divided into those that assist in the debugging of logic and more recently the arrival of solutions that monitor the change and user activity taking place in business-critical spreadsheets. ClusterSeven provides one of these monitoring solutions, highlighting areas of operational risk whilst also establishing a database of information to deliver new business intelligence.
41
1,623,729,836
Title: reconfigurable wireless networks
Abstract: Driven by the advent of sophisticated and ubiquitous applications, and the ever-growing need for information, wireless networks are without a doubt steadily evolving into profoundly more complex and dynamic systems. The user demands are progressively rampant, while application requirements continue to expand in both range and diversity. Future wireless networks, therefore, must be equipped with the ability to handle numerous, albeit challenging, requirements. Network reconfiguration, considered as a prominent network paradigm, is envisioned to play a key role in leveraging future network performance and considerably advancing current user experiences. This paper presents a comprehensive overview of reconfigurable wireless networks and an in-depth analysis of reconfiguration at all layers of the protocol stack. Such networks characteristically possess the ability to reconfigure and adapt their hardware and software components and architectures, thus enabling flexible delivery of broad services, as well as sustaining robust operation under highly dynamic conditions. The paper offers a unifying framework for research in reconfigurable wireless networks. This should provide the reader with a holistic view of concepts, methods, and strategies in reconfigurable wireless networks. Focus is given to reconfigurable systems in relatively new and emerging research areas such as cognitive radio networks, cross-layer reconfiguration, and software-defined networks. In addition, modern networks have to be intelligent and capable of self-organization. Thus, this paper discusses the concept of network intelligence as a means to enable reconfiguration in highly complex and dynamic networks. Key processes in network intelligence, such as reasoning, learning, and context awareness, are presented to illustrate how these methods can take reconfiguration to a new level. Finally, the paper is supported with several examples and case studies showing the tremendous impact of reconfiguration on wireless networks.
8
arxiv cs ni
Title: reconfigurable wireless networks. Abstract: Driven by the advent of sophisticated and ubiquitous applications, and the ever-growing need for information, wireless networks are without a doubt steadily evolving into profoundly more complex and dynamic systems. The user demands are progressively rampant, while application requirements continue to expand in both range and diversity. Future wireless networks, therefore, must be equipped with the ability to handle numerous, albeit challenging, requirements. Network reconfiguration, considered as a prominent network paradigm, is envisioned to play a key role in leveraging future network performance and considerably advancing current user experiences. This paper presents a comprehensive overview of reconfigurable wireless networks and an in-depth analysis of reconfiguration at all layers of the protocol stack. Such networks characteristically possess the ability to reconfigure and adapt their hardware and software components and architectures, thus enabling flexible delivery of broad services, as well as sustaining robust operation under highly dynamic conditions. The paper offers a unifying framework for research in reconfigurable wireless networks. This should provide the reader with a holistic view of concepts, methods, and strategies in reconfigurable wireless networks. Focus is given to reconfigurable systems in relatively new and emerging research areas such as cognitive radio networks, cross-layer reconfiguration, and software-defined networks. In addition, modern networks have to be intelligent and capable of self-organization. Thus, this paper discusses the concept of network intelligence as a means to enable reconfiguration in highly complex and dynamic networks. Key processes in network intelligence, such as reasoning, learning, and context awareness, are presented to illustrate how these methods can take reconfiguration to a new level. Finally, the paper is supported with several examples and case studies showing the tremendous impact of reconfiguration on wireless networks.
42
1,633,846,131
Title: contact representations of sparse planar graphs
Abstract: We study representations of graphs by contacts of circular arcs, CCA-representations for short, where the vertices are interior-disjoint circular arcs in the plane and each edge is realized by an endpoint of one arc touching the interior of another. A graph is (2,k)-sparse if every s-vertex subgraph has at most 2s - k edges, and (2, k)-tight if in addition it has exactly 2n - k edges, where n is the number of vertices. Every graph with a CCA-representation is planar and (2, 0)-sparse, and it follows from known results on contacts of line segments that for k >= 3 every (2, k)-sparse graph has a CCA-representation. Hence the question of CCA-representability is open for (2, k)-sparse graphs with 0 <= k <= 2. We partially answer this question by computing CCA-representations for several subclasses of planar (2,0)-sparse graphs. In particular, we show that every plane (2, 2)-sparse graph has a CCA-representation, and that any plane (2, 1)-tight graph or (2, 0)-tight graph dual to a (2, 3)-tight graph or (2, 4)-tight graph has a CCA-representation. Next, we study CCA-representations in which each arc has an empty convex hull. We characterize the plane graphs that have such a representation, based on the existence of a special orientation of the graph edges. Using this characterization, we show that every plane graph of maximum degree 4 has such a representation, but that finding such a representation for a plane (2, 0)-tight graph with maximum degree 5 is an NP-complete problem. Finally, we describe a simple algorithm for representing plane (2, 0)-sparse graphs with wedges, where each vertex is represented with a sequence of two circular arcs (straight-line segments).
20
arxiv cs cg
Title: contact representations of sparse planar graphs. Abstract: We study representations of graphs by contacts of circular arcs, CCA-representations for short, where the vertices are interior-disjoint circular arcs in the plane and each edge is realized by an endpoint of one arc touching the interior of another. A graph is (2,k)-sparse if every s-vertex subgraph has at most 2s - k edges, and (2, k)-tight if in addition it has exactly 2n - k edges, where n is the number of vertices. Every graph with a CCA-representation is planar and (2, 0)-sparse, and it follows from known results on contacts of line segments that for k >= 3 every (2, k)-sparse graph has a CCA-representation. Hence the question of CCA-representability is open for (2, k)-sparse graphs with 0 <= k <= 2. We partially answer this question by computing CCA-representations for several subclasses of planar (2,0)-sparse graphs. In particular, we show that every plane (2, 2)-sparse graph has a CCA-representation, and that any plane (2, 1)-tight graph or (2, 0)-tight graph dual to a (2, 3)-tight graph or (2, 4)-tight graph has a CCA-representation. Next, we study CCA-representations in which each arc has an empty convex hull. We characterize the plane graphs that have such a representation, based on the existence of a special orientation of the graph edges. Using this characterization, we show that every plane graph of maximum degree 4 has such a representation, but that finding such a representation for a plane (2, 0)-tight graph with maximum degree 5 is an NP-complete problem. Finally, we describe a simple algorithm for representing plane (2, 0)-sparse graphs with wedges, where each vertex is represented with a sequence of two circular arcs (straight-line segments).
43
1,634,869,392
Title: modularity aspects of disjunctive stable models
Abstract: Practically all programming languages allow the programmer to split a program into several modules which brings along several advantages in software development. In this paper, we are interested in the area of answer-set programming where fully declarative and nonmonotonic languages are applied. In this context, obtaining a modular structure for programs is by no means straightforward since the output of an entire program cannot in general be composed from the output of its components. To better understand the effects of disjunctive information on modularity we restrict the scope of analysis to the case of disjunctive logic programs (DLPs) subject to stable-model semantics. We define the notion of a DLP-function, where a well-defined input/output interface is provided, and establish a novel module theorem which indicates the compositionality of stable-model semantics for DLP-functions. The module theorem extends the well-known splitting-set theorem and enables the decomposition of DLP-functions given their strongly connected components based on positive dependencies induced by rules. In this setting, it is also possible to split shared disjunctive rules among components using a generalized shifting technique. The concept of modular equivalence is introduced for the mutual comparison of DLP-functions using a generalization of a translation-based verification method.
2
arxiv cs lo
Title: modularity aspects of disjunctive stable models. Abstract: Practically all programming languages allow the programmer to split a program into several modules which brings along several advantages in software development. In this paper, we are interested in the area of answer-set programming where fully declarative and nonmonotonic languages are applied. In this context, obtaining a modular structure for programs is by no means straightforward since the output of an entire program cannot in general be composed from the output of its components. To better understand the effects of disjunctive information on modularity we restrict the scope of analysis to the case of disjunctive logic programs (DLPs) subject to stable-model semantics. We define the notion of a DLP-function, where a well-defined input/output interface is provided, and establish a novel module theorem which indicates the compositionality of stable-model semantics for DLP-functions. The module theorem extends the well-known splitting-set theorem and enables the decomposition of DLP-functions given their strongly connected components based on positive dependencies induced by rules. In this setting, it is also possible to split shared disjunctive rules among components using a generalized shifting technique. The concept of modular equivalence is introduced for the mutual comparison of DLP-functions using a generalization of a translation-based verification method.
44
1,650,363,355
Title: replica symmetric bound for restricted isometry constant
Abstract: We develop a method for evaluating restricted isometry constants (RICs). This evaluation is reduced to the identification of the zero-points of entropy, which is defined for submatrices that are composed of columns selected from a given measurement matrix. Using the replica method developed in statistical mechanics, we assess RICs for Gaussian random matrices under the replica symmetric (RS) assumption. In order to numerically validate the adequacy of our analysis, we employ the exchange Monte Carlo (EMC) method, which has been empirically demonstrated to achieve much higher numerical accuracy than naive Monte Carlo methods. The EMC method suggests that our theoretical estimation of an RIC corresponds to an upper bound that is tighter than in preceding studies. Physical consideration indicates that our assessment of the RIC could be improved by taking into account the replica symmetry breaking.
28
arxiv cs it
Title: replica symmetric bound for restricted isometry constant. Abstract: We develop a method for evaluating restricted isometry constants (RICs). This evaluation is reduced to the identification of the zero-points of entropy, which is defined for submatrices that are composed of columns selected from a given measurement matrix. Using the replica method developed in statistical mechanics, we assess RICs for Gaussian random matrices under the replica symmetric (RS) assumption. In order to numerically validate the adequacy of our analysis, we employ the exchange Monte Carlo (EMC) method, which has been empirically demonstrated to achieve much higher numerical accuracy than naive Monte Carlo methods. The EMC method suggests that our theoretical estimation of an RIC corresponds to an upper bound that is tighter than in preceding studies. Physical consideration indicates that our assessment of the RIC could be improved by taking into account the replica symmetry breaking.
45
1,657,294,604
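For tiny sizes the RIC the paper bounds can be computed by brute force, which also shows why replica analysis and exchange Monte Carlo are needed at scale: under the standard definition with unit-norm columns, $\delta_k = \max_{|S|=k} \|A_S^\top A_S - I\|_2$ over all $\binom{n}{k}$ column subsets. A sketch under that definition, with arbitrary small dimensions:

```python
# Exhaustive RIC of a small Gaussian matrix: enumerate every k-column
# submatrix A_S and take the worst spectral deviation of its Gram matrix
# from the identity.  Feasible only for tiny (n, k); hence sampling methods.
import itertools
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 12, 20, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
A /= np.linalg.norm(A, axis=0)          # normalize columns exactly

delta_k = max(
    np.linalg.norm(A[:, S].T @ A[:, S] - np.eye(k), 2)
    for S in itertools.combinations(range(n), k)
)
print(f"delta_{k} = {delta_k:.3f}")
```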
Title: pushdown abstractions of javascript
Abstract: We design a family of program analyses for JavaScript that make no approximation in matching calls with returns, exceptions with handlers, and breaks with labels. We do so by starting from an established reduction semantics for JavaScript and systematically deriving its intensional abstract interpretation. Our first step is to transform the semantics into an equivalent low-level abstract machine: the JavaScript Abstract Machine (JAM). We then give an infinite-state yet decidable pushdown machine whose stack precisely models the structure of the concrete program stack. The precise model of stack structure in turn confers precise control-flow analysis even in the presence of control effects, such as exceptions and finally blocks. We give pushdown generalizations of traditional forms of analysis such as k-CFA, and prove the pushdown framework for abstract interpretation is sound and computable.
22
arxiv cs pl
Title: pushdown abstractions of javascript. Abstract: We design a family of program analyses for JavaScript that make no approximation in matching calls with returns, exceptions with handlers, and breaks with labels. We do so by starting from an established reduction semantics for JavaScript and systematically deriving its intensional abstract interpretation. Our first step is to transform the semantics into an equivalent low-level abstract machine: the JavaScript Abstract Machine (JAM). We then give an infinite-state yet decidable pushdown machine whose stack precisely models the structure of the concrete program stack. The precise model of stack structure in turn confers precise control-flow analysis even in the presence of control effects, such as exceptions and finally blocks. We give pushdown generalizations of traditional forms of analysis such as k-CFA, and prove the pushdown framework for abstract interpretation is sound and computable.
46
1,661,863,441
Title: a notion of robustness for cyber physical systems
Abstract: Robustness as a system property describes the degree to which a system is able to function correctly in the presence of disturbances, i.e., unforeseen or erroneous inputs. In this paper, we introduce a notion of robustness termed input-output dynamical stability for cyber-physical systems (CPS) which merges existing notions of robustness for continuous systems and discrete systems. The notion captures two intuitive aims of robustness: bounded disturbances have bounded effects and the consequences of a sporadic disturbance disappear over time. We present a design methodology for robust CPS which is based on an abstraction and refinement process. We suggest several novel notions of simulation relations to ensure the soundness of the approach. In addition, we show how such simulation relations can be constructed compositionally. The different concepts and results are illustrated throughout the paper with examples.
19
arxiv cs sy
Title: a notion of robustness for cyber physical systems. Abstract: Robustness as a system property describes the degree to which a system is able to function correctly in the presence of disturbances, i.e., unforeseen or erroneous inputs. In this paper, we introduce a notion of robustness termed input-output dynamical stability for cyber-physical systems (CPS) which merges existing notions of robustness for continuous systems and discrete systems. The notion captures two intuitive aims of robustness: bounded disturbances have bounded effects and the consequences of a sporadic disturbance disappear over time. We present a design methodology for robust CPS which is based on an abstraction and refinement process. We suggest several novel notions of simulation relations to ensure the soundness of the approach. In addition, we show how such simulation relations can be constructed compositionally. The different concepts and results are illustrated throughout the paper with examples.
47
1,665,243,225
Title: entropy rate for hidden markov chains with rare transitions
Abstract: We consider Hidden Markov Chains obtained by passing a Markov Chain with rare transitions through a noisy memoryless channel. We obtain asymptotic estimates for the entropy of the resulting Hidden Markov Chain as the transition rate is reduced to zero. Let $(X_n)$ be a Markov chain with finite state space $S$ and transition matrix $P(p)$ and let $(Y_n)$ be the Hidden Markov chain observed by passing $(X_n)$ through a homogeneous noisy memoryless channel (i.e. $Y$ takes values in a set $T$, and there exists a matrix $Q$ such that $P(Y_n = j \mid X_n = i, X_{-\infty}^{n-1}, X_{n+1}^{\infty}, Y_{-\infty}^{n-1}, Y_{n+1}^{\infty}) = Q_{ij}$). We make the additional assumption on the channel that the rows of $Q$ are distinct. In this case we call the channel statistically distinguishing. We assume that $P(p)$ is of the form $I + pA$ where $A$ is a matrix with negative entries on the diagonal, non-negative entries in the off-diagonal terms and zero row sums. We further assume that for small positive $p$, the Markov chain with transition matrix $P(p)$ is irreducible. Notice that for Markov chains of this form, the invariant distribution $(\pi_i)_{i \in S}$ does not depend on $p$. In this case, we say that for small positive values of $p$, the Markov chain is in a rare transition regime. We will adopt the convention that $H$ is used to denote the entropy of a finite partition, whereas $h$ is used to denote the entropy of a process (the entropy rate in information theory terminology). Given an irreducible Markov chain with transition matrix $P$, we let $h(P)$ be the entropy of the Markov chain (i.e. $h(P) = -\sum_{i,j} \pi_i P_{ij} \log P_{ij}$, where $\pi_i$ is the (unique) invariant distribution of the Markov chain and as usual we adopt the convention that $0 \log 0 = 0$). We also let $H_{\mathrm{chan}}(i)$ be the entropy of the output of the channel when the input symbol is $i$ (i.e. $H_{\mathrm{chan}}(i) = -\sum_{j \in T} Q_{ij} \log Q_{ij}$). Let $h(Y)$ denote the entropy of $Y$.
28
arxiv cs it
Title: entropy rate for hidden markov chains with rare transitions. Abstract: We consider Hidden Markov Chains obtained by passing a Markov Chain with rare transitions through a noisy memoryless channel. We obtain asymptotic estimates for the entropy of the resulting Hidden Markov Chain as the transition rate is reduced to zero. Let $(X_n)$ be a Markov chain with finite state space $S$ and transition matrix $P(p)$ and let $(Y_n)$ be the Hidden Markov chain observed by passing $(X_n)$ through a homogeneous noisy memoryless channel (i.e. $Y$ takes values in a set $T$, and there exists a matrix $Q$ such that $P(Y_n = j \mid X_n = i, X_{-\infty}^{n-1}, X_{n+1}^{\infty}, Y_{-\infty}^{n-1}, Y_{n+1}^{\infty}) = Q_{ij}$). We make the additional assumption on the channel that the rows of $Q$ are distinct. In this case we call the channel statistically distinguishing. We assume that $P(p)$ is of the form $I + pA$ where $A$ is a matrix with negative entries on the diagonal, non-negative entries in the off-diagonal terms and zero row sums. We further assume that for small positive $p$, the Markov chain with transition matrix $P(p)$ is irreducible. Notice that for Markov chains of this form, the invariant distribution $(\pi_i)_{i \in S}$ does not depend on $p$. In this case, we say that for small positive values of $p$, the Markov chain is in a rare transition regime. We will adopt the convention that $H$ is used to denote the entropy of a finite partition, whereas $h$ is used to denote the entropy of a process (the entropy rate in information theory terminology). Given an irreducible Markov chain with transition matrix $P$, we let $h(P)$ be the entropy of the Markov chain (i.e. $h(P) = -\sum_{i,j} \pi_i P_{ij} \log P_{ij}$, where $\pi_i$ is the (unique) invariant distribution of the Markov chain and as usual we adopt the convention that $0 \log 0 = 0$). We also let $H_{\mathrm{chan}}(i)$ be the entropy of the output of the channel when the input symbol is $i$ (i.e. $H_{\mathrm{chan}}(i) = -\sum_{j \in T} Q_{ij} \log Q_{ij}$). Let $h(Y)$ denote the entropy of $Y$.
48
1,665,669,548
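The quantities just defined are easy to compute for a concrete rare-transition chain, and the computation also exhibits the stated fact that the stationary distribution of $P(p) = I + pA$ does not depend on $p$. A sketch with a hand-picked generator $A$ and channel $Q$ of our own:

```python
# Stationary distribution pi of P, Markov entropy rate
# h(P) = -sum_{i,j} pi_i P_ij log P_ij, and per-symbol channel entropies
# H_chan(i) = -sum_j Q_ij log Q_ij, for a rare-transition chain P = I + pA.
import numpy as np

def stationary(P):
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])   # eigenvector for eigenvalue 1
    return pi / pi.sum()

def entropy_rate(P):
    pi = stationary(P)
    logP = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)  # 0 log 0 = 0
    return -np.sum(pi[:, None] * P * logP)

p = 0.01
A = np.array([[-1.0, 1.0], [2.0, -2.0]])   # zero row sums, negative diagonal
P = np.eye(2) + p * A
Q = np.array([[0.9, 0.1], [0.2, 0.8]])     # distinct rows: "statistically distinguishing"
H_chan = -np.sum(Q * np.log(Q), axis=1)
print(stationary(P))                       # (2/3, 1/3), independent of p
print(entropy_rate(P), H_chan)
```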
Title: memristors can implement fuzzy logic
Abstract: In our work we propose implementing fuzzy logic using memristors. Min and max operations are done by antipodally configured memristor circuits that may be assembled into computational circuits. We discuss computational power of such circuits with respect to m-efficiency and experimentally observed behavior of memristive devices. Circuits implemented with real devices are likely to manifest learning behavior. The circuits presented in the work may be applicable for instance in fuzzy classifiers.
18
arxiv cs et
Title: memristors can implement fuzzy logic. Abstract: In our work we propose implementing fuzzy logic using memristors. Min and max operations are done by antipodally configured memristor circuits that may be assembled into computational circuits. We discuss computational power of such circuits with respect to m-efficiency and experimentally observed behavior of memristive devices. Circuits implemented with real devices are likely to manifest learning behavior. The circuits presented in the work may be applicable for instance in fuzzy classifiers.
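As a reminder of what the proposed circuits compute, fuzzy conjunction and disjunction on membership grades in [0, 1] are just min and max. A toy sketch of these operations (mine, not a model of the memristor circuits themselves):

```python
def fuzzy_and(x, y):
    """Fuzzy conjunction on membership grades in [0, 1]."""
    return min(x, y)

def fuzzy_or(x, y):
    """Fuzzy disjunction on membership grades in [0, 1]."""
    return max(x, y)

def fuzzy_not(x):
    return 1.0 - x

# e.g. a fuzzy classifier rule "hot AND humid" on grades 0.8 and 0.6
print(fuzzy_and(0.8, 0.6))  # 0.6
print(fuzzy_or(0.8, 0.6))   # 0.8
```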
49
1,667,633,038
Title: asymptotic capacity of wireless ad hoc networks with realistic links under a honey comb topology
Abstract: We consider the effects of Rayleigh fading and lognormal shadowing in the physical interference model for all the successful transmissions of traffic across the network. New bounds are derived for the capacity of a given random ad hoc wireless network that reflect packet drop or capture probability of the transmission links. These bounds are based on a simplified network topology termed as honey-comb topology under a given routing and scheduling scheme.
28
arxiv cs it
Title: asymptotic capacity of wireless ad hoc networks with realistic links under a honey comb topology. Abstract: We consider the effects of Rayleigh fading and lognormal shadowing in the physical interference model for all the successful transmissions of traffic across the network. New bounds are derived for the capacity of a given random ad hoc wireless network that reflect packet drop or capture probability of the transmission links. These bounds are based on a simplified network topology termed as honey-comb topology under a given routing and scheduling scheme.
50
1,670,829,281
Title: on the performance of selection cooperation with imperfect channel estimation
Abstract: In this paper, we investigate the performance of selection cooperation in the presence of imperfect channel estimation. In particular, we consider a cooperative scenario with multiple relays and the amplify-and-forward protocol over frequency flat fading channels. In the selection scheme, only the “best” relay which maximizes the effective signal-to-noise ratio (SNR) at the receiver end is selected. We present lower and upper bounds on the effective SNR and derive closed-form expressions for the average symbol error rate (ASER), outage probability and average capacity per bandwidth of the received signal in the presence of channel estimation errors. A simulation study is presented to corroborate the analytical results and to demonstrate the performance of relay selection with imperfect channel estimation.
28
arxiv cs it
Title: on the performance of selection cooperation with imperfect channel estimation. Abstract: In this paper, we investigate the performance of selection cooperation in the presence of imperfect channel estimation. In particular, we consider a cooperative scenario with multiple relays and the amplify-and-forward protocol over frequency flat fading channels. In the selection scheme, only the “best” relay which maximizes the effective signal-to-noise ratio (SNR) at the receiver end is selected. We present lower and upper bounds on the effective SNR and derive closed-form expressions for the average symbol error rate (ASER), outage probability and average capacity per bandwidth of the received signal in the presence of channel estimation errors. A simulation study is presented to corroborate the analytical results and to demonstrate the performance of relay selection with imperfect channel estimation.
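A sketch of the selection rule described in this abstract: among candidate relays, pick the one maximizing the effective end-to-end SNR. The snippet uses the standard amplify-and-forward expression g1*g2/(g1 + g2 + 1) and its common min(g1, g2) upper bound; the SNR values are invented and the paper's channel-estimation-error model is not reproduced.

```python
def effective_snr(g1, g2):
    """Exact end-to-end SNR of a two-hop amplify-and-forward link."""
    return g1 * g2 / (g1 + g2 + 1.0)

def snr_upper_bound(g1, g2):
    """Widely used analytical upper bound on the effective SNR."""
    return min(g1, g2)

# (source-relay SNR, relay-destination SNR) for three candidate relays
relays = [(12.0, 7.5), (9.0, 14.0), (20.0, 3.0)]
best = max(range(len(relays)), key=lambda k: effective_snr(*relays[k]))
print("selected relay:", best,
      "effective SNR:", effective_snr(*relays[best]),
      "upper bound:", snr_upper_bound(*relays[best]))
```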
51
1,681,484,497
Title: informetric analyses of knowledge organization systems koss
Abstract: A knowledge organization system (KOS) is made up of concepts and semantic relations between the concepts which represent a knowledge domain terminologically. We distinguish between five approaches to KOSs: nomenclatures, classification systems, thesauri, ontologies and, as a borderline case of KOSs, folksonomies. The research question of this paper is: How can we informetrically analyze the effectiveness of KOSs? Quantitative informetric measures and indicators allow for the description, for comparative analyses as well as for evaluation of KOSs and their quality. We describe the state of the art of KOS evaluation. Most of the evaluation studies found in the literature are about ontologies. We introduce measures of the structure of KOSs (e.g., groundedness, tangledness, fan-out factor, or granularity) and indicators of KOS quality (completeness, consistency, overlap, and use).
38
arxiv cs dl
Title: informetric analyses of knowledge organization systems koss. Abstract: A knowledge organization system (KOS) is made up of concepts and semantic relations between the concepts which represent a knowledge domain terminologically. We distinguish between five approaches to KOSs: nomenclatures, classification systems, thesauri, ontologies and, as a borderline case of KOSs, folksonomies. The research question of this paper is: How can we informetrically analyze the effectiveness of KOSs? Quantitative informetric measures and indicators allow for the description, for comparative analyses as well as for evaluation of KOSs and their quality. We describe the state of the art of KOS evaluation. Most of the evaluation studies found in the literature are about ontologies. We introduce measures of the structure of KOSs (e.g., groundedness, tangledness, fan-out factor, or granularity) and indicators of KOS quality (completeness, consistency, overlap, and use).
52
1,682,705,844
Title: latent topic models for hypertext
Abstract: Latent topic models have been successfully applied as an unsupervised topic discovery technique in large document collections. With the proliferation of hypertext document collections such as the Internet, there has also been great interest in extending these approaches to hypertext [6, 9]. These approaches typically model links in an analogous fashion to how they model words - the document-link co-occurrence matrix is modeled in the same way that the document-word co-occurrence matrix is modeled in standard topic models. In this paper we present a probabilistic generative model for hypertext document collections that explicitly models the generation of links. Specifically, links from a word w to a document d depend directly on how frequent the topic of w is in d, in addition to the in-degree of d. We show how to perform EM learning on this model efficiently. By not modeling links as analogous to words, we end up using far fewer free parameters and obtain better link prediction results.
31
arxiv cs ir
Title: latent topic models for hypertext. Abstract: Latent topic models have been successfully applied as an unsupervised topic discovery technique in large document collections. With the proliferation of hypertext document collections such as the Internet, there has also been great interest in extending these approaches to hypertext [6, 9]. These approaches typically model links in an analogous fashion to how they model words - the document-link co-occurrence matrix is modeled in the same way that the document-word co-occurrence matrix is modeled in standard topic models. In this paper we present a probabilistic generative model for hypertext document collections that explicitly models the generation of links. Specifically, links from a word w to a document d depend directly on how frequent the topic of w is in d, in addition to the in-degree of d. We show how to perform EM learning on this model efficiently. By not modeling links as analogous to words, we end up using far fewer free parameters and obtain better link prediction results.
53
1,698,782,162
Title: complete security framework for wireless sensor networks
Abstract: Security concerns for sensor networks, and the level of security desired, may differ according to the application-specific needs of the environments where the networks are deployed. Until now, most security solutions proposed for sensor networks have been layer-wise, i.e. a particular solution is applicable to a single layer only, so integrating them all is a new research challenge. In this paper we take up this challenge and propose an integrated, comprehensive security framework that provides security services for all services of a sensor network. We add one extra component, an Intelligent Security Agent (ISA), to assess the level of security and handle cross-layer interactions. The framework has several components: an Intrusion Detection System, a Trust Framework, a Key Management scheme and a link-layer communication protocol. We have also tested it on three different application scenarios in the Castalia and Omnet++ simulators.
4
arxiv cs cr
Title: complete security framework for wireless sensor networks. Abstract: Security concerns for sensor networks, and the level of security desired, may differ according to the application-specific needs of the environments where the networks are deployed. Until now, most security solutions proposed for sensor networks have been layer-wise, i.e. a particular solution is applicable to a single layer only, so integrating them all is a new research challenge. In this paper we take up this challenge and propose an integrated, comprehensive security framework that provides security services for all services of a sensor network. We add one extra component, an Intelligent Security Agent (ISA), to assess the level of security and handle cross-layer interactions. The framework has several components: an Intrusion Detection System, a Trust Framework, a Key Management scheme and a link-layer communication protocol. We have also tested it on three different application scenarios in the Castalia and Omnet++ simulators.
54
1,717,093,152
Title: neural dissimilarity indices that predict oddball detection in behaviour
Abstract: Neuroscientists have recently shown that images that are difficult to find in visual search elicit similar patterns of firing across a population of recorded neurons. The $L^{1}$ distance between firing rate vectors associated with two images was strongly correlated with the inverse of decision time in behaviour. But why should decision times be correlated with $L^{1}$ distance? What is the decision-theoretic basis? In our decision-theoretic formulation, we modeled visual search as an active sequential hypothesis testing problem with switching costs. Our analysis suggests an appropriate neuronal dissimilarity index which correlates equally strongly with the inverse of decision time as the $L^{1}$ distance. We also consider a number of other possibilities such as the relative entropy (Kullback-Leibler divergence) and the Chernoff entropy of the firing rate distributions. A more stringent test of equality of means, which would have provided strong backing for our modeling, fails for our proposed index as well as the other dissimilarity indices discussed. However, test statistics from the equality-of-means test, when used to rank the indices by their ability to explain the observed results, place our proposed dissimilarity index at the top, followed by relative entropy, Chernoff entropy and the $L^{1}$ indices. Computation of the different indices requires an estimate of the relative entropy between two Poisson point processes. An estimator is developed and is shown to have near-unbiased performance in almost all operating regions.
28
arxiv cs it
Title: neural dissimilarity indices that predict oddball detection in behaviour. Abstract: Neuroscientists have recently shown that images that are difficult to find in visual search elicit similar patterns of firing across a population of recorded neurons. The $L^{1}$ distance between firing rate vectors associated with two images was strongly correlated with the inverse of decision time in behaviour. But why should decision times be correlated with $L^{1}$ distance? What is the decision-theoretic basis? In our decision-theoretic formulation, we modeled visual search as an active sequential hypothesis testing problem with switching costs. Our analysis suggests an appropriate neuronal dissimilarity index which correlates equally strongly with the inverse of decision time as the $L^{1}$ distance. We also consider a number of other possibilities such as the relative entropy (Kullback-Leibler divergence) and the Chernoff entropy of the firing rate distributions. A more stringent test of equality of means, which would have provided strong backing for our modeling, fails for our proposed index as well as the other dissimilarity indices discussed. However, test statistics from the equality-of-means test, when used to rank the indices by their ability to explain the observed results, place our proposed dissimilarity index at the top, followed by relative entropy, Chernoff entropy and the $L^{1}$ indices. Computation of the different indices requires an estimate of the relative entropy between two Poisson point processes. An estimator is developed and is shown to have near-unbiased performance in almost all operating regions.
55
1,720,451,657
Title: network maps of technology fields a comparative analysis of relatedness measures
Abstract: Network maps of technology fields extracted from patent databases are useful to aid in technology forecasting and road mapping. Constructing such a network requires a measure of the relatedness between pairs of technology fields. Despite the existence of various relatedness measures in the literature, it is unclear how to consistently assess and compare them, and which ones to select for constructing technology network maps. This ambiguity has limited the use of technology network maps for technology forecasting and roadmap analyses. To address this challenge, here we propose a strategy to evaluate alternative relatedness measures and identify the superior ones by comparing the structural properties of the resulting technology networks. Using United States patent data, we execute the strategy through a comparative analysis of twelve relatedness measures, which quantify inter-field knowledge input similarity, field-crossing diversification likelihood or frequency of innovation agents, and co-occurrences of technology classes in the same patents. Our comparative analyses suggest two superior relatedness measures, normalized co-reference and inventor diversification likelihood, for constructing technology network maps.
26
arxiv cs si
Title: network maps of technology fields a comparative analysis of relatedness measures. Abstract: Network maps of technology fields extracted from patent databases are useful to aid in technology forecasting and road mapping. Constructing such a network requires a measure of the relatedness between pairs of technology fields. Despite the existence of various relatedness measures in the literature, it is unclear how to consistently assess and compare them, and which ones to select for constructing technology network maps. This ambiguity has limited the use of technology network maps for technology forecasting and roadmap analyses. To address this challenge, here we propose a strategy to evaluate alternative relatedness measures and identify the superior ones by comparing the structural properties of the resulting technology networks. Using United States patent data, we execute the strategy through a comparative analysis of twelve relatedness measures, which quantify inter-field knowledge input similarity, field-crossing diversification likelihood or frequency of innovation agents, and co-occurrences of technology classes in the same patents. Our comparative analyses suggest two superior relatedness measures, normalized co-reference and inventor diversification likelihood, for constructing technology network maps.
56
1,738,519,518
Title: continuous double auction mechanism and bidding strategies in cloud computing markets
Abstract: Cloud computing is an emerging model that allows customers to utilize computing resources hosted by Cloud Service Providers (CSPs). More and more consumers rely on CSPs to supply computing and storage services on the one hand, and CSPs try to attract consumers on favorable terms on the other. In such competitive cloud computing markets, pricing policies are critical to market efficiency. While CSPs often publish their prices and charge users according to the amount of resources they consume, auction mechanisms are rarely applied. In fact, a feasible auction mechanism is the most effective method for allocating resources; double auctions in particular are more efficient and flexible, since they enable buyers and sellers to enter bids and offers simultaneously. In this paper we propose an electronic auction platform for the cloud, and a cloud Continuous Double Auction (CDA) mechanism is formulated to match orders and facilitate trading on the platform. Evaluation criteria are defined to analyze the efficiency of markets and strategies. Furthermore, since the choice of bidding strategy plays a very important role in each player maximizing its own profit, we developed a novel bidding strategy for the cloud CDA, BH-strategy, which is a two-stage game bidding strategy. Finally, we designed three simulation scenarios to compare the performance of our strategy with other dominant bidding strategies and showed that BH-strategy performs better on surpluses, successful transactions and market efficiency. In addition, we argue that our cloud CDA mechanism is feasible for cloud computing resource allocation.
5
arxiv cs dc
Title: continuous double auction mechanism and bidding strategies in cloud computing markets. Abstract: Cloud computing is an emerging model that allows customers to utilize computing resources hosted by Cloud Service Providers (CSPs). More and more consumers rely on CSPs to supply computing and storage services on the one hand, and CSPs try to attract consumers on favorable terms on the other. In such competitive cloud computing markets, pricing policies are critical to market efficiency. While CSPs often publish their prices and charge users according to the amount of resources they consume, auction mechanisms are rarely applied. In fact, a feasible auction mechanism is the most effective method for allocating resources; double auctions in particular are more efficient and flexible, since they enable buyers and sellers to enter bids and offers simultaneously. In this paper we propose an electronic auction platform for the cloud, and a cloud Continuous Double Auction (CDA) mechanism is formulated to match orders and facilitate trading on the platform. Evaluation criteria are defined to analyze the efficiency of markets and strategies. Furthermore, since the choice of bidding strategy plays a very important role in each player maximizing its own profit, we developed a novel bidding strategy for the cloud CDA, BH-strategy, which is a two-stage game bidding strategy. Finally, we designed three simulation scenarios to compare the performance of our strategy with other dominant bidding strategies and showed that BH-strategy performs better on surpluses, successful transactions and market efficiency. In addition, we argue that our cloud CDA mechanism is feasible for cloud computing resource allocation.
57
1,740,018,295
Title: the abc problem for gabor systems
Abstract: A Gabor system generated by a window function $\phi$ and a rectangular lattice $a \Z\times \Z/b$ is given by $${\mathcal G}(\phi, a \Z\times \Z/b):=\{e^{-2\pi i n t/b} \phi(t- m a):\ (m, n)\in \Z\times \Z\}.$$ One of the fundamental problems in Gabor analysis is to identify window functions $\phi$ and time-frequency shift lattices $a \Z\times \Z/b$ such that the corresponding Gabor system ${\mathcal G}(\phi, a \Z\times \Z/b)$ is a Gabor frame for $L^2(\R)$, the space of all square-integrable functions on the real line $\R$. In this paper, we provide a full classification of triples $(a,b,c)$ for which the Gabor system ${\mathcal G}(\chi_I, a \Z\times \Z/b)$ generated by the ideal window function $\chi_I$ on an interval $I$ of length $c$ is a Gabor frame for $L^2(\R)$. For the classification of such triples $(a, b, c)$ (i.e., the $abc$-problem for Gabor systems), we introduce maximal invariant sets of some piecewise linear transformations and establish the equivalence between the Gabor frame property and the triviality of maximal invariant sets. We then study the dynamical systems associated with the piecewise linear transformations and explore various properties of their maximal invariant sets. By performing holes-removal surgery for maximal invariant sets to shrink and an augmentation operation for a line with marks to expand, we finally parameterize those triples $(a, b, c)$ for which maximal invariant sets are trivial. The novel techniques, involving non-ergodicity of dynamical systems associated with some novel non-contractive and non-measure-preserving transformations, lead to our arduous answer to the $abc$-problem for Gabor systems.
28
arxiv cs it
Title: the abc problem for gabor systems. Abstract: A Gabor system generated by a window function $\phi$ and a rectangular lattice $a \Z\times \Z/b$ is given by $${\mathcal G}(\phi, a \Z\times \Z/b):=\{e^{-2\pi i n t/b} \phi(t- m a):\ (m, n)\in \Z\times \Z\}.$$ One of the fundamental problems in Gabor analysis is to identify window functions $\phi$ and time-frequency shift lattices $a \Z\times \Z/b$ such that the corresponding Gabor system ${\mathcal G}(\phi, a \Z\times \Z/b)$ is a Gabor frame for $L^2(\R)$, the space of all square-integrable functions on the real line $\R$. In this paper, we provide a full classification of triples $(a,b,c)$ for which the Gabor system ${\mathcal G}(\chi_I, a \Z\times \Z/b)$ generated by the ideal window function $\chi_I$ on an interval $I$ of length $c$ is a Gabor frame for $L^2(\R)$. For the classification of such triples $(a, b, c)$ (i.e., the $abc$-problem for Gabor systems), we introduce maximal invariant sets of some piecewise linear transformations and establish the equivalence between the Gabor frame property and the triviality of maximal invariant sets. We then study the dynamical systems associated with the piecewise linear transformations and explore various properties of their maximal invariant sets. By performing holes-removal surgery for maximal invariant sets to shrink and an augmentation operation for a line with marks to expand, we finally parameterize those triples $(a, b, c)$ for which maximal invariant sets are trivial. The novel techniques, involving non-ergodicity of dynamical systems associated with some novel non-contractive and non-measure-preserving transformations, lead to our arduous answer to the $abc$-problem for Gabor systems.
58
1,754,384,483
Title: inference less density estimation using copula bayesian networks
Abstract: We consider learning continuous probabilistic graphical models in the face of missing data. For non-Gaussian models, learning the parameters and structure of such models depends on our ability to perform efficient inference, and can be prohibitive even for relatively modest domains. Recently, we introduced the Copula Bayesian Network (CBN) density model - a flexible framework that captures complex high-dimensional dependency structures while offering direct control over the univariate marginals, leading to improved generalization. In this work we show that the CBN model also offers significant computational advantages when training data is partially observed. Concretely, we leverage the specialized form of the model to derive a computationally amenable learning objective that is a lower bound on the log-likelihood function. Importantly, our energy-like bound circumvents the need for costly inference of an auxiliary distribution, thus facilitating practical learning of high-dimensional densities. We demonstrate the effectiveness of our approach for learning the structure and parameters of a CBN model for two real-life continuous domains.
24
arxiv cs lg
Title: inference less density estimation using copula bayesian networks. Abstract: We consider learning continuous probabilistic graphical models in the face of missing data. For non-Gaussian models, learning the parameters and structure of such models depends on our ability to perform efficient inference, and can be prohibitive even for relatively modest domains. Recently, we introduced the Copula Bayesian Network (CBN) density model - a flexible framework that captures complex high-dimensional dependency structures while offering direct control over the univariate marginals, leading to improved generalization. In this work we show that the CBN model also offers significant computational advantages when training data is partially observed. Concretely, we leverage the specialized form of the model to derive a computationally amenable learning objective that is a lower bound on the log-likelihood function. Importantly, our energy-like bound circumvents the need for costly inference of an auxiliary distribution, thus facilitating practical learning of high-dimensional densities. We demonstrate the effectiveness of our approach for learning the structure and parameters of a CBN model for two real-life continuous domains.
59
1,769,594,948
Title: optimal detection of intersections between convex polyhedra
Abstract: For a polyhedron $P$ in $\mathbb{R}^d$, denote by $|P|$ its combinatorial complexity, i.e., the number of faces of all dimensions of the polyhedron. In this paper, we revisit the classic problem of preprocessing polyhedra independently so that given two preprocessed polyhedra $P$ and $Q$ in $\mathbb{R}^d$, each translated and rotated, their intersection can be tested rapidly. For $d=3$ we show how to perform such a test in $O(\log |P| + \log |Q|)$ time after linear preprocessing time and space. This running time is the best possible and improves upon the last best known query time of $O(\log|P| \log|Q|)$ by Dobkin and Kirkpatrick (1990). We then generalize our method to any constant dimension $d$, achieving the same optimal $O(\log |P| + \log |Q|)$ query time using a representation of size $O(|P|^{\lfloor d/2\rfloor + \varepsilon})$ for any $\varepsilon>0$ arbitrarily small. This answers an even older question posed by Dobkin and Kirkpatrick 30 years ago. In addition, we provide an alternative $O(\log |P| + \log |Q|)$ algorithm to test the intersection of two convex polygons $P$ and $Q$ in the plane.
20
arxiv cs cg
Title: optimal detection of intersections between convex polyhedra. Abstract: For a polyhedron $P$ in $\mathbb{R}^d$, denote by $|P|$ its combinatorial complexity, i.e., the number of faces of all dimensions of the polyhedron. In this paper, we revisit the classic problem of preprocessing polyhedra independently so that given two preprocessed polyhedra $P$ and $Q$ in $\mathbb{R}^d$, each translated and rotated, their intersection can be tested rapidly. For $d=3$ we show how to perform such a test in $O(\log |P| + \log |Q|)$ time after linear preprocessing time and space. This running time is the best possible and improves upon the last best known query time of $O(\log|P| \log|Q|)$ by Dobkin and Kirkpatrick (1990). We then generalize our method to any constant dimension $d$, achieving the same optimal $O(\log |P| + \log |Q|)$ query time using a representation of size $O(|P|^{\lfloor d/2\rfloor + \varepsilon})$ for any $\varepsilon>0$ arbitrarily small. This answers an even older question posed by Dobkin and Kirkpatrick 30 years ago. In addition, we provide an alternative $O(\log |P| + \log |Q|)$ algorithm to test the intersection of two convex polygons $P$ and $Q$ in the plane.
60
1,774,345,111
Title: limits of rush hour logic complexity
Abstract: Rush Hour Logic was introduced in [Flake&Baum99] as a model of computation inspired by the ``Rush Hour'' toy puzzle, in which cars can move horizontally or vertically within a parking lot. The authors show how the model supports polynomial space computation, using certain car configurations as building blocks to construct boolean circuits for a cpu and memory. They consider the use of cars of length 3 crucial to their construction, and conjecture that cars of size 2 only, which we'll call `Size 2 Rush Hour', do not support polynomial space computation. We settle this conjecture by showing that the required building blocks are constructible in Size 2 Rush Hour. Furthermore, we consider Unit Rush Hour, which was hitherto believed to be trivial, show its relation to maze puzzles, and provide empirical support for its hardness.
9
arxiv cs cc
Title: limits of rush hour logic complexity. Abstract: Rush Hour Logic was introduced in [Flake&Baum99] as a model of computation inspired by the ``Rush Hour'' toy puzzle, in which cars can move horizontally or vertically within a parking lot. The authors show how the model supports polynomial space computation, using certain car configurations as building blocks to construct boolean circuits for a cpu and memory. They consider the use of cars of length 3 crucial to their construction, and conjecture that cars of size 2 only, which we'll call `Size 2 Rush Hour', do not support polynomial space computation. We settle this conjecture by showing that the required building blocks are constructible in Size 2 Rush Hour. Furthermore, we consider Unit Rush Hour, which was hitherto believed to be trivial, show its relation to maze puzzles, and provide empirical support for its hardness.
61
1,777,784,767
Title: new separation between s f and bs f
Abstract: In this note we give a new separation between sensitivity and block sensitivity of Boolean functions: $bs(f)=(2/3)s(f)^2-(1/3)s(f)$.
9
arxiv cs cc
Title: new separation between s f and bs f. Abstract: In this note we give a new separation between sensitivity and block sensitivity of Boolean functions: $bs(f)=(2/3)s(f)^2-(1/3)s(f)$.
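For readers who want to experiment with the quantities in this note, sensitivity s(f) can be computed by brute force over all inputs; the separating function itself is not reproduced here, so the 3-bit function below is only a hypothetical stand-in.

```python
from itertools import product

def sensitivity(f, n):
    """s(f): max over inputs x of the number of bit flips that change f(x)."""
    best = 0
    for x in product((0, 1), repeat=n):
        flips = sum(f(x) != f(x[:i] + (1 - x[i],) + x[i + 1:]) for i in range(n))
        best = max(best, flips)
    return best

f = lambda x: x[0] ^ (x[1] & x[2])   # toy function, not the paper's construction
s = sensitivity(f, 3)
print("s(f) =", s, "  separation target bs(f) =", (2 / 3) * s**2 - (1 / 3) * s)
```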
62
1,791,983,455
Title: a survey on handover management in mobility architectures
Abstract: This work presents a comprehensive and structured taxonomy of available techniques for managing the handover process in mobility architectures. Representative works from the existing literature have been divided into appropriate categories, based on their ability to support horizontal handovers, vertical handovers and multihoming. We describe approaches designed to work on the current Internet (i.e. IPv4-based networks), as well as those that have been devised for the "future" Internet (e.g. IPv6-based networks and extensions). Quantitative measures and qualitative indicators are also presented and used to evaluate and compare the examined approaches. This critical review provides some valuable guidelines and suggestions for designing and developing mobility architectures, including some practical expedients (e.g. those required in the current Internet environment), aimed to cope with the presence of NAT/firewalls and to provide support to legacy systems and several communication protocols working at the application layer.
8
arxiv cs ni
Title: a survey on handover management in mobility architectures. Abstract: This work presents a comprehensive and structured taxonomy of available techniques for managing the handover process in mobility architectures. Representative works from the existing literature have been divided into appropriate categories, based on their ability to support horizontal handovers, vertical handovers and multihoming. We describe approaches designed to work on the current Internet (i.e. IPv4-based networks), as well as those that have been devised for the "future" Internet (e.g. IPv6-based networks and extensions). Quantitative measures and qualitative indicators are also presented and used to evaluate and compare the examined approaches. This critical review provides some valuable guidelines and suggestions for designing and developing mobility architectures, including some practical expedients (e.g. those required in the current Internet environment), aimed to cope with the presence of NAT/firewalls and to provide support to legacy systems and several communication protocols working at the application layer.
63
1,798,241,237
Title: many task computing and blue waters
Abstract: This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects to middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
5
arxiv cs dc
Title: many task computing and blue waters. Abstract: This report discusses many-task computing (MTC) generically and in the context of the proposed Blue Waters systems, which is planned to be the largest NSF-funded supercomputer when it begins production use in 2012. The aim of this report is to inform the BW project about MTC, including understanding aspects of MTC applications that can be used to characterize the domain and understanding the implications of these aspects to middleware and policies. Many MTC applications do not neatly fit the stereotypes of high-performance computing (HPC) or high-throughput computing (HTC) applications. Like HTC applications, by definition MTC applications are structured as graphs of discrete tasks, with explicit input and output dependencies forming the graph edges. However, MTC applications have significant features that distinguish them from typical HTC applications. In particular, different engineering constraints for hardware and software must be met in order to support these applications. HTC applications have traditionally run on platforms such as grids and clusters, through either workflow systems or parallel programming systems. MTC applications, in contrast, will often demand a short time to solution, may be communication intensive or data intensive, and may comprise very short tasks. Therefore, hardware and software for MTC must be engineered to support the additional communication and I/O and must minimize task dispatch overheads. The hardware of large-scale HPC systems, with its high degree of parallelism and support for intensive communication, is well suited for MTC applications. However, HPC systems often lack a dynamic resource-provisioning feature, are not ideal for task communication via the file system, and have an I/O system that is not optimized for MTC-style applications. Hence, additional software support is likely to be required to gain full benefit from the HPC hardware.
64
1,812,070,052
Title: identifying reliable annotations for large scale image segmentation
Abstract: Challenging computer vision tasks, in particular semantic image segmentation, require large training sets of annotated images. While obtaining the actual images is often unproblematic, creating the necessary annotation is a tedious and costly process. Therefore, one often has to work with unreliable annotation sources, such as Amazon Mechanical Turk or (semi-)automatic algorithmic techniques. In this work, we present a Gaussian process (GP) based technique for simultaneously identifying which images of a training set have unreliable annotation and learning a segmentation model in which the negative effect of these images is suppressed. Alternatively, the model can also just be used to identify the most reliably annotated images from the training set, which can then be used for training any other segmentation method. By relying on "deep features" in combination with a linear covariance function, our GP can be learned and its hyperparameter determined efficiently using only matrix operations and gradient-based optimization. This makes our method scalable even to large datasets with several million training instances.
16
arxiv cs cv
Title: identifying reliable annotations for large scale image segmentation. Abstract: Challenging computer vision tasks, in particular semantic image segmentation, require large training sets of annotated images. While obtaining the actual images is often unproblematic, creating the necessary annotation is a tedious and costly process. Therefore, one often has to work with unreliable annotation sources, such as Amazon Mechanical Turk or (semi-)automatic algorithmic techniques. In this work, we present a Gaussian process (GP) based technique for simultaneously identifying which images of a training set have unreliable annotation and learning a segmentation model in which the negative effect of these images is suppressed. Alternatively, the model can also just be used to identify the most reliably annotated images from the training set, which can then be used for training any other segmentation method. By relying on "deep features" in combination with a linear covariance function, our GP can be learned and its hyperparameter determined efficiently using only matrix operations and gradient-based optimization. This makes our method scalable even to large datasets with several million training instances.
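The computational remark in this abstract, that a linear covariance function reduces GP learning to plain matrix operations, can be illustrated in a few lines. The sketch below substitutes a random feature matrix for the paper's deep features; sizes and the noise level are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))                  # stand-in "deep features" for 50 images
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=50)

noise = 0.1 ** 2
K = X @ X.T                                    # linear covariance function K = X X^T
alpha = np.linalg.solve(K + noise * np.eye(50), y)

X_new = rng.normal(size=(5, 10))               # features of unseen images
mean_new = (X_new @ X.T) @ alpha               # GP posterior mean, matrix ops only
print(mean_new)
```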
65
1,814,471,488
Title: improved analysis for subspace pursuit algorithm in terms of restricted isometry constant
Abstract: In the context of compressed sensing (CS), both Subspace Pursuit (SP) and Compressive Sampling Matching Pursuit (CoSaMP) are very important iterative greedy recovery algorithms which can greatly reduce recovery complexity compared with the well-known $\ell_1$-minimization. The restricted isometry property (RIP) and restricted isometry constant (RIC) of measurement matrices, which ensure the convergence of iterative algorithms, play key roles in guaranteeing successful reconstruction. In this paper, we show that for $s$-sparse recovery, the RIC bounds are enlarged to $\delta_{3s}<0.4859$ for SP and $\delta_{4s}<0.5$ for CoSaMP, which improves the known results significantly. The proposed results also apply to almost sparse signals and corrupted measurements.
28
arxiv cs it
Title: improved analysis for subspace pursuit algorithm in terms of restricted isometry constant. Abstract: In the context of compressed sensing (CS), both Subspace Pursuit (SP) and Compressive Sampling Matching Pursuit (CoSaMP) are very important iterative greedy recovery algorithms which can greatly reduce recovery complexity compared with the well-known $\ell_1$-minimization. The restricted isometry property (RIP) and restricted isometry constant (RIC) of measurement matrices, which ensure the convergence of iterative algorithms, play key roles in guaranteeing successful reconstruction. In this paper, we show that for $s$-sparse recovery, the RIC bounds are enlarged to $\delta_{3s}<0.4859$ for SP and $\delta_{4s}<0.5$ for CoSaMP, which improves the known results significantly. The proposed results also apply to almost sparse signals and corrupted measurements.
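For context, one iteration of Subspace Pursuit expands the current support with the s columns most correlated with the residual, solves a least-squares problem, and prunes back to s terms. The sketch below follows this standard description of the algorithm (Dai and Milenkovic); the stopping rule and names are my simplifications, and none of the RIC analysis above is reproduced.

```python
import numpy as np

def subspace_pursuit(A, y, s, n_iter=20):
    """Recover an s-sparse x from y ~ A x; a simplified SP sketch."""
    n = A.shape[1]
    support = np.argsort(-np.abs(A.T @ y))[:s]           # initial support
    x = np.zeros(n)
    x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    residual = y - A @ x
    for _ in range(n_iter):
        extra = np.argsort(-np.abs(A.T @ residual))[:s]  # expand by s candidates
        union = np.union1d(support, extra)
        coef = np.linalg.lstsq(A[:, union], y, rcond=None)[0]
        support = union[np.argsort(-np.abs(coef))[:s]]   # prune back to s terms
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        new_residual = y - A @ x
        if np.linalg.norm(new_residual) >= np.linalg.norm(residual):
            break                                        # residual stopped improving
        residual = new_residual
    return x
```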
66
1,815,334,329
Title: squares of 3 sun free split graphs
Abstract: The square of a graph $G$, denoted by $G^2$, is obtained from $G$ by putting an edge between two distinct vertices whenever their distance is two. Then $G$ is called a square root of $G^2$. Deciding whether a given graph has a square root is known to be NP-complete, even if the root is required to be a split graph, that is, a graph in which the vertex set can be partitioned into a stable set and a clique. We give a wide range of polynomial time solvable cases for the problem of recognizing if a given graph is the square of some special kind of split graph. To the best of our knowledge, our result properly contains all previously known such cases. Our polynomial time algorithms are built on a structural investigation of graphs that admit a split square root that is 3-sun-free, and may pave the way toward a dichotomy theorem for recognizing squares of (3-sun-free) split graphs. Keywords: square of graphs, square of split graphs. 2010 MSC: 05C75, 05C85. 1 Introduction. The $k$-th power of a graph $G$, written $G^k$, is obtained from $G$ by adding new edges between any two different vertices at distance at most $k$ in $G$. In case $k = 2$, $G$
9
arxiv cs cc
Title: squares of 3 sun free split graphs. Abstract: The square of a graph $G$, denoted by $G^2$, is obtained from $G$ by putting an edge between two distinct vertices whenever their distance is two. Then $G$ is called a square root of $G^2$. Deciding whether a given graph has a square root is known to be NP-complete, even if the root is required to be a split graph, that is, a graph in which the vertex set can be partitioned into a stable set and a clique. We give a wide range of polynomial time solvable cases for the problem of recognizing if a given graph is the square of some special kind of split graph. To the best of our knowledge, our result properly contains all previously known such cases. Our polynomial time algorithms are built on a structural investigation of graphs that admit a split square root that is 3-sun-free, and may pave the way toward a dichotomy theorem for recognizing squares of (3-sun-free) split graphs. Keywords: square of graphs, square of split graphs. 2010 MSC: 05C75, 05C85. 1 Introduction. The $k$-th power of a graph $G$, written $G^k$, is obtained from $G$ by adding new edges between any two different vertices at distance at most $k$ in $G$. In case $k = 2$, $G$
67
1,816,985,167
Title: tree dynamics for peer to peer streaming
Abstract: This paper presents an asynchronous distributed algorithm to manage multiple trees for peer-to-peer streaming in a flow level model. It is assumed that videos are cut into substreams, with or without source coding, to be distributed to all nodes. The algorithm guarantees that each node receives sufficiently many substreams within delay logarithmic in the number of peers. The algorithm works by constantly updating the topology so that each substream is distributed through trees to as many nodes as possible without interference. Competition among trees for limited upload capacity is managed so that both coverage and balance are achieved. The algorithm is robust in that it efficiently eliminates cycles and maintains tree structures in a distributed way. The algorithm favors nodes with higher degree, so it not only works for live streaming and video on demand, but also in the case a few nodes with large degree act as servers and other nodes act as clients. A proof of convergence of the algorithm is given assuming instantaneous update of depth information, and for the case of a single tree it is shown that the convergence time is stochastically tightly bounded by a small constant times the log of the number of nodes. These theoretical results are complemented by simulations showing that the algorithm works well even when most assumptions for the theoretical tractability do not hold.
34
arxiv cs ds
Title: tree dynamics for peer to peer streaming. Abstract: This paper presents an asynchronous distributed algorithm to manage multiple trees for peer-to-peer streaming in a flow level model. It is assumed that videos are cut into substreams, with or without source coding, to be distributed to all nodes. The algorithm guarantees that each node receives sufficiently many substreams within delay logarithmic in the number of peers. The algorithm works by constantly updating the topology so that each substream is distributed through trees to as many nodes as possible without interference. Competition among trees for limited upload capacity is managed so that both coverage and balance are achieved. The algorithm is robust in that it efficiently eliminates cycles and maintains tree structures in a distributed way. The algorithm favors nodes with higher degree, so it not only works for live streaming and video on demand, but also in the case a few nodes with large degree act as servers and other nodes act as clients. A proof of convergence of the algorithm is given assuming instantaneous update of depth information, and for the case of a single tree it is shown that the convergence time is stochastically tightly bounded by a small constant times the log of the number of nodes. These theoretical results are complemented by simulations showing that the algorithm works well even when most assumptions for the theoretical tractability do not hold.
68
1,818,292,367
Title: stochastic ordering of interferences in large scale wireless networks
Abstract: Stochastic orders are binary relations defined on probability distributions which capture intuitive notions like being larger or being more variable. This paper introduces stochastic ordering of interference distributions in large-scale networks modeled as point processes. Interference is the main performance-limiting factor in most wireless networks, so it is important to understand its statistics. Since closed-form results for the distribution of interference in such networks are available only in limited cases, the interferences of networks are compared using stochastic orders, even when closed-form expressions for the interferences are not tractable. We show that the interference from a large-scale network depends on the fading distributions with respect to the stochastic Laplace transform order. A condition on path-loss models is also established under which stochastic ordering holds between interferences. Stochastic ordering of interferences between different networks is also shown. Monte Carlo simulations are used to supplement our analytical results.
28
arxiv cs it
Title: stochastic ordering of interferences in large scale wireless networks. Abstract: Stochastic orders are binary relations defined on probability distributions which capture intuitive notions like being larger or being more variable. This paper introduces stochastic ordering of interference distributions in large-scale networks modeled as point processes. Interference is the main performance-limiting factor in most wireless networks, so it is important to understand its statistics. Since closed-form results for the distribution of interference in such networks are available only in limited cases, the interferences of networks are compared using stochastic orders, even when closed-form expressions for the interferences are not tractable. We show that the interference from a large-scale network depends on the fading distributions with respect to the stochastic Laplace transform order. A condition on path-loss models is also established under which stochastic ordering holds between interferences. Stochastic ordering of interferences between different networks is also shown. Monte Carlo simulations are used to supplement our analytical results.
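The Laplace-transform order used in this abstract compares E[exp(-sI)] across fading laws. A Monte Carlo sketch under assumed parameters follows (unit-mean Rayleigh versus Nakagami-2 fading over a Poisson field in a disk, path-loss exponent 4); every number here is an arbitrary choice of mine, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def interference(fading_sampler, lam=0.5, radius=10.0, alpha=4.0, trials=2000):
    """Samples of aggregate interference from a Poisson field in a disk."""
    vals = np.empty(trials)
    area = np.pi * radius ** 2
    for t in range(trials):
        k = rng.poisson(lam * area)                 # number of interferers
        r = radius * np.sqrt(rng.random(k))         # uniform locations in the disk
        h = fading_sampler(k)                       # i.i.d. fading power gains
        vals[t] = np.sum(h * np.maximum(r, 1.0) ** (-alpha))  # clip near origin
    return vals

s = 1.0
I_ray = interference(lambda k: rng.exponential(1.0, k))   # Rayleigh power gains
I_nak = interference(lambda k: rng.gamma(2.0, 0.5, k))    # Nakagami-2, mean 1
print(np.mean(np.exp(-s * I_ray)), np.mean(np.exp(-s * I_nak)))
```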
69
1,818,296,812
Title: earthquake disaster based efficient resource utilization technique in iaas cloud
Abstract: Cloud computing is an emerging area. The main aim of the initial search-and-rescue period after strong earthquakes is to reduce the total number of mortalities. One main problem arising in this period is to find the best assignment of available resources to operational zones. For this issue a dynamic optimization model is presented. The model uses thorough descriptions of the operational zones and of the available resources to determine resource performance and efficiency for the different workloads related to the response. A suitable solution method for the model is offered as well. In this paper, an Earthquake Disaster Based Resource Scheduling (EDBRS) framework is proposed, in which resources are allocated to cloud workloads based on urgency (emergency during an earthquake disaster). Based on this criterion, a resource scheduling algorithm is proposed. The performance of the proposed algorithm has been assessed against existing common scheduling algorithms using CloudSim. The experimental results show that the proposed algorithm outperforms the existing ones by reducing the execution cost and time of cloud consumer workloads submitted to the cloud.
5
arxiv cs dc
Title: earthquake disaster based efficient resource utilization technique in iaas cloud. Abstract: Cloud computing is an emerging area. The main aim of the initial search-and-rescue period after strong earthquakes is to reduce the total number of mortalities. One main problem arising in this period is to find the best assignment of available resources to operational zones. For this issue a dynamic optimization model is presented. The model uses thorough descriptions of the operational zones and of the available resources to determine resource performance and efficiency for the different workloads related to the response. A suitable solution method for the model is offered as well. In this paper, an Earthquake Disaster Based Resource Scheduling (EDBRS) framework is proposed, in which resources are allocated to cloud workloads based on urgency (emergency during an earthquake disaster). Based on this criterion, a resource scheduling algorithm is proposed. The performance of the proposed algorithm has been assessed against existing common scheduling algorithms using CloudSim. The experimental results show that the proposed algorithm outperforms the existing ones by reducing the execution cost and time of cloud consumer workloads submitted to the cloud.
70
1,823,940,520
Title: on descriptional complexity of the planarity problem for gauss words
Abstract: In this paper we investigate the descriptional complexity of knot-theoretic problems and show upper bounds for the planarity problem of signed and unsigned knot diagrams represented by Gauss words. Since topological equivalence of knots can involve knot diagrams with arbitrarily many crossings, Gauss words are considered as strings over an infinite (unbounded) alphabet. For establishing the upper bounds on the recognition of knot properties, we study these problems in the context of automata models over an infinite alphabet.
33
arxiv cs fl
Title: on descriptional complexity of the planarity problem for gauss words. Abstract: In this paper we investigate the descriptional complexity of knot-theoretic problems and show upper bounds for the planarity problem of signed and unsigned knot diagrams represented by Gauss words. Since topological equivalence of knots can involve knot diagrams with arbitrarily many crossings, Gauss words are considered as strings over an infinite (unbounded) alphabet. For establishing the upper bounds on the recognition of knot properties, we study these problems in the context of automata models over an infinite alphabet.
71
1,824,996,543
Title: multi access mimo systems with finite rate channel state feedback
Abstract: This paper characterizes the effect of finite rate channel state feedback on the sum rate of a multi-access multiple-input multiple-output (MIMO) system. We propose to control the users jointly, specifically, we first choose the users jointly and then select the corresponding beamforming vectors jointly. To quantify the sum rate, this paper introduces the composite Grassmann manifold and the composite Grassmann matrix. By characterizing the distortion rate function on the composite Grassmann manifold and calculating the logdet function of a random composite Grassmann matrix, a good sum rate approximation is derived. According to the distortion rate function on the composite Grassmann manifold, the loss due to finite beamforming decreases exponentially as the feedback bits on beamforming increases.
28
arxiv cs it
Title: multi access mimo systems with finite rate channel state feedback. Abstract: This paper characterizes the effect of finite rate channel state feedback on the sum rate of a multi-access multiple-input multiple-output (MIMO) system. We propose to control the users jointly, specifically, we first choose the users jointly and then select the corresponding beamforming vectors jointly. To quantify the sum rate, this paper introduces the composite Grassmann manifold and the composite Grassmann matrix. By characterizing the distortion rate function on the composite Grassmann manifold and calculating the logdet function of a random composite Grassmann matrix, a good sum rate approximation is derived. According to the distortion rate function on the composite Grassmann manifold, the loss due to finite beamforming decreases exponentially as the feedback bits on beamforming increases.
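As a generic illustration of finite-rate feedback (random vector quantization rather than the composite-Grassmannian analysis above), the sketch below quantizes one user's channel direction against a 2^B-entry random codebook and feeds back the index of the best-matching beamforming vector; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
nt, B = 4, 6                                    # transmit antennas, feedback bits
codebook = rng.normal(size=(2 ** B, nt)) + 1j * rng.normal(size=(2 ** B, nt))
codebook /= np.linalg.norm(codebook, axis=1, keepdims=True)   # unit-norm codewords

h = rng.normal(size=nt) + 1j * rng.normal(size=nt)      # one user's channel
idx = int(np.argmax(np.abs(codebook.conj() @ h) ** 2))  # index fed back (B bits)
w = codebook[idx]
print("beamforming gain:", np.abs(np.vdot(w, h)) ** 2 / np.linalg.norm(h) ** 2)
```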
72
1,831,933,153
Title: an information theoretic perspective of the poisson approximation via the chen stein method
Abstract: The first part of this work considers the entropy of the sum of (possibly dependent and non-identically distributed) Bernoulli random variables. Upper bounds on the error that follows from an approximation of this entropy by the entropy of a Poisson random variable with the same mean are derived via the Chen-Stein method. The second part of this work derives new lower bounds on the total variation distance and relative entropy between the distribution of the sum of independent Bernoulli random variables and the Poisson distribution. The starting point of the derivation of the new bounds in the second part of this work is an introduction of a new lower bound on the total variation distance, whose derivation generalizes and refines the analysis by Barbour and Hall (1984), based on the Chen-Stein method for the Poisson approximation. A new lower bound on the relative entropy between these two distributions is introduced, and this lower bound is compared to a previously reported upper bound on the relative entropy by Kontoyiannis et al. (2005). The derivation of the new lower bound on the relative entropy follows from the new lower bound on the total variation distance, combined with a distribution-dependent refinement of Pinsker’s inequality by Ordentlich and Weinberger (2005). Upper and lower bounds on the Bhattacharyya parameter, Chernoff information and Hellinger distance between the distribution of the sum of independent Bernoulli random variables and the Poisson distribution with the same mean are derived as well via some relations between these quantities with the total variation distance and the relative entropy. The analysis in this work combines elements of information theory with the Chen-Stein method for the Poisson approximation. The resulting bounds are easy to compute, and their applicability is exemplified.
28
arxiv cs it
Title: an information theoretic perspective of the poisson approximation via the chen stein method. Abstract: The first part of this work considers the entropy of the sum of (possibly dependent and non-identically distributed) Bernoulli random variables. Upper bounds on the error that follows from an approximation of this entropy by the entropy of a Poisson random variable with the same mean are derived via the Chen-Stein method. The second part of this work derives new lower bounds on the total variation distance and relative entropy between the distribution of the sum of independent Bernoulli random variables and the Poisson distribution. The starting point of the derivation of the new bounds in the second part of this work is an introduction of a new lower bound on the total variation distance, whose derivation generalizes and refines the analysis by Barbour and Hall (1984), based on the Chen-Stein method for the Poisson approximation. A new lower bound on the relative entropy between these two distributions is introduced, and this lower bound is compared to a previously reported upper bound on the relative entropy by Kontoyiannis et al. (2005). The derivation of the new lower bound on the relative entropy follows from the new lower bound on the total variation distance, combined with a distribution-dependent refinement of Pinsker’s inequality by Ordentlich and Weinberger (2005). Upper and lower bounds on the Bhattacharyya parameter, Chernoff information and Hellinger distance between the distribution of the sum of independent Bernoulli random variables and the Poisson distribution with the same mean are derived as well via some relations between these quantities with the total variation distance and the relative entropy. The analysis in this work combines elements of information theory with the Chen-Stein method for the Poisson approximation. The resulting bounds are easy to compute, and their applicability is exemplified.
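The central quantity bounded in this abstract, the total variation distance between a sum of independent Bernoulli variables and the Poisson law with the same mean, is easy to evaluate exactly for small examples. A short sketch (mine, not the paper's bounds) with made-up probabilities p_i:

```python
import math
import numpy as np

def bernoulli_sum_pmf(ps):
    """Exact pmf of a sum of independent Bernoulli(p_i) variables."""
    pmf = np.array([1.0])                    # distribution of the empty sum
    for p in ps:
        pmf = np.convolve(pmf, [1 - p, p])
    return pmf

ps = [0.1, 0.05, 0.2, 0.15]                  # made-up success probabilities
pmf = bernoulli_sum_pmf(ps)
lam = sum(ps)                                # matching Poisson mean
poisson = np.array([math.exp(-lam) * lam ** k / math.factorial(k)
                    for k in range(len(pmf))])
# TV = 0.5 * sum_k |P(k) - Q(k)|; the Poisson tail beyond len(pmf)-1 has P(k) = 0
tv = 0.5 * (np.abs(pmf - poisson).sum() + (1.0 - poisson.sum()))
print("total variation distance ~", tv)
```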
73
1,838,670,769
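As a rough numerical illustration of the bounds this abstract discusses, the following Python sketch compares the exact total variation distance between a Binomial(n, p) distribution (a sum of i.i.d. Bernoulli variables) and the Poisson distribution with the same mean against classical Chen-Stein-style lower and upper bounds; the constants are the well-known ones associated with Barbour and Hall (1984), and the truncation at n ignores a negligible Poisson tail. This is a sanity check under i.i.d. assumptions, not a reproduction of the paper's refined bounds.

import numpy as np
from scipy.stats import binom, poisson

n, p = 1000, 0.01            # sum of n i.i.d. Bernoulli(p) random variables
lam = n * p                  # Poisson mean matched to the Bernoulli sum

# Exact total variation distance over 0..n (Poisson mass above n is negligible here)
ks = np.arange(0, n + 1)
tv = 0.5 * np.sum(np.abs(binom.pmf(ks, n, p) - poisson.pmf(ks, lam)))

# Classical Chen-Stein bounds for the i.i.d. case (here sum p_i^2 = n * p^2)
upper = min(1.0, 1.0 / lam) * n * p ** 2
lower = (1.0 / 32.0) * min(1.0, 1.0 / lam) * n * p ** 2
print(f"d_TV = {tv:.6f}   bounds: [{lower:.6f}, {upper:.6f}]")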
Title: a set and collection lemma
Abstract: A set S is independent if no two vertices from S are adjacent. In this paper we prove that if F is a collection of maximum independent sets of a graph, then there is a matching from S-{intersection of all members of F} into {union of all members of F}-S, for every independent set S. Based on this finding we give alternative proofs for a number of well-known lemmata, such as the "Maximum Stable Set Lemma" due to Claude Berge and the "Clique Collection Lemma" due to András Hajnal.
39
arxiv cs dm
Title: a set and collection lemma. Abstract: A set S is independent if no two vertices from S are adjacent. In this paper we prove that if F is a collection of maximum independent sets of a graph, then there is a matching from S-{intersection of all members of F} into {union of all members of F}-S, for every independent set S. Based on this finding we give alternative proofs for a number of well-known lemmata, such as the "Maximum Stable Set Lemma" due to Claude Berge and the "Clique Collection Lemma" due to András Hajnal.
74
1,839,164,722
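The lemma lends itself to a brute-force sanity check on a small graph. The sketch below (an illustration, not part of the paper) enumerates all maximum independent sets F of the 6-cycle and verifies that, for every independent set S, the graph contains a matching saturating S minus the intersection of F into the union of F minus S, via networkx's bipartite matching.

import itertools
import networkx as nx

G = nx.cycle_graph(6)
nodes = list(G.nodes)

def independent(s):
    return not any(G.has_edge(u, v) for u, v in itertools.combinations(s, 2))

all_indep = [set(s) for r in range(len(nodes) + 1)
             for s in itertools.combinations(nodes, r) if independent(s)]
max_size = max(len(s) for s in all_indep)
F = [s for s in all_indep if len(s) == max_size]   # all maximum independent sets
cap, cup = set.intersection(*F), set.union(*F)

for S in all_indep:
    left, right = S - cap, cup - S
    if not left:
        continue                                   # the empty matching suffices
    H = nx.Graph()
    H.add_nodes_from(left)
    H.add_edges_from((u, v) for u in left for v in right if G.has_edge(u, v))
    matching = nx.bipartite.maximum_matching(H, top_nodes=left)
    assert all(u in matching for u in left), (S, matching)
print("matching found for every independent set S of C6")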
Title: learning economic parameters from revealed preferences
Abstract: A recent line of work, starting with Beigman and Vohra (2006) and Zadimoghaddam and Roth (2012), has addressed the problem of learning a utility function from revealed preference data. The goal here is to make use of past data describing the purchases of a utility-maximizing agent when faced with certain prices and budget constraints in order to produce a hypothesis function that can accurately forecast the future behavior of the agent. In this work we advance this line of work by providing sample complexity guarantees and efficient algorithms for a number of important classes. By drawing a connection to recent advances in multi-class learning, we provide a computationally efficient algorithm with tight sample complexity guarantees ($\Theta(d/\epsilon)$ for the case of $d$ goods) for learning linear utility functions under a linear price model. This solves an open question in Zadimoghaddam and Roth (2012). Our technique yields numerous generalizations, including the ability to learn other well-studied classes of utility functions, to deal with a misspecified model, and to handle non-linear prices.
36
arxiv cs gt
Title: learning economic parameters from revealed preferences. Abstract: A recent line of work, starting with Beigman and Vohra (2006) and Zadimoghaddam and Roth (2012), has addressed the problem of learning a utility function from revealed preference data. The goal here is to make use of past data describing the purchases of a utility-maximizing agent when faced with certain prices and budget constraints in order to produce a hypothesis function that can accurately forecast the future behavior of the agent. In this work we advance this line of work by providing sample complexity guarantees and efficient algorithms for a number of important classes. By drawing a connection to recent advances in multi-class learning, we provide a computationally efficient algorithm with tight sample complexity guarantees ($\Theta(d/\epsilon)$ for the case of $d$ goods) for learning linear utility functions under a linear price model. This solves an open question in Zadimoghaddam and Roth (2012). Our technique yields numerous generalizations, including the ability to learn other well-studied classes of utility functions, to deal with a misspecified model, and to handle non-linear prices.
75
1,844,261,290
Title: towards adapting imagenet to reality scalable domain adaptation with implicit low rank transformations
Abstract: Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to scene understanding tasks. The consequence is often severe performance degradation and is one of the major barriers for the application of classifiers in real-world systems. In this paper, we show how to learn transform-based domain adaptation classifiers in a scalable manner. The key idea is to exploit an implicit rank constraint, originating from a max-margin domain adaptation formulation, to make optimization tractable. Experiments show that the transformation between domains can be very efficiently learned from data and easily applied to new categories. This begins to bridge the gap between large-scale internet image collections and object images captured in everyday life environments.
16
arxiv cs cv
Title: towards adapting imagenet to reality scalable domain adaptation with implicit low rank transformations. Abstract: Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to scene understanding tasks. The consequence is often severe performance degradation and is one of the major barriers for the application of classifiers in real-world systems. In this paper, we show how to learn transform-based domain adaptation classifiers in a scalable manner. The key idea is to exploit an implicit rank constraint, originating from a max-margin domain adaptation formulation, to make optimization tractable. Experiments show that the transformation between domains can be very efficiently learned from data and easily applied to new categories. This begins to bridge the gap between large-scale internet image collections and object images captured in everyday life environments.
76
1,849,650,586
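The paper's max-margin formulation is not reproduced here; as a loose stand-in for the underlying idea, a linear transform learned between domains, the sketch below fits a transform W by ridge regression on paired synthetic features, so that source-domain classifiers could be applied to W-mapped inputs. All names, sizes and data are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 50))                    # source-domain features
W_true = np.eye(50) + 0.1 * rng.normal(size=(50, 50))
Xt = Xs @ W_true.T + 0.05 * rng.normal(size=(200, 50))  # shifted target view

# Ridge regression for W such that Xs @ W.T ~ Xt
lam = 1.0
W = np.linalg.solve(Xs.T @ Xs + lam * np.eye(50), Xs.T @ Xt).T
print("relative transform error:",
      np.linalg.norm(W - W_true) / np.linalg.norm(W_true))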
Title: using multiple criteria methods to evaluate community partitions
Abstract: Community detection is one of the most studied problems on complex networks. Although hundreds of methods have been proposed so far, there is still no universally accepted formal definition of what a good community is. As a consequence, evaluating and comparing the quality of the solutions produced by these algorithms remains an open question, despite constant progress on the topic. In this article, we investigate how a multi-criteria evaluation can solve some of the existing problems of community evaluation, in particular the question of multiple equally relevant solutions of different granularity. After exploring several approaches, we introduce a new quality function, called MDensity, and propose a method that can be related both to a widely used community detection metric, the Modularity, and to the Precision/Recall approach, ubiquitous in information retrieval.
26
arxiv cs si
Title: using multiple criteria methods to evaluate community partitions. Abstract: Community detection is one of the most studied problems on complex networks. Although hundreds of methods have been proposed so far, there is still no universally accepted formal definition of what a good community is. As a consequence, evaluating and comparing the quality of the solutions produced by these algorithms remains an open question, despite constant progress on the topic. In this article, we investigate how a multi-criteria evaluation can solve some of the existing problems of community evaluation, in particular the question of multiple equally relevant solutions of different granularity. After exploring several approaches, we introduce a new quality function, called MDensity, and propose a method that can be related both to a widely used community detection metric, the Modularity, and to the Precision/Recall approach, ubiquitous in information retrieval.
77
1,852,713,323
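MDensity is the paper's contribution and is not reproduced here, but the single-criterion baseline it is compared against, the Modularity, is easy to demonstrate with networkx's built-ins on a standard example graph:

import networkx as nx
from networkx.algorithms.community import (greedy_modularity_communities,
                                           modularity)

G = nx.karate_club_graph()                 # classic benchmark network
partition = greedy_modularity_communities(G)
print(f"{len(partition)} communities, "
      f"modularity = {modularity(G, partition):.3f}")

A single number like this is exactly what the abstract argues is insufficient: two partitions of very different granularity can score similarly, which motivates the multi-criteria view.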
Title: incremental adaptation strategies for neural network language models
Abstract: It is now acknowledged that neural network language models outperform backoff language models in applications such as speech recognition and statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data, or insertion of adaptation layers. We present experimental results in a CAT environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
13
arxiv cs ne
Title: incremental adaptation strategies for neural network language models. Abstract: It is now acknowledged that neural network language models outperform backoff language models in applications such as speech recognition and statistical machine translation. However, training these models on large amounts of data can take several days. We present efficient techniques to adapt a neural network language model to new data. Instead of training a completely new model or relying on mixture approaches, we propose two new methods: continued training on resampled data, or insertion of adaptation layers. We present experimental results in a CAT environment where the post-edits of professional translators are used to improve an SMT system. Both methods are very fast and achieve significant improvements without overfitting the small adaptation data.
78
1,856,665,498
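Of the two methods named in the abstract, the adaptation-layer idea is the easier one to sketch. The PyTorch fragment below freezes a stand-in for a pre-trained language model and trains only a small inserted layer on an adaptation batch; the architecture, layer sizes and data are illustrative assumptions, not the paper's actual setup.

import torch
import torch.nn as nn

hidden, vocab = 512, 10000
base_lm = nn.Sequential(                  # stand-in for a pre-trained NNLM body
    nn.Embedding(vocab, hidden),
    nn.Linear(hidden, hidden), nn.Tanh(),
)
adapter = nn.Sequential(                  # small inserted adaptation layer
    nn.Linear(hidden, hidden), nn.Tanh(),
)
output = nn.Linear(hidden, vocab)         # softmax layer over the vocabulary

for p in list(base_lm.parameters()) + list(output.parameters()):
    p.requires_grad = False               # only the adapter is trained

optimizer = torch.optim.SGD(adapter.parameters(), lr=0.1)
tokens = torch.randint(0, vocab, (32,))   # dummy adaptation batch, not a real LM objective
logits = output(adapter(base_lm(tokens)))
loss = nn.functional.cross_entropy(logits, tokens)
loss.backward()
optimizer.step()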
Title: optimal point to point codes in interference channels an incremental i mmse approach
Abstract: A recent result of the authors shows that, for the two-user Gaussian interference channel, an I-MMSE-like relationship holds in the limit, as $n \to \infty$, between the interference and the interfered-with receiver, assuming that the interfered-with transmission is an optimal point-to-point sequence (i.e., one that achieves the point-to-point capacity). This result was further used to provide a proof of the "missing corner points" of the two-user Gaussian interference channel. This paper provides an information theoretic proof of the above-mentioned I-MMSE-like relationship, following the incremental channel approach used by Guo, Shamai and Verdú to give an insightful proof of the original I-MMSE relationship for point-to-point channels. Finally, some additional applications of this result are shown for other multi-user settings: the Gaussian multiple-access channel with interference and specific K-user Gaussian Z-interference channel settings.
28
arxiv cs it
Title: optimal point to point codes in interference channels an incremental i mmse approach. Abstract: A recent result of the authors shows that, for the two-user Gaussian interference channel, an I-MMSE-like relationship holds in the limit, as $n \to \infty$, between the interference and the interfered-with receiver, assuming that the interfered-with transmission is an optimal point-to-point sequence (i.e., one that achieves the point-to-point capacity). This result was further used to provide a proof of the "missing corner points" of the two-user Gaussian interference channel. This paper provides an information theoretic proof of the above-mentioned I-MMSE-like relationship, following the incremental channel approach used by Guo, Shamai and Verdú to give an insightful proof of the original I-MMSE relationship for point-to-point channels. Finally, some additional applications of this result are shown for other multi-user settings: the Gaussian multiple-access channel with interference and specific K-user Gaussian Z-interference channel settings.
79
1,859,168,105
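For reference, the original point-to-point I-MMSE relationship of Guo, Shamai and Verdú that this abstract builds on can be stated as follows, for $Y = \sqrt{\mathsf{snr}}\,X + N$ with standard Gaussian noise $N$ independent of $X$ and mutual information measured in nats:

$$\frac{\mathrm{d}}{\mathrm{d}\,\mathsf{snr}}\, I\big(X;\sqrt{\mathsf{snr}}\,X+N\big) \;=\; \frac{1}{2}\,\mathrm{mmse}(\mathsf{snr}), \qquad \mathrm{mmse}(\mathsf{snr}) \;=\; \mathbb{E}\Big[\big(X-\mathbb{E}\big[X \,\big|\, \sqrt{\mathsf{snr}}\,X+N\big]\big)^{2}\Big].$$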
Title: a combined approach for constraints over finite domains and arrays
Abstract: Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
2
arxiv cs lo
Title: a combined approach for constraints over finite domains and arrays. Abstract: Arrays are ubiquitous in the context of software verification. However, effective reasoning over arrays is still rare in CP, as local reasoning is dramatically ill-conditioned for constraints over arrays. In this paper, we propose an approach combining both global symbolic reasoning and local consistency filtering in order to solve constraint systems involving arrays (with accesses, updates and size constraints) and finite-domain constraints over their elements and indexes. Our approach, named fdcc, is based on a combination of a congruence closure algorithm for the standard theory of arrays and a CP solver over finite domains. The tricky part of the work lies in the bi-directional communication mechanism between both solvers. We identify the significant information to share, and design ways to master the communication overhead. Experiments on random instances show that fdcc solves more formulas than any portfolio combination of the two solvers taken in isolation, while overhead is kept reasonable.
80
1,860,769,804
Title: optimization design and analysis of systematic lt codes over awgn channel
Abstract: In this paper, we study systematic Luby Transform (SLT) codes over the additive white Gaussian noise (AWGN) channel. We introduce the encoding scheme of SLT codes and give the bipartite graph for the iterative belief propagation (BP) decoding algorithm. As with low-density parity-check codes, Gaussian approximation (GA) is applied to obtain the asymptotic performance of SLT codes. Recent work on SLT codes has focused on providing better encoding and decoding algorithms and on the design of degree distributions. In our work, we propose a novel linear programming method to optimize the degree distribution. Simulation results show that the proposed distributions provide better bit-error-rate (BER) performance. Moreover, we analyze the lower bound of SLT codes and offer closed-form expressions.
28
arxiv cs it
Title: optimization design and analysis of systematic lt codes over awgn channel. Abstract: In this paper, we study systematic Luby Transform (SLT) codes over the additive white Gaussian noise (AWGN) channel. We introduce the encoding scheme of SLT codes and give the bipartite graph for the iterative belief propagation (BP) decoding algorithm. As with low-density parity-check codes, Gaussian approximation (GA) is applied to obtain the asymptotic performance of SLT codes. Recent work on SLT codes has focused on providing better encoding and decoding algorithms and on the design of degree distributions. In our work, we propose a novel linear programming method to optimize the degree distribution. Simulation results show that the proposed distributions provide better bit-error-rate (BER) performance. Moreover, we analyze the lower bound of SLT codes and offer closed-form expressions.
81
1,867,927,193
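The linear-programming optimization is the paper's contribution and is not reproduced here; as a baseline, the sketch below constructs the standard robust soliton degree distribution of Luby (2002), which LT-code designs like this one typically start from. The parameter values c and delta are conventional illustrative choices.

import math

def robust_soliton(k: int, c: float = 0.1, delta: float = 0.5):
    """Return the robust soliton distribution mu(1..k) as a list (index d-1)."""
    R = c * math.log(k / delta) * math.sqrt(k)
    pivot = int(round(k / R))
    # Ideal soliton: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    # Robust correction tau, concentrated around degree k/R
    tau = [R / (d * k) if d < pivot else 0.0 for d in range(1, k + 1)]
    if 1 <= pivot <= k:
        tau[pivot - 1] = R * math.log(R / delta) / k
    beta = sum(rho) + sum(tau)                     # normalization constant
    return [(r + t) / beta for r, t in zip(rho, tau)]

mu = robust_soliton(1000)
print("mean encoding degree:", sum((d + 1) * p for d, p in enumerate(mu)))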
Title: truth and envy in capacitated allocation games
Abstract: We study auctions with additive valuations where agents have a limit on the number of items they may receive. We refer to this setting as capacitated allocation games. We seek truthful and envy-free mechanisms that maximize the social welfare, i.e., mechanisms under which agents have no incentive to lie and no agent seeks to exchange outcomes with another. In 1983, Leonard showed that VCG with Clarke pivot payments (which is known to be truthful, individually rational, and to have no positive transfers) is also an envy-free mechanism for the special case of n items and n unit-capacity agents. We elaborate upon this problem and show that VCG with Clarke pivot payments is envy free if agent capacities are all equal. When agent capacities are not identical, we show that there is no truthful and envy-free mechanism that maximizes social welfare if one disallows positive transfers. For the case of two agents (and arbitrary capacities) we show a VCG mechanism that is truthful, envy free, and individually rational, but has positive transfers. We conclude with a host of open problems that arise from our work.
36
arxiv cs gt
Title: truth and envy in capacitated allocation games. Abstract: We study auctions with additive valuations where agents have a limit on the number of items they may receive. We refer to this setting as capacitated allocation games. We seek truthful and envy-free mechanisms that maximize the social welfare, i.e., mechanisms under which agents have no incentive to lie and no agent seeks to exchange outcomes with another. In 1983, Leonard showed that VCG with Clarke pivot payments (which is known to be truthful, individually rational, and to have no positive transfers) is also an envy-free mechanism for the special case of n items and n unit-capacity agents. We elaborate upon this problem and show that VCG with Clarke pivot payments is envy free if agent capacities are all equal. When agent capacities are not identical, we show that there is no truthful and envy-free mechanism that maximizes social welfare if one disallows positive transfers. For the case of two agents (and arbitrary capacities) we show a VCG mechanism that is truthful, envy free, and individually rational, but has positive transfers. We conclude with a host of open problems that arise from our work.
82
1,869,518,888
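A brute-force sketch of the mechanism discussed above: compute the welfare-maximizing allocation under per-agent capacities, then charge each agent its Clarke pivot payment, the externality it imposes on the others. The valuation numbers are invented, and the exhaustive search is only viable at toy sizes.

from itertools import product

values = [[9, 5, 3], [8, 7, 1]]    # values[i][j]: agent i's value for item j
capacity = [2, 1]                  # per-agent limits on items received
agents, items = range(len(values)), range(len(values[0]))

def optimum(active):
    """Welfare-maximizing allocation restricted to agents in `active`."""
    best, best_assign = 0.0, None
    for assign in product([None] + list(active), repeat=len(items)):
        if any(assign.count(i) > capacity[i] for i in active):
            continue
        w = sum(values[i][j] for j, i in enumerate(assign) if i is not None)
        if w > best:
            best, best_assign = w, assign
    return best, best_assign

welfare, alloc = optimum(list(agents))
for i in agents:
    without_i, _ = optimum([a for a in agents if a != i])
    others_now = sum(values[a][j] for j, a in enumerate(alloc)
                     if a is not None and a != i)
    got = [j for j in items if alloc[j] == i]
    print(f"agent {i}: items {got}, Clarke pivot payment {without_i - others_now}")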
Title: smt based bounded model checking of fixed point digital controllers
Abstract: Digital controllers have several advantages with respect to flexibility and simplicity of design. However, they are subject to problems that analog controllers do not face. In particular, these problems are related to the finite word-length implementation, which might lead to overflows, limit cycles, and time constraints in fixed-point processors. This paper proposes a new method to detect design errors in digital controllers using a state-of-the-art bounded model checker based on satisfiability modulo theories. The experiments with digital controllers for a ball-and-beam plant demonstrate that the proposed method can be more effective in finding errors in digital controllers than existing approaches based on traditional simulation tools.
19
arxiv cs sy
Title: smt based bounded model checking of fixed point digital controllers. Abstract: Digital controllers have several advantages with respect to flexibility and simplicity of design. However, they are subject to problems that analog controllers do not face. In particular, these problems are related to the finite word-length implementation, which might lead to overflows, limit cycles, and time constraints in fixed-point processors. This paper proposes a new method to detect design errors in digital controllers using a state-of-the-art bounded model checker based on satisfiability modulo theories. The experiments with digital controllers for a ball-and-beam plant demonstrate that the proposed method can be more effective in finding errors in digital controllers than existing approaches based on traditional simulation tools.
83
1,871,136,127
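A toy version of the kind of check described above, using the z3-solver Python bindings: unroll a fixed-point multiply a few steps in a widened bit-vector and ask the solver whether the result can leave the 16-bit representable range. The Q8.8 format, the gain of 1.5 and the four-step bound are invented for illustration; the paper's benchmarks model full controller dynamics, not this fragment.

from z3 import BitVec, BitVecVal, Or, Solver, sat

WIDTH, FRAC, STEPS = 16, 8, 4                      # Q8.8 fixed point, 4 unrollings
GAIN = BitVecVal(int(1.5 * 2 ** FRAC), 2 * WIDTH)  # gain 1.5 in fixed point

s = Solver()
x = BitVec("x0", 2 * WIDTH)                        # widened state to expose overflow
s.add(x >= -(2 ** (WIDTH - 1)), x < 2 ** (WIDTH - 1))  # starts representable

for _ in range(STEPS):
    x = (x * GAIN) / (2 ** FRAC)                   # signed fixed-point multiply

# Overflow: after STEPS iterations the value no longer fits in WIDTH bits
s.add(Or(x < -(2 ** (WIDTH - 1)), x >= 2 ** (WIDTH - 1)))
if s.check() == sat:
    print("overflow reachable, witness:", s.model())
else:
    print("no overflow within the unrolling bound")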
Title: google matrix of business process management
Abstract: The development of efficient business process models and the determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool that helps companies decide how to evolve in the exceedingly dynamic global market.
3
arxiv cs cy
Title: google matrix of business process management. Abstract: The development of efficient business process models and the determination of their characteristic properties are the subject of intense interdisciplinary research. Here, we consider a business process model as a directed graph. Its nodes correspond to the units identified by the modeler, and the link direction indicates the causal dependencies between units. It is of primary interest to obtain the stationary flow on such a directed graph, which corresponds to the steady state of a firm during the business process. Following the ideas developed recently for the World Wide Web, we construct the Google matrix for our business process model and analyze its spectral properties. The importance of nodes is characterized by PageRank and by the recently proposed CheiRank and 2DRank. The results show that this two-dimensional ranking gives significant information about the influence and communication properties of business model units. We argue that the Google matrix method described here provides a new, efficient tool that helps companies decide how to evolve in the exceedingly dynamic global market.
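A compact sketch of the construction this abstract relies on: build the column-stochastic link matrix of a small made-up process graph, damp it into the Google matrix, and extract PageRank by power iteration. CheiRank, mentioned above, is obtained the same way after reversing every link.

import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 0)]   # directed process links
n, alpha = 4, 0.85                                 # nodes, damping factor

A = np.zeros((n, n))
for src, dst in edges:
    A[dst, src] = 1.0
A /= A.sum(axis=0)                 # column-stochastic (no dangling nodes here)

G = alpha * A + (1 - alpha) / n    # Google matrix with uniform teleportation
rank = np.full(n, 1.0 / n)
for _ in range(100):               # power iteration toward the stationary flow
    rank = G @ rank
print("PageRank:", np.round(rank / rank.sum(), 3))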
End of preview.