Dataset columns (each record below lists these fields in order):
citing_id: string (9–16 characters)
cited_id: string (9–16 characters)
section_title: string (0–2.25k characters)
citation: string (52–442 characters)
text_before_citation: sequence of strings
text_after_citation: sequence of strings
keywords: sequence of strings
citation_intent: string (one of 3 classes)
citing_paper_content: dict (title, abstract)
cited_paper_content: dict (title, abstract)
1912.13077
1802.02209
Learning-based Pose Estimation
Visual-inertial odometry: Recent work showed that odometry can be learned from inertial data using recurrent neural networks #REFR , making deep visual-inertial odometry estimation possible.
[ "DeepVO #OTHEREFR , #OTHEREFR utilized the combination of CNNs and Long Short-Term Memory (LSTM) networks to learn 6DoF visual odometry from a sequence of images, showing results comparable to traditional methods.", "#OTHEREFR introduces a memory component that preserves global information via a feature selection strategy, and a refining component that improves previous predictions with a spatial-temporal attention mechanism based on current and past observations in memory.", "However, these methods cannot exploit additional sensory inputs such as inertial data.", "Several approaches #OTHEREFR , #OTHEREFR , #OTHEREFR use view synthesis and geometric consistency checks #OTHEREFR as an unsupervised signal in order to jointly estimate egomotion and monocular depth.", "While joint trajectory and depth estimation shows promising results towards unsupervised visual odometry, the accuracy of such methods is still inferior to traditional visual odometry approaches." ]
[ "VINet #OTHEREFR used a neural network to learn visual-inertial odometry by directly concatenating visual and inertial features.", "We observed that previous methods do not properly address the problem of learning a meaningful sensor fusion strategy, but simply concatenate visual and inertial features in the latent space.", "We argue that a gap between deep architectures and traditional model estimation techniques currently lies in a careful design of the fusion strategy.", "VIOLearner #OTHEREFR presents an online error correction module for deep visual-inertial odometry that estimates the trajectory by fusing RGB-D images with inertial data.", "DeepVIO #OTHEREFR recently proposed a fusion network to fuse visual and inertial features. This network is trained with a dedicated loss." ]
[ "deep visual-inertial odometry" ]
background
{ "title": "SelectFusion: A Generic Framework to Selectively Learn Multisensory Fusion", "abstract": "Autonomous vehicles and mobile robotic systems are typically equipped with multiple sensors to provide redundancy. By integrating the observations from different sensors, these mobile agents are able to perceive the environment and estimate system states, e.g. locations and orientations. Although deep learning approaches for multimodal odometry estimation and localization have gained traction, they rarely focus on the issue of robust sensor fusion -a necessary consideration to deal with noisy or incomplete sensor observations in the real world. Moreover, current deep odometry models also suffer from a lack of interpretability. To this extent, we propose SelectFusion, an end-to-end selective sensor fusion module which can be applied to useful pairs of sensor modalities such as monocular images and inertial measurements, depth images and LIDAR point clouds. During prediction, the network is able to assess the reliability of the latent features from different sensor modalities and estimate both trajectory at scale and global pose. In particular, we propose two fusion modules based on different attention strategies: deterministic soft fusion and stochastic hard fusion, and we offer a comprehensive study of the new strategies compared to trivial direct fusion. We evaluate all fusion strategies in both ideal conditions and on progressively degraded datasets that present occlusions, noisy and missing data and time misalignment between sensors, and we investigate the effectiveness of the different fusion strategies in attending the most reliable features, which in itself, provides insights into the operation of the various models." }
{ "title": "IONet: Learning to Cure the Curse of Drift in Inertial Odometry", "abstract": "Inertial sensors play a pivotal role in indoor localization, which in turn lays the foundation for pervasive personal applications. However, low-cost inertial sensors, as commonly found in smartphones, are plagued by bias and noise, which leads to unbounded growth in error when accelerations are double integrated to obtain displacement. Small errors in state estimation propagate to make odometry virtually unusable in a matter of seconds. We propose to break the cycle of continuous integration, and instead segment inertial data into independent windows. The challenge becomes estimating the latent states of each window, such as velocity and orientation, as these are not directly observable from sensor data. We demonstrate how to formulate this as an optimization problem, and show how deep recurrent neural networks can yield highly accurate trajectories, outperforming state-of-the-art shallow techniques, on a wide range of tests and attachments. In particular, we demonstrate that IONet can generalize to estimate odometry for non-periodic motion, such as a shopping trolley or baby-stroller, an extremely challenging task for existing techniques. Fast and accurate indoor localization is a fundamental need for many personal applications, including smart retail, public places navigation, human-robot interaction and augmented reality. One of the most promising approaches is to use inertial sensors to perform dead reckoning, which has attracted great attention from both academia and industry, because of its superior mobility and flexibility (Lymberopoulos et al. 2015) . Recent advances of MEMS (Micro-electro-mechanical systems) sensors have enabled inertial measurement units (IMUs) small and cheap enough to be deployed on smartphones. However, the low-cost inertial sensors on smartphones are plagued by high sensor noise, leading to unbounded system drifts. Based on Newtonian mechanics, traditional strapdown inertial navigation systems (SINS) integrate IMU measurements directly. They are hard to realize on accuracy-limited IMU due to exponential error propagation through integration. To address these problems, stepbased pedestrian dead reckoning (PDR) has been proposed. This approach estimates trajectories by detecting steps, estimating step length and heading, and updating locations per step (Li et al. 2012) . Instead of double integrating accelerations into locations, a step length update mitigates exponential increasing drifts into linear increasing drifts. However, dynamic step estimation is heavily influenced by sensor noise, user's walking habits and phone attachment changes, causing unavoidable errors to the entire system (Brajdic and Harle 2013). In some scenarios, no steps can be detected, for example, if a phone is placed on a baby stroller or shopping trolley, the assumption of periodicity, exploited by stepbased PDR would break down. Therefore, the intrinsic problems of SINS and PDR prevent widespread use of inertial localization in daily life. The architecture of two existing methods is illustrated in Figure 2 . To cure the unavoidable 'curse' of inertial system drifts, we break the cycle of continuous error propagation, and reformulate inertial tracking as a sequential learning problem. Instead of developing multiple modules for step-based PDR, our model can provide continuous trajectory for indoor users from raw data without the need of any hand-engineering, as shown in Figure 1 . 
Our contributions are three-fold: • We cast the inertial tracking problem as a sequential learning approach by deriving a sequence-based physical model from Newtonian mechanics. • We propose the first deep neural network (DNN) framework that learns location transforms in polar coordinates from raw IMU data, and constructs inertial odometry regardless of IMU attachment." }
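The IONet idea in this record (regress per-window polar displacements instead of double-integrating raw IMU data) can be sketched compactly. Below is a hypothetical PyTorch-style model; the window length, layer sizes and output head are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of an IONet-style inertial odometry model (PyTorch).
# Window length, hidden size and head are illustrative assumptions.
import torch
import torch.nn as nn

class InertialOdometryNet(nn.Module):
    def __init__(self, imu_channels: int = 6, hidden: int = 128):
        super().__init__()
        # A bidirectional LSTM reads a whole window of IMU samples.
        self.lstm = nn.LSTM(imu_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        # Regress one polar displacement per window: (delta_l, delta_psi).
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, window_len, imu_channels), e.g. 200 samples at 100 Hz.
        features, _ = self.lstm(imu_window)
        return self.head(features[:, -1, :])  # summarize with the last time step

model = InertialOdometryNet()
window = torch.randn(4, 200, 6)   # batch of 4 two-second IMU windows (gyro + accel)
delta = model(window)             # (4, 2): displacement and heading change
print(delta.shape)
```

Chaining the per-window outputs, roughly x_{n+1} = x_n + Δl·cos(ψ_n + Δψ) and y_{n+1} = y_n + Δl·sin(ψ_n + Δψ), reconstructs the trajectory without continuous integration, which is the drift-breaking point of the paper's polar formulation.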
1802.01110
1802.04211
I. INTRODUCTION
Thus, we need to efficiently organize and manage all related teaching/training/expertise activities #REFR .
[ "Unfortunately, it is very common to encounter computer architecture unaware parallelizations.", "This might be due to a lack of related skills or just a plain carelessness.", "The most noticeable example is the important proportion of programmers who are completely unaware of vector computing, also referred as SIMD computing, whose related hardware elements and mechanisms are implemented in almost all modern processors.", "Thus, providing a computer architecture course at the earliest stage is certainly a good idea.", "We see that there are numerous facts and thoughts that emerge when it comes to parallel computing from a didactic viewpoint." ]
[ "Beside invariant common needs, there are specific contextual issues that need to be addressed conscientiously through well targeted applications.", "Within an academic context, the best way to handle all these initiatives is to set up an HPC Center that will implement the main didactic actions and promote the use of parallel computing as previously stated.", "The purpose of this paper is to provide a consistent panoramic view of an academic HPC Center including the key topics with their interaction, and to describe its main associated activities. The rest of the paper is organized as follows.", "The next section describes key didactic factors related to HPC, followed by an overview of how to design and implemented an HPC Center.", "Section IV addresses the question of building a local HPC cluster. Section V concludes the paper." ]
[ "related teaching/training/expertise activities" ]
background
{ "title": "HPC Curriculum and Associated Ressources in the Academic Context", "abstract": "Abstract-Hardware support for high-performance computing (HPC) has so far been subject to significant advances. The pervasiveness of HPC systems, mainly made up with parallel computing units, makes it crucial to spread and vivify effective HPC curricula. Besides didactic considerations, it appears very important to implement HPC hardware infrastructures that will serves for practices, and also for scientific and industrial requests. The latter ensures a valuable connection with surrounding cutting-edge research activities in other topics (life sciences, physics, data mining, applied mathematics, finance, quantitative economy, engineering sciences, to name a few), and also with industrial entities and services providers from their requests related to HPC means and expertise. This aspect is very important as it makes an HPC Center becoming a social actor, while bringing real-life scenarios into the academic context. The current paper describes the major steps and objectives for a consistent HPC curriculum, with specific analyses of particular contexts; suggests how to technically set up operational HPC infrastructures; and discusses the connection with end-users, all these in both effective and prospective standpoints." }
{ "title": "Basic Parallel and Distributed Computing Curriculum", "abstract": "Abstract-With the advent of multi-core processors and their fast expansion, it is quite clear that parallel computing is now a genuine requirement in Computer Science and Engineering (and related) curriculum. In addition to the pervasiveness of parallel computing devices, we should take into account the fact that there are lot of existing softwares that are implemented in the sequential mode, and thus need to be adapted for a parallel execution. Therefore, it is required to the programmer to be able to design parallel programs and also to have some skills in moving from a given sequential code to the corresponding parallel code. In this paper, we present a basic educational scenario on how to give a consistent and efficient background in parallel computing to ordinary computer scientists and engineers." }
0910.3485
cs/0604070
I
Recently, the authors and Ying have established a general formal model of computing with (some special) words via fuzzy automata in #REFR .
[ "Since classical models of computation aim at describing numerical or symbolical calculation, their inputs are usually supposed to be exact rather than vague data.", "Motivated by this observation and Zadeh's paradigm of CW, Ying #OTHEREFR put forward two kinds of fuzzy automata, which accept fuzzy inputs, as formal models of CW.", "More specifically, he modeled the words in the CW paradigm by fuzzy subsets of a set of symbols and took all these words as the input alphabet of such fuzzy automata.", "Instead of accepting or rejecting a string of words, these fuzzy automata will accept the string with a certain degree between zero and one.", "Such an idea has been developed for probabilistic automata and fuzzy Turing machines in #OTHEREFR and #OTHEREFR , respectively." ]
[ "The new features of the model are that the input alphabet only comprises some (not necessarily all) words modeled by fuzzy subsets of a set of symbols and the fuzzy transition function can be specified arbitrarily.", "By employing the methodology of fuzzy control, we have obtained a retraction principle from CW to computing with values for handling crisp inputs and a generalized extension principle from CW to computing with all words for handling fuzzy inputs.", "It is worth noting that all the formal models of CW mentioned above are based upon automata.", "These automata are the simplest computational models which have the advantages of being intuitive, amenable to composition operations, and amenable to analysis as well.", "On the other hand, it is well known that automata are not satisfactory for describing concurrent computing (see, for example, #OTHEREFR ), and moreover, they lack structure and for this reason may lead to very large state spaces when modeling some complex computations." ]
[ "fuzzy automata" ]
background
{ "title": "A Fuzzy Petri Nets Model for Computing With Words", "abstract": "Abstract-Motivated by Zadeh's paradigm of computing with words rather than numbers, several formal models of computing with words have recently been proposed. These models are based on automata and thus are not well-suited for concurrent computing. In this paper, we incorporate the well-known model of concurrent computing, Petri nets, together with fuzzy set theory and thereby establish a concurrency model of computing with words-fuzzy Petri nets for computing with words (FPNCWs). The new feature of such fuzzy Petri nets is that the labels of transitions are some special words modeled by fuzzy sets. By employing the methodology of fuzzy reasoning, we give a faithful extension of an FPNCW which makes it possible for computing with more words. The language expressiveness of the two formal models of computing with words, fuzzy automata for computing with words and FPNCWs, is compared as well. A few small examples are provided to illustrate the theoretical development." }
{ "title": "Retraction and Generalized Extension of Computing with Words", "abstract": "Abstract-Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a formal model of computing with values. Motivated by Zadeh's paradigm of computing with words rather than numbers, Ying proposed a kind of fuzzy automata, whose input alphabet consists of all fuzzy subsets of a set of symbols, as a formal model of computing with all words. In this paper, we introduce a somewhat general formal model of computing with (some special) words. The new features of the model are that the input alphabet only comprises some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy transition function can be specified arbitrarily. By employing the methodology of fuzzy control, we establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling fuzzy inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with words. Some algebraic properties of retractions and generalized extensions are addressed as well." }
cs/0604087
cs/0604070
Conclusion
As a continuation of #REFR , this work further indicates that building a model for computing with some special words and then extending the model to computing with all words is a generally applicable approach.
[ "Some relationships among the retractions, the generalized extensions, and the extensions studied recently in #OTHEREFR have also been provided.", "There are some limitations of the present work and directions in which it can be extended.", "As mentioned earlier, the generalized extension of a probabilistic model is actually a process of interpolation.", "Thus, a basic problem is how to choose words and how to rationally specify their behavior.", "In turn, one can use many other interpolation approaches to cope with the problem of accepting any words as inputs." ]
[ "Therefore, it is feasible to apply this method to other computational models such as fuzzy grammars #OTHEREFR , other probabilistic automata #OTHEREFR , and fuzzy and probabilistic neural networks (see, for example, #OTHEREFR ).", "A topic of ongoing work concerns the formal model of computing with words of many kinds." ]
[ "special words" ]
background
{ "title": "Probabilistic Automata for Computing with Words", "abstract": "Usually, probabilistic automata and probabilistic grammars have crisp symbols as inputs, which can be viewed as the formal models of computing with values. In this paper, we first introduce probabilistic automata and probabilistic grammars for computing with (some special) words in a probabilistic framework, where the words are interpreted as probabilistic distributions or possibility distributions over a set of crisp symbols. By probabilistic conditioning, we then establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling arbitrary inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with some special words. To compare the transition probabilities of two near inputs, we also examine some analytical properties of the transition probability functions of generalized extensions. Moreover, the retractions and the generalized extensions are shown to be equivalence-preserving. Finally, we clarify some relationships among the retractions, the generalized extensions, and the extensions studied recently by Qiu and Wang." }
{ "title": "Retraction and Generalized Extension of Computing with Words", "abstract": "Abstract-Fuzzy automata, whose input alphabet is a set of numbers or symbols, are a formal model of computing with values. Motivated by Zadeh's paradigm of computing with words rather than numbers, Ying proposed a kind of fuzzy automata, whose input alphabet consists of all fuzzy subsets of a set of symbols, as a formal model of computing with all words. In this paper, we introduce a somewhat general formal model of computing with (some special) words. The new features of the model are that the input alphabet only comprises some (not necessarily all) fuzzy subsets of a set of symbols and the fuzzy transition function can be specified arbitrarily. By employing the methodology of fuzzy control, we establish a retraction principle from computing with words to computing with values for handling crisp inputs and a generalized extension principle from computing with words to computing with all words for handling fuzzy inputs. These principles show that computing with values and computing with all words can be respectively implemented by computing with words. Some algebraic properties of retractions and generalized extensions are addressed as well." }
0912.0071
0803.0032
Related Work
Many of the models used have been shown to be susceptible to composition attacks, attacks in which the adversary has some reasonable amount of prior knowledge #REFR .
[ "For example, #OTHEREFR show that a small amount of auxiliary information (knowledge of a few movie-ratings, and approximate dates) is sufficient for an adversary to re-identify an individual in the Netflix dataset, which consists of anonymized data about Netflix users and their movie ratings.", "The same phenomenon has been observed in other kinds of data, such as social network graphs #OTHEREFR , search query logs #OTHEREFR and others.", "Releasing statistics computed on sensitive data can also be problematic; for example, #OTHEREFR", "(2009) show that releasing R 2 -values computed on high-dimensional genetic data can lead to privacy breaches by an adversary who is armed with a small amount of auxiliary information.", "There has also been a significant amount of work on privacy-preserving data mining #OTHEREFR , spanning several communities, that uses privacy models other than differential privacy." ]
[ "Other work #OTHEREFR considers the problem of privacy-preserving SVM classification when separate agents have to share private data, and provides a solution that uses random kernels, but does provide any formal privacy guarantee.", "An alternative line of privacy work is in the Secure Multiparty Computation setting due to #OTHEREFR , where the sensitive data is split across multiple hostile databases, and the goal is to compute a function on the union of these databases. #OTHEREFR and #OTHEREFR", "(2006) consider computing privacy-preserving SVMs in this setting, and their goal is to design a distributed protocol to learn a classifier.", "This is in contrast with our work, which deals with a setting where the algorithm has access to the entire dataset.", "Differential privacy, the formal privacy definition used in our paper, was proposed by the seminal work of #OTHEREFR" ]
[ "prior knowledge", "composition attacks" ]
method
{ "title": "Differentially Private Empirical Risk Minimization", "abstract": "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ǫ-differential privacy definition due to Dwork et al. (2006) . First we apply the output perturbation ideas of Dwork et al. (2006) , to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance." }
{ "title": "Composition attacks and auxiliary information in data privacy", "abstract": "Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations. 1. We investigate composition attacks, in which an adversary uses independently anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques. The class includes k-anonymity and several recent variants. 2. On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables \"stand-alone\" design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design." }
0912.0071
0803.0032
Related Work
Unlike many other privacy definitions, such as those mentioned above, differential privacy has been shown to be resistant to composition attacks (attacks involving side-information) #REFR .
[ "An alternative line of privacy work is in the Secure Multiparty Computation setting due to #OTHEREFR , where the sensitive data is split across multiple hostile databases, and the goal is to compute a function on the union of these databases. #OTHEREFR and #OTHEREFR", "(2006) consider computing privacy-preserving SVMs in this setting, and their goal is to design a distributed protocol to learn a classifier.", "This is in contrast with our work, which deals with a setting where the algorithm has access to the entire dataset.", "Differential privacy, the formal privacy definition used in our paper, was proposed by the seminal work of #OTHEREFR", "(2006b) , and has been used since in numerous works on privacy #OTHEREFR ." ]
[ "Some follow-up work on differential privacy includes work on differentially-private combinatorial optimization, due to #OTHEREFR", "(2010) , and differentially-private contingency tables, due to #OTHEREFR and #OTHEREFR .", "#OTHEREFR provide a more statistical view of differential privacy, and provide a technique of generating synthetic data using compression via random linear or affine transformations.", "Previous literature has also considered learning with differential privacy.", "One of the first such works is , which presents a general, although computationally inefficient, method for PAC-learning finite concept classes. #OTHEREFR" ]
[ "differential privacy" ]
background
{ "title": "Differentially Private Empirical Risk Minimization", "abstract": "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ǫ-differential privacy definition due to Dwork et al. (2006) . First we apply the output perturbation ideas of Dwork et al. (2006) , to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance." }
{ "title": "Composition attacks and auxiliary information in data privacy", "abstract": "Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations. 1. We investigate composition attacks, in which an adversary uses independently anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques. The class includes k-anonymity and several recent variants. 2. On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables \"stand-alone\" design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design." }
1607.04204
0803.0032
Definition
This stringent condition means that procedures which satisfy definition 1 provide very strong privacy guarantees, even against adversaries who have considerable partial information about the data set #REFR .
[ "Definition 1 ( -differential privacy, #OTHEREFR ).", "Given a privacy parameter, > 0, the procedure T satisfies -differential privacy if", "where we define log 0 0 = 0 for convenience.", "In the above notation, P ω denotes the probability with respect to ω, which is the source of randomness in the data analysis procedure.", "Thus, the definition does not impose any conditions on the distribution of D -the privacy is required to hold for all pairs of adjacent data sets." ]
[ "In order to satisfy this definition, any nonconstant procedure must be randomized.", "The -differential privacy is a strong requirement, as it takes supremum over all possible neighboring data sets of size n. A mild relaxation is the ( , δ)-differential privacy.", "Definition 2 (( , δ)-differential privacy).", "Given > 0, δ ∈ (0, 1), a procedure satisfies ( , δ)-differential privacy if, for all measurable A ⊆ S and all neighboring data sets D, D ,", "Here the requirement of the original -differential privacy is relaxed so that the distribution of T (D) only needs to be dominated by that of T (D ) outside of a set with probability no more than δ." ]
[ "strong privacy guarantees" ]
background
{ "title": "Differentially Private Model Selection with Penalized and Constrained Likelihood", "abstract": "In statistical disclosure control, the goal of data analysis is twofold: The released information must provide accurate and useful statistics about the underlying population of interest, while minimizing the potential for an individual record to be identified. In recent years, the notion of differential privacy has received much attention in theoretical computer science, machine learning, and statistics. It provides a rigorous and strong notion of protection for individuals' sensitive information. A fundamental question is how to incorporate differential privacy into traditional statistical inference procedures. In this paper we study model selection in multivariate linear regression under the constraint of differential privacy. We show that model selection procedures based on penalized least squares or likelihood can be made differentially private by a combination of regularization and randomization, and propose two algorithms to do so. We show that our private procedures are consistent under essentially the same conditions as the corresponding non-private procedures. We also find that under differential privacy, the procedure becomes more sensitive to the tuning parameters. We illustrate and evaluate our method using simulation studies and two real data examples." }
{ "title": "Composition attacks and auxiliary information in data privacy", "abstract": "Privacy is an increasingly important aspect of data publishing. Reasoning about privacy, however, is fraught with pitfalls. One of the most significant is the auxiliary information (also called external knowledge, background knowledge, or side information) that an adversary gleans from other channels such as the web, public records, or domain knowledge. This paper explores how one can reason about privacy in the face of rich, realistic sources of auxiliary information. Specifically, we investigate the effectiveness of current anonymization schemes in preserving privacy when multiple organizations independently release anonymized data about overlapping populations. 1. We investigate composition attacks, in which an adversary uses independently anonymized releases to breach privacy. We explain why recently proposed models of limited auxiliary information fail to capture composition attacks. Our experiments demonstrate that even a simple instance of a composition attack can breach privacy in practice, for a large class of currently proposed techniques. The class includes k-anonymity and several recent variants. 2. On a more positive note, certain randomization-based notions of privacy (such as differential privacy) provably resist composition attacks and, in fact, the use of arbitrary side information. This resistance enables \"stand-alone\" design of anonymization schemes, without the need for explicitly keeping track of other releases. We provide a precise formulation of this property, and prove that an important class of relaxations of differential privacy also satisfy the property. This significantly enlarges the class of protocols known to enable modular design." }
0812.2874
0707.0763
Clinical Requirements
HeC aims to create a set of models that facilitates the integration of all the available information supporting HeC system components, by providing access to the appropriate information across hospitals and by supporting integration across the vertical levels of the medical domain #REFR .
[ "One of the major cornerstones supporting the HeC project goals is the modelling of relevant biomedical data sources.", "The biomedical information that is managed by HeC spans multiple vertical ranges, comes from different data sources and is possibly distributed and heterogeneous with various levels of semantic content." ]
[ "To be able to combine all sources of data into the integrated view the model of the domain under consideration needs to be established.", "Such an integrated model must provide clinicians with a coherent view of patients' health and be adaptable to changes in the models of individual sources.", "Some of the criteria which HeC domain models should satisfy include:", "• capturing information specified in clinical protocols;", "• supporting high-level applications such as integrated disease modelling, decision support and knowledge discovery;" ]
[ "medical domain" ]
background
{ "title": "A Data Model for Integrating Heterogeneous Medical Data in the Health-e-Child Project", "abstract": "Abstract. There has been much research activity in recent times about providing the data infrastructures needed for the provision of personalised healthcare. In particular the requirement of integrating multiple, potentially distributed, heterogeneous data sources in the medical domain for the use of clinicians has set challenging goals for the healthgrid community. The approach advocated in this paper surrounds the provision of an Integrated Data Model plus links to/from ontologies to homogenize biomedical (from genomic, through cellular, disease, patient and population-related) data in the context of the EC Framework 6 Healthe-Child project. Clinical requirements are identified, the design approach in constructing the model is detailed and the integrated model described in the context of examples taken from that project. Pointers are given to future work relating the model to medical ontologies and challenges to the use of fully integrated models and ontologies are identified." }
{ "title": "The Requirements for Ontologies in Medical Data Integration: A Case Study", "abstract": "Abstract" }
0705.0549
cond-mat/0204111
III. FREE ZRP ON QUENCHED RANDOM NETWORKS
The difference between the canonical ensemble, having a fixed number of edges, and the grand-canonical one disappears in the thermodynamic limit if the node degree distribution Π(k) falls with k faster than any power law #REFR .
[ "The partition function Z(N, M) now takes the form Z(N, M) = Σ_k⃗ P(k⃗) Z(N, M, k⃗),", "where Z(N, M, k⃗) is given by Eq. (1).", "In general, P(k⃗) could have a complicated form.", "We shall restrict ourselves here to ensembles of quenched networks with a product measure, sometimes called uncorrelated networks.", "One can find explicit canonical and grand-canonical realizations of such ensembles #OTHEREFR for which the probability P(k⃗) = Π(k_1) · · · Π(k_N) factorizes in the limit N → ∞, where Π(k) denotes the probability distribution of the node degrees." ]
[ "This factorization partially breaks down for scale-free networks, which we shall not discuss here.", "The factorization allows us to rewrite the formula for Z(N, M) in the form of Eq. (1) with", "where µ(m) is the m-th moment of the degree distribution Π(k),", "In contrast to Eq.", "(1), the partition function (9) is invariant under permutations of the ball occupation numbers m_i." ]
[ "node degree distribution" ]
background
{ "title": "Free zero-range processes on networks", "abstract": "A free zero-range process (FRZP) is a simple stochastic process describing the dynamics of a gas of particles hopping between neighboring nodes of a network. We discuss three different cases of increasing complexity: (a) FZRP on a rigid geometry where the network is fixed during the process, (b) FZRP on a random graph chosen from a given ensemble of networks, (c) FZRP on a dynamical network whose topology continuously changes during the process in a way which depends on the current distribution of particles. The case (a) provides a very simple realization of the phenomenon of condensation which manifests as the appearance of a condensate of particles on the node with maximal degree. A particularly interesting example is the condensation on scale-free networks. Here we will model it by introducing a single-site inhomogeneity to a k-regular network. This simplified situation can be easily treated analytically and, on the other hand, shows quantitatively the same behavior as in the case of scale-free networks. The case (b) is very interesting since the averaging over typical ensembles of graphs acts as a kind of homogenization of the system which makes all nodes identical from the point of view of the FZRP. In effect, the partition function of the steady state becomes invariant with respect to the permutations of the particle occupation numbers. This type of symmetric systems has been intensively studied in the literature. In particular, they undergo a phase transition to the condensed phase, which is caused by a mechanism of spontaneous symmetry breaking. In the case (c), the distribution of particles and the dynamics of network are coupled to each other. The strength of this coupling depends on the ratio of two time scales: for changes of the topology and of the FZRP. We will discuss a specific example of that type of interaction and show that it leads to an interesting phase diagram. The case (b) mentioned above can be viewed as a limiting case where the typical time scale of topology fluctuations is much larger than that of the FZRP." }
{ "title": "Principles of statistical mechanics of random networks", "abstract": "We develop a statistical mechanics approach for random networks with uncorrelated vertices. We construct equilibrium statistical ensembles of such networks and obtain their partition functions and main characteristics. We find simple dynamical construction procedures that produce equilibrium uncorrelated random graphs with an arbitrary degree distribution. In particular, we show that in equilibrium uncorrelated networks, fat-tailed degree distributions may exist only starting from some critical average number of connections of a vertex, in a phase with a condensate of edges." }
0910.5607
cs/0609113
Introduction
The notion of preclone was introduced by Ésik and Weil #REFR in a study of recognizable tree languages.
[]
[ "Preclones are heterogeneous algebras that resemble clones, but the superposition operation is slightly different from clone composition and membership of certain elements that are present in every clone is not stipulated. Precise definitions will be given in Section 2.", "Clones have been described as the closed classes of operation under the Galois connection between operations and relations induced by the preservation relation.", "This classical Galois theory is known as the Pol-Inv theory of clones and relations; see #OTHEREFR .", "Similar Galois theories have been developed for other function algebras; see #OTHEREFR .", "We refer the reader to #OTHEREFR for a brief survey on previous results in this line of research." ]
[ "preclone", "recognizable tree languages" ]
background
{ "title": "Characterization of preclones by matrix collections", "abstract": "Abstract. Preclones are described as the closed classes of the Galois connection induced by a preservation relation between operations and matrix collections. The Galois closed classes of matrix collections are also described by explicit closure conditions." }
{ "title": "Algebraic recognizability of regular tree languages", "abstract": "We propose a new algebraic framework to discuss and classify recognizable tree languages, and to characterize interesting classes of such languages. Our algebraic tool, called preclones, encompasses the classical notion of syntactic Σ-algebra or minimal tree automaton, but adds new expressivity to it. The main result in this paper is a variety theoremà la Eilenberg, but we also discuss important examples of logically defined classes of recognizable tree languages, whose characterization and decidability was established in recent papers (by Benedikt and Ségoufin, and by Bojańczyk and Walukiewicz) and can be naturally formulated in terms of pseudovarieties of preclones. Finally, this paper constitutes the foundation for another paper by the same authors, where first-order definable tree languages receive an algebraic characterization." }
1206.3634
cs/0406033
Related Work
One of the well-studied online algorithms for assigning resources to users is the k-server problem #REFR , where the servers handle a request once for each client.
[ "The Balls-into-Bins model #OTHEREFR is used for studying load balancing in a similar resource allocation configuration, where the objective is to place m balls into n bins while guaranteeing bounds on the maximum, minimum or average load across all the bins.", "The main advantage of the model defined in section 2 is that it also takes into account the capacities of the bins (which are analogous to the consumers in the problem defined in section 2).", "Further, the model described in section 2 compares the performance of a randomized algorithm with the optimal offline algorithm." ]
[ "Dynamic assignment #OTHEREFR has a similar configuration involving bipartite graphs." ]
[ "online algorithm" ]
background
{ "title": "Balls into Bins: strict Capacities and Edge Weights", "abstract": "Abstract. We explore a novel theoretical model for studying the performance of distributed storage management systems where the data-centers have limited capacities (as compared to storage space requested by the users). Prior schemes such as Balls-into-bins (used for load balancing) neither consider bin (consumer) capacities (multiple balls into a bin) nor the future performance of the system after, balls (producer requests) are allocated to bins and restrict number of balls as a function of the number of bins. Our problem consists of finding an optimal assignment of the online producer requests to consumers (via weighted edges) in a complete bipartite graph while ensuring that the total size of request assigned on a consumer is limited by its capacity. The metric used to measure the performance in this model is the (minimization of) weighted sum of the requests assigned on the edges (loads) and their corresponding weights. We first explore the optimal offline algorithms followed by the analysis of different online techniques (by comparing their performance against the optimal offline solution). LP and Primal-Dual algorithms are used for calculating the optimal offline solution in O(r · n) time (where r and n are the number of requests and consumers respectively) while randomized algorithms are used for the online case. We propose randomized online algorithms in which the consumers are selected based on edge probabilities (that can change with consumer failures; due to capacity exhaustion) and evaluate the performance of these randomized schemes using probabilistic analysis. The performance of the online algorithms is measured using competitive analysis assuming an oblivious adversary who knows the randomized algorithm but not the results produced. For the simplified model with equal consumer capacities an average-case competitive ratio (which compares the average cost of the output produced by the online algorithm and the minimum cost of the optimal offline solution) of (where d is the edge weight / distance) is achieved using an algorithm that has equal probability for selecting any of the available edges with a running time of O(r). In the extending the model to arbitrary consumer capacities we show an average case competitive ratio of . This theoretical model gives insights to a (storage) cloud system designer about, how the different attributes (producer requests, edge weights and consumer capacities) effect the overall (read / write) performance of a distributed storage management system over a period of time." }
{ "title": "Randomized k-server algorithms for growth-rate bounded graphs", "abstract": "The k-server problem is a fundamental online problem where k mobile servers should be scheduled to answer a sequence of requests for points in a metric space as to minimize the total movement cost. While the deterministic competitive ratio is at least k, randomized k-server algorithms have the potential of reaching o(k) competitive ratios. This goal may be approached by using probabilistic metric approximation techniques. This paper gives the first results in this direction obtaining o(k) competitive ratio for a natural class of metric spaces, including d-dimensional grids, and wide range of k. Prior to this work no result of this type was known beyond results for specific metric spaces. The k-server problem, defined by Manasse, McGeoch, and Sleator [18], consists of an n-point metric space, and k mobile servers (k < n) residing in points of this metric space. A sequence of requests is presented, each request is associated with a point in the metric space and must be served by moving one of the k servers to the request point. The cost of an algorithm for serving a sequence of requests is the total distance traveled by the servers. The problem is formulated as an online problem, in which the servers algorithm must make the movement decision without the knowledge of future requests. Denote by cost A (σ) the cost of an algorithm A for serving the request σ (in case A is randomized algorithm, this is a random variable), and by cost opt (σ) the minimal cost for serving σ. As customary for online algorithms, we measure their performance using the competitive ratio. A randomized online algorithm A is called r-competitive if exists some C such that for any task sequence σ, [cost A (σ)] ≤ r cost opt (σ) + C. The randomized (resp. deterministic) competitive ratio is the infimum over r for which there exists a randomized (resp. deterministic) r-competitive algorithm. The k-server problem attracted much research," }
1012.3947
cs/0207064
Literature and Related Work
A notable exception is an article #REFR by Amir establishing some interpolation properties for circumscription and default logic.
[ "The interpolation theorem for classical logic is due to Craig #OTHEREFR ; it was extended to intutionistic logic by Schütte #OTHEREFR . Maksimova #OTHEREFR characterised the super-intuitionistic propositional logics possessing interpolation.", "A modern, comprehensive treatment of interpolation in modal and intuitionistic logics can be found in the monograph #OTHEREFR by Gabbay and Maksimova.", "In non-monotonic logics, interpolation has received little attention." ]
[ "By the well-known relation between the answer sets of disjunctive programs and the extensions of corresponding default theories, he also derives a form of interpolation for ASP.", "With regard to answer set semantics, the approach of #OTHEREFR is quite different from ours.", "Since it is founded on an analysis of default logic, it uses classical logic as an underlying base.", "So Amir's version of interpolation is a form of (4) where L is classical logic; there is no requirement that ⊢ L form a well-behaved sublogic of | ∼, eg a deductive base.", "As Amir remarks, one cannot deduce in general from property (5) that α | ∼ β." ]
[ "default logic" ]
background
{ "title": "Interpolation in Equilibrium Logic and Answer Set Programming: the Propositional Case", "abstract": "Interpolation is an important property of classical and many non classical logics that has been shown to have interesting applications in computer science and AI. Here we study the Interpolation Property for the propositional version of the non-monotonic system of equilibrium logic, establishing weaker or stronger forms of interpolation depending on the precise interpretation of the inference relation. These results also yield a form of interpolation for ground logic programs under the answer sets semantics. For disjunctive logic programs we also study the property of uniform interpolation that is closely related to the concept of variable forgetting." }
{ "title": "Interpolation Theorems for Nonmonotonic Reasoning Systems", "abstract": "Craig's interpolation theorem [Craig, 1957] is an important theorem known for propositional logic and first-order logic. It says that if a logical formula β logically follows from a formula α, then there is a formula γ, including only symbols that appear in both α, β, such that β logically follows from γ and γ logically follows from α. Such theorems are important and useful for understanding those logics in which they hold as well as for speeding up reasoning with theories in those logics. In this paper we present interpolation theorems in this spirit for three nonmonotonic systems: circumscription, default logic and logic programs with the stable models semantics (a.k.a. answer set semantics). These results give us better understanding of those logics, especially in contrast to their nonmonotonic characteristics. They suggest that some monotonicity principle holds despite the failure of classic monotonicity for these logics. Also, they sometimes allow us to use methods for the decomposition of reasoning for these systems, possibly increasing their applicability and tractability. Finally, they allow us to build structured representations that use those logics." }
1012.3947
cs/0207064
Literature and Related Work
Another difference with respect to our approach is that #REFR does not discuss the nature of the |∼ relation for ASP in detail, in particular how to understand Π |∼ ϕ in case ϕ contains atoms not present in the program Π.
[ "Since it is founded on an analysis of default logic, it uses classical logic as an underlying base.", "So Amir's version of interpolation is a form of (4) where L is classical logic; there is no requirement that ⊢_L form a well-behaved sublogic of |∼, e.g. a deductive base.", "As Amir remarks, one cannot deduce in general from property (5) that α |∼ β.", "However if L is classical logic one cannot even deduce α |∼ β from (4).", "More generally, there is no counterpart to our Proposition 1 in this case." ]
[ "In fact, if we interpret |∼_AS as in section 5 above, it is easy to refute (|∼, ⊢_L)-interpolation where L is classical logic.", "Let Π be the program B ← ¬A and q the query B ∧ ¬C.", "Then clearly Π |∼_AS q, but there is no formula in the vocabulary B that would classically entail ¬C.", "Under any interpretation of answer set inference such that atoms not in the program are regarded as false, (|∼, ⊢_L)-interpolation would be refuted." ]
[ "|∼ relation" ]
background
background
{ "title": "Interpolation in Equilibrium Logic and Answer Set Programming: the Propositional Case", "abstract": "Interpolation is an important property of classical and many non classical logics that has been shown to have interesting applications in computer science and AI. Here we study the Interpolation Property for the propositional version of the non-monotonic system of equilibrium logic, establishing weaker or stronger forms of interpolation depending on the precise interpretation of the inference relation. These results also yield a form of interpolation for ground logic programs under the answer sets semantics. For disjunctive logic programs we also study the property of uniform interpolation that is closely related to the concept of variable forgetting." }
{ "title": "Interpolation Theorems for Nonmonotonic Reasoning Systems", "abstract": "Craig's interpolation theorem [Craig, 1957] is an important theorem known for propositional logic and first-order logic. It says that if a logical formula β logically follows from a formula α, then there is a formula γ, including only symbols that appear in both α, β, such that β logically follows from γ and γ logically follows from α. Such theorems are important and useful for understanding those logics in which they hold as well as for speeding up reasoning with theories in those logics. In this paper we present interpolation theorems in this spirit for three nonmonotonic systems: circumscription, default logic and logic programs with the stable models semantics (a.k.a. answer set semantics). These results give us better understanding of those logics, especially in contrast to their nonmonotonic characteristics. They suggest that some monotonicity principle holds despite the failure of classic monotonicity for these logics. Also, they sometimes allow us to use methods for the decomposition of reasoning for these systems, possibly increasing their applicability and tractability. Finally, they allow us to build structured representations that use those logics." }
1910.05948
1711.00700
Lemma 4
Theorem 3 In the closed-loop system including the plant (1)- #REFR , the observer (29)-(36) and the controller (203), there exist positive constants Γ_c and λ_c such that the control signal is bounded and exponentially convergent to zero. Proof.
[ "Recalling the obtained exponential convergence of α(x,t), β (x,t), α(·,t) , β (·,t) , |Ŷ (t)|, |Ẑ(t)|, we thus obtain the exponential convergence to zero of |ẑ(x,t)| + |ŵ(x,t)| + |X(t)|.", "Recalling the exponential convergence to zero of |Ŷ (t)| and |v(·,t)|, we obtain Lemma 4.", "Theorem 2 For any initial data (z(x, 0), w(x, 0), v(x, 0)) ∈ L 2 (0, 1) × L 2 (0, 1) × L 2 (1, 2), exponential stability of the closed-loop system including the plant (1)- #OTHEREFR , the observer (29)-(36) and the controller (203) holds in the sense of the norm", "Proof.", "Applying #OTHEREFR and Cauchy-Schwarz inequality, recalling Theorem 1 and Lemma 4, we straightforwardly obtain Theorem 2." ]
[ "According to the control design in Section 4.3, (203) is a (stable) proper transfer function because F 0 is a constant matrix.", "Applying (203) and the exponential convergence ofẐ proved in Lemma 4, Theorem 3 is thus obtained." ]
[ "control signal", "closed-loop system" ]
background
{ "title": "Delay-Compensated Control of Sandwiched ODE-PDE-ODE Hyperbolic Systems for Oil Drilling and Disaster Relief", "abstract": "Motivated by engineering applications of subsea installation by deepwater construction vessels in oil drilling, and of aid delivery by unmanned aerial vehicles in disaster relief, we develop output-feedback boundary control of heterodirectional coupled hyperbolic PDEs sandwiched between two general ODEs, where the measurement is the output state of one ODE and suffers a time delay. After rewriting the time-delay dynamics as a transport PDE of which the left boundary connects with the sandwiched system, a state observer is built to estimate the states of the overall system of ODE-heterodirectional coupled hyperbolic PDEs-ODE-transport PDE using the right boundary state of the last transport PDE. An observer-based output-feedback controller acting at the first ODE is designed to stabilize the overall system using backstepping transformations and frequency-domain designs. The exponential stability results of the closed-loop system, boundedness and exponential convergence of the control input are proved. The obtained theoretical result is applied to control of a deepwater oil drilling construction vessel as a simulation case, where the simulation results show the proposed control design reduces cable oscillations and places the oil drilling equipment to be installed in the target area on the sea floor. Performance deterioration under extreme and unmodelled disturbances is also illustrated." }
{ "title": "Output feedback control of general linear heterodirectional hyperbolic PDE-ODE systems with spatially-varying coefficients", "abstract": "This paper presents a backstepping solution for the output feedback control of general linear heterodirectional hyperbolic PDE-ODE systems with spatially varying coefficients. Thereby, the ODE is coupled to the PDE in-domain and at the uncontrolled boundary, whereas the ODE is coupled with the latter boundary. For the state feedback design, a two-step backstepping approach is developed, which yields the conventional kernel equations and additional decoupling equations of simple form. In order to implement the state feedback controller, the design of observers for the PDE-ODE systems in question is considered, whereby anti-collocated measurements are assumed. Exponential stability with a prescribed convergence rate is verified for the closed-system pointwise in space. The resulting compensator design is illustrated for a 4 × 4 heterodirectional hyperbolic system coupled with a third-order ODE modelling a dynamic boundary condition." }
2001.04555
1402.4914
Entropy
These results illustrate that exact Knuth and Yao sampling can be infeasible in practice, whereas rejection sampling requires less precision (though still higher than what is typically available on low-precision sampling devices #REFR ) but is wasteful in terms of bits per sample.
[ "The higher number of expected bits per sample leads to wasted computation and higher runtime in practice due to excessive calls to the random number generator (as illustrated in Table 3 ).", "Optimal approximate sampler.", "For precision levels ranging from k = 4 to 64, the selected value of l delivers the smallest approximation error across executions of Algorithm 3 on inputs Z kk , . . . , Z k 0 .", "At each precision, the number of bits per sample has an upper bound that is very close to the upper bound of the optimal rate, since the entropies of the closest-approximation distributions are very close to the entropy of the target distribution, even at low precision.", "Under the L 1 metric, the approximation error decreases exponentially quickly with the increase in precision (Theorem 4.17)." ]
[ "The optimal approximate samplers are practical to implement and use significantly less precision or bits per sample than exact samplers, at the expense of a small approximation error that can be controlled based on the accuracy and entropy constraints of the application at hand." ]
[ "low precision sampling", "less precision" ]
result
{ "title": "Optimal Approximate Sampling from Discrete Probability Distributions", "abstract": "This paper addresses a fundamental problem in random variate generation: given access to a random source that emits a stream of independent fair bits, what is the most accurate and entropy-efficient algorithm for sampling from a discrete probability distribution (p 1 , . . . , p n ), where the probabilities of the output distribution (p 1 , . . . ,p n ) of the sampling algorithm must be specified using at most k bits of precision? We present a theoretical framework for formulating this problem and provide new techniques for finding sampling algorithms that are optimal both statistically (in the sense of sampling accuracy) and information-theoretically (in the sense of entropy consumption). We leverage these results to build a system that, for a broad family of measures of statistical accuracy, delivers a sampling algorithm whose expected entropy usage is minimal among those that induce the same distribution (i.e., is \"entropy-optimal\") and whose output distribution (p 1 , . . . ,p n ) is a closest approximation to the target distribution (p 1 , . . . , p n ) among all entropy-optimal sampling algorithms that operate within the specified k-bit precision. This optimal approximate sampler is also a closer approximation than any (possibly entropy-suboptimal) sampler that consumes a bounded amount of entropy with the specified precision, a class which includes floating-point implementations of inversion sampling and related methods found in many software libraries. We evaluate the accuracy, entropy consumption, precision requirements, and wall-clock runtime of our optimal approximate sampling algorithms on a broad set of distributions, demonstrating the ways that they are superior to existing approximate samplers and establishing that they often consume significantly fewer resources than are needed by exact samplers." }
{ "title": "Building fast Bayesian computing machines out of intentionally stochastic, digital parts", "abstract": "The brain interprets ambiguous sensory information faster and more reliably than modern computers, using neurons that are slower and less reliable than logic gates. But Bayesian inference, which underpins many computational models of perception and cognition, appears computationally challenging even given modern transistor speeds and energy budgets. The computational principles and structures needed to narrow this gap are unknown. Here we show how to build fast Bayesian computing machines using intentionally stochastic, digital parts, narrowing this efficiency gap by multiple orders of magnitude. We find that by connecting stochastic digital components according to simple mathematical rules, one can build massively parallel, low precision circuits that solve Bayesian inference problems and are compatible with the Poisson firing statistics of cortical neurons. We evaluate circuits for depth and motion perception, perceptual learning and causal reasoning, each performing inference over 10,000+ latent variables in real time -a 1,000x speed advantage over commodity microprocessors. These results suggest a new role for randomness in the engineering and reverse-engineering of intelligent computation. Our ability to see, think and act all depend on our mind's ability to process uncertain information and identify probable explanations for inherently ambiguous data. Many computational models of the perception of motion 1 , motor learning 2 , higher-level cognition 3, 4 and cognitive development 5 are based on Bayesian inference in rich, flexible probabilistic models of the world. Machine intelligence systems, including Watson 6 , autonomous vehicles 7 and other robots 8 and the Kinect 9 system for gestural control of video games, also all depend on probabilistic inference to resolve ambiguities in their sensory input. But brains solve these problems with greater speed than modern computers, using information processing units that are orders of magnitude slower and less reliable than the switching elements in the earliest electronic computers. The original UNIVAC I ran at 2.25 MHz 10 , and RAM from twenty years ago had one bit error per 256 MB per month 11 . In contrast, the fastest neurons in human brains operate at less than 1 kHz, and synaptic transmission can completely fail up to 50% of the time 12 . This efficiency gap presents a fundamental challenge for computer science. How is it possible to solve problems of probabilistic inference with an efficiency that begins to approach that of the brain? Here we introduce intentionally stochastic but still digital circuit elements, along with composition laws and design rules, that together narrow the efficiency gap by multiple orders of magnitude. Our approach both builds on and departs from the principles behind digital logic. Like traditional digital gates, stochastic digital gates consume and produce discrete symbols, which can be represented via binary numbers. Also like digital logic gates, our circuit elements can be composed 2 and abstracted via simple mathematical rules, yielding larger computational units that whose behavior can be analyzed in terms of their constituents. We describe primitives and design rules for both stateless and synchronously clocked circuits. 
But unlike digital gates and circuits, our gates and circuits are intentionally stochastic: each output is a sample from a probability distribution conditioned on the inputs, and (except in degenerate cases) simulating a circuit twice will produce different results. The numerical probability distributions themselves are implicit, though they can be estimated via the circuits' long-run time-averaged behavior. And also unlike digital gates and circuits, Bayesian reasoning arises naturally via the dynamics of our synchronously clocked circuits, simply by fixing the values of the circuit elements representing the data. We have built prototype circuits that solve problems of depth and motion perception and perceptual learning, plus a compiler that can automatically generate circuits for solving causal reasoning problems given a description of the underlying causal model. Each of these systems illustrates the use of stochastic digital circuits to accelerate Bayesian inference in an important class of probabilistic models, including Markov Random Fields, nonparametric Bayesian mixture models, and Bayesian networks. Our prototypes show that this combination of simple choices at the hardware level, a discrete, digital representation for information coupled with intentionally stochastic rather than ideally deterministic elements, has far-reaching architectural consequences. For example, software implementations of approximate Bayesian reasoning typically rely on high-precision arithmetic and serial computation. We show that our synchronous stochastic circuits can be implemented at very low bit precision, incurring only a negligible decrease in accuracy. This low precision enables us to make fast, small, power-efficient circuits at the core of our designs. We also show that these reductions in computing unit size are sufficient to let us exploit the massive parallelism that has always been inherent in complex probabilistic models, at a granularity that has been previously impossible to exploit. The resulting high computation density drives the performance gains we see from stochastic digital circuits, narrowing the efficiency gap with neural computation by multiple orders of magnitude. Our approach is fundamentally different from existing approaches for reliable computation with unreliable components [13, 14, 15], which view randomness as either a source of error whose impact needs to be mitigated or as a mechanism for approximating arithmetic calculations. Our combinational circuits are intentionally stochastic, and we depend on them to produce exact samples from the probability distributions they represent. Our approach is also different from and complementary to hardware for belief propagation; the perceptual learning system here is based on inference in a nonparametric Bayesian model to which belief propagation does not apply. Additionally, because stochastic digital circuits produce samples rather than probabilities, their results capture the complex dependencies between variables in multi-modal probability distributions, and can also be used to solve otherwise intractable problems in decision theory by estimating expected utilities. Much of the power of such gates comes in part from the composition laws that they support, shown in Figure 1B. The output from one gate can be connected to the input of another, yielding a circuit that samples from the composition of the Boolean functions represented by each gate. The compound circuit can also be treated as a new primitive, abstracting away its internal structure.
These simple laws have proved surprisingly powerful: they enable complex circuits to be built up out of reusable pieces. Stochastic digital gates (see Figure 1C) are similar to Boolean gates, but consume a source of random bits to generate samples from conditional probability distributions. Stochastic gates are specified by conditional probability tables; these give the probability that a given output will result from a given input. Digital logic corresponds to the degenerate case where all the probabilities are 0 or 1; see Figure 1D for the conditional probability table for an AND gate. Many stochastic gates with m input bits and n output bits are possible. Figure 1E shows one central example, the THETA gate, which generates draws from a biased coin whose bias is specified on the input. Supplementary material outlining serial and parallel implementations is available at [26]. Crucially, stochastic gates support generalizations of the composition laws from digital logic, shown in Figure 1F. The output of one stochastic gate can be fed as the input to another, yielding samples from the joint probability distribution over the random variables simulated by each gate. The compound circuit can also be treated as a new primitive that generates samples from the marginal distribution of the final output given the first input. As with digital gates, an enormous variety of circuits can be constructed using these simple rules. Most digital systems are based on deterministic finite state machines; the template for these machines is shown in Figure 2A. A stateless digital circuit encodes the transition function that calculates the next state from the previous state, and the clocking machinery (not shown) iterates the transition function repeatedly. This abstraction has proved enormously fruitful; the first microprocessors had roughly 2^20 distinct states. In Figure 2B, we show the stochastic analogue of this synchronous state machine: a stochastic transition circuit. We can scale up to challenging problems by exploiting the composition laws that stochastic transition circuits support. Consider a probability distribution defined over three variables P(A, B, C) = P(A)P(B|A)P(C|A). We can construct a transition circuit that samples from the overall state (A, B, C) by composing transition circuits for updating A|BC, B|A and C|A; this assembly is shown in Figure 2C. As long as the underlying probability model does not have any zero-probability states, ergodic convergence of each constituent transition circuit then implies ergodic convergence of the whole assembly [29]. The only requirement for scheduling transitions is that each circuit must be left fixed while circuits for variables that interact with it are transitioning. This scheduling requirement, that a transition circuit's value be held fixed while others that read from its internal state or serve as inputs to its next transition are updating, is analogous to the so-called \"dynamic discipline\" that defines valid clock schedules for traditional sequential logic [30]. Deterministic and stochastic schedules, implementing cycle or mixture hybrid kernels [29], are both possible. This simple rule also implies a tremendous amount of exploitable parallelism in stochastic transition circuits: if two variables are independently caused given the current setting of all others, they can be updated at the same time. 
Assemblies of stochastic transition circuits implement Bayesian reasoning in a straightforward way: by fixing, or \"clamping\", some of the variables in the assembly. If no variables are fixed, the circuit explores the full joint distribution, as shown in Figure 2E and 2F. If a variable is fixed, the circuit explores the conditional distribution on the remaining variables, as shown in Figure 2G and 2H. Simply by changing which transition circuits are updated, the circuit can be used to answer different probabilistic queries; these can be varied online based on the needs of the application. (Figure 2 about here.) The accuracy of ultra-low-precision stochastic transition circuits. The key primitive is a circuit that generates draws from a discrete-output probability distribution whose weights are specified on its input. For example, in Gibbs sampling, this distribution is the conditional probability of one variable given the current value of all other variables that directly depend on it. One implementation of this operation is shown in Figure 3A; each stochastic transition circuit from Figure 2 could be implemented by one such circuit, with multiplexers to select log-probability values based on the neighbors of each random variable. Because only the ratios of the raw probabilities matter, and the probabilities themselves naturally vary on a log scale, extremely low precision representations can still provide accurate results. High entropy (i.e. nearly uniform) distributions are resilient to truncation because their values are nearly equal to begin with, differing only slightly in terms of their low-order bits. Low entropy (i.e. nearly deterministic) distributions are resilient because truncation is unlikely to change which outcomes have nonzero probability. Figure 3B quantifies this low-precision property, showing the relative entropy (a canonical information theoretic measure of the difference between two distributions) between the output distributions of low precision implementations of the circuit from Figure 3A and an accurate floating-point implementation. Discrete distributions on 1000 outcomes were used, spanning the full range of possible entropies, from almost 10 bits (for a uniform distribution on 1000 outcomes) to 0 bits (for a deterministic distribution), with error nearly undetectable until fewer than 8 bits are used. Figure 3C shows example distributions on 10 outcomes, and Figure 3D shows the resulting impact on computing element size. Extensive quantitative assessments of the impact of low bit precision have also been performed, providing additional evidence that only very low precision is required [26]. (Figure 3 about here.) Efficiency gains on depth and motion perception and perceptual learning problems. Our main results are based on an implementation where each stochastic gate is simulated using digital logic, consuming entropy from an internal pseudorandom number generator [31]. This allows us to measure the performance and fault-tolerance improvements that flow from stochastic architectures, independent of physical implementation. We find that stochastic circuits make it practical to perform stochastic inference over several probabilistic models with 10,000+ latent variables in real time and at low power on a single chip. These designs achieve a 1,000x speed advantage over commodity microprocessors, despite using gates that are 10x slower. 
In [26], we also show architectures that exhibit minimal degradation of accuracy in the presence of fault rates as high as one bit error for every 100 state transitions, in contrast to conventional architectures where failure rates are measured in bit errors (failures) per billion hours of operation [32]. Our first application is to depth and motion perception, via Bayesian inference in lattice Markov Random Field models [28]. The core problem is matching pixels from two images of the same scene, taken at distinct but nearby points in space or in time. The matching is ambiguous on the basis of the images alone, as multiple pixels might share the same value [33]; prior knowledge about the structure of the scene must be applied, which is often cast in terms of Bayesian inference [34]. Figure 4A illustrates the template probabilistic model most commonly used. The X variables contain the unknown displacement vectors. Each Y variable contains a vector of pixel similarity measurements, one per possible pair of matched pixels based on X. The pairwise potentials between the X variables encode scene structure assumptions; in typical problems, unknown values are assumed to vary smoothly across the scene, with a small number of discontinuities at the boundaries of objects. Figure 4B shows the conditional independence structure in this problem: every other X variable is independent from one another, allowing the entire Markov chain over the X variables to be updated in a two-phase clock, independent of lattice size. Figure 4C shows the dataflow for the software-reprogrammable probabilistic video processor we developed to solve this family of problems; this processor takes a problem specification based on pairwise potentials and Y values, and produces a stream of posterior samples. When comparing the hardware to hand-optimized C versions on a commodity workstation, we see a 500x performance improvement. (Figure 4 about here.) We have also built stochastic architectures for solving perceptual learning problems, based on fully Bayesian inference in Dirichlet process mixture models [35, 36]. Dirichlet process mixtures allow the number of clusters in a perceptual dataset to be automatically discovered during inference, without assuming an a priori limit on the models' complexity, and form the basis of many models of human categorization [37, 38]. We tested our prototype on the problem of discovering and classifying handwritten digits from binary input images. Our circuit for solving this problem operates on an online data stream, and efficiently tracks the number of perceptual clusters in this input; see [26] for architectural and implementation details and additional characterizations of performance. As with our depth and motion perception architecture, we observe over ~2,000x speedups as compared to a highly optimized software implementation. Of the ~2,000x difference in speed, roughly ~256x is directly due to parallelism; all of the pixels are independent dimensions, and can therefore be updated simultaneously. (Figure 5 about here.) Automatically generated causal reasoning circuits and spiking implementations. Digital logic gates and their associated design rules are so simple that circuits for many problems can be generated automatically. Digital logic also provides a common target for device engineers, and has been implemented using many different physical mechanisms, classically with vacuum tubes, then with MOSFETs in silicon, and even on spintronic devices [39]. 
Here we provide two illustrations of the analogous simplicity and generality of stochastic digital circuits, both relevant for the reverse-engineering of intelligent computation in the brain. We have built a compiler that can automatically generate circuits for solving arbitrary causal reasoning problems. This spiking implementation helps to narrow the gap with recent theories in computational neuroscience. For example, there have been recent proposals that neural spikes correspond to samples [41], and that some spontaneous spiking activity corresponds to sampling from the brain's unclamped prior distribution [42]. Combining these local elements using our composition and abstraction laws into massively parallel, low-precision, intentionally stochastic circuits may help to bridge the gap between probabilistic theories of neural computation and the computational demands of complex probabilistic models and approximate inference [43]. (Figure 6 about here.) To further narrow the efficiency gap with the brain, and scale to more challenging Bayesian inference problems, we need to improve the convergence rate of our architectures. One approach would be to initialize the state in a transition circuit via a separate, feed-forward, combinational circuit that approximates the equilibrium distribution of the Markov chain. Machine perception software that uses machine learning to construct fast, compact initializers is already in use [9]. Analyzing the number of transitions needed to close the gap between a good initialization and the target distribution may be harder [44]. However, some feedforward Monte Carlo inference strategies for Bayesian networks provably yield precise estimates of probabilities in polynomial time if the underlying probability model is sufficiently stochastic [45]; it remains to be seen if similar conditions apply to stateful stochastic transition circuits. It may also be fruitful to search for novel electronic devices, or previously unusable dynamical regimes of existing devices, that are as well matched to the needs of intentionally stochastic circuits as transistors are to logical inverters, potentially even via a spiking implementation. Physical phenomena that proved too unreliable for implementing Boolean logic gates may be viable building blocks for machines that perform Bayesian inference. Computer engineering has thus far focused on deterministic mechanisms of remarkable scale and complexity: billions of parts that are expected to make trillions of state transitions with perfect repeatability [46]. But we are now engineering computing systems to exhibit more intelligence than they once did, and to identify probable explanations for noisy, ambiguous data, drawn from large spaces of possibilities, rather than calculate the definite consequences of perfectly known assumptions with high precision. The apparent intractability of probabilistic inference has complicated these efforts, and challenged the viability of Bayesian reasoning as a foundation for engineering intelligent computation and for reverse-engineering the mind and brain. At the same time, maintaining the illusion of rock-solid determinism has become increasingly costly. Engineers now attempt to build digital logic circuits in the deep sub-micron regime [47] and even inside cells [48]; in both these settings, the underlying physics has stochasticity that is difficult to suppress. 
Energy budgets have grown increasingly restricted, from the scale of the datacenter [49] to the mobile device [50], yet we spend substantial energy to operate transistors in deterministic regimes. And efforts to understand the dynamics of biological computation, from biological neural networks to gene expression networks [51], have all encountered stochastic behavior that is hard to explain in deterministic, digital terms. Our intentionally stochastic digital circuit elements and stochastic computing architectures suggest a new direction for reconciling these trends, and enable the design of a new class of fast, Bayesian digital computing machines." }
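The low-precision claim quoted in the abstract above (relative entropy nearly undetectable until fewer than ~8 bits) is easy to probe numerically. The sketch below quantizes shifted log2-probabilities to a fixed-point grid and measures the relative entropy against full precision; the 16-unit dynamic range and the grid construction are illustrative assumptions, not the paper's circuit.

```python
import numpy as np

def quantize_logprobs(p, bits, dyn_range=16.0):
    # Keep the shifted log2-probabilities on a fixed-point grid with
    # 2**bits levels spanning dyn_range (both assumed choices), then
    # renormalize. Only probability *ratios* matter here.
    logp = np.log2(p)
    logp -= logp.max()
    step = dyn_range / (2 ** bits)
    q = 2.0 ** (np.round(logp / step) * step)
    return q / q.sum()

def relative_entropy_bits(p, q):
    # KL divergence in bits between full- and low-precision versions.
    return float(np.sum(p * np.log2(p / q)))

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(1000))       # a random 1000-outcome distribution
for bits in (16, 8, 6, 4):
    print(bits, relative_entropy_bits(p, quantize_logprobs(p, bits)))
# The divergence stays tiny until the precision drops to a few bits.
```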
2001.00139
1811.04256
Future work
This is a complex problem to solve, as a system for such a scenario needs to deal with background clutter, variable illumination conditions, variable camera angles, distorted characters, and variable writing styles #REFR .
[ "This can help preserve cultural heritage of vulnerable communities and will also create positive impact on strengthening global synergy.", "2.", "Another research problem that needs attention of research community is to built systems that can recognize on screen characters and text in different conditions in daily life scenarios e.g.", "text in captions or news tickers, text on sign boards, text on billboards etc.", "This is the domain of \"recognition / classification / text in the wild\"." ]
[ "3.", "To build robust system for \"text in the wild\", researchers needs to come up with challenging datasets that is comprehensive enough to incorporate all possible variations in characters. One such effort is #OTHEREFR .", "In another attempt, research community has launched \"ICDAR 2019: Robustreading challenge on multi-lingual scene text detection and recognition\" [189] .", "Aim of this challenge is invite research studies that proposes robust system for multi-lingual text recognition in daily life or \"in the wild\" scenario.", "Recently report for this challenge has been published and winner methods for different tasks in the challenge are all based on different deep learning architectures e.g. CNN, RNN or LSTM." ]
[ "characters" ]
background
{ "title": "Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR)", "abstract": "Given the ubiquity of handwritten documents in human transactions, Optical Character Recognition (OCR) of documents have invaluable practical worth. Optical character recognition is a science that enables to translate various types of documents or images into analyzable, editable and searchable data. During last decade, researchers have used artificial intelligence / machine learning tools to automatically analyze handwritten and printed documents in order to convert them into electronic format. The objective of this review paper is to summarize research that has been conducted on character recognition of handwritten documents and to provide research directions. In this Systematic Literature Review (SLR) we collected, synthesized and analyzed research articles on the topic of handwritten OCR (and closely related topics) which were published between year 2000 to 2018. We followed widely used electronic databases by following pre-defined review protocol. Articles were searched using keywords, forward reference searching and backward reference searching in order to search all the articles related to the topic. After carefully following study selection process 142 articles were selected for this SLR. This review article serves the purpose of presenting state of the art results and techniques on OCR and also provide research directions by highlighting research gaps. led to the commercial availability of the OCR machines. In 1965, advance reading machine \"IBM 1287\" was introduced at the \"world fair\" in New York [10] . This was the first ever optical reader, which was capable of reading handwritten numbers. During 1970s, researchers focused on the improvement of response time and performance of the OCR system. The next two decades from 1980 till 2000, software system of OCR was developed and deployed in educational institutes, census OCR [11] and for recognition of stamped characters on metallic bar [12] . In early 2000s, binarization techniques were introduced to preserve historical documents in digital form and provide researchers the access to these documents [13, 14, 15, 16] . Some of the challenges of binarization of historical documents was the use of nonstandard fonts, printing noise and spacing. In mid of 2000 multiple applications were introduced that were helpful for differently abled people. These applications helped these people in developing reading and writing skills. In the current decade, researchers have worked on different machine learning approaches which include Support Vector Machine (SVM), Random Forests (RF), k Nearest Neighbor (kNN), Decision Tree (DT) [17, 18, 19] etc. Researchers combined these machine learning techniques with image processing techniques to increase accuracy of optical character recognition system. Recently researchers has focused on developing techniques for the digitization of handwritten documents, primarily based on deep learning [20] approach. This paradigm shift has been sparked due to adaption of cluster computing and GPUs and better performance by deep learning architectures [21] , which includes Recurrent Neural Networks (RNN), Convolutional Neural Network (CNN), Long Short-Term Memory (LSTM) networks etc. 
This Systematic Literature Review (SLR) will not only serve the purpose of presenting literature in the domain of OCR for different languages, but will also highlight research directions for new researchers by pointing out weak areas of current OCR systems that need further investigation. This article is organized as follows. Section 2 discusses the review methodology employed in this article. The review methodology section includes the review protocol, inclusion and exclusion criteria, search strategy, selection process, quality assessment criteria and metadata synthesis of selected studies. Statistical data from the selected studies is presented in Section 3. Section 4 presents the research questions and their motivation. Section 5 will discuss different classification methods which are used for handwritten OCR. The section will also elaborate on structural and statistical models for optical character recognition. Section 6 will present different databases (for specific languages) which are available for research purposes. Section 7 will present an overview of language-specific research in OCR, while Section 8 will highlight research trends. Section 9 will summarize this review's findings and will also highlight gaps in research that need the attention of the research community. As mentioned above, this Systematic Literature Review (SLR) aims to identify and present literature on OCR by formulating research questions and selecting relevant research studies. Thus, in summary, the aims of this review were: 1. To summarize existing research work (machine learning techniques and databases) on different languages of handwritten character recognition systems. 2. To highlight research weaknesses in order to eliminate them through additional research. 3. To identify new research areas within the domain of OCR. We follow the strategies proposed by Kitchenham et al. [22]. Following the proposed strategy, the subsequent sub-sections discuss the review protocol, inclusion and exclusion criteria, search strategy process, selection process, and data extraction and synthesis processes. Following the philosophy, principles and measures of the Systematic Literature Review (SLR) [22] , this systematic study was initialized with the development of a comprehensive review protocol. This protocol" }
{ "title": "Scene Text Detection and Recognition: The Deep Learning Era", "abstract": "With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has been inevitable influenced by this wave of revolution, consequentially entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, methodology and performance. This survey is aimed at summarizing and analyzing the major changes and significant progresses of scene text detection and recognition in the deep learning era. Through this article, we devote to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead into future trends. Specifically, we will emphasize the dramatic differences brought by deep learning and the grand challenges still remained. We expect that this review paper would serve as a reference book for researchers in this field. Related resources are also collected and compiled in our Github repository: https://github.com/Jyouhou/SceneTextPapers." }
1907.00693
1811.04256
A. Scene text segmentation
In recent years, a large number of deep learning-based scene text detection approaches have been proposed, most of which have been summarized in the latest survey #REFR .
[]
[ "Generally, these approaches can be roughly divided into three groups: regional proposal-based #OTHEREFR , anchor-based #OTHEREFR and semantic segmentation-based #OTHEREFR , #OTHEREFR .", "The text extraction stage in our proposed method is most related to the semantic segmentationbased text detection, which aims to assign the pixel-wise text and non-text labels to an image." ]
[ "deep learning-based scene", "text detection approaches" ]
background
{ "title": "Scene Text Magnifier", "abstract": "Scene text magnifier aims to magnify text in natural scene images without recognition. It could help the special groups, who have myopia or dyslexia to better understand the scene. In this paper, we design the scene text magnifier through interacted four CNN-based networks: character erasing, character extraction, character magnify, and image synthesis. The architecture of the networks are extended based on the hourglass encoderdecoders. It inputs the original scene text image and outputs the text magnified image while keeps the background unchange. Intermediately, we can get the side-output results of text erasing and text extraction. The four sub-networks are first trained independently and fine-tuned in end-to-end mode. The training samples for each stage are processed through a flow with original image and text annotation in ICDAR2013 and Flickr dataset as input, and corresponding text erased image, magnified text annotation, and text magnified scene image as output. To evaluate the performance of text magnifier, the Structural Similarity is used to measure the regional changes in each character region. The experimental results demonstrate our method can magnify scene text effectively without effecting the background." }
{ "title": "Scene Text Detection and Recognition: The Deep Learning Era", "abstract": "With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has been inevitable influenced by this wave of revolution, consequentially entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, methodology and performance. This survey is aimed at summarizing and analyzing the major changes and significant progresses of scene text detection and recognition in the deep learning era. Through this article, we devote to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead into future trends. Specifically, we will emphasize the dramatic differences brought by deep learning and the grand challenges still remained. We expect that this review paper would serve as a reference book for researchers in this field. Related resources are also collected and compiled in our Github repository: https://github.com/Jyouhou/SceneTextPapers." }
1907.12122
1811.04256
Related Work
Details on other approaches which are not covered in this work are presented in #REFR .
[ "As mentioned above, current methods can be roughly divided to anchor-based approaches #OTHEREFR and segmentation-based ones #OTHEREFR , where some recent methods try to fuse the two types together #OTHEREFR 19, #OTHEREFR .", "Our proposed pipeline is based on recent segmentationbased text detection methods, which are discussed next." ]
[ "Segmentation-based text detection approaches have gained significant attention in recent years, starting from the seminal works of Yao et al. #OTHEREFR and Zhang et al. #OTHEREFR .", "These works solve the problem of text detection by reformulating it as a semantic segmentation scheme, which is then solved by a Fully Convolutional Network (FCN) #OTHEREFR .", "It was shown that these approaches are better suited for Figure 3 : The proposed pipeline.", "First, a downsized image is fed into our base network to get initial segmentation and scale masks.", "These masks are then used to create a canonical knapsack, containing only text regions in a uniform scale." ]
[ "work" ]
method
{ "title": "It's All About The Scale -- Efficient Text Detection Using Adaptive Scaling", "abstract": "\"Text can appear anywhere\". This property requires us to carefully process all the pixels in an image in order to accurately localize all text instances. In particular, for the more difficult task of localizing small text regions, many methods use an enlarged image or even several rescaled ones as their input. This significantly increases the processing time of the entire image and needlessly enlarges background regions. If we were to have a prior telling us the coarse location of text instances in the image and their approximate scale, we could have adaptively chosen which regions to process and how to rescale them, thus significantly reducing the processing time. To estimate this prior we propose a segmentation-based network with an additional \"scale predictor\", an output channel that predicts the scale of each text segment. The network is applied on a scaled down image to efficiently approximate the desired prior, without processing all the pixels of the original image. The approximated prior is then used to create a compact image containing only text regions, resized to a canonical scale, which is fed again to the segmentation network for fine-grained detection. We show that our approach offers a powerful alternative to fixed scaling schemes, achieving an equivalent accuracy to larger input scales while processing far fewer pixels. Qualitative and quantitative results are presented on the ICDAR15 and ICDAR17 MLT benchmarks to validate our approach." }
{ "title": "Scene Text Detection and Recognition: The Deep Learning Era", "abstract": "With the rise and development of deep learning, computer vision has been tremendously transformed and reshaped. As an important research area in computer vision, scene text detection and recognition has been inevitable influenced by this wave of revolution, consequentially entering the era of deep learning. In recent years, the community has witnessed substantial advancements in mindset, methodology and performance. This survey is aimed at summarizing and analyzing the major changes and significant progresses of scene text detection and recognition in the deep learning era. Through this article, we devote to: (1) introduce new insights and ideas; (2) highlight recent techniques and benchmarks; (3) look ahead into future trends. Specifically, we will emphasize the dramatic differences brought by deep learning and the grand challenges still remained. We expect that this review paper would serve as a reference book for researchers in this field. Related resources are also collected and compiled in our Github repository: https://github.com/Jyouhou/SceneTextPapers." }
1904.02276
1811.04852
Matrix zero-sum games
Classically, one query to X asks for one entry of the matrix, whereas quantumly we assume the oracle in #REFR . Inspired by Ref.
[ "implies an 18 -optimal strategy for X.", "Therefore, without loss of generality, we could assume that X is an n-dimensional anti-symmetric matrix (by taking n = n 1 + n 2 + 1).", "In this case, the game value max p∈∆n min q∈∆n p † Xq in (61) equals to 0, and due to symmetry finding an -optimal strategy reduces to find an w ∈ ∆ n such that", "where ≤ applies to each coordinate.", "As a normalization, we assume that max i,j∈[n] |X i,j | ≤ 1." ]
[ "[18, Theorem 1], we give the following result for solving the zero-sum game: Theorem 7.", "With success probability at least 2/3, Algorithm 6 returns a vectorw ∈ R n such that Xw ≤ · 1 n , usingÕ √ n 4 quantum gates.", "Algorithm 6: Sublinear quantum algorithm for solving zero-sum games.", "Input: > 0, a quantum oracle O X for X ∈ R n×n . Output:w ∈ ∆ n that satisfies (63).", "using Algorithm 2;" ]
[ "matrix" ]
background
{ "title": "Sublinear quantum algorithms for training linear and kernel-based classifiers", "abstract": "We investigate quantum algorithms for classification, a fundamental problem in machine learning, with provable guarantees. Given n d-dimensional data points, the state-of-the-art (and optimal) classical algorithm for training classifiers with constant margin [11] runs inÕ(n + d) 1 time. We design sublinear quantum algorithms for the same task running inÕ( √ n+ √ d) time, a quadratic improvement in both n and d. Moreover, our algorithms use the standard quantization of the classical input and generate the same classical output, suggesting minimal overheads when used as subroutines for end-to-end applications. We also demonstrate a tight lower bound (up to poly-log factors) and discuss the possibility of implementation on near-term quantum machines. As a side result, we also give sublinear quantum algorithms for approximating the equilibria of n-dimensional matrix zero-sum games with optimal complexityΘ( √ n)." }
{ "title": "Quantum-inspired sublinear classical algorithms for solving low-rank linear systems", "abstract": "We present classical sublinear-time algorithms for solving low-rank linear systems of equations. Our algorithms are inspired by the HHL quantum algorithm [9] for solving linear systems and the recent breakthrough by Tang [15] of dequantizing the quantum algorithm for recommendation systems. Let A ∈ C m×n be a rank-k matrix, and b ∈ C m be a vector. We present two algorithms: a \"sampling\" algorithm that provides a sample from A −1 b and a \"query\" algorithm that outputs an estimate of an entry of A −1 b, where A −1 denotes the Moore-Penrose pseudo-inverse. Both of our algorithms have query and time complexity O(poly(k, κ, A F , 1/ ) polylog(m, n)), where κ is the condition number of A and is the precision parameter. Note that the algorithms we consider are sublinear time, so they cannot write and read the whole matrix or vectors. In this paper, we assume that A and b come with well-known low-overhead data structures such that entries of A and b can be sampled according to some natural probability distributions. Alternatively, when A is positive semidefinite, our algorithms can be adapted so that the sampling assumption on b is not required." }
1912.07946
1811.05296
A. Methodology
All the details about the dataset and parameters used to train this model are provided in #REFR .
[ "The limit on 500 instructions as the maximum function length ensures that only˜6% of instructions sequences in the multiple compilers in the Ubuntu dataset were sliced.", "All the other instruction sequences were considered by the network in their entirety.", "Regarding the Seq2Seq training process, experiments used the following set of parameters: All the details about the dataset and parameters used to train this model are described in #OTHEREFR . The network was also tested with no pre-trained embeddings.", "In order to train the Seq2Seq network, pre-trained instruction embeddings have been used.", "Such embeddings are computed using the i2v model also employed in SAFE." ]
[ "The network was also tested with no pre-trained instruction embeddings.", "In this case embeddings for single instructions were randomly initialized and trained together with the network itself." ]
[ "dataset" ]
method
{ "title": "Function Naming in Stripped Binaries Using Neural Networks.", "abstract": "In this paper we investigate the problem of automatically naming pieces of assembly code. Where by naming we mean assigning to portion of code the string of words that would be likely assigned by an human reverse engineer. We formally and precisely define the framework in which our investigation takes place. That is we define problem, we provide reasonable justifications for the choice that we made during our designing of the training and test steps and we performed a statistical analysis of function names in a large real-world corpora of over 4 millions of functions. In such framework we test several baselines coming from the field of NLP (e.g., Seq2Seq networks and transformers). Moreover, we provide a set of tailored solutions that beat the aforementioned baselines." }
{ "title": "SAFE: Self-Attentive Function Embeddings for Binary Similarity", "abstract": "The binary similarity problem consists in determining if two functions are similar by only considering their compiled form. Advanced techniques for binary similarity recently gained momentum as they can be applied in several fields, such as copyright disputes, malware analysis, vulnerability detection, etc., and thus have an immediate practical impact. Current solutions compare functions by first transforming their binary code in multi-dimensional vector representations (embeddings), and then comparing vectors through simple and efficient geometric operations. However, embeddings are usually derived from binary code using manual feature extraction, that may fail in considering important function characteristics, or may consider features that are not important for the binary similarity problem. In this paper we propose SAFE, a novel architecture for the embedding of functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions (i.e., it does not incur in the computational overhead of building or manipulating control flow graphs), and is more general as it works on stripped binaries and on multiple architectures. We report the results from a quantitative and qualitative analysis that show how SAFE provides a noticeable performance improvement with respect to previous solutions. Furthermore, we show how clusters of our embedding vectors are closely related to the semantic of the implemented algorithms, paving the way for further interesting applications (e.g. semantic-based binary function search) 1 https://www.statista.com/statistics/266210/number-of-available-applications-in-the-google-play-store/ 2 https://www.cvedetails.com/browse-by-date.php 1 arXiv:1811.05296v2 [cs.CR]" }
1912.07946
1811.05296
B. Results
Regarding the performance difference between the two Seq2Seq networks, the 1%-2% advantage obtainable by using pre-trained instruction embeddings is consistent with the results reported in #REFR .
[ "Therefore, in order to prevent the network from slicing the assembly code sequences of too many functions, it is necessary to increase the maximum source sequence length parameter, leading to a very slow training.", "To better understand the quality of the obtained results, performance have been compared to those achievable by a random predictor.", "The random prediction has been created by using the same probability distribution of function name lengths as the one of the original function names in the test set and by randomly assigning tokens to each function.", "As can be noticed from the table, results achieved by the model are clearly better than those achieved by a random prediction.", "This result suggests that even if this model does not provide strong performance overall, there are functions for which it is able to understand behaviors and assign correct tokens." ]
[]
[ "pre-trained instruction embeddings" ]
result
{ "title": "Function Naming in Stripped Binaries Using Neural Networks.", "abstract": "In this paper we investigate the problem of automatically naming pieces of assembly code. Where by naming we mean assigning to portion of code the string of words that would be likely assigned by an human reverse engineer. We formally and precisely define the framework in which our investigation takes place. That is we define problem, we provide reasonable justifications for the choice that we made during our designing of the training and test steps and we performed a statistical analysis of function names in a large real-world corpora of over 4 millions of functions. In such framework we test several baselines coming from the field of NLP (e.g., Seq2Seq networks and transformers). Moreover, we provide a set of tailored solutions that beat the aforementioned baselines." }
{ "title": "SAFE: Self-Attentive Function Embeddings for Binary Similarity", "abstract": "The binary similarity problem consists in determining if two functions are similar by only considering their compiled form. Advanced techniques for binary similarity recently gained momentum as they can be applied in several fields, such as copyright disputes, malware analysis, vulnerability detection, etc., and thus have an immediate practical impact. Current solutions compare functions by first transforming their binary code in multi-dimensional vector representations (embeddings), and then comparing vectors through simple and efficient geometric operations. However, embeddings are usually derived from binary code using manual feature extraction, that may fail in considering important function characteristics, or may consider features that are not important for the binary similarity problem. In this paper we propose SAFE, a novel architecture for the embedding of functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions (i.e., it does not incur in the computational overhead of building or manipulating control flow graphs), and is more general as it works on stripped binaries and on multiple architectures. We report the results from a quantitative and qualitative analysis that show how SAFE provides a noticeable performance improvement with respect to previous solutions. Furthermore, we show how clusters of our embedding vectors are closely related to the semantic of the implemented algorithms, paving the way for further interesting applications (e.g. semantic-based binary function search) 1 https://www.statista.com/statistics/266210/number-of-available-applications-in-the-google-play-store/ 2 https://www.cvedetails.com/browse-by-date.php 1 arXiv:1811.05296v2 [cs.CR]" }
1912.07946
1811.05296
A. Methodology
Regarding the Seq2Seq training process, experiments used the following set of parameters: All the details about the dataset and parameters used to train this model are described in #REFR . The network was also tested with no pre-trained embeddings.
[ "Early stopping has been used to stop training at the point when performance on the validation set starts degrading or stops improving.", "The metric used for stopping the training was ROUGE-1 #OTHEREFR instead of the loss function of the validation set, since the former does not take into account the order of tokens, whereas the latter does.", "Decoding of test set has always been carried out using beamwidth 5.", "The limit on 500 instructions as the maximum function length ensures that only˜6% of instructions sequences in the multiple compilers in the Ubuntu dataset were sliced.", "All the other instruction sequences were considered by the network in their entirety." ]
[ "In order to train the Seq2Seq network, pre-trained instruction embeddings have been used.", "Such embeddings are computed using the i2v model also employed in SAFE.", "All the details about the dataset and parameters used to train this model are provided in #OTHEREFR .", "The network was also tested with no pre-trained instruction embeddings.", "In this case embeddings for single instructions were randomly initialized and trained together with the network itself." ]
[ "pre-trained embeddings" ]
method
{ "title": "In Nomine Function: Naming Functions in Stripped Binaries with Neural Networks", "abstract": "In this paper we investigate the problem of automatically naming pieces of assembly code. Where by naming we mean assigning to portion of code the string of words that would be likely assigned by an human reverse engineer. We formally and precisely define the framework in which our investigation takes place. That is we define problem, we provide reasonable justifications for the choice that we made during our designing of the training and test steps and we performed a statistical analysis of function names in a large real-world corpora of over 4 millions of functions. In such framework we test several baselines coming from the field of NLP (e.g., Seq2Seq networks and transformers). Moreover, we provide a set of tailored solutions that beat the aforementioned baselines." }
{ "title": "SAFE: Self-Attentive Function Embeddings for Binary Similarity", "abstract": "The binary similarity problem consists in determining if two functions are similar by only considering their compiled form. Advanced techniques for binary similarity recently gained momentum as they can be applied in several fields, such as copyright disputes, malware analysis, vulnerability detection, etc., and thus have an immediate practical impact. Current solutions compare functions by first transforming their binary code in multi-dimensional vector representations (embeddings), and then comparing vectors through simple and efficient geometric operations. However, embeddings are usually derived from binary code using manual feature extraction, that may fail in considering important function characteristics, or may consider features that are not important for the binary similarity problem. In this paper we propose SAFE, a novel architecture for the embedding of functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions (i.e., it does not incur in the computational overhead of building or manipulating control flow graphs), and is more general as it works on stripped binaries and on multiple architectures. We report the results from a quantitative and qualitative analysis that show how SAFE provides a noticeable performance improvement with respect to previous solutions. Furthermore, we show how clusters of our embedding vectors are closely related to the semantic of the implemented algorithms, paving the way for further interesting applications (e.g. semantic-based binary function search) 1 https://www.statista.com/statistics/266210/number-of-available-applications-in-the-google-play-store/ 2 https://www.cvedetails.com/browse-by-date.php 1 arXiv:1811.05296v2 [cs.CR]" }
1912.07946
1811.05296
B. Results
Regarding the performance difference between the two Seq2Seq networks, the 1%-2% advantage obtainable by using pre-trained instruction embeddings is consistent with the results reported in #REFR .
[ "Therefore, in order to prevent the network from slicing the assembly code sequences of too many functions, it is necessary to increase the maximum source sequence length parameter, leading to a very slow training.", "To better understand the quality of the obtained results, performance have been compared to those achievable by a random predictor.", "The random prediction has been created by using the same probability distribution of function name lengths as the one of the original function names in the test set and by randomly assigning tokens to each function.", "As can be noticed from the table, results achieved by the model are clearly better than those achieved by a random prediction.", "This result suggests that even if this model does not provide strong performance overall, there are functions for which it is able to understand behaviors and assign correct tokens." ]
[]
[ "pre-trained instruction embeddings" ]
result
{ "title": "In Nomine Function: Naming Functions in Stripped Binaries with Neural Networks", "abstract": "In this paper we investigate the problem of automatically naming pieces of assembly code. Where by naming we mean assigning to portion of code the string of words that would be likely assigned by an human reverse engineer. We formally and precisely define the framework in which our investigation takes place. That is we define problem, we provide reasonable justifications for the choice that we made during our designing of the training and test steps and we performed a statistical analysis of function names in a large real-world corpora of over 4 millions of functions. In such framework we test several baselines coming from the field of NLP (e.g., Seq2Seq networks and transformers). Moreover, we provide a set of tailored solutions that beat the aforementioned baselines." }
{ "title": "SAFE: Self-Attentive Function Embeddings for Binary Similarity", "abstract": "The binary similarity problem consists in determining if two functions are similar by only considering their compiled form. Advanced techniques for binary similarity recently gained momentum as they can be applied in several fields, such as copyright disputes, malware analysis, vulnerability detection, etc., and thus have an immediate practical impact. Current solutions compare functions by first transforming their binary code in multi-dimensional vector representations (embeddings), and then comparing vectors through simple and efficient geometric operations. However, embeddings are usually derived from binary code using manual feature extraction, that may fail in considering important function characteristics, or may consider features that are not important for the binary similarity problem. In this paper we propose SAFE, a novel architecture for the embedding of functions based on a self-attentive neural network. SAFE works directly on disassembled binary functions, does not require manual feature extraction, is computationally more efficient than existing solutions (i.e., it does not incur in the computational overhead of building or manipulating control flow graphs), and is more general as it works on stripped binaries and on multiple architectures. We report the results from a quantitative and qualitative analysis that show how SAFE provides a noticeable performance improvement with respect to previous solutions. Furthermore, we show how clusters of our embedding vectors are closely related to the semantic of the implemented algorithms, paving the way for further interesting applications (e.g. semantic-based binary function search) 1 https://www.statista.com/statistics/266210/number-of-available-applications-in-the-google-play-store/ 2 https://www.cvedetails.com/browse-by-date.php 1 arXiv:1811.05296v2 [cs.CR]" }
1904.03139
1807.01680
The coefficients c_k (suitably normalized)
For instance, #REFR describes how, given some graph G, one can carefully craft a Gibbs distribution for which the parameter q is a pointwise evaluation of the reliability polynomial of G.
[ "For many application to physics and computer science, the parameters q and c k are correlated with underlying problem parameters." ]
[ "Similarly, the parameters c k are correlated with counts of certain types of connected subgraphs of G.", "A number of other problems where the value q is useful are discussed in #OTHEREFR .", "One special case of the Gibbs distribution is worth further mention, as it appears in a number of important combinatorial applications: the situation where the coefficients c 0 , . . .", ", c k are known the be log-concave, that is, they satisfy the bound c 2 k ≥ c k−1 c k+1 for k = 1, . . . , n − 1.", "We refer to this as the log-concave setting, and a number of results will be specialized for this case." ]
[ "Gibbs distribution", "reliability polynomial" ]
background
{ "title": "PR ] 5 A pr 2 01 9 Parameter estimation for integer-valued Gibbs distributions", "abstract": "We consider the family of Gibbs distributions, which are probability distributions over a discrete space Ω given by µ" }
{ "title": "Tight bounds for popping algorithms", "abstract": "Abstract. We sharpen run-time analysis for algorithms under the partial rejection sampling framework. Our method yields improved bounds for • the cluster-popping algorithm for approximating all-terminal network reliability; • the cycle-popping algorithm for sampling rooted spanning trees; • the sink-popping algorithm for sampling sink-free orientations. In all three applications, our bounds are not only tight in order, but also optimal in constants." }
2001.06935
1807.07814
II. HIERARCHICAL HYPERSPARSE MATRICES
Hierarchical hypersparse matrices store increasing numbers of nonzero entries in each layer (adapted from #REFR ).
[ "Python, Julia, and Matlab/Octave bindings allow the performance benefits of the SuiteSparse GraphBLAS C library to be realized in these highly productive programming environments.", "Streaming updates to a large hypersparse matrix can be be accelerated with a hierarchical implementation optimized to the memory hierarchy (see Fig. 1 ).", "Rapid updates are performed on the smallest hypersparse matrices in the fastest memory.", "The strong mathematical properties of the GraphBLAS allow a hierarchical implementation of hypersparse matrices to be implemented via simple addition.", "All creation and organization of hypersparse row and column indices are handled naturally by the GraphBLAS mathematics. Fig. 1 ." ]
[ "If layer A i surpasses the nonzero threshold c i it is added to A i+1 and cleared.", "Hierarchical hypersparse matrices ensure that the majority of updates are performed in fast memory.", "c i , then A i is added to A i+1 and A i is cleared.", "The overall usage is as follows • Initialize N -level hierarchical hypersparse matrix with cuts c i • Update by adding data A to lowest layer", "and reset A 1 to an empty hypersparse matrix of appropriate dimensions." ]
[ "Hierarchical hypersparse matrices" ]
background
{ "title": "75,000,000,000 Streaming Inserts/Second Using Hierarchical Hypersparse GraphBLAS Matrices", "abstract": "The SuiteSparse GraphBLAS C-library implements high performance hypersparse matrices with bindings to a variety of languages (Python, Julia, and Matlab/Octave). GraphBLAS provides a lightweight in-memory database implementation of hypersparse matrices that are ideal for analyzing many types of network data, while providing rigorous mathematical guarantees, such as linearity. Streaming updates of hypersparse matrices put enormous pressure on the memory hierarchy. This work benchmarks an implementation of hierarchical hypersparse matrices that reduces memory pressure and dramatically increases the update rate into a hypersparse matrices. The parameters of hierarchical hypersparse matrices rely on controlling the number of entries in each level in the hierarchy before an update is cascaded. The parameters are easily tunable to achieve optimal performance for a variety of applications. Hierarchical hypersparse matrices achieve over 1,000,000 updates per second in a single instance. Scaling to 31,000 instances of hierarchical hypersparse matrices arrays on 1,100 server nodes on the MIT SuperCloud achieved a sustained update rate of 75,000,000,000 updates per second. This capability allows the MIT SuperCloud to analyze extremely large streaming network data sets." }
{ "title": "Interactive Supercomputing on 40,000 Cores for Machine Learning and Data Analysis", "abstract": "Abstract-Interactive massively parallel computations are critical for machine learning and data analysis. These computations are a staple of the MIT Lincoln Laboratory Supercomputing Center (LLSC) and has required the LLSC to develop unique interactive supercomputing capabilities. Scaling interactive machine learning frameworks, such as TensorFlow, and data analysis environments, such as MATLAB/Octave, to tens of thousands of cores presents many technical challenges -in particular, rapidly dispatching many tasks through a scheduler, such as Slurm, and starting many instances of applications with thousands of dependencies. Careful tuning of launches and prepositioning of applications overcome these challenges and allow the launching of thousands of tasks in seconds on a 40,000-core supercomputer. Specifically, this work demonstrates launching 32,000 TensorFlow processes in 4 seconds and launching 262,000 Octave processes in 40 seconds. These capabilities allow researchers to rapidly explore novel machine learning architecture and data analysis algorithms." }
1808.04345
1807.07814
I. INTRODUCTION
Using these Linux powered supercomputers, it is possible to rapidly launch interactive applications on thousands of processors in a matter of seconds #REFR .
[ "With the slowing down of Moore's Law #OTHEREFR , #OTHEREFR , parallel processing has become a primary technique for increasing application performance.", "Physical simulation, machine learning, and data analysis are rapidly growing applications that are utilizing parallel processing to achieve their performance goals.", "These applications require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows.", "Parallel computing directly in the Windows platform has a long history #OTHEREFR - #OTHEREFR .", "The largest supercomputers currently available almost exclusively run the Linux operating system #OTHEREFR , #OTHEREFR ." ]
[ "A common way to launch multiple Microsoft Windows applications on Linux computers is to use virtual machines (VMs) #OTHEREFR .", "Windows VMs replicate the complete operating system and its virtual memory environment for each instance of the Windows application that is running, which imposes a great deal of overhead on the applications.", "Launching many Windows VMs on a large supercomputer can often take several seconds per VM #OTHEREFR - #OTHEREFR .", "While this performance is adequate for interactive applications that may require a handful of processors, scaling up such applications to the thousands of processors typically found in a modern supercomputer is prohibitive.", "This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly launch and run Microsoft Windows applications on thousands of cores on a supercomputer." ]
[ "Linux powered supercomputers" ]
background
{ "title": "Interactive Launch of 16,000 Microsoft Windows Instances on a Supercomputer", "abstract": "Simulation, machine learning, and data analysis require a wide range of software which can be dependent upon specific operating systems, such as Microsoft Windows. Running this software interactively on massively parallel supercomputers can present many challenges. Traditional methods of scaling Microsoft Windows applications to run on thousands of processors have typically relied on heavyweight virtual machines that can be inefficient and slow to launch on modern manycore processors. This paper describes a unique approach using the Lincoln Laboratory LLMapReduce technology in combination with the Wine Windows compatibility layer to rapidly and simultaneously launch and run Microsoft Windows applications on thousands of cores on a supercomputer. Specifically, this work demonstrates launching 16,000 Microsoft Windows applications in 5 minutes running on 16,000 processor cores. This capability significantly broadens the range of applications that can be run at large scale on a supercomputer." }
{ "title": "Interactive Supercomputing on 40,000 Cores for Machine Learning and Data Analysis", "abstract": "Abstract-Interactive massively parallel computations are critical for machine learning and data analysis. These computations are a staple of the MIT Lincoln Laboratory Supercomputing Center (LLSC) and has required the LLSC to develop unique interactive supercomputing capabilities. Scaling interactive machine learning frameworks, such as TensorFlow, and data analysis environments, such as MATLAB/Octave, to tens of thousands of cores presents many technical challenges -in particular, rapidly dispatching many tasks through a scheduler, such as Slurm, and starting many instances of applications with thousands of dependencies. Careful tuning of launches and prepositioning of applications overcome these challenges and allow the launching of thousands of tasks in seconds on a 40,000-core supercomputer. Specifically, this work demonstrates launching 32,000 TensorFlow processes in 4 seconds and launching 262,000 Octave processes in 40 seconds. These capabilities allow researchers to rapidly explore novel machine learning architecture and data analysis algorithms." }
1902.03948
1807.07814
This addition brought over 40,000 additional compute cores to TX-Green, an additional core switch, and OmniPath network #REFR .
[ "The integration of monitoring the core compute assets while having full situational awareness of the lights out data center located 100 miles from the offices was critical to providing a stable HPC platform for the research community.", "Enabling the administration team to proactively identify potential resource constraints, node failures, and environmental risks.", "The converged DCIM platform leverages the strategies and techniques commonly used in Big Data communities to store, query, analyze, and visualize voluminous amounts of data.", "It consist of Accumulo, MATLAB, and Dynamically Distributed Dimensional Data Model (D4M), and Unity #OTHEREFR , [3] . However, since original publication our systems have grown significantly.", "In 2016 we added a 1.32 Petaflop Intel Knights Landing system which debuted on the 2016 Top 500 list at 106 in the world [4]." ]
[ "As part of our regular refresh cycle we also included 75 GPU enabled systems to the TX-Green environment, thus creating a total of four unique computing architectures AMD, Intel, KNL, GPU requiring separate resource queues and creating more diverse landscape of available resources.", "The more than quadrupling of compute resources and service queues pushed our DCIM architecture to new levels.", "Scaling our monitoring and management capabilities to match our new computational environment gave us good insight into the design choices and how they scaled under real world conditions.", "The underlying Accumulo database managed through the MIT SuperCloud portal technology has been the best performer, seamlessly scaling with the added entries and data collection fields #OTHEREFR .", "At the time of original publication the combined number of entries for both Node and Data Center databases was just over 15 billion." ]
[ "40,000 additional compute" ]
background
{ "title": "Scaling Big Data Platform for Big Data Pipeline", "abstract": "Abstract-Monitoring and Managing High Performance Computing (HPC) systems and environments generate an ever growing amount of data. Making sense of this data and generating a platform where the data can be visualized for system administrators and management to proactively identify system failures or understand the state of the system requires the platform to be as efficient and scalable as the underlying database tools used to store and analyze the data. In this paper we will show how we leverage Accumulo, d4m, and Unity to generate a 3D visualization platform to monitor and manage the Lincoln Laboratory Supercomputer systems and how we have had to retool our approach to scale with our systems. Leveraging the 3D Data Center Infrastructure Management (DCIM) tool built on the Unity game engine as published in 2015 [1] has enabled the administrators of the TX-Green supercomputer at MIT Lincoln Laboratory Supercomputing Center, LLSC, to have an easily digestible single pane of the current state of the systems they manage. At the time of the original publication the TX-Green systems comprised of 3500 IT data points and 5000 environmental sensors outputs. The TX-Green systems were approximately 9,000 compute cores, 2PB of storage and a single core network. The integration of monitoring the core compute assets while having full situational awareness of the lights out data center located 100 miles from the offices was critical to providing a stable HPC platform for the research community. Enabling the administration team to proactively identify potential resource constraints, node failures, and environmental risks. The converged DCIM platform leverages the strategies and techniques commonly used in Big Data communities to store, query, analyze, and visualize voluminous amounts of data. It consist of Accumulo, MATLAB, and Dynamically Distributed Dimensional Data Model (D4M), and Unity [2], [3]. However, since original publication our systems have grown significantly. In 2016 we added a 1.32 Petaflop Intel Knights Landing system which debuted on the 2016 Top 500 list at 106 in the world [4]. This addition brought over 40,000 additional compute cores to TX-Green, an additional core switch, and OmniPath network [5] . As part of our regular refresh cycle we also included 75 GPU enabled systems to the TX-Green environment, thus creating a total of four unique computing architectures AMD, Intel, KNL, GPU requiring separate resource queues and creating more diverse landscape of available resources. The more than quadrupling of compute resources and service queues pushed our DCIM architecture to new levels. Scaling our monitoring and management capabilities to match our new computational environment gave us good insight into the design choices and how they scaled under real world conditions. The underlying Accumulo database managed through the MIT SuperCloud portal technology has been the best performer, seamlessly scaling with the added entries and data collection fields [6] . At the time of original publication the combined number of entries for both Node and Data Center databases was just over 15 billion. There are now 6.9 billion entries for the Nodes database and over 31 billion entries for the environmental building management system database. This would be extremely taxing on a standard mysql database. 
Accumulo, however, has performed exceptionally well under these conditions as it can withstand ingest rates of 100,000,000 entries per second [7] . The scaling of our systems extended to all aspects of the HPC environment. With the additional computational resources being brought online and additional queues we expanded the number of default job slots individual users can consume on a single run from 2048 to 8192 and allowing some users to consume 16384 cores on special request. This results in the base number of jobs concurrently running on the system to be dramatically increased. With additional queues setup for the four different architectures, new alerts were implemented to correlate to the heterogeneous environment of available memory, local disk, and CPU load thresholds. As a result the growth in data collections for each node grew substantially. The 40,000+ cores added to the system were each reporting the jobs running, the potential alerts and thresholds met. The one area of our processing pipeline that was most affected by this explosion in data was the Unity visualization platform. The rendering of the additional nodes and game objects in the 3D space was not impacted, however, when applying the data to the nodes and subsequent updates the visualization environment performance dropped off significantly. On every update cycle the load time would stall 10+ seconds to parse the .csv files and apply the updated data to the EcoPOD and nodes. We applied a number of strategies to try address the constant lagging. First, we decided to stagger the updates of the EcoPOD and the nodes so the call was less expensive and hopefully less noticeable to the user. This only led to multiple slightly shorter lags in the game environment and a more stuttered playing experience. Secondly, we tried to chunk the updates so only a subset of the nodes would update at a time. Unfortunately, this also did not resolve the issue as the environment was constantly updating and the lags, while smaller, were more frequent. We finally succumbed to the fact that we had been carrying on too much legacy code and Unity has made many updates since the 3.0 version we had originally" }
{ "title": "Interactive Supercomputing on 40,000 Cores for Machine Learning and Data Analysis", "abstract": "Abstract-Interactive massively parallel computations are critical for machine learning and data analysis. These computations are a staple of the MIT Lincoln Laboratory Supercomputing Center (LLSC) and has required the LLSC to develop unique interactive supercomputing capabilities. Scaling interactive machine learning frameworks, such as TensorFlow, and data analysis environments, such as MATLAB/Octave, to tens of thousands of cores presents many technical challenges -in particular, rapidly dispatching many tasks through a scheduler, such as Slurm, and starting many instances of applications with thousands of dependencies. Careful tuning of launches and prepositioning of applications overcome these challenges and allow the launching of thousands of tasks in seconds on a 40,000-core supercomputer. Specifically, this work demonstrates launching 32,000 TensorFlow processes in 4 seconds and launching 262,000 Octave processes in 40 seconds. These capabilities allow researchers to rapidly explore novel machine learning architecture and data analysis algorithms." }
1904.07925
1206.6661
Computation of the differential Galois group of a reduced form
Since [A(x)] is in reduced form, by Lemma 32 in #REFR , we obtain that G is connected and therefore ⟨exp(B_1), . . . , exp(B_σ)⟩ = G.
[ "We follow material from #OTHEREFR , see also [dG09] .", "Let B_1, . . . , B_σ be a basis of the C-vector space g.", "Since the exponential map goes from g to a subgroup of G, we find that ⟨exp(B_1), . . . , exp(B_σ)⟩ = G°, where ⟨exp(B_1), . . . , exp(B_σ)⟩ denotes the smallest algebraic group that contains the exp(B_i), and G° denotes the connected component of the identity of G." ]
[ "So let us compute ⟨exp(B_1), . . . , exp(B_σ)⟩.", "This problem has been solved in full generality in #OTHEREFR but largely simplifies in this case, since the algebraic group we are looking for is connected. We start with an observation." ]
[ "reduced form" ]
background
{ "title": "Computing the Lie algebra of the differential Galois group: the reducible case", "abstract": "Abstract. In this paper, we explain how to compute the Lie algebra of the differential Galois group of a reducible linear differential system. We achieve this by showing how to transform a block-triangular linear differential system into a Kolchin-Kovacic reduced form. We combine this with other reduction results to propose a general algorithm for computing a reduced form of a general linear differential system. In particular, this provides directly the Lie algebra of the differential Galois group without an a priori computation of this Galois group." }
{ "title": "A Characterization of Reduced Forms of Linear Differential Systems", "abstract": "A differential system [A] : Y ′ = AY , with A ∈ Mat(n, k) is said to be in reduced form if A ∈ g(k) where g is the Lie algebra of the differential Galois group G of [A]. In this article, we give a constructive criterion for a system to be in reduced form. When G is reductive and unimodular, the system [A] is in reduced form if and only if all of its invariants (rational solutions of appropriate symmetric powers) have constant coefficients (instead of rational functions). When G is nonreductive, we give a similar characterization via the semi-invariants of G. In the reductive case, we propose a decision procedure for putting the system into reduced form which, in turn, gives a constructive proof of the classical Kolchin-Kovacic reduction theorem." }
2001.11224
1912.03879
AI2D-RST -a multimodally-motivated annotation schema
Most recently, RST has been applied to diagrams in the AI2D dataset as a part of an alternative annotation schema that seeks to provide a more multimodallyinformed description of diagrammatic representations #REFR .
[ "One formalism that has frequently been applied to the description of discourse semantics in multimodality research is Rhetorical Structure Theory (RST), which was developed as a theory of text organisation and coherence in the 1980s #OTHEREFR .", "Originally, RST attempted to describe why well-formed texts appear coherent, or why individual parts of a text appear to contribute towards a common communicative goal #OTHEREFR .", "As a part of an extension to multimodal discourse, RST has been used to describe multimodal discourse structures in various media #OTHEREFR ." ]
[ "This dataset, called AI2D-RST, covers 1000 diagrams from the AI2D corpus, annotated using a new schema by experts trained in the uese of the schema #OTHEREFR .", "The development of AI2D-RST was motivated by the observation that the AI2D annotation schema introduced above conflates descriptions of different types of multimodal structure #OTHEREFR , such as implicit semantic relations and explicit connections signalled using arrows and lines into a single DPG.", "These can be pulled apart multimodally to better understand how these structures contribute to diagrammatic representations.", "For this reason, AI2D-RST represents each diagram using three distinct graphs corresponding to three distinct, but mutually complementary, layers of annotation: grouping, connectivity and discourse structure.", "Figure 3 shows examples of all three graphs for the diagram introduced in Figure 2 ." ]
[ "diagrammatic representations" ]
method
{ "title": "Introducing the diagrammatic mode", "abstract": "In this article, we propose a multimodal perspective to diagrammatic representations by sketching a description of what may be tentatively termed the diagrammatic mode. We consider diagrammatic representations in the light of contemporary multimodality theory and explicate what enables diagrammatic representations to integrate natural language, various forms of graphics, diagrammatic elements such as arrows, lines and other expressive resources into coherent organisations. We illustrate the proposed approach using two recent diagram corpora and show how a multimodal approach supports the empirical analysis of diagrammatic representations, especially in identifying diagrammatic constituents and describing their interrelations." }
{ "title": "AI2D-RST: A multimodal corpus of 1000 primary school science diagrams", "abstract": "This article introduces AI2D-RST, a multimodal corpus of 1000 English-language diagrams that represent topics in primary school natural science, such as food webs, life cycles, moon phases and human physiology. The corpus is based on the Allen Institute for Artificial Intelligence Diagrams (AI2D) dataset, a collection of diagrams with crowd-sourced descriptions, which was originally developed for computational tasks such as automatic diagram understanding and visual question answering. Building on the segmentation of diagram layouts in AI2D, the AI2D-RST corpus presents a new multi-layer annotation schema that provides a rich description of their multimodal structure. Annotated by trained experts, the layers describe (1) the grouping of diagram elements into perceptual units, (2) the connections set up by diagrammatic elements such as arrows and lines, and (3) the discourse relations between diagram elements, which are described using Rhetorical Structure Theory (RST). Each annotation layer in AI2D-RST is represented using a graph. The corpus is freely available for research and teaching." }
1908.06272
1803.07635
I. INTRODUCTION
Transferring the solutions to arbitrary poses in the workspace or even to other systems, however, requires the training of a new controller, even if initial trajectories are provided #REFR .
[ "However, the ingenuity of finding the primitives that apply best in certain situations and parameterizing them often means a considerable engineering effort, leaving the cognitive performance to the programmer and not to the system.", "Humans are remarkably skilled workers and join assembly parts with ease, although not necessarily using analytical representations, such as contact states #OTHEREFR .", "However, it is not intuitive for us to describe how we deal with tilting and jamming or what strategies we deploy upon getting stuck.", "Following this insight, human performance can directly be used to obtain skills through imitation learning, such as in Programming by Demonstration (PbD) #OTHEREFR , #OTHEREFR , #OTHEREFR , with applications for general object manipulation #OTHEREFR , #OTHEREFR , #OTHEREFR , and industrial assembly processes #OTHEREFR , #OTHEREFR .", "Contrary, approaches have shown promising results on contact-rich manipulation tasks without human input #OTHEREFR , #OTHEREFR , also with very tight clearances #OTHEREFR ." ]
[ "In this work, we aim at developing force-based contact skills to handle jamming and tilting effects that are portable to different robotic manipulators, and can, once learned, serve as robot independent skills for a specific assembly task.", "To this end, we train a recurrent neural network to learn human-like manipulation strategies from human performance in simulation, and relate those to the relative object's geometry in task space.", "In contrast to related works, our model predicts sequences of forces and torques, which serve as reference set point for a Cartesian force-control of positioncontrolled robots, which we build upon the idea from an earlier work #OTHEREFR .", "The remainder of the paper is as follows: In II we discuss related work and motivate our approach, which we explain in detail in III for our contact skill model, and in IV for the robot-abstracting force-control. V shows our experiments and results.", "In VI we discuss final aspects and conclude in VII." ]
[ "training" ]
background
{ "title": "Contact Skill Imitation Learning for Robot-Independent Assembly Programming", "abstract": "Robotic automation is a key driver for the advancement of technology. The skills of human workers, however, are difficult to program and seem currently unmatched by technical systems. In this work we present a data-driven approach to extract and learn robot-independent contact skills from human demonstrations in simulation environments, using a Long Short Term Memory (LSTM) network. Our model learns to generate error-correcting sequences of forces and torques in task space from object-relative motion, which industrial robots carry out through a Cartesian force control scheme on the real setup. This scheme uses forward dynamics computation of a virtually conditioned twin of the manipulator to solve the inverse kinematics problem. We evaluate our methods with an assembly experiment, in which our algorithm handles part tilting and jamming in order to succeed. The results show that the skill is robust towards localization uncertainty in task space and across different joint configurations of the robot. With our approach, non-experts can easily program force-sensitive assembly tasks in a robot-independent way." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1907.08199
1803.07635
II. RELATED WORK
As shown in #REFR , naively tuning and shaping a reward function may result in sub-optimal solutions when using base actions.
[ "Our work can be viewed as using a planner as a hierarchical policy in the options framework, which is made possible through the incorporation of a goalscoring progress function learned from demonstration.", "In a similar manner, #OTHEREFR showed how planning can be incorporated into action selection when future states can be evaluated.", "Our method borrows this view of temporally abstracting trajectories and extends it by applying a dynamics model for each of the options, allowing an agent to assess its states and incorporate foresight #OTHEREFR in its actions.", "The work of #OTHEREFR highlights that including a dense reward indeed increases the overall performance of the agent.", "Instead of using a predetermined dense function, we learn a Goal Scoring estimator from the demonstrations." ]
[ "Furthermore, our planner selects an already learned controller and thus avoids converging to sub-optimal behaviours.", "As highlighted by Sunderhauf #OTHEREFR , there are limits of the use of RL in robotics.", "By leveraging strategies from both RL and control communities, this work aims to increase the scope of problems that can be tackled in robotics." ]
[ "base actions" ]
background
{ "title": "R O ] 1 O ct 2 01 9 Composing Diverse Policies for Temporally Extended Tasks", "abstract": "Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics. These change-points are often exploited in hierarchical motion planning to build approximate models and to facilitate the design of local, regionspecific controllers. However, it becomes combinatorially challenging to implement such a pipeline for complex temporally extended tasks, especially when the sub-controllers work on different information streams, time scales and action spaces. In this paper, we introduce a method that can compose diverse policies comprising motion planning trajectories, dynamic motion primitives and neural network controllers. We introduce a global goal scoring estimator that uses local, per-motion primitive dynamics models and corresponding activation statespace sets to sequence diverse policies in a locally optimal fashion. We use expert demonstrations to convert what is typically viewed as a gradient-based learning process into a planning process without explicitly specifying pre-and postconditions. We first illustrate the proposed framework using an MDP benchmark to showcase robustness to action and model dynamics mismatch, and then with a particularly complex physical gear assembly task, solved on a PR2 robot. We show that the proposed approach successfully discovers the optimal sequence of controllers and solves both tasks efficiently." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1907.08199
1803.07635
V. EXPERIMENTAL RESULTS
The predicted state under the specified time horizon is illustrated at each step for the different controller options #REFR . (The challenge is described at https://new.siemens.com/us/en/company/fairs-events/robot-learning.html; Fig. 4 : MDP solution.)
[ "We aim to demonstrate the viability of composing diverse policies by using the controller dynamics as a method for choosing a satisfactory policy.", "The dynamics can be learned independently of the task, and can be used to solve a downstream task.", "Simulated MDP This problem illustrates the feasibility of using our architecture as a planning method.", "Figure 4 shows that the agent reaches the optimal state in just 4 planning steps, where each planning step is a rollout of a controller." ]
[ "At timestep 0, a rollout of the 5 controllers is performed with the dynamics model. The expected resulting state is marked using vertical bars.", "The best performing controller is used within the environment to obtain the next state -the red line at state 5 and planning step 1.", "This process is iterated until a desired state is reached.", "horizon is illustrated at each step for the different controller options.", "This naturally suggests the use of the policy π 1 that outperforms the alternatives (π 1 reaches state 6, π 2 -state 4, π 2 -state 3, π 3 -state 1, π 4 -state 1, π 5 -state 0)." ]
[ "https://new.siemens.com/us/en/company/fairs-events/robot-learning.html Fig" ]
background
{ "title": "R O ] 1 O ct 2 01 9 Composing Diverse Policies for Temporally Extended Tasks", "abstract": "Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics. These change-points are often exploited in hierarchical motion planning to build approximate models and to facilitate the design of local, regionspecific controllers. However, it becomes combinatorially challenging to implement such a pipeline for complex temporally extended tasks, especially when the sub-controllers work on different information streams, time scales and action spaces. In this paper, we introduce a method that can compose diverse policies comprising motion planning trajectories, dynamic motion primitives and neural network controllers. We introduce a global goal scoring estimator that uses local, per-motion primitive dynamics models and corresponding activation statespace sets to sequence diverse policies in a locally optimal fashion. We use expert demonstrations to convert what is typically viewed as a gradient-based learning process into a planning process without explicitly specifying pre-and postconditions. We first illustrate the proposed framework using an MDP benchmark to showcase robustness to action and model dynamics mismatch, and then with a particularly complex physical gear assembly task, solved on a PR2 robot. We show that the proposed approach successfully discovers the optimal sequence of controllers and solves both tasks efficiently." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1903.05751
1803.07635
II. RELATED WORK
The planning algorithm serves as a demonstrator for the learning algorithm. The work closest to ours is #REFR .
[ "Model-free algorithms, on the other hand, can achieve good asymptotic performance, but suffer from high sample complexity.", "A lot of recent research has focused on leveraging ideas from control and optimization theory for faster learning #OTHEREFR - #OTHEREFR .", "In a lot of robotics applications, it is generally advantageous to initialize RL agents with demonstrations which can provide them with an initial reference solution #OTHEREFR .", "Learning policies from reference trajectories has been studied in #OTHEREFR - #OTHEREFR .", "Motivated by this idea, our work mainly focuses on using reference trajectories that can be provided by off-the-shelf planning algorithms to speed up learning for our RL agent." ]
[ "In #OTHEREFR , a model-based RL agent is learned using a trajectory-centric RL (Guided Policy Search) approach to learn a trajectory-tracking controller for a trajectory provided by RRT #OTHEREFR .", "However, using a modelbased RL in constrained state and control settings could be difficult, because it is not clear how the underlying trajectory optimization algorithm #OTHEREFR can account for arbitrary state constraints for manipulator-like systems.", "Our combination of RL and reference trajectory tracking can be seen as a form of reward shaping #OTHEREFR .", "Reward shaping speeds up learning by creating a more informative reward signal.", "However, designing shaping rewards requires significant non-trivial reward engineering, and may also alter the optimal solution." ]
[ "planning algorithm" ]
background
{ "title": "Trajectory Optimization for Unknown Constrained Systems using Reinforcement Learning", "abstract": "Abstract-In this paper, we propose a reinforcement learning-based algorithm for trajectory optimization for constrained dynamical systems. This problem is motivated by the fact that for most robotic systems, the dynamics may not always be known. Generating smooth, dynamically feasible trajectories could be difficult for such systems. Using samplingbased algorithms for motion planning may result in trajectories that are prone to undesirable control jumps. However, they can usually provide a good reference trajectory which a model-free reinforcement learning algorithm can then exploit by limiting the search domain and quickly finding a dynamically smooth trajectory. We use this idea to train a reinforcement learning agent to learn a dynamically smooth trajectory in a curriculum learning setting. Furthermore, for generalization, we parameterize the policies with goal locations, so that the agent can be trained for multiple goals simultaneously. We show result in both simulated environments as well as real experiments, for a 6-DoF manipulator arm operated in position-controlled mode to validate the proposed idea. We compare the proposed ideas against a PID controller which is used to track a designed trajectory in configuration space. Our experiments show that our RL agent trained with a reference path outperformed a model-free PID controller of the type commonly used on many robotic platforms for trajectory tracking." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1907.08199
1803.07635
I. INTRODUCTION
In the interest of sample efficiency and tractability, such end-to-end learning could be warm-started by using samples from a motion planner, which provides information on how to bring the two pieces together and concentrates effort on learning an alignment policy, as in #REFR .
[ "Digital Object Identifier 10.1109/LRA.", "2020.2972794 In many practical applications, we wish to combine a diversity of such controllers to solve complex tasks.", "This typically requires that controllers share a common domain representation and a notion of progress to sequence these.", "For instance, the problem of assembly, as shown in Figure 1 , can be partitioned by first picking up a mechanical part, then using motion planning and trajectory control to move this in close proximity to an assembly, before the subsequent use of a variety of wiggle policies to fit the parts together, as shown by #OTHEREFR .", "Alternatively, the policy could be trained in an end-to-end fashion with a neural network, but one may find this difficult for extended tasks with sparse rewards, such as in Figure 1 ." ]
[ "Additionally, the completion of these independent sub-tasks can be viewed as a global metric of progress.", "We propose a hybrid hierarchical control strategy that allows for the use of diverse sets of sub-controllers, consisting of commonly used goal-directed motion planning techniques, other strategies such as wiggle, slide and push-against #OTHEREFR that are so elegantly used in human manipulation, as well as deep neural network based policies that are represented very differently from their sampling-based motion planning counterparts.", "Thus, we tackle a key challenge associated with existing motion primitive scheduling approaches, which typically assume that a common representation is used by all sub-controllers.", "We make use of the fact that controllers tend to have a dynamic model of the active part of their state space -either an analytical or a learned model, and further estimate how close each state is to completing the overall task using a novel goal scoring estimator.", "This allows the hierarchical controller to model the outcome of using any of the available sub-controllers and then determine which of these would bring the world state closest to achieving the desired solution -in the spirit of model predictive control." ]
[ "motion planner" ]
method
{ "title": "Composing Diverse Policies for Temporally Extended Tasks", "abstract": "Robot control policies for temporally extended and sequenced tasks are often characterized by discontinuous switches between different local dynamics. These change-points are often exploited in hierarchical motion planning to build approximate models and to facilitate the design of local, region-specific controllers. However, it becomes combinatorially challenging to implement such a pipeline for complex temporally extended tasks, especially when the sub-controllers work on different information streams, time scales and action spaces. In this letter, we introduce a method that can automatically compose diverse policies comprising motion planning trajectories, dynamic motion primitives and neural network controllers. We introduce a global goal scoring estimator that uses local, per-motion primitive dynamics models and corresponding activation state-space sets to sequence diverse policies in a locally optimal fashion. We use expert demonstrations to convert what is typically viewed as a gradient-based learning process into a planning process without explicitly specifying preand post-conditions. We first illustrate the proposed framework using an MDP benchmark to showcase robustness to action and model dynamics mismatch, and then with a particularly complex physical gear assembly task, solved on a PR2 robot. We show that the proposed approach successfully discovers the optimal sequence of controllers and solves both tasks efficiently. Index Terms-Motion and path planning, learning and adaptive systems, learning from demonstration." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
2002.09107
1803.07635
A. Precision Robotics Manipulation
In order to use algorithms for low-dimensional input space, some methods use fiducial markers to obtain pose information about objects in the scene #REFR .
[ "These tasks require higher levels of precision with a slimmer margin for error, often necessitating extra algorithmic or sensory innovations.", "Some works focused on developing algorithms, in simulation #OTHEREFR for precise robot manipulation tasks.", "Many of these methods leverage important state information such as accurate object poses that are available in simulation but difficult to obtain in the real world.", "While this approach of using ground truth pose information of relevant entities is useful for developing algorithms for low-dimensional input space, generalizing such systems to the real world requires accurate pose estimation and a well calibrated system.", "Our approach learns directly from images and does not depend on such calibrated conditions." ]
[ "Other methods can be utilized to reason about object geometry #OTHEREFR , detect object pose #OTHEREFR [15] #OTHEREFR [18] #OTHEREFR , key points #OTHEREFR and grasping points #OTHEREFR from pointcloud and RGB observations after which robot actions are planned and executed to accomplish tasks.", "The approaches require an estimate of both where a point in the world is relative to a camera, and where the camera is in relation to the robot.", "Objects that are small, articulated, reflective, or transparent can complicate these methods.", "In contrast, our method learns precision tasks in an end-to-end fashion without any intermediate pose estimation or camera calibration.", "Our approach learns handeye coordination across multi cameras, without requiring explicit pose information." ]
[ "low-dimensional input space" ]
method
{ "title": "Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras", "abstract": "In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature. Our method learns to accomplish these tasks using multiple statically placed but uncalibrated RGB camera views without building an explicit 3D representation such as a pointcloud or voxel grid. This multi-camera approach achieves superior task performance on difficult stacking and insertion tasks compared to single-view baselines. Single view robotic agents struggle from occlusion and challenges in estimating relative poses between points of interest. While full 3D scene representations (voxels or pointclouds) are obtainable from registered output of multiple depth sensors, several challenges complicate operating off such explicit 3D representations. These challenges include imperfect camera calibration, poor depth maps due to object properties such as reflective surfaces, and slower inference speeds over 3D representations compared to 2D images. Our use of static but uncalibrated cameras does not require camera-robot or cameracamera calibration making the proposed approach easy to setup and our use of sensor dropout during training makes it resilient to the loss of camera-views after deployment. • A novel camera calibration free, multi-view, approach to precise 3D robotic manipulation" }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1809.10699
1803.07635
II. RELATED WORK
In #REFR the tasks are most similar to ours, as they use a subset of the Siemens Innovation Challenge tasks (two subtasks), as well as a U-shape fitting task.
[ "The learning burden is split between two agents: a mixture of linear Note that while these four rotations of Gear 1 are very similar, successful classification among them is required for exact pose estimation.", "When another rotation is used instead of the correct one, the grasp efficiency will be reduced due to lack of contact points and the gear's teeth may not fit well into the teeth of its neighboring gears.", "(Right): two near-symmetries of Shaft 2 rotated to canonical pose, which are much easier to discriminate.", "Gaussian controllers, each trained for specific known initial conditions and a deep network generalizing across initial conditions.", "The linear Gaussian controllers are simple to learn as they get the state as input, and they are used to teach the network using a BADMM formulation, with local optima guaranties." ]
[ "In this work the pose estimation problem is faced using fiducials (sticking markers on the parts designed for pose estimation).", "The focus is on enabling assembly actions with complex non-linear trajectories.", "Training is done using a combination of motion planning and Guided Policy Search (GPS), where the former is used to guide training of the latter.", "High success rates are reported for the real robot, but only from two known and fixed initial part positions.", "In #OTHEREFR a difficult assembly task of Lego brick stitching is handled, but only in a simulation environment." ]
[ "U-shape fitting task" ]
method
{ "title": "Learning a High-Precision Robotic Assembly Task Using Pose Estimation from Simulated Depth Images.", "abstract": "Abstract-Most of industrial robotic assembly tasks today require fixed initial conditions for successful assembly. These constraints induce high production costs and low adaptability to new tasks. In this work we aim towards flexible and adaptable robotic assembly by using 3D CAD models for all parts to be assembled. We focus on a generic assembly task -the Siemens Innovation Challenge -in which a robot needs to assemble a gearlike mechanism with high precision into an operating system. To obtain the millimeter-accuracy required for this task and industrial settings alike, we use a depth camera mounted near the robot's end-effector. We present a high-accuracy three-stage pose estimation pipeline based on deep convolutional neural networks, which includes detection, pose estimation, refinement, and handling of near-and full symmetries of parts. The networks are trained on simulated depth images by means to ensure successful transfer to the real robot. We obtain an average pose estimation error of 2.14 millimeters and 1.09 degree leading to 88.6% success rate for robotic assembly of randomly distributed parts. To the best of our knowledge, this is the first time that the Siemens Innovation Challenge is fully solved, opening up new possibilities for automated industrial assembly." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1809.10699
1803.07635
A. Robotic assembly in a real environment
Second, we show assembly capabilities on more parts than considered in #REFR , which studied the subtask of inserting a shaft into a gear within the Siemens Innovation Challenge assuming fixed initial conditions.
[ "The last 2 failed attempts (1.33%) occurred when the base plate was randomly placed in a certain position for which the motion planner could not find a feasible trajectory to place the part.", "In summary, the angular SAKE EZGripper imposed a severe constraint on the performance.", "We believe that by using a parallel gripper the grasping and assembly success rates could have been significantly improved, but this needs to be analyzed in future studies.", "For comparison of our results with previous works we refer to two related studies.", "First, we were able to improve pose estimation by an order of magnitude when compared to #OTHEREFR , which also used training in simulation, but used RGB images of more basic parts." ]
[]
[ "assembly capabilities" ]
background
{ "title": "Learning a High-Precision Robotic Assembly Task Using Pose Estimation from Simulated Depth Images.", "abstract": "Abstract-Most of industrial robotic assembly tasks today require fixed initial conditions for successful assembly. These constraints induce high production costs and low adaptability to new tasks. In this work we aim towards flexible and adaptable robotic assembly by using 3D CAD models for all parts to be assembled. We focus on a generic assembly task -the Siemens Innovation Challenge -in which a robot needs to assemble a gearlike mechanism with high precision into an operating system. To obtain the millimeter-accuracy required for this task and industrial settings alike, we use a depth camera mounted near the robot's end-effector. We present a high-accuracy three-stage pose estimation pipeline based on deep convolutional neural networks, which includes detection, pose estimation, refinement, and handling of near-and full symmetries of parts. The networks are trained on simulated depth images by means to ensure successful transfer to the real robot. We obtain an average pose estimation error of 2.14 millimeters and 1.09 degree leading to 88.6% success rate for robotic assembly of randomly distributed parts. To the best of our knowledge, this is the first time that the Siemens Innovation Challenge is fully solved, opening up new possibilities for automated industrial assembly." }
{ "title": "Learning Robotic Assembly from CAD", "abstract": "In this work, motivated by recent manufacturing trends, we investigate autonomous robotic assembly. Industrial assembly tasks require contact-rich manipulation skills, which are challenging to acquire using classical control and motion planning approaches. Consequently, robot controllers for assembly domains are presently engineered to solve a particular task, and cannot easily handle variations in the product or environment. Reinforcement learning (RL) is a promising approach for autonomously acquiring robot skills that involve contact-rich dynamics. However, RL relies on random exploration for learning a control policy, which requires many robot executions, and often gets trapped in locally suboptimal solutions. Instead, we posit that prior knowledge, when available, can improve RL performance. We exploit the fact that in modern assembly domains, geometric information about the task is readily available via the CAD design files. We propose to leverage this prior knowledge by guiding RL along a geometric motion plan, calculated using the CAD data. We show that our approach effectively improves over traditional control approaches for tracking the motion plan, and can solve assembly tasks that require high precision, even without accurate state estimation. In addition, we propose a neural network architecture that can learn to track the motion plan, thereby generalizing the assembly controller to changes in the object positions." }
1107.5474
1002.4286
Association rules for a formal context
Research on logical reasoning methods for association rules is a relatively recent and promising research line #REFR .
[ "We can consider a Stem Basis as an adequate production system in order to reason.", "However, Stem Basis is designed for entailing true implications only, without any exceptions into the object set nor implications with a low number of counterexamples in the context.", "Another more important question arises when it works on predictions.", "In this case we are interested in obtaining methods for selecting a result among all obtained results (even if they are mutually incoherent), and theorem 3 does not provide such a method.", "Therefore, it is better to consider association rules (with confidence) instead of true implications and the initial production system must be revised for working with confidence." ]
[ "In FCA, association rules are implications between sets of attributes. Confidence and support are defined as usual.", "Recall that the support of X, supp(X) of a set of attributes X is defined as the proportion of objects which satisfy every attribute of X, and the confidence of a association rule is conf (X → Y ) = supp(X ∪ Y )/supp(X).", "Confidence can be interpreted as an estimate of the probability P (Y |X), the probability of an object satisfying every attribute of Y under the condition that it also satisfies every one of X.", "Conexp software provides association rules (and their confidence) for formal contexts." ]
[ "association rules" ]
background
{ "title": "Selecting Attributes for Sport Forecasting using Formal Concept Analysis", "abstract": "In order to address complex systems, apply pattern recongnition on their evolution could play an key role to understand their dynamics. Global patterns are required to detect emergent concepts and trends, some of them with qualitative nature. Formal Concept Analysis (FCA) is a theory whose goal is to discover and to extract Knowledge from qualitative data. It provides tools for reasoning with implication basis (and association rules). Implications and association rules are usefull to reasoning on previously selected attributes, providing a formal foundation for logical reasoning. In this paper we analyse how to apply FCA reasoning to increase confidence in sports betting, by means of detecting temporal regularities from data. It is applied to build a Knowledge Based system for confidence reasoning." }
{ "title": "Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules", "abstract": "Abstract. Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parametrized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as \"support\" or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form \"any dataset in which this first rule holds must obey also that second rule, therefore the second is redundant\". We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss correspond actually to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. We explore finally an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises." }
1504.03620
1002.4286
Redundancy with Multiple Premises
The general case for two premises was fully characterized in #REFR , but the case of arbitrary premise sets has remained elusive for some years.
[ ", it is easy to see that these cases fail badly for partial implications.", "Indeed, one might suspect, as this author did for quite some time, that one partial implication would not follow logically from several premises unless it follows from one of them.", "Generally speaking, however, this suspicion is wrong.", "It is indeed true for confidence thresholds γ ∈ (0, 0.5), but these are not very useful in practice, as an association rule X → A of confidence less than 0.5 means that, in D X , the absence of A is more frequent than its presence.", "And, for γ ∈ [0.5, 1), it turns out that, for instance, from A → BC and A → BD it follows ACD → B, in the sense that if both premises have confidence at least γ in any dataset, then the conclusion also does." ]
[ "Eventually, a very recent result from #OTHEREFR proved that redundancy with respect to a set of premises that are partial implications hinges on a complicated combinatorial property of the premises themselves.", "We give that property a short (if admittedly uninformative) name here:", "Here we use the standard symbol |= for logical entailment; that is, whenever the implications at the left-hand side are true, the one at the right-hand side must be as well.", "Note that the definition of nicety of a set of partial implications states a property, not of the partial implications themselves, but of their full counterparts.", "Then, we can characterize entailment among partial implications for high enough thresholds of confidence, as follows:" ]
[ "arbitrary premise sets" ]
background
{ "title": "Quantitative Redundancy in Partial Implications", "abstract": "We survey the different properties of an intuitive notion of redundancy, as a function of the precise semantics given to the notion of partial implication." }
{ "title": "Redundancy, Deduction Schemes, and Minimum-Size Bases for Association Rules", "abstract": "Abstract. Association rules are among the most widely employed data analysis methods in the field of Data Mining. An association rule is a form of partial implication between two sets of binary variables. In the most common approach, association rules are parametrized by a lower bound on their confidence, which is the empirical conditional probability of their consequent given the antecedent, and/or by some other parameter bounds such as \"support\" or deviation from independence. We study here notions of redundancy among association rules from a fundamental perspective. We see each transaction in a dataset as an interpretation (or model) in the propositional logic sense, and consider existing notions of redundancy, that is, of logical entailment, among association rules, of the form \"any dataset in which this first rule holds must obey also that second rule, therefore the second is redundant\". We discuss several existing alternative definitions of redundancy between association rules and provide new characterizations and relationships among them. We show that the main alternatives we discuss correspond actually to just two variants, which differ in the treatment of full-confidence implications. For each of these two notions of redundancy, we provide a sound and complete deduction calculus, and we show how to construct complete bases (that is, axiomatizations) of absolutely minimum size in terms of the number of rules. We explore finally an approach to redundancy with respect to several association rules, and fully characterize its simplest case of two partial premises." }
1207.2041
1210.6095
A. Heterogeneous Out-of-Cell Interference
A more complex model would take non-uniformity or clustering into account #REFR , but this is beyond the scope of this work.
[ "Suppose that there are different kinds of infrastructure deployed in a homogeneous way through the network." ]
[ "Each interference source can be modeled as a marked PPP with marks corresponding to the composite fading distribution distributed as where . The path-loss function is the same for each process.", "The transmitting power is given by and the transmitter density by . Consequently, each process is parameterized by .", "Since the transmission power of each tier is different, the guard region radius for each tier is different.", "We assume that the base station associated with the fixed cell belongs to tier 1.", "Given , the radius of the guard region for tier , denoted as , is (10) where ." ]
[ "complex model", "uniformity" ]
background
{ "title": "Modeling Heterogeneous Network Interference Using Poisson Point Processes", "abstract": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge." }
{ "title": "Interference Coordination: Random Clustering and Adaptive Limited Feedback", "abstract": "Abstract-Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits." }
1808.07173
1210.6095
V. DIVERSITY, MULTIPLEXING OR INTERFERENCE CANCELLATION?
The analytic expression in #REFR is, in fact, a strictly decreasing function of O, while the numerical curves are approximately so.
[ "However, as explained in Section II-B, in the analysis, the values of K and O specify the average number of BSs in a cluster, i.e.,B = (K + O) /K.", "Despite the discrepancy, the analysis does capture the general behavior of the system.", "For example, when both O and K are small, increasing K improves the per-BS ergodic sum rate, while for large values of O and K, increasing K degrades system performance.", "However, more importantly, the analysis helps identify the region of O which provides the maximum per-BS ergodic sum rate, leading to the main, and surprising, result of this paper.", "Our analytical and simulation results reveal a remarkable trend: the per-BS ergodic sum rate is largely a decreasing function of the number of spatial dimensions assigned to null interference." ]
[ "It is, therefore, nearly optimum for each BS to operate independently of the other BSs and not to spend any spatial resources on nulling interference.", "Specifically, the optimal operating point, in sum-rate sense, obtained numerically is (K * , ζ * , O * ) = #OTHEREFR , and the optimal operating point obtained analytically is (K * , ζ * , O * ) = (10, 6, 0).", "The difference between the achievable sum rates at these two operating points, when obtained numerically via simulations, is only about 4%.", "Thus, when K and ζ are properly chosen, an LS-MIMO system without interference nulling, i.e., with each BS operating independently, is close to being sum-rate optimal. Fig. 4 further supports the results in Figs. 2 and 3 . As in these figures, Fig.", "4 plots the per-BS ergodic sum rate for various values of K as a function of O, but with M = 32." ]
[ "strictly decreasing function", "numerical curves" ]
background
{ "title": "Optimizing the MIMO Cellular Downlink: Multiplexing, Diversity, or Interference Nulling?", "abstract": "A base-station (BS) equipped with multiple antennas can use its spatial dimensions in three different ways: 1) to serve multiple users, thereby achieving a multiplexing gain; 2) to provide spatial diversity in order to improve user rates; and 3) to null interference in neighboring cells. This paper answers the following question: What is the optimal balance between these three competing benefits? We answer this question in the context of the downlink of a cellular network, where multi-antenna BSs serve multiple single-antenna users using zero-forcing beamforming with equal power assignment, while nulling interference at a subset of out-of-cell users. Any remaining spatial dimensions provide transmit diversity for the scheduled users. Utilizing tools from stochastic geometry, we show that, surprisingly, to maximize the per-BS ergodic sum rate, with an optimal allocation of spatial resources, interference nulling does not provide a tangible benefit. The strategy of avoiding inter-cell interference nulling, reserving some fraction of spatial resources for multiplexing, and using the rest to provide diversity, is already close-to-optimal in terms of the sum-rate. However, interference nulling does bring significant benefit to cell-edge users, particularly when adopting a range-adaptive nulling strategy where the size of the cooperating BS cluster is increased for cell-edge users. Index Terms-Coordinated beamforming, interference nulling, large-scale multiple-input multiple-output, multi-cell cooperative communications, stochastic geometry, wireless cellular networks." }
{ "title": "Interference Coordination: Random Clustering and Adaptive Limited Feedback", "abstract": "Abstract-Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits." }
1511.03350
1210.6095
C. Overall Success Probability
F(·, ·) and Υ_j(·) are given in (11) and (18), C_1(K) and C_2(K) follow from #REFR , and β = p_tr λ/λ_u ; C_1(K) in (24) is due to the intrinsic interference, Υ_j(M) is due to the extrinsic interference, and C_2(K) captures the user competition for cluster access.
[ "where we have used the notation from Section III-B.", "Note that these two events are not independent as both depend on the same underlying geometry.", "Leveraging the framework developed in Section III-A and III-B, we provide a closedform expression for P suc (·) in terms of the model parameters.", "Theorem 3: The overall success probability P suc (K , θ) can be expressed as a function of the cluster size (K ), the intensity parameters ( , λ u ), and the energy harvesting parameters ( ) for a cluster geometry", "where" ]
[ "Remark 5: The overall success probability given in Theorem 3 also depends on the user density λ u via the density ratio β = p tr λ λ u .", "Moreover, even in the absence of extrinsic interference, the overall success probability still depends on the density ratio β.", "This is unlike Theorem 2, where the link success probability is independent of the intensity parameters when the M interfering tiers are turned off (Remark 2).", "This further suggests that densifying the TX tier helps improve the overall performance regardless of the presence of extrinsic interference.", "Remark 6: As the density ratio β increases, the overall success probability in Theorem 3 converges to the link success probability given in Theorem 2." ]
[ "cluster access", "extrinsic interference" ]
background
{ "title": "A Stochastic Geometry Analysis of Large-Scale Cooperative Wireless Networks Powered by Energy Harvesting", "abstract": "Abstract-Energy harvesting is an emerging technology for enabling green, sustainable, and autonomous wireless networks. In this paper, a large-scale wireless network with energy harvesting transmitters is considered, where a group of transmitters forms a cluster to cooperatively serve a desired receiver amid interference and noise. To characterize the link-level performance, closed-form expressions are derived for the transmission success probability at a receiver in terms of key parameters such as node densities, energy harvesting parameters, channel parameters, and cluster size, for a given cluster geometry. The analysis is further extended to characterize a network-level performance metric, capturing the tradeoff between link quality and the fraction of receivers served. Numerical simulations validate the accuracy of the analytical model. Several useful insights are provided. For example, while more cooperation helps improve the link-level performance, the network-level performance might degrade with the cluster size. Numerical results show that a small cluster size (typically 3 or smaller) optimizes the networklevel performance. Furthermore, substantial performance can be extracted with a relatively small energy buffer. Moreover, the utility of having a large energy buffer increases with the energy harvesting rate as well as with the cluster size in sufficiently dense networks." }
{ "title": "Interference Coordination: Random Clustering and Adaptive Limited Feedback", "abstract": "Abstract-Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits." }
1511.03350
1210.6095
B. Cluster Access Probability
We now validate the analytical approximation for the cluster access probability p_clus(K, β) proposed in #REFR .
[]
[ "In each simulation trial, the transmitter/receiver PPPs are (independently) generated in a finite window according to the specified densities.", "When there is no conflict, a receiver is assigned to its K closest TXs.", "In case of multiple candidate receivers, only one is randomly selected for service.", "The cluster access probability is calculated by averaging over 10,000 such trials. In Fig.", "7 , there is a nice agreement between analytical and Monte Carlo simulation-based results." ]
[ "cluster access probability" ]
method
{ "title": "A Stochastic Geometry Analysis of Large-Scale Cooperative Wireless Networks Powered by Energy Harvesting", "abstract": "Abstract-Energy harvesting is an emerging technology for enabling green, sustainable, and autonomous wireless networks. In this paper, a large-scale wireless network with energy harvesting transmitters is considered, where a group of transmitters forms a cluster to cooperatively serve a desired receiver amid interference and noise. To characterize the link-level performance, closed-form expressions are derived for the transmission success probability at a receiver in terms of key parameters such as node densities, energy harvesting parameters, channel parameters, and cluster size, for a given cluster geometry. The analysis is further extended to characterize a network-level performance metric, capturing the tradeoff between link quality and the fraction of receivers served. Numerical simulations validate the accuracy of the analytical model. Several useful insights are provided. For example, while more cooperation helps improve the link-level performance, the network-level performance might degrade with the cluster size. Numerical results show that a small cluster size (typically 3 or smaller) optimizes the networklevel performance. Furthermore, substantial performance can be extracted with a relatively small energy buffer. Moreover, the utility of having a large energy buffer increases with the energy harvesting rate as well as with the cluster size in sufficiently dense networks." }
{ "title": "Interference Coordination: Random Clustering and Adaptive Limited Feedback", "abstract": "Abstract-Interference coordination improves data rates and reduces outages in cellular networks. Accurately evaluating the gains of coordination, however, is contingent upon using a network topology that models realistic cellular deployments. In this paper, we model the base stations locations as a Poisson point process to provide a better analytical assessment of the performance of coordination. Since interference coordination is only feasible within clusters of limited size, we consider a random clustering process where cluster stations are located according to a random point process and groups of base stations associated with the same cluster coordinate. We assume channel knowledge is exchanged among coordinating base stations, and we analyze the performance of interference coordination when channel knowledge at the transmitters is either perfect or acquired through limited feedback. We apply intercell interference nulling (ICIN) to coordinate interference inside the clusters. The feasibility of ICIN depends on the number of antennas at the base stations. Using tools from stochastic geometry, we derive the probability of coverage and the average rate for a typical mobile user. We show that the average cluster size can be optimized as a function of the number of antennas to maximize the gains of ICIN. To minimize the mean loss in rate due to limited feedback, we propose an adaptive feedback allocation strategy at the mobile users. We show that adapting the bit allocation as a function of the signals' strength increases the achievable rate with limited feedback, compared to equal bit partitioning. Finally, we illustrate how this analysis can help solve network design problems such as identifying regions where coordination provides gains based on average cluster size, number of antennas, and number of feedback bits." }
1411.6408
1310.6271
The Last Layers of a Sorting Network
Even though L_n grows quickly, it grows slower than the number G_n of possible layers in general #REFR ; in particular, L_17 = 2583, whereas G_17 = 211,799,312.
[ "Proof.", "Denote by L + n the number of possible last layers on n channels, where the last layer is allowed to be empty (so L n = L + n − 1).", "There is exactly one possible last layer on 1 channel, and there are two possible last layers on 2 channels (no comparators or one comparator), so L", "Given a layer on n channels, there are two possibilities.", "Either the first channel is unused, and there are L + n−1 possibilities for the remaining n − 1 channels; or it is connected to the second channel, and there are L + n−2 possibilities for the remaining n − 2 channels. So" ]
[ "To move (backwards) beyond the last layer, we introduce an auxiliary notion.", "Definition 6.", "Let C be a depth d sorting network without redundant comparators, and let k < d.", "A k-block of C is a set of channels B such that i, j ∈ B if and only if there is a sequence of channels i = x 0 , . . . , x ℓ = j where", "Note that for each k the set of k-blocks of C is a partition of the set of channels." ]
[ "possible layers" ]
background
{ "title": "Sorting Networks: the End Game", "abstract": "Abstract. This paper studies properties of the back end of a sorting network and illustrates the utility of these in the search for networks of optimal size or depth. All previous works focus on properties of the front end of networks and on how to apply these to break symmetries in the search. The new properties help shed understanding on how sorting networks sort and speed-up solvers for both optimal size and depth by an order of magnitude." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1411.6408
1310.6271
Conclusion
In order to assess the impact of our contributions, we show how to integrate them into SAT encodings that search for sorting networks of a given depth #REFR .
[ "This paper presents the first systematic exploration of what happens at the end of a sorting network, as opposed to at the beginning.", "We present properties of the last layers of sorting networks." ]
[ "Here, we see an order of magnitude improvement in solving times, bringing us closer to being able to solve the next open instance of the optimal depth problem (17 channels).", "While the paper presents detailed results on the end of sorting networks in the context of proving optimal depth of sorting networks, the necessary properties of the last layers can also be used to prove optimal size.", "We experimented on adding constraints similar to those in Section 5 for the last three comparators, as well as constraints encoding Corollary 12, to the SAT encoding presented in #OTHEREFR .", "Preliminary results based on uniform random sampling of more than 10% of the cases indicate that we can reduce the total computational time used in the proof that 25 comparators are optimal for 9 channels from 6.5 years to just over 1.5 years.", "On the 288-thread cluster originally used for that proof, this corresponds to reducing the actual execution time from over 8 days to just 2 days." ]
[ "sorting networks" ]
background
{ "title": "Sorting Networks: the End Game", "abstract": "Abstract. This paper studies properties of the back end of a sorting network and illustrates the utility of these in the search for networks of optimal size or depth. All previous works focus on properties of the front end of networks and on how to apply these to break symmetries in the search. The new properties help shed understanding on how sorting networks sort and speed-up solvers for both optimal size and depth by an order of magnitude." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1806.00305
1310.6271
1.
The set H^m_n with m = ⌊n/2⌋ is the set of networks with a fixed maximal first layer (denoted by G_n in #REFR ).
[ "This complete set includes the empty network (0, 0, 0, 0, 0) with no comparators, networks without any comparator in the second layer (0, 0, 0, 12), (0, 12, 12), and networks with the word 21c, i.e., a redundant comparator. Lemma 1. #OTHEREFR .", "Let C and C ′ be two-layer comparator networks on n channels.", "Then C ≈ C ′ if and only if word(C) = word(C ′ ).", "We denote by H m n the set of all possible n-channel two-layer network whose first layer has m comparators of the form (2i − 1, 2i), with 1 ≤ i ≤ m and 0 ≤ m ≤ n 2 .", "And by H n the union of the entire sequence." ]
[ "The set of representatives of the equivalence classes of H n and G n is denoted by R(H n ) and R(G n ) respectively.", "For a given n the set R(H n ) can be generated from all multi-sets of valid words with a total of n channels. Figure 7 shows the complete R(H 5 ) set.", "However, in the search for optimal networks we can remove prefixes with redundant comparators (word 12c), the empty network, and prefixes without any comparator in the second layer (with only the words 0 and 12 in its symbolic representation).", "We denote R(T n ) the resulting reduced set of prefixes." ]
[ "layer", "networks" ]
background
{ "title": "Joint Size and Depth Optimization of Sorting Networks", "abstract": "Sorting networks are oblivious sorting algorithms with many interesting theoretical properties and practical applications. One of the related classical challenges is the search of optimal networks respect to size (number of comparators) of depth (number of layers). However, up to our knowledge, the joint size-depth optimality of small sorting networks has not been addressed before. This paper presents size-depth optimality results for networks up to 12 channels. Our results show that there are sorting networks for n ≤ 9 inputs that are optimal in both size and depth, but this is not the case for 10 and 12 channels. For n = 10 inputs, we were able to proof that optimal-depth optimal sorting networks with 7 layers require 31 comparators while optimal-size networks with 29 comparators need 8 layers. For n = 11 inputs we show that networks with 8 or 9 layers require at least 35 comparators (the best known upper bound for the minimal size). And for networks with n = 12 inputs and 8 layers we need 40 comparators, while for 9 layers the best known size is 39." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1806.00305
1310.6271
Reflections
For comparison, we also include the cardinality of |R(G_n)|, of saturated prefixes |R(S_n)|, and of saturated prefixes without reflections |R(S′_n)|, of interest in the optimal-depth sorting problem #REFR . Theorem 1.
[ "Therefore, we can further reduce the number of prefixes in our complete set by removing those that are reflections of others.", "In the resulting symbolic representation of the complete set of two-layers prefixes, denoted by R(T ′ n ), we keep the lexicographically smallest of the two sentences word(C) and word(C R ).", "The symbolic representation word(C R ) can be obtained from word(C) swapping min-channels with maxchannels, and then selecting the lexicographically smallest representation for each type of word and for the whole sentence according to definitions 2 and 3.", "Figure 8 shows two equivalent two-layer prefixes up to reflection, and the corresponding sentence representation.", "Table 1 shows the cardinality of |R(H n )|, |R(T n )| and |R(T ′ n )| for n ≤ 26." ]
[ "For any n ≥ 3, the set R(T ′ n ) of two-layer comparator networks is a complete set of prefixes for the search of optimal sorting networks in size and depth." ]
[ "optimal-depth sorting problem" ]
background
{ "title": "Joint Size and Depth Optimization of Sorting Networks", "abstract": "Sorting networks are oblivious sorting algorithms with many interesting theoretical properties and practical applications. One of the related classical challenges is the search of optimal networks respect to size (number of comparators) of depth (number of layers). However, up to our knowledge, the joint size-depth optimality of small sorting networks has not been addressed before. This paper presents size-depth optimality results for networks up to 12 channels. Our results show that there are sorting networks for n ≤ 9 inputs that are optimal in both size and depth, but this is not the case for 10 and 12 channels. For n = 10 inputs, we were able to proof that optimal-depth optimal sorting networks with 7 layers require 31 comparators while optimal-size networks with 29 comparators need 8 layers. For n = 11 inputs we show that networks with 8 or 9 layers require at least 35 comparators (the best known upper bound for the minimal size). And for networks with n = 12 inputs and 8 layers we need 40 comparators, while for 9 layers the best known size is 39." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1410.2736
1310.6271
Introduction
However, the optimality of the known networks for 11 to 16 channels was only shown recently by Bundala and Závodný #REFR , who expressed the existence of a sorting network using fewer layers in propositional logic and used a SAT solver to show that the resulting formulae are unsatisfiable.
[ "The size of a comparator network in general can be measured by two different quantities: the total number of comparators involved in the network, or the number of layers the networks consists of.", "In both cases, finding optimal sorting networks (i.e., of minimal size) is a challenging task even when restricted to few inputs, which was attacked using different methods.", "For instance, Valsalam and Miikkulainen #OTHEREFR employed evolutionary algorithms to generate sorting networks with few comparators.", "Minimal depth sorting networks for up to 16 inputs were constructed by Shapiro (6 and 12 inputs) and Van Voorhis (10 and 16 inputs) in the 60's and 70's, and by Schwiebert (9 and 11 inputs) in 2001, who also made use of evolutionary techniques.", "For a presentation of these networks see Knuth [6, Fig.51 ]." ]
[ "Codish, Cruz-Filipe, and SchneiderKamp #OTHEREFR simplified parts of this approach and independently verified Bundala and Závodný's result.", "For more than 16 channels, not much is known about the minimal depths of sorting networks.", "Al-Haj Baddar and Batcher #OTHEREFR exhibit a network sorting 18 inputs using 11 layers, which also provides the best known upper bound on the minimal depth of a sorting network for 17 inputs.", "The lowest upper bound on the size of minimal depth sorting networks on 19 to 22 channels also stems from a network presented by Al-Haj Baddar and Batcher #OTHEREFR .", "For 23 and more inputs, the best upper bounds to date are established by merging the outputs of smaller sorting networks with Batcher's odd-even merge #OTHEREFR , which needs log n layers for this merging step." ]
[ "sorting network" ]
background
{ "title": "Faster Sorting Networks for $17$, $19$ and $20$ Inputs", "abstract": "Abstract. We present new parallel sorting networks for 17 to 20 inputs. For 17, 19, and 20 inputs these new networks are faster (i.e., they require less computation steps) than the previously known best networks. Therefore, we improve upon the known upper bounds for minimal depth sorting networks on 17, 19, and 20 channels. The networks were obtained using a combination of hand-crafted first layers and a SAT encoding of sorting networks." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1404.0948
1310.6271
I. INTRODUCTION
The approach in #REFR identified 212 two-layer network prefixes for n = 13; however, the calculation of this set required 32 minutes of computation, and this approach does not scale for larger values of n.
[ "Parberry's result was obtained by implementing an exhaustive search with pruning based on symmetries in the first two layers of the comparator networks, and executing the algorithm on a supercomputer (consuming 200 hours of low priority computation).", "In 2011, Morgenstern and Schneider #OTHEREFR applied SAT solvers to search for optimal depth sorting networks, and were able to reproduce the known results for n < 10 with an acceptable runtime, but still required 21 days of computation for n = 10, shredding any hope to achieve reasonable runtimes for n ≥ 11.", "Optimality for the cases 11 ≤ n ≤ 16 is shown by #OTHEREFR in #OTHEREFR , first by showing that n = 11 requires at least depth 8, and then by showing that n = 13 requires at least depth 9.", "Their results are obtained using a SAT solver, and are also based on identifying symmetries in the first two layers of the sorting networks.", "Both Parberry #OTHEREFR and then Bundala and Závodný #OTHEREFR consider the following question: what is the smallest set S of twolayer network prefixes that need be considered in the search for minimal depth sorting networks? In particular, such that, if no element of S can be extended to a sorting network of depth d, then no depth d sorting network exists." ]
[ "In this paper, we show how to generate the same set of 212 two-layer prefixes for n = 13 in \"under a second\" and, following ideas presented in #OTHEREFR , improve results such that only 117 relevant two-layer prefixes need to be considered. Our approach also scales well, i.e.", "we can compute the set of 34,486 relevant prefixes for n = 30 in \"under a minute\", and that of relevant prefixes for n = 40 in around two hours.", "Our main contribution here is to illustrate how focusing on concepts of regular languages, graph isomorphism, and symmetry breaking facilitates the efficient generation of all two-layer prefixes modulo isomorphism of the networks." ]
[ "212 two-layer network" ]
method
{ "title": "The Quest for Optimal Sorting Networks: Efficient Generation of Two-Layer Prefixes", "abstract": "Abstract-Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1404.0948
1310.6271
Definition 1.
Proof: Although this formulation is more general, the proof of case (1) is the same as the first case of the proof of Lemma 8 of #REFR , and the proof of cases (2a), (2b) and (2c) is the same as the second case of the same proof.
[ "A comparator network C is redundant if there exists a network C obtained from C by removing a comparator such that outputs(C ) = outputs(C).", "A network C is saturated if it is non-redundant and every network C obtained by adding a comparator to the last layer of C satisfies outputs(C ) ⊆ outputs(C).", "Parberry #OTHEREFR shows that the first layer of a minimal-depth sorting network on n channels can always be assumed to contain n 2 comparators.", "Also, any comparator network that contains the same comparator at consecutive layers is redundant. Theorem 1. Let C be a saturated two-layer network. Then C contains none of the following two-layer patterns." ]
[ "For case (3a), assume that C includes the given pattern and let the channels corresponding to those in the pattern be a, b, c and d.", "Add a comparator between channels b and d to obtain a network C that includes the following pattern.", "For case (3b) the construction is the same, and the thesis follows by comparing m a with m c .", "As it turns out, these are actually all of the patterns that make a comparator network with first layer F n non-saturated. We formalize this observation in the following theorem.", "Theorem 2." ]
[ "Lemma", "cases" ]
background
{ "title": "The Quest for Optimal Sorting Networks: Efficient Generation of Two-Layer Prefixes", "abstract": "Abstract-Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1404.0948
1310.6271
V. GRAPH REPRESENTATION
The results presented in #REFR involve a great deal of computational effort to identify permutations which render various two-layer networks equivalent.
[]
[ "Motivated by the existence of sophisticated tools in the context of graph isomorphism, we adopt a representation for comparator networks similar to the one defined by Choi and Moon #OTHEREFR . Let C be a comparator network on n channels.", "The graph representation of C is a directed and labeled graph, G(C) = (V, E) where each node in V corresponds to a comparator in C and E ⊆ V × {min, max} × V .", "Let c(v) denote the comparator corresponding to a node v.", "Then, (u, , v) ∈ E if comparator c(u) feeds into the comparator c(v) in C and the label ∈ {min, max} indicates if the channel from c(u) to c(v) is the min or the max output of c(u).", "Note that the number of channels cannot be inferred from the graph representation, as unused channels are not represented." ]
[ "various two-layer networks" ]
background
{ "title": "The Quest for Optimal Sorting Networks: Efficient Generation of Two-Layer Prefixes", "abstract": "Abstract-Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1404.0948
1310.6271
Theorem 4.
For this, we use an encoding to Boolean satisfiability (SAT) as described in #REFR , where for each network C in R_16 we generate a formula ϕ_C that is satisfiable if and only if there exists a sorting network of depth 8 extending C.
[ "Furthermore, it is possible to have distinct cycles whose reflections are equivalent (but not equal); this brings the number of relevant two-layer networks on 13 channels to 117.", "The last line in the table of Figure 4 above details the number |R n | of representatives modulo equivalence and reflection for each value of n ≤ 40.", "We can compute the set R 30 in less than one minute and R 40 in approximately two hours.", "Having computed R 16 , we can directly verify the known value 9 for the optimal depth of a 16-channel sorting network, obtained only indirectly in #OTHEREFR .", "This direct proof involves showing that none of the 211 two-layer comparator networks in R 16 extends to a sorting network of depth 8." ]
[ "Showing the unsatisfiability of these 211 SAT instances can be performed in parallel, with the hardest instance (a CNF with approx. 450,000 clauses) requiring approx.", "1800 seconds running on a single thread of a cluster of Intel Xeon E5-2620 nodes clocked at 2 GHz.", "However, this approach does not directly work for n = 17, where the best known upper bound is 11.", "Attempting to show that there is no sorting network of depth 10 requires analyzing the networks in R 17 .", "The resulting 609 formulas have more than five million clauses each, and none could be solved within a couple of weeks." ]
[ "sorting network" ]
method
{ "title": "The Quest for Optimal Sorting Networks: Efficient Generation of Two-Layer Prefixes", "abstract": "Abstract-Previous work identifying depth-optimal n-channel sorting networks for 9 ≤ n ≤ 16 is based on exploiting symmetries of the first two layers. However, the naive generate-and-test approach typically applied does not scale. This paper revisits the problem of generating two-layer prefixes modulo symmetries. An improved notion of symmetry is provided and a novel technique based on regular languages and graph isomorphism is shown to generate the set of non-symmetric representations. An empirical evaluation demonstrates that the new method outperforms the generate-and-test approach by orders of magnitude and easily scales until n = 40." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1507.01428
1310.6271
SAT Encoding for Depth-Restricted Sorting Networks
Bundala and Závodný #REFR continued this approach, and introduced a better SAT encoding that was able both to find sorting networks of optimal depth with up to 13 channels and to prove their optimality, implying the optimal depth of the best known networks with up to 16 channels.
[ "A first approach to encode sorting networks as formulae in propositional logic was suggested by Morgenstern and Schneider #OTHEREFR .", "However, their encoding to SAT did not prove sufficient to find new results concerning optimal-depth networks, as it did not scale for n > 10." ]
[ "This was the first SAT formulation that led to new results on optimaldepth sorting networks. Bundala et al.", "#OTHEREFR further improve this encoding to establish optimal depth for networks with n ≤ 16 channels directly.", "In this paper, we obtain results applying further extensions and optimizations of the SAT encoding presented in #OTHEREFR , hence for sake of completeness, we recall it here.", "A comparator network of depth d on n channels is represented by a set of Boolean variables", "Here, once encodes the fact that each channel may be used only once in one layer, and valid enforces this constraint for each channel and each layer." ]
[ "sorting networks" ]
background
{ "title": "Sorting Networks: to the End and Back Again", "abstract": "This paper studies new properties of the front and back ends of a sorting network, and illustrates the utility of these in the search for new bounds on optimal sorting networks. Search focuses first on the \"outsides\" of the network and then on the inner part. All previous works focus only on properties of the front end of networks and on how to apply these to break symmetries in the search. The new, out-side-in, properties help shed understanding on how sorting networks sort, and facilitate the computation of new bounds on optimal sorting networks. We present new parallel sorting networks for 17 to 20 inputs. For 17, 19, and 20 inputs these networks are faster than the previously known best networks. For 17 inputs, the new sorting network is shown optimal in the sense that no sorting network using less layers exists." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1501.06946
1310.6271
Improved Techniques
We will stick to the formulation by Bundala and Závodný #REFR , and introduce new variables if necessary.
[ "In this section we introduce the new techniques and improvements on existing techniques we used to gain our results." ]
[ "Furthermore, we will extend a technique introduced in their paper, called subnetwork optimization.", "It is based on the fact that a sorting network must sort all its inputs, but in order to prove non-existence of sorting networks of a certain depth, it is often sufficient to consider only a subset of all possible inputs, which are not all sorted by any network of this restricted depth.", "Bundala and Závodný chose subsets of the form T r = 0 a x1 b | |x| = r and a + b + |x| = n for r < n, which are inputs having a window of size r.", "For an input 0 a x1 b from this set the values on the first a channels at any point in the network will always be 0, and those on the last b channels will always be 1, which significantly reduces the encoding size for these inputs if a and b are sufficiently large." ]
[ "Závodný", "new variables" ]
background
{ "title": "New Bounds on Optimal Sorting Networks", "abstract": "Abstract. We present new parallel sorting networks for 17 to 20 inputs. For 17, 19, and 20 inputs these new networks are faster (i.e., they require less computation steps) than the previously known best networks. Therefore, we improve upon the known upper bounds for minimal depth sorting networks on 17, 19, and 20 channels. Furthermore, we show that our sorting network for 17 inputs is optimal in the sense that no sorting network using less layers exists." }
{ "title": "Optimal Sorting Networks", "abstract": "Abstract. This paper settles the optimality of sorting networks given in The Art of Computer Programming vol. 3 more than 40 years ago. The book lists efficient sorting networks with n ≤ 16 inputs. In this paper we give general combinatorial arguments showing that if a sorting network with a given depth exists then there exists one with a special form. We then construct propositional formulas whose satisfiability is necessary for the existence of such a network. Using a SAT solver we conclude that the listed networks have optimal depth. For n ≤ 10 inputs where optimality was known previously, our algorithm is four orders of magnitude faster than those in prior work." }
1704.02054
0810.4182
B. Background and Related Work
In #REFR this was somewhat improved to yield n(log n)^r time, but it still requires r = o(log n / log log n) for queries to be sub-linear.
[ "However one would need to find a way to derandomize the randomized clustering step used in their approach.", "There is of course also a literature of deterministic and Las Vegas data structures not using LSH.", "As a baseline, we note that the \"brute force\" algorithm that stores every data point in a hash table, and given a query, q ∈ {0,", ") point of Hamming distance most r.", "This of course requires r log(d/r) < log n to be sublinear, and for a typical example of d = (log n) 2 and r = d/10 it won't be practical." ]
[ "We can also imagine storing the nearest neighbour for every point in {0, 1}", "d .", "Such an approach would give fast (constant time) queries, but the space required would be exponential in r.", "In Euclidean space ( 2 metric) the classical K-d tree algorithm #OTHEREFR is of course deterministic, but it has query time n 1−1/d , so we need d = O(1) for it to be strongly sub-linear.", "Allowing approximation, but still deterministically, #OTHEREFR ] found a (d/(c−1)) d algorithm for a c > 1 approximation." ]
[ "queries" ]
background
{ "title": "Optimal Las Vegas Locality Sensitive Data Structures", "abstract": "Abstract-We show that approximate similarity (near neighbour) search can be solved in high dimensions with performance matching state of the art (data independent) Locality Sensitive Hashing, but with a guarantee of no false negatives. Specifically we give two data structures for common problems. For c-approximate near neighbour in Hamming space we get query time dn 1/c+o(1) and space dn 1+1/c+o(1) matching that of [Indyk and Motwani, 1998 ] and answering a long standing open question from [Indyk, 2000a] and , when sets have equal size, matching the performance of . The algorithms are based on space partitions, as with classic LSH, but we construct these using a combination of brute force, tensoring and splitter functionsà la [Naor et al., 1995] . We also show two dimensionality reduction lemmas with 1-sided error." }
{ "title": "Bucketing Coding and Information Theory for the Statistical High Dimensional Nearest Neighbor Problem", "abstract": "Consider the problem of finding high dimensional approximate nearest neighbors, where the data is generated by some known probabilistic model. We will investigate a large natural class of algorithms which we call bucketing codes. We will define bucketing information, prove that it bounds the performance of all bucketing codes, and that the bucketing information bound can be asymptotically attained by randomly constructed bucketing codes. For example suppose we have n Bernoulli(1/2) very long (length d → ∞) sequences of bits. Let n − 2m sequences be completely independent, while the remaining 2m sequences are composed of m independent pairs. The interdependence within each pair is that their bits agree with probability 1/2 < p ≤ 1. It is well known how to find most pairs with high probability by performing order of n log 2 2/p comparisons. We will see that order of n 1/p+ǫ comparisons suffice, for any ǫ > 0. Moreover if one sequence out of each pair belongs to a a known set of n (2p−1) 2 −ǫ sequences, than pairing can be done using order n comparisons!" }
1611.02238
quant-ph/9605043
Search with Grover's Oracle
In Grover's algorithm #REFR , the system evolves by repeatedly applying two reflections: The first reflects the state through the marked vertex, and the second reflects across the initial uniform state.
[]
[ "This first reflection acts as an oracle query Q, and it negates the amplitude at the marked vertex. That is,", "Motivated by this, Santos #OTHEREFR defined Grover-type oracles in Szegedy's scheme. In the bipartite double cover (c.f., Fig.", "2c ), there are marked vertices in each partite set X and Y .", "So we get two Grover-type oracles, one for each set:", "Q 1 flips the sign of an edge if its incident to a marked vertex in X, and Q 2 acts similarly, except it flips the sign of an edge if its incident to a marked vertex in Y ." ]
[ "Grover's algorithm" ]
method
{ "title": "Equivalence of Szegedy's and Coined Quantum Walks", "abstract": "Abstract Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk-step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk-steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1812.10797
quant-ph/9605043
Results
A circuit-based quantum algorithm was first designed by Grover, which shows a quadratic quantum speedup over classical computing #REFR . In adiabatic quantum computing, the Hamiltonians in Eq.
[ "Results from adiabatic algorithms using a linear and a tailored nonlinear path #OTHEREFR are shown for comparison.", "The total adiabatic time are chosen to be T = 16, 24, 55, 117, 242, 493 for qubit number n = 1, 2, 4, 6, 8, 10, respectively, following the √ N = √ 2 n scaling.", "The machinery adiabatic algorithm designed by RL shows significant improvement over the linear algorithm, and reveals the same computation-complexity scaling as the nonlinear algorithm.", "input to a black-box function that produces a particular output value.", "This classical problem can be encoded as searching in the Hilbert space of n = log 2 N qubits for a target quantum state. These qubits are labeled by q in the following." ]
[ "(1) for Grover search are H B = 1 − |ψ 0 ψ 0 |, and H P = 1 − |m m|, where |m is a product state in Pauli-Z basis that encodes the search target, and |ψ 0 is a product state in the Pauli-X basis with all n eigenvalues equal to 1.", "The symbols X, Y , and Z refer to Pauli matrices in this work.", "A linear function choice of s(t/T ) (b = 0 in our notation), does not exhibit the quadratic speedup. It was later pointed out in Ref.", "24 that the quantum speedup is reached with a tailored nonlinear path choice of s(t/T ).", "In the Grover search problem, different problem instances correspond to different choices for the |m states, which are all connected to each other by a unitary transformation ⊗ {q} X {q} for a subset of qubits {q}, which keeps H B invariant." ]
[ "adiabatic quantum computing", "circuit-based quantum algorithm" ]
method
{ "title": "Reinforcement-learning-based architecture for automated quantum adiabatic algorithm design", "abstract": "Quantum algorithm design lies in the hallmark of applications of quantum computation and quantum simulation. Here we put forward a deep reinforcement learning (RL) architecture for automated algorithm design in the framework of quantum adiabatic algorithm, where the optimal Hamiltonian path to reach a quantum ground state that encodes a computation problem is obtained by RL techniques. We benchmark our approach in Grover search and 3-SAT problems, and find that the adiabatic algorithm obtained by our RL approach leads to significant improvement in the success probability and computing speedups for both moderate and large number of qubits compared to conventional algorithms. The RLdesigned algorithm is found to be qualitatively distinct from the linear algorithm in the resultant distribution of success probability. Considering the established complexity-equivalence of circuit and adiabatic quantum algorithms, we expect the RL-designed adiabatic algorithm to inspire novel circuit algorithms as well. Our approach offers a recipe to design quantum algorithms for generic problems through a machinery RL process, which paves a novel way to automated quantum algorithm design using artificial intelligence, potentially applicable to different quantum simulation and computation platforms from trapped ions and optical lattices to superconducting-qubit devices. Quantum simulation and quantum computing have received enormous efforts in the last two decades owing to their advantageous computational power over classical machines [1] [2] [3] [4] . In the development of quantum computing, quantum algorithms with exponential speedups have long been providing driving forces for the field to advance, with the best known example from factorizing a large composite integer [5] . In applications of quantum advantage to generic computational problems, quantum algorithm design plays a central role. In recent years, both threads of gate-based [6] and adiabatic annealing models [7, 8] In adiabatic quantum computing, the Hamiltonian can be written as a time-dependent combination of initial and final Hamiltonians, H B and H P [7, 8] , as with the computational problem encoded in the ground state of H P . Under this framework, the quantum algorithm design corresponds to the optimization of the Hamiltonian path or more explicitly the time sequence of s(t). Different choices for the path could lead to algorithms having dramatically different performance and even in the complexity scaling. For example in Grover search, a linear function of s(t/T ) leads to an algorithm with a linear complexity scaling to the search space dimension (N), whereas a nonlinear choice could reduce the complexity to √ N [24] . This implies an approach of automated quantum adiabatic algorithm design through searching for an optimal Hamiltonian path, which may lead to a generic approach of automated algorithm design given the established complexity equivalence between gate-based and adiabatic models [21] [22] [23] . The automated quantum algorithm design that is adaptable to moderatequbit-numbers is particularly in current-demand considering near term applications of noisy intermediate size quantum devices [25] . [ ] Reward Learning Agent AQC FIG. 1. Schematic illustration of the reinforcement learning (RL) approach for adiabatic quantum algorithm design. 
The RL agent takes the negative of the final quantum state energy of the adiabatic quantum computer (AQC) as a reward. The agent produces an action of adiabatic-path-update of s(t) to optimize the reward based on its Q-table represented by a neural network. Here, we propose a deep reinforcement learning (RL)" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/9701001
quant-ph/9605043
2
However, quantum machines can recognize this language quadratically faster, in time O(√(2^n)), using Grover's algorithm #REFR .
[ "It is easy to conclude that M decides membership in L A with probability 0 for a uniformly chosen oracle A.", "2", "Note: Theorem 3.3 and its Corollary 3.4 isolate the constraints on \"quantum parallelism\" imposed by unitary evolution.", "The rest of the proof of the above theorem is similar in spirit to standard techniques used to separate BPP from NP relative to a random oracle #OTHEREFR .", "For example, these techniques can be used to show that, relative to a random oracle A, no classical probabilistic machine can recognize L A in time o(2 n )." ]
[ "This explains why a substantial modification of the standard technique was required to prove the above theorem.", "The next result about NP ∩ co-NP relative to a random permutation oracle requires a more subtle argument; ideally we would like to apply Theorem 3.3 after asserting that the total query magnitude with which A −1 (1 n ) is probed is small.", "However, this is precisely what we are trying to prove in the first place.", "Theorem 3.6 For any T (n) which is o(2 n/3 ), relative to a random permutation oracle, with probability 1, BQTime(T (n)) does not contain NP ∩ co-NP.", "Proof." ]
[ "quantum machines" ]
method
{ "title": "Strengths and Weaknesses of Quantum Computing", "abstract": "Recently a great deal of attention has focused on quantum computation following a sequence of results [4, 16, 15] suggesting that quantum computers are more powerful than classical probabilistic computers. Following Shor's result that factoring and the extraction of discrete logarithms are both solvable in quantum polynomial time, it is natural to ask whether all of NP can be efficiently solved in quantum polynomial time. In this paper, we address this question by proving that relative to an oracle chosen uniformly at random, with probability 1, the class NP cannot be solved on a quantum Turing machine in time o(2 n/2 ). We also show that relative to a permutation oracle chosen uniformly at random, with probability 1, the class NP ∩ co-NP cannot be solved on a quantum Turing machine in time o(2 n/3 ). The former bound is tight since recent work of Grover [13] shows how to accept the class NP relative to any oracle on a quantum computer in time O(2 n/2 )." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1603.02246
quant-ph/9605043
Time symmetric and relational representations
The transformation ℑ_A , applied to register A, is the so-called inversion about the mean #REFR : a rotation of the basis of A makes the information acquired with function evaluation accessible to measurement.
[ "into the output state", "where register A contains the solution of the problem -namely the number of the drawer with the ball 01. H A is the Hadamard transform on register A. It transforms |00 A into", ".", "U f is function evaluation, thus performed in quantum parallelism #OTHEREFR for all the possible values of a 3 . It leaves the state of register V ,", ", unaltered when a = 01 and thus δ = 0; it changes it into − 1 √ 2 (|0 V − |1 V ) when a = 01 and δ = 1 (modulo 2 addition of 1 changes |0 V into |1 V and vice-versa)." ]
[ "We do not need to go into further detail: all we need to know of the quantum algorithm is already there.", "Eventually Alice acquires the solution by measuring the content of A, namely the observable of eigenstates the basis vectors of register A and eigenvalues (correspondingly) 00, 01, 10, 11.", "Now we extend the representation of the quantum algorithm to the process of choosing the number of the drawer with the ball b.", "We need to add a possibly imaginary register B that contains b.", "This register, under the control of Bob, has basis vectors |00 B , |01 B , |10 B , |11 B ." ]
[ "information" ]
background
{ "title": "Completing the physical representation of quantum algorithms provides a retrocausal explanation of their speedup", "abstract": "In previous works, we showed that an optimal quantum algorithm can always be seen as a sum over classical histories in each of which the problem solver knows in advance one of the possible halves of the solution she will read in the future and performs the computation steps (oracle queries) still needed to reach it. Given an oracle problem, this retrocausal explanation of the speedup yields the order of magnitude of the number of oracle queries needed to solve it in an optimal quantum way. Presently, we provide a fundamental justification for the explanation in question and show that it comes out by just completing the physical representation of quantum algorithms. Since the use of retrocausality in quantum mechanics is controversial, showing that it answers the well accepted requirement of the completeness of the physical description should be an important pass. The quantum computational speedup is the fact that quantum algorithms solve the respective problems with fewer computation steps (oracle queries) than their best classical counterparts, sometimes demonstrably fewer than classically possible. A paradigmatic example is the simplest instance of the quantum algorithm devised by Grover [1] . Bob, the problem setter, hides a ball in one of four drawers. Alice, the problem solver, is to locate it by opening drawers. In the classical case, Alice has to open up to three drawers, always one in the quantum case (the problem is an example of oracle problem and the operation of checking whether the ball is in a drawer is an example of oracle query). Deutsch [2] commented his 1985 discovery of the seminal quantum speedup, of course allowed by quantum superposition and interference, with the state-1" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
0807.4732
quant-ph/9605043
Showing Hidden States
To show the hidden states with no prior knowledge of the oracle used, the operator G used in the original Grover's algorithm #REFR to perform the usual inversion about the mean will be used once.
[]
[ "The diagonal representation of G on 2-qubit system can take this form,", "where the vector |0 used in Eqn.", "9 is of length 4, and I 2 is the identity matrix of size 4 × 4. Consider a general system |ψ of 2-qubit register:", "The effect of applying G on |ψ gives,", "where, α = 1 4 3 j=0 α j is the mean of the amplitudes of the states in the superposition, i.e." ]
[ "hidden states", "original Grover's algorithm" ]
method
{ "title": "Hiding Quantum States in a Superposition", "abstract": "A method to hide certain quantum states in a superposition will be proposed. Such method can be used to increase the security of a communication channel. States represent an encrypted message will disappear during data exchange. This makes the message 100% safe under direct measurement by an eavesdropper. No entanglement sharing is required among the communicating parties." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1801.02771
quant-ph/9605043
Coherent-state Protocol
In the idealized setting of quantum communication complexity, a protocol that achieves the quadratic quantum advantage, up to logarithmic terms, for appointment scheduling is that of [3] , essentially performing a distributed version of Grover search #REFR [15] .
[]
[ "Alice performs the \"inversion about the mean\" Grover iterations to find an intersecting date of availability, and she collaborates with Bob in order to implement the Grover \"oracle calls\".", "For an n-date calendar, obtaining the full quadratic quantum advantage requiresΘ( √ n) rounds of communication, while an improvement toΘ( n r ) communication and information leakage requires r-round protocols #OTHEREFR , for r ≤ √ n. In Ref.", "#OTHEREFR a general mapping is proposed from any pure state quantum protocol to an analogous coherent state protocol (reviewed in Appendix D).", "In Appendix F we implement this mapping for the distributed Grover's search protocol to obtain essentially a quadratic quantum advantage in terms of information leakage.", "Our implementation finds an efficient way to perform the distributed oracle calls for such a protocol." ]
[ "quantum communication complexity", "Grover search" ]
background
{ "title": "Practical Quantum Appointment Scheduling", "abstract": "We propose a protocol based on coherent states and linear optics operations for solving the appointmentscheduling problem. Our main protocol leaks strictly less information about each party's input than the optimal classical protocol, even when considering experimental errors. Along with the ability to generate constant-amplitude coherent states over two modes, this protocol requires the ability to transfer these modes back-and-forth between the two parties multiple times with low coupling loss. The implementation requirements are thus still challenging. Along the way, we develop new tools to study quantum information cost of interactive protocols in the finite regime." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1904.06525
quant-ph/9605043
Introduction
Nevertheless, Grover's algorithm #REFR would allow quantum computers a quadratic speedup in brute force attacks.
[ "Public key cryptography is a cornerstone of internet security.", "Quantum computers possess a threat on the widely deployed public-key cryptography schemes, whose security is based on the computational complexity of certain tasks, such as integer factorization and discrete logarithm.", "Shor's quantum algorithm #OTHEREFR and variational quantum factoring #OTHEREFR would allow one to solve these tasks with a significant boost #OTHEREFR .", "Quantum computers have less of an effect on symmetric cryptography since Shor's algorithm does not apply for their cryptoanalysis." ]
[ "Thus, the current goal is to develop cryptographic systems, that are secure against both classical and quantum attacks, before large-scale quantum computers arrive.", "Fortunately, not all public key cryptosystems are vulnerable to attacks with quantum computers #OTHEREFR .", "Several cryptosystems, that strive to remain secure under the assumption that the attacker has a large-scale quantum computer, have been suggested #OTHEREFR .", "These schemes are in the scope of so-called post-quantum cryptography.", "Existing proposals for post-quantum cryptography include codebased and lattice-based schemes for encryption and digital signatures as well as signature schemes based on hash-functions." ]
[ "quantum computers" ]
background
{ "title": "SPHINCS$^+$ digital signature scheme with GOST hash functions", "abstract": "Abstract. Many commonly used public key cryptosystems will become insecure once a scalable quantum computer is built. New cryptographic schemes that can guarantee protection against attacks with quantum computers, so-called post-quantum algorithms, have emerged in recent decades. One of the most promising candidates for a post-quantum signature scheme is SPHINCS + , which is based on cryptographic hash functions. In this contribution, we analyze the use of the new Russian standardized hash function, known as Streebog, for the implementation of the SPHINCS + signature scheme. We provide a performance comparison with SHA-256-based instantiation and give benchmarks for various sets of parameters." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/9810093
quant-ph/9605043
High symmetric partially entangled states
This matter is discussed in §5 and Appendix A.) The other is Grover's inversion about average operation D #REFR .
[ "One of them is a selective phase shift transformation in certain basis vectors.", "It is given by the 2 n × 2 n diagonal matrix form,", "where subscripts x, y represent the basis vectors {|x |x ∈ {0, 1} n } and 0 ≤ θ x < 2π for ∀x.", "(Although a general phase shift transformation in the form of Eq.", "(3) takes a number of elementary gates exponential in n at most, we use only special transformations that need polynomial steps." ]
[ "The 2 n × 2 n matrix representation of D is given by", "Because we use only unitary transformations and never measure any qubits, we can regard our procedure for building |ψ n as a succession of unitary transformations.", "For simplicity, we consider a chain of transformations reversely to be a transformation from |ψ n to the uniform superposition instead of it from the uniform superposition to |ψ n .", "Fortunately, an inverse operation of the selective phase shift on certain basis vectors is also the phase shift, and an inverse operation of D defined in Eq. (4) is also D.", "In the rest of this paper, because of simplicity, we describe the procedure reversely from |ψ n to the uniform superposition." ]
[ "Grover's inversion" ]
background
{ "title": "Building partially entangled states with Grover ’ s amplitude amplification process", "abstract": "We discuss how to build some partially entangled states of n two-state quantum systems (qubits). The optimal partially entangled state with a high degree of symmetry is considered to be useful for overcoming a shot noise limit of Ramsey spectroscopy under some decoherence. This state is invariant under permutation of any two qubits and inversion between the ground state |0 and an excited state |1 for each qubit. We show that using selective phase shifts in certain basis vectors and Grover's inversion about average operations, we can construct this high symmetric entangled state by (polynomial in n) × 2 n/2 successive unitary transformations that are applied on two or three qubits. We can apply our method to build more general entangled states." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1201.6174
quant-ph/9605043
Proposition 3.1. There exists a quantum algorithm running in time Õ(
We then search for one marked vertex in S, using Grover's algorithm #REFR with check-connection(u, ·), in Õ(√n) time.
[ "Proof.", "We will say that a vertex i ∈ I is marked if A[i, k] = 1, and that a vertex j ∈ J is marked if B[k, j] = 1.", "Our goal is thus to find a pair (i, j) ∈ E of marked vertices. The algorithm is as follows.", "We first use the minimum finding quantum algorithm from #OTHEREFR to find the marked vertex u of largest degree in I, inÕ( √ n) time using get-degree(·) to obtain the order of a vertex from the data structure M .", "Let d denote the degree of u, let I ′ denote the set of vertices in I with degree at most d, and let S denote the set of vertices in J connected to u." ]
[ "If we find one, then this gives us a k-collision and we end the algorithm. Otherwise we proceed as follows.", "Note that, since each vertex in I ′ has at most d neighbors, by considering the number of missing edges we obtain:", "Also note that |J\\S| = n − d.", "We do a quantum search on I ′ × (J\\S) to find one pair of connected marked vertices in timeÕ(", ", using get-vert I (·, d) to access the vertices in I ′ and get-vert J (·, u) to access the vertices in J\\S." ]
[ "Grover's algorithm" ]
method
{ "title": "A Time-Efficient Output-Sensitive Quantum Algorithm for Boolean Matrix Multiplication", "abstract": "This paper presents a quantum algorithm that computes the product of two n × n Boolean matrices inÕ(n √ ℓ+ℓ √ n) time, where ℓ is the number of non-zero entries in the product. This improves the previous output-sensitive quantum algorithms for Boolean matrix multiplication in the time complexity setting by Buhrman andŠpalek (SODA'06) and Le Gall (SODA'12). We also show that our approach cannot be further improved unless a breakthrough is made: we prove that any significant improvement would imply the existence of an algorithm based on quantum search that multiplies two n × n Boolean matrices in O(n 5/2−ε ) time, for some constant ε > 0." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1507.01988
quant-ph/9605043
Introduction
Another, equally surprising discovery was made in 1996 by Lov Grover #REFR , who designed a quantum algorithm that solves a general exhaustive search problem with N possible solutions in time O(√N).
[ "Quantum computing combines quantum physics and computer science, by studying computational models based on quantum physics (which is substantially different from conventional physics) and building quantum devices which implement those models.", "If a quantum computer is built, it will be able to solve certain computational problems much faster than conventional computers.", "The best known examples of such problems are factoring and discrete logarithm.", "These two number theoretic problems are thought to be very difficult for conventional computers but can be solved efficiently (in polynomial time) on a quantum computer #OTHEREFR .", "Since several widely used cryptosystems (such as RSA and Diffie-Hellman) are based on the difficulty of factoring or discrete logarithm, a quantum computer would be able to break those cryptosystems, shaking up the foundations of cryptography." ]
[ "This provides a quadratic speedup for a range of search problems, from problems that are solvable in polynomial time classically to NPcomplete problems.", "Many other quantum algorithms have been discovered since then.", "(More information about them can be found in surveys #OTHEREFR and the \"Quantum Algorithm Zoo\" website #OTHEREFR .)", "Given that finite automata are one of the most basic models of computation, it is natural to study them in the quantum setting.", "Soon after the discovery of Shor's factoring algorithm #OTHEREFR , the first models of quantum finite automata (QFAs) appeared #OTHEREFR ." ]
[ "quantum" ]
background
{ "title": "Automata and Quantum Computing", "abstract": "Abstract. Quantum computing is a new model of computation, based on quantum physics. Quantum computers can be exponentially faster than conventional computers for problems such as factoring. Besides full-scale quantum computers, more restricted models such as quantum versions of finite automata have been studied. In this paper, we survey various models of quantum finite automata and their properties. We also provide some open questions and new directions for researchers." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1809.00643
quant-ph/9605043
Lower bound on number of SEP queries for OPT (given an interior point)
It is well known [BBBV97] that if we allow quantum queries, then Ω(√n) queries are needed (i.e., Grover's quantum search algorithm #REFR is optimal).
[ "We now consider lower bounding the number of quantum queries to a separation oracle needed to do optimization.", "In fact, we prove a lower bound on the number of separation queries needed for validity, which implies the same bound on optimization.", "We will use a reduction from a version 12 of the well-studied search problem:", "Given z ∈ {0, 1} n such that either |z| = 0 or |z| = 1, decide which of the two holds.", "It is not hard to see that if the access to z is given via classical queries, then Ω(n) queries are needed." ]
[ "We use this problem to show that there exist convex sets for which it is hard to construct a weak validity oracle, given a strong separation oracle.", "Since a separation oracle can be used as a membership oracle, this gives the same hardness result for constructing a weak validity oracle from a strong membership oracle.", "Theorem 25. Let 0 < ρ ≤ 1/3.", "Let A be an algorithm that can implement a VAL (4n) −1 ,ρ (K) oracle for every convex set K (with B(x 0 , r) ⊆ K ⊆ B(x 0 , R)) using only queries to a SEP 0,0 (K) oracle, and unitaries that are independent of K.", "Then the following statements are true, even when we restrict to convex sets K with r = 1/3 and R = 2 √ n:" ]
[ "quantum queries" ]
background
{ "title": "Convex optimization using quantum oracles", "abstract": "We study to what extent quantum algorithms can speed up solving convex optimization problems. Following the classical literature we assume access to a convex set via various oracles, and we examine the efficiency of reductions between the different oracles. In particular, we show how a separation oracle can be implemented using O(1) quantum queries to a membership oracle, which is an exponential quantum speed-up over the Ω(n) membership queries that are needed classically. We show that a quantum computer can very efficiently compute an approximate subgradient of a convex Lipschitz function. Combining this with a simplification of recent classical work of Lee, Sidford, and Vempala gives our efficient separation oracle. This in turn implies, via a known algorithm, that O(n) quantum queries to a membership oracle suffice to implement an optimization oracle (the best known classical upper bound on the number of membership queries is quadratic). We also prove several lower bounds: Ω( √ n) quantum separation (or membership) queries are needed for optimization if the algorithm knows an interior point of the convex set, and Ω(n) quantum separation queries are needed if it does not. *" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0312022
quant-ph/9605043
Introduction
Grover #REFR presented an algorithm for searching an unstructured list of N items with quadratic speed-up over algorithms run on classical computers.
[ "Quantum computers #OTHEREFR are probabilistic devices, which promise to do some types of computation more powerfully than classical computers #OTHEREFR .", "Many quantum algorithms have been presented recently, for example, Shor #OTHEREFR presented a quantum algorithm for factorising a composite integer into its prime factors in polynomial time." ]
[ "Grover's algorithm inspired many researchers, including this work, to try to analyze and/or generalize his algorithm #OTHEREFR ].", "Grover's algorithm perfomance is near to optimum for a single match within the search space, although the number of iterations required by the algorithm increases; i.e.", "the problem becomes harder, as the number of matches exceeds half the number of items in the search space #OTHEREFR which is undesired behaviour for a search algorithm since the problem is expected to be easier for multiple matches.", "In this paper, using a partial diffusion operation, we will show a quantum algorithm, which can find a match among multiple matches within the search space after one iteration with probability at least 90% if the number of matches is more than one-third of the search space.", "For fewer matches the algorithm runs in quadratic speed up similar to Grover's algorithm with more reliable behaviour, as we will see." ]
[ "classical computers" ]
background
{ "title": "Quantum search algorithm with more reliable . . .", "abstract": "In this paper, we will use a quantum operator which performs the inversion about the mean operation only on a subspace of the system (Partial Diffusion Operator) to propose a quantum search algorithm runs in O( N/M ) for searching unstructured list of size N with M matches such that, 1 ≤ M ≤ N . We will show that the performance of the algorithm is more reliable than known quantum search algorithms especially for multiple matches within the search space. A performance comparison with Grover's algorithm will be provided." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0004091
quant-ph/9605043
Farhi & Gutmann's Hamiltonian
We assume |σ⟩ is easy to prepare, so for example, |σ⟩ may be |ψ⟩ of equation #REFR .
[ "We are given n, N , f and w as above.", "Farhi & Gutmann #OTHEREFR describe a physical, analog way to do quantum search by first assuming that a Hamiltonian", "is available that distinguishes the target state |w from all others by giving it some positive energy E (the other basis states have energy 0).", "Let |σ be some arbitrary unit vector in the Hilbert space (the \"start\" state)." ]
[ "The goal is to evolve from |σ into |w .", "To search for the state |w , we are allowed to add some \"driver\" Hamiltonian H D to H w , provided that H D does not depend on the actual value of w at all.", "They choose H D = E |σ σ|, so their Hamiltonian is", "where E is some arbitrary positive value in units of energy.", "If |σ and |w are not orthogonal, then we can assume as before that σ|w = w|σ = x for some x > 0 by adjusting |σ by an appropriate phase factor." ]
[ "|σ" ]
background
{ "title": "An intuitive Hamiltonian for quantum search", "abstract": "We present new intuition behind Grover's quantum search algorithm by means of a Hamiltonian. Given a black-box Boolean function f : {0, 1} n → {0, 1} such that f (w) = 1 for exactly one w ∈ {0, 1} n , Grover [4] describes a quantum algorithm that finds w in O(2 n/2 ) time. Farhi & Gutmann [3] show that w can also be found in the same amount time by letting the quantum system evolve according to a simple Hamiltonian depending only on f . Their system evolves along a path far from that taken by Grover's original algorithm, however. The current paper presents an equally simple Hamiltonian matching Grover's algorithm step for step. The new Hamiltonian is similar in appearance from that of Farhi & Gutmann, but has some important differences, and provides new intuition for Grover's algorithm itself. This intuition both contrasts with and supplements other explanations of Grover's algorithm as a rotation in two dimensions, and suggests that the Hamiltonian-based approach to quantum algorithms can provide a useful heuristic for discovering new quantum algorithms." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0504012
quant-ph/9605043
Introduction
In this column, we survey some of the results on quantum algorithms, focusing on the branch of quantum algorithms inspired by Grover's search algorithm #REFR .
[ "Shor's and Grover's algorithms have been followed by a lot of other results.", "Each of these two algorithms has been generalized and applied to several other problems. New algorithms 1 c A. Ambainis, 2004 .", "#OTHEREFR School of Mathematics, Institute for Advanced Study, Princeton, NJ 08540, USA. ambainis@ias.edu. Supported by NSF Grant DMS-0111298.", "Any opinions, findings and conclusions expressed in this material are those of the author and do not necessarily reflect views of the National Science Foundation.", "and new algorithmic paradigms (such as adiabatic computing #OTHEREFR which is the quantum counterpart of simulated annealing) have been discovered." ]
[ "Instead of the conventional introduction/review on quantum computing which starts with the backgrounds from physics, we follow a different path.", "We first describe Grover's search result and its generalization, amplitude amplification (section 2).", "Then, we explore what can be obtained by using these results as \"quantum black boxes\" in a combination with methods from conventional (non-quantum) algorithms and complexity (section 3).", "We give three examples of quantum algorithms of this type, one very simple and two more advanced ones.", "After that, in section 4, we show some examples were simple application of Grover's search fails but more advanced quantum algorithms (based on quantum walks) succeed." ]
[ "quantum algorithms" ]
background
{ "title": "News Complexity Theory Column 44", "abstract": "We review some of quantum algorithms for search problems: Grover's search algorithm, its generalization to amplitude amplification, the applications of amplitude amplification to various problems and the recent quantum algorithms based on quantum walks." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0504012
quant-ph/9605043
Grover's search and amplitude amplification
His result is Theorem 2.1 #REFR : Search can be solved with O(√N) quantum queries.
[ ", x N ∈ {0, 1} specified by a black box that answers queries.", "In a query, we input i to the black box and it outputs x i .", "Our task 3 is to output an i : x i = 1.", "Then, N queries are needed for deterministic algorithms and Ω(N ) queries are needed for probabilistic algorithms.", "(This follows by considering the case when there is exactly one i such that x i = 1 and N − 1 variables i:x i = 0.) Grover #OTHEREFR studied the quantum version of this problem (in which the black box is quantum, the input to the black box is a quantum state consisting of various i and the output is the input state modified depending on x i )." ]
[]
[ "quantum queries" ]
background
{ "title": "News Complexity Theory Column 44", "abstract": "We review some of quantum algorithms for search problems: Grover's search algorithm, its generalization to amplitude amplification, the applications of amplitude amplification to various problems and the recent quantum algorithms based on quantum walks." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1303.4127
quant-ph/9605043
I. INTRODUCTION
In the quantum world, thanks to Lov Grover #REFR , we can do this in O(√n) steps and queries.
[ "Some classical solutions to problems achieve particular speed ups when we allow those algorithms to become quantum based.", "The most obvious result is the ability to search a group of n elements in sub linear time.", "Classically, it is required to look at all of the elements, one at a time, to ensure that the marked item is or is not present." ]
[ "In particular, this subroutine has proved useful for a number of other algorithms and achieving quantum lower bounds.", "In this paper we explore the complexity of performing this algorithm when reduced to movement on a two dimensional spatial grid.", "We restrict our model to a quantum robot walking along the two dimensional grid.", "We begin by describing the model we adapt for out algorithm in Section II.", "We then proceed to a discussion of Grover's Algorithm in Section III and follow up with the latest results about search on a spatial grid in Section IV." ]
[ "quantum world" ]
background
{ "title": "Quantum Search on the Spatial Grid", "abstract": "Some classical solutions to problems achieve particular speed ups when we allow those algorithms to become quantum based. The most obvious result is the ability to search a group of n elements in sub linear time. Classically, it is required to look at all of the elements, one at a time, to ensure that the marked item is or is not present. In the quantum world, thanks to Lov Grover [4], we can do this in O( p n) steps and queries. In particular, this subroutine has proved useful for a number of other algorithms and achieving quantum lower bounds. In this paper we explore the complexity of performing this algorithm when reduced to movement on a two dimensional spatial grid. We restrict our model to a quantum robot walking along the two dimensional grid. We begin by describing the model we adapt for out algorithm in Section II. We then proceed to a discussion of Grover’s Algorithm in Section III and follow up with the latest results about search on a spatial grid in Section IV. Finally, we give our new algorithm for search on the spatial grid with results in Section V. We describe the case when there are multiple marked items being search for and how it diers from the non spatial version in Section VI and comment on the implications this gives for other problems in Section VII. We give some concluding remarks in Section VIII and discuss what needs to be done." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
0809.0932
quant-ph/9605043
Introduction
Grover's algorithm #REFR is a probabilistic search algorithm usually presented in the context of searching an unsorted database.
[ "The original binary Deutsch-Jozsa algorithm #OTHEREFR considers a Boolean function of the form f : {0, 1} r → {0, 1} implemented in a black box circuit, or oracle, U f .", "Input states are put in a quantum superposition as query (x) and answer (y) registers so that their state vectors are expressed in terms of the dual basis #OTHEREFR |0 ′ = 1 √ 2 (|0 + |1 ) and |1 ′ = 1 √ 2 (|0 − |1 ), also denoted |+ and |− .", "The oracle is defined by its action on the registers U f |xy = |x |y ⊕ f (x) , where the |x register is the tensor product of input states |x 1 · · · |x r .", "When it is promised that the function in question is either constant (returning a fixed value) or balanced (returning outputs equally among 0 and 1), the algorithm decides deterministically which type it is with a single oracle query as opposed to the 2 r−1 + 1 required classically.", "The corresponding circuit is shown in Figure 1 (/ r denotes r wires in parallel)." ]
[ "It consists of the iteration of a compound \"Grover operator\" on a superposed register of search states as well as an ancillary qubit, which consists of an oracle and an inversion about the average operator (D).", "The oracle marks the searched In this paper, we prove extensions of both the Deutsch-Jozsa and Grover algorithms to arbitrary radices of multi-valued quantum logic.", "We denote addition over the additive group Z n by the operator ⊕ and the Kronecker tensor product by ⊗.", "The Hadamard transform is a special case of the quantum Fourier transform (QFT) in Hilbert space H n .", "The well-known Chrestenson gate for ternary quantum computing is also equivalent to the Fourier transform over Z 3 ." ]
[ "Grover's algorithm" ]
background
{ "title": "Applications of Multi-Valued Quantum Algorithms", "abstract": "This paper generalizes both the binary Deutsch-Jozsa and Grover algorithms to n-valued logic using the quantum Fourier transform. Our extended Deutsch-Jozsa algorithm is not only able to distinguish between constant and balanced Boolean functions in a single query, but can also find closed expressions for classes of affine logical functions in quantum oracles, accurate to a constant term. Furthermore, our multi-valued extension of the Grover algorithm for quantum database search requires less qudits and hence a substantially smaller memory register, as well as less wasted information states, to implement. We note several applications of these algorithms and their advantages over the binary cases." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1904.08914
quant-ph/9605043
Introduction
To do so, one uses amplitude amplification, the basic primitive of Grover's search algorithm #REFR . The original algorithm of Brassard et al.
[ "Intuitively, in the worst case, we might need Θ(N/|S|) queries just to find any elements from S, but once we do, estimating their frequency is just a standard statistics problem.", "Furthermore, for the O(N/|S|) estimation strategy to work, we don't need to suppose (circularly) that |S| is approximately known in advance, but can decide when to halt dynamically, depending on when the first element in S is found.", "In the quantum setting, we can query the membership oracle on superpositions of inputs. Here Brassard et al.", "#OTHEREFR gave an algorithm for approximate counting that makes only O N/|S| queries, for any constant ε > 0.", "Moreover, they showed how to achieve any accuracy ε with O(1/ε) multiplicative overhead #OTHEREFR Theorem 15] ." ]
[ "also used quantum phase estimation, in effect combining Grover's algorithm with Shor's period-finding algorithm.", "However, it's a folklore fact that one can remove the phase estimation, and adapt Grover search with an unknown number of marked items, to get an approximate count of the number of marked items as well.", "On the lower bound side, it follows immediately from the optimality of Grover's algorithm (i.e., the BBBV Theorem #OTHEREFR ) that even with a quantum computer, Ω N/|S| queries are needed for approximate counting to any constant accuracy.", "Hence the classical and quantum complexity of approximate counting with membership queries is completely understood.", "In this paper we study approximate counting in models of computation that go beyond membership queries." ]
[ "Grover's search algorithm" ]
method
{ "title": "Quantum Lower Bounds for Approximate Counting via Laurent Polynomials", "abstract": "This paper proves new limitations on the power of quantum computers to solve approximate counting-that is, multiplicatively estimating the size of a nonempty set S ⊆ [N ]. Given only a membership oracle for S, it is well known that approximate counting takes Θ N/|S| quantum queries. But what if a quantum algorithm is also given \"QSamples\"-i.e., copies of the state |S = i∈S |i -or even the ability to apply reflections about |S ? Our first main result is that, even then, the algorithm needs either Θ N/ |S| queries or else Θ min |S| 1/3 , N/ |S| reflections or samples. We also give matching upper bounds. We prove the lower bound using a novel generalization of the polynomial method of Beals et al. to Laurent polynomials, which can have negative exponents. We lower-bound Laurent polynomial degree using two methods: a new \"explosion argument\" that pits the positive-and negative-degree parts of the polynomial against each other, and a new formulation of the dual polynomials method. Our second main result rules out the possibility of a black-box Quantum Merlin-Arthur (or QMA) protocol for proving that a set is large. More precisely, we show that, even if Arthur can make T quantum queries to the set S ⊆ [N ], and also receives an m-qubit quantum witness from Merlin in support of S being large, we have T m = Ω min |S| , N/ |S| . This resolves the open problem of giving an oracle separation between SBP, the complexity class that captures approximate counting, and QMA. Note that QMA is \"stronger\" than the queries+QSamples model in that Merlin's witness can be anything, rather than just the specific state |S , but also \"weaker\" in that Merlin's witness cannot be trusted. Intriguingly, Laurent polynomials also play a crucial role in our QMA lower bound, but in a completely different manner than in the queries+QSamples lower bound. This suggests that the \"Laurent polynomial method\" might be broadly useful in complexity theory." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
2002.08343
quant-ph/9605043
INTRODUCTION
PQC not only covers that menace; it also works as a response to side-channel attacks #REFR , the increasing concern about pseudo-prime generator backdoor attacks (i.e.
[ "ost-Quantum Cryptography (PQC) is a trend that has an official NIST status #OTHEREFR and which aims to be resistant to quantum computers attacks like Shor #OTHEREFR and Grover #OTHEREFR algorithms.", "NIST initiated a process to solicit, evaluate, and standardize one or more quantum-resistant public-key cryptographic algorithms.", "Particularly Shor algorithm provided a quantum computing way to break asymmetric protocols." ]
[ "Dual_EC_DRBG NSA #OTHEREFR ) or the development of quasipolynomial discrete logarithm attacks which impact severely against current de facto standards #OTHEREFR of asymmetric cryptography whose security rest on integer-factorization (IFP) and discrete-logarithm (DLP) over numeric fields.", "As a response, there is a growing interest in PQC solutions like Lattice-based, Pairing-based, Multivariate Quadratic, Code-based and Hash-based cryptography #OTHEREFR , Another kind, and overlooked solutions belong to Non-Commutative (NCC) and Non-Associative (NAC) algebraic cryptography #OTHEREFR .", "Security of a canonical algebraic asymmetric protocol always relies on a one-way function (OWF) transformed to work as a one-way trapdoor function (OWTF) #OTHEREFR .", "For instance, using the decomposition problem (DP) or the double coset problem (DCP) #OTHEREFR , both assumed to belong to AWPP time-complexity (but out of BQP) #OTHEREFR 18] problems, which lead to an eventual brute-force attack, thus yielding high computational security.", "A solution which does not require commutative subgroups is the Anshel-Anshel-Goldberg (AAG) key-exchange protocol (KEP) #OTHEREFR ." ]
[ "side-channel attacks", "pseudo-prime generator backdoor" ]
background
{ "title": "Algebraic Extension Ring Framework for Non-Commutative Asymmetric Cryptography", "abstract": "Post-Quantum Cryptography (PQC) attempts to find cryptographic protocols resistant to attacks using Shor's polynomial time algorithm for numerical field problems or Grover's algorithm to find the unique input to a black-box function that produces a particular output value. The use of nonstandard algebraic structures like non-commutative or nonassociative structures, combined with one-way trapdoor functions derived from combinatorial group theory, are mainly unexplored choices for these new kinds of protocols and overlooked in current PQC solutions. In this paper, we develop an algebraic extension ring framework who could be applied to different asymmetric protocols (i.e. key exchange, key transport, enciphering, digital signature, zero-knowledge authentication, oblivious transfer, secret sharing etc.). A valuable feature is that there is no need for big number libraries as all arithmetic is performed in extension field operations (precisely the AES field). We assume that the new framework is cryptographical secure against strong classical attacks like the sometimes-useful length-based attack, Roman'kov's linearization attacks and Tsaban's algebraic span attack. This statement is based on the non-linear structure of the selected platform which proved to be useful protecting the AES protocol. Otherwise, it could resist post-quantum attacks (Grover, Shor) and be particularly useful for computational platforms with limited capabilities like USB cryptographic keys or smartcards. Semantic security (IND-CCA2) could also be inferred for this new platform." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1712.06997
quant-ph/9605043
Introduction
In light of the fact that quantum algorithms pose a severe threat to public-key cryptography, it is natural to study the impact of quantum attacks on symmetric cryptosystems. A representative example is Grover's search algorithm #REFR . It can provide a quadratic speedup for brute-force attacks.
[ "The development of quantum computing has greatly impacted classical cryptography.", "Due to Shor's algorithm #OTHEREFR , most currently used public-key cryptosystems are known to be insecure against adversaries in possession of quantum computers, such as RSA, ELGamal and any other schemes based on factorization or discrete logarithms.", "This motivated the advent of post-quantum cryptography, which studies classical systems resistant against quantum adversaries." ]
[ "In addition, Simon's algorithm #OTHEREFR has also been applied to cryptanalysis.", "Kuwakado and Morri use it to construct a quantum distinguisher for 3-round Feistel scheme #OTHEREFR and recover partial key of Even-Mansour construction #OTHEREFR .", "Santoli and Schaffiner extend their result and present a quantum forgery attack on CBC-MAC scheme #OTHEREFR . In #OTHEREFR , Kaplan et al.", "use Simon's algorithm to attack various symmetric cryptosystems, such as CBC-MAC, PMAC, CLOC and so on.", "They also study how differential and linear cryptanalysis behave in the post-quantum world #OTHEREFR ." ]
[ "quantum attacks", "quantum algorithms" ]
background
{ "title": "Quantum impossible differential and truncated differential cryptanalysis", "abstract": "We study applications of BV algorithm and present quantum versions of impossible differential cryptanalysis and truncated differential cryptanalysis based on it. Afterwards, we analyze their efficiencies and success probabilities rigorously. In traditional impossible differential attack or truncated differential attack, it is difficult to extend the differential path, which usually limits the number of rounds that can be attacked. By contrast, our approach treats the first r − 1 rounds of the cipher as a whole and applies BV algorithm on them directly. Thus extending the number of rounds is not a problem for our algorithm." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1908.11213
quant-ph/9605043
Amongst all quantum algorithms, the reasons to focus on the Grover search #REFR are many.
[ "Whilst the first two are considered short and mid term applications respectively, the last one, perhaps the most fascinating, is generally considered to be a long term application.", "This is because of the common understanding that we will need to build scalable implementations of universal quantum gate sets with fidelity 10 −3 first, and implement quantum error corrections then, in order to finally be able to run our preferred quantum algorithm on the thereby obtained universal quantum computer. This seems feasible, yet long way to go.", "In this letter we argue that this may be a pessimistic view.", "Scientists may get luckier than this and find out that nature actually implements some of these quantum algorithms 'spontaneously'.", "Indeed, the hereby presented research suggests that the Grover search may in fact be a naturally occurring phenomenon, when fermions propagate in crystalline materials under certain conditions." ]
[ "First of all because of its remarkable generality, as it speeds up any brute force O(N ) problem into a O( √ N ) problem.", "Having just this quantum algorithm would already be extremely useful.", "Second of all, because of its remarkable robustness : the algorithm comes in many variants and has been rephrased in many ways, including in terms of resonance effects #OTHEREFR and quantum walks #OTHEREFR .", "Remember that a quantum walks (QW) are essentially local unitary gates that drive the evolution of a particle on a lattice.", "They have been used as a mathematical framework for different quantum algorithms #OTHEREFR but also for quantum simulations e.g. #OTHEREFR . This is where things get interesting." ]
[ "quantum algorithms" ]
background
{ "title": "The Grover search as a naturally occurring phenomenon", "abstract": "We provide the first evidence that under certain conditions, electrons may naturally behave like a Grover search, looking for defects in a material. The theoretical framework is that of discrete-time quantum walks (QW), i.e. local unitary matrices that drive the evolution of a single particle on the lattice. Some of these are well-known to recover the (2 + 1)-dimensional Dirac equation in continuum limit, i.e. the free propagation of the electron. We study two such Dirac QW, one on the square grid and the other on a triangular grid reminiscent of graphene-like materials. The numerical simulations show that the walker localises around a defect in O( √ N ) steps with probability O(1/ log N ). This in line with previous QW formulations of the Grover search on the 2D grid. But these Dirac QW are 'naturally occurring' and require no specific oracle step other than a hole defect in a material. Quantum Computing has three main fields of applications for quantum computing : quantum cryptography ; quantum simulation ; and quantum algorithms (e.g. Grover, Shor...). Whilst the first two are considered short and mid term applications respectively, the last one, perhaps the most fascinating, is generally considered to be a long term application. This is because of the common understanding that we will need to build scalable implementations of universal quantum gate sets with fidelity 10 −3 first, and implement quantum error corrections then, in order to finally be able to run our preferred quantum algorithm on the thereby obtained universal quantum computer. This seems feasible, yet long way to go. In this letter we argue that this may be a pessimistic view. Scientists may get luckier than this and find out that nature actually implements some of these quantum algorithms 'spontaneously'. Indeed, the hereby presented research suggests that the Grover search may in fact be a naturally occurring phenomenon, when fermions propagate in crystalline materials under certain conditions. Amongst all quantum algorithms, the reasons to focus on the Grover search [14] are many. First of all because of its remarkable generality, as it speeds up any brute force O(N ) problem into a O( √ N ) problem. Having just this quantum algorithm would already be extremely useful. Second of all, because of its remarkable robustness : the algorithm comes in many variants and has been rephrased in many ways, including in terms of resonance effects [23] and quantum walks [10] . Remember that a quantum walks (QW) are essentially local unitary gates that drive the evolution of a particle on a lattice. They have been used as a mathematical framework for different quantum algorithms [3, 27] but also for quantum simulations e.g. [4, 11, 13] . This is where things get interesting. Indeed, it has been shown many of these QW admit, as their continuum limit, some well-known PDE of physics, such as the Dirac equation [7, 12, 15, 21] . Recall that the Dirac equation governs the free propagation of the electron. Thus, these Dirac QW provided 'quantum numerical schemes', for the future quantum computers, to simulate the electron. For instance [17] shows that it is possible to describe the dynamics of fer-mions in graphene using QW. This is great, but now let us turn things the other way round : this also means that fermions provide a natural implementation of these Dirac QW. Could they be useful algorithmically ? 
Here we provide evidence that these Dirac QW work fine to implement the diffusion step of the Grover search. Thus, fermions may provide a natural implementation of this step. However, recall that the Grover search is the alternation of a diffusion step, with an oracle step. The later puts on a minus one phase whenever the walker hits the solution of the problem. Could the oracle step be naturally implemented in terms of fermions, as well ? Here we provide evidence that the mere presence of hole defect suffices to implement an effective oracle step. This paper focusses on Dirac QW in (2 + 1)-dimensions, on both the square grid and the triangular grid. The triangular grid is of particular interest for instance because of its ressemblance to several naturally occurring crystal-like materials. Moreover, it features topological phase effects which, by creating edge states around the hole defect, may help improbe localization. Notice the Grover search has already been described on triangular grids in [2, 9] and that, more generally, the Grover search has already been expressed as a QW on a variety of graphs before, yielding O( N log(N )) time complexity algorithms [1, 22, 25] . The aim of this contribution is to point out that simple variations of these are in fact naturally occurring phenomenon-with the hope to open a new and more direct route towards implementing the Grover search. We consider QW both over the square and the triangular grid, i.e. a grid formed by tiling the plane regularly with equilateral triangles. Consider the line segments along which the facets of the squares (or triangles) are glued, and place a point in the middle. The walker lives on those points. For the square grid we may label these points by their positions in Z 2 , for the triangular grid this would be a subset of Z 2 . The walker's 'coin' or 'spin' degree of freedom lies in H 2 , for arXiv:1908.11213v1 [quant-ph]" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0309220
quant-ph/9605043
Introduction
This is only a constant factor worse than the noiseless case, which is Grover's algorithm #REFR .
[ "Assuming the second model and some fixed ǫ, we call a quantum algorithm robust if it computes f with bounded error probability when its inputs are given by algorithms A 1 , . . . , A n .", "A first observation is that every T -query non-robust algorithm can be made robust at a multiplicative cost of O(log T ).", "With O(log T ) queries, a majority gate, and an uncomputation step, we can construct a unitaryŨ x that approximates an exact quantum query U x : |i |b → |i |b ⊕ x i very well: U x −Ũ x ≤ 1/100T .", "Since errors add linearly in a quantum algorithm, replacing U x bỹ U x in a non-robust algorithm gives a robust algorithm with almost the same final state. In some cases better constructions are possible. For instance, a recent result by Høyer et al.", "#OTHEREFR immediately implies a quantum algorithm that robustly computes OR with O( √ n) queries." ]
[ "In fact, we do not know of any function where the robust degree is more than a constant factor larger than the non-robust approximate degree.", "Our main result (made precise in Theorem 1) is the following:", "There exists a quantum algorithm that outputs x with high probability, using O(n) invocations of the A i algorithms (i.e., queries).", "This result implies that every n-bit function f can be robustly quantum computed with O(n) queries.", "This contrasts with the classical Ω(n log n) lower bound for PARITY." ]
[ "Grover's algorithm" ]
background
{ "title": "Robust quantum algorithms and polynomials", "abstract": "We study the complexity of robust quantum algorithms. These still work with high probability if the n input bits are noisy. We exhibit a robust quantum algorithm that recovers the complete input with high probability using O(n) queries. This implies that every n-bit function can be quantum computed robustly with O(n) queries, which contrasts with Feige et al. 's Ω(n log n) classical bound for PARITY. We also give similar bounds on the degrees of multilinear polynomials that robustly approximate Boolean functions." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1811.00675
quant-ph/9605043
An Illustrative Example: Quantum Search
The characteristic polynomial of this 2 × 2 matrix is #REFR and has one critical point p obtained by equating its partial derivatives to zero.
[ "where s = t/T , with T > 0, and", "As usual, the notations {|z } z∈{0,1} n and {|ẑ } z∈{0,1} n stand for the computational and Hadamard bases, respectively.", "The state |u is the sought item (the unsorted database being the computational basis {|z } z∈{0,1} n ).", "The search problem can be put into the twodimensional subspace spanned by the two states |v := |0 − 1/ √ N |u and |u , with N = 2 n .", "In this orthogonal basis, the restricted Hamiltonian H(s) takes the form" ]
[ "This critical point is non degenerate because the eigenvalues k 1 (p) := −2", "and k 2 (p) := 2 of the Hessian of f are non zero, and because k 1 (p)k 2 (p) < 0, the critical point is a saddle point.", "Now, the graph of the function f (see Figure 1 ) comes with a Gaussian curvature", "Gauß-Bonnet theorem forces this curvature to distribute itself on the surface consistently with this topology (consistent with Euler characteristic -1).", "In fact, the curvature is \"dumped\" at the critical point p:" ]
[ "one critical point", "matrix" ]
background
{ "title": "Homological Description of the Quantum Adiabatic Evolution With a View Toward Quantum Computations", "abstract": "We import the tools of Morse theory to study quantum adiabatic evolution, the core mechanism in adiabatic quantum computations (AQC). AQC is computationally equivalent to the (pre-eminent paradigm) of the Gate model but less error-prone, so it is ideally suitable to practically tackle a large number of important applications. AQC remains, however, poorly understood theoretically and its mathematical underpinnings are yet to be satisfactorily identified. Through Morse theory, we bring a novel perspective that we expect will open the door for using such mathematics in the realm of quantum computations, providing a secure foundation for AQC. Here we show that the singular homology of a certain cobordism, which we construct from the given Hamiltonian, defines the adiabatic evolution. Our result is based on E. Witten's construction for Morse homology that was derived in the very different context of supersymmetric quantum mechanics. We investigate how such topological description, in conjunction with Gauß-Bonnet theorem and curvature based reformulation of Morse lemma, can be an obstruction to any computational advantage in AQC. We also explore Conley theory, for the sake of completeness, in advance of any known practical Hamiltonian of interest." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0403168
quant-ph/9605043
Introduction, motivation and results
One example is Grover's search algorithm #REFR , which computes the OR function with probability 2/3 making O(√n) queries, where n is the number of Boolean variables.
[ "The laws of quantum world offers to construct new models of computation that possibly are more adequate to nature.", "The one of the most popular models of quantum computing is quantum query algorithms.", "In this paper we will view only quantum query algorithms computing total Boolean functions.", "There are some very exiting quantum query algorithms that overborne their classical analogs." ]
[ "The other example is exact (giving right answer with probability 1) quantum algorithm for PARITY making n/2 queries #OTHEREFR .", "It is the best from known exact quantum query algorithms for total Boolean functions.", "Those amazing examples show that proving nontrivial lower bounds for quantum algorithms are essentially necessary.", "There are done a lot of word on it, however many problems are still open.", "We will focus on exact quantum query algorithms." ]
[ "Grover's search algorithm" ]
background
{ "title": "Exact quantum query complexity for total Boolean functions", "abstract": "We will show that if there exists a quantum query algorithm that exactly computes some total boolean function f by making T queries, then there is a classical deterministic algorithm A that exactly computes f making O(T 3 ) queries. The best know bound previously was O(T 4 ) due to Beals et al. [6] ." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0305028
quant-ph/9605043
Introduction
For example, Grover's search algorithm #REFR solves an arbitrary exhaustive search problem with N possibilities in time O(√N).
[ "Quantum computing provides speedups for factoring #OTHEREFR , search #OTHEREFR and many related problems. These speedups can be quite surprising." ]
[ "Classically, it is obvious that time Ω(N ) would be needed.", "This makes lower bounds particularly important in the quantum world.", "If we can search in time O( √ N ), why we cannot search in time O(log c N )? (Among other things, that would have meant N P ⊆ BQP .) Lower bound by #OTHEREFR shows that this is not possible and Grover's algorithm is exactly optimal.", "Currently, we have good lower bounds on quantum complexity of many problems.", "They follow by two methods: adversary #OTHEREFR and polynomials method #OTHEREFR ." ]
[ "Grover's searh", "arbitrary exhaustive search" ]
background
{ "title": "Polynomial degree vs. quantum query complexity", "abstract": "The degree of a polynomial representing (or approximating)" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1702.02450
quant-ph/9605043
Resistance to Grover's Quantum Search Algorithm
Grover's quantum search algorithm #REFR allows a quantum computer to search for a particular element in an unordered n-element set in a constant times √n steps, as opposed to a constant times n steps required on a classical computer.
[]
[ "Resistance to Grover's search algorithm requires increasing the search space.", "Since E-Multiplication scales linearly, this means that if an attacker has access to a quantum computer running Grover's algorithm, it is only necessary to double the running time of Ironwood to maintain the same security level that currently exists for attacks by classical computers.", "In comparison, the running time of ECC would have to increase by a factor of 4 since ECC is a based on a quadratic algorithm.", "We assume that Ironwood is running on the braid group B N over the finite field F q .", "Note that there q N polynomials of degree N − 1 over F q ." ]
[ "Quantum computer", "Grover's quantum search" ]
background
{ "title": "Ironwood Meta Key Agreement and Authentication Protocol", "abstract": "Abstract-Number theoretic public key solutions are subject to various quantum attacks making them less attractive for longer term use. Certain group theoretic constructs show promise in providing quantum-resistant cryptographic primitives. We introduce a new protocol called a Meta Key Agreement and Authentication Protocol (MKAAP) that has some characteristics of a public key solution and some of a shared-key solution. Then we describe the Ironwood MKAAP, analyze its security, and show how it resists quantum attacks. We also show Ironwood implemented on several IoT devices, measure its performance, and show how it performs better than existing key agreement schemes." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1006.3651
quant-ph/9605043
Introduction
One of the two most famous quantum algorithms is Grover's search #REFR , which can search among N possibilities in O(√N) steps.
[]
[ "This provides a quadratic speedup over the naive classical algorithm for a variety of search problems #OTHEREFR .", "Grover's algorithm can be re-cast as computing OR of N bits x 1 , . . .", ", x N , with O( √ N ) queries to a black box storing x 1 , . . . , x N .", "A natural generalization of this problem is computing the value of an AND-OR formula of x 1 , . . . , x N .", "Grover's algorithm easily generalizes to computing AND-OR formulas of small depth d." ]
[ "famous quantum algorithms" ]
background
{ "title": "Quantum algorithms for formula evaluation", "abstract": "Abstract. We survey the recent sequence of algorithms for evaluating Boolean formulas consisting of NAND gates." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0602156
quant-ph/9605043
Grover's search
Grover's quantum search algorithm ( #REFR ) is well-known for the quadratic speedup it offers in the solution of NP-complete problems.
[]
[ "The algorithm is optimal up to a multiplicative constant ( #OTHEREFR ).", "The task is: given a function f : 0, ..2 n → 0, 1, find x : 0, ..2 n , such that f x = 1.", "For simplicity we assume that there is only a single solution, which we denote x 1 , i.e.", "f x 1 = 1 and f x = 0 for all x = x 1 .", "The proofs are not very different for a general case of more than one solutions." ]
[ "Grover's quantum search" ]
background
{ "title": "Quantum Predicative Programming", "abstract": "Abstract. The subject of this work is quantum predicative programming -the study of developing of programs intended for execution on a quantum computer. We look at programming in the context of formal methods of program development, or programming methodology. Our work is based on probabilistic predicative programming, a recent generalisation of the well-established predicative programming. It supports the style of program development in which each programming step is proven correct as it is made. We inherit the advantages of the theory, such as its generality, simple treatment of recursive programs, time and space complexity, and communication. Our theory of quantum programming provides tools to write both classical and quantum specifications, develop quantum programs that implement these specifications, and reason about their comparative time and space complexity all in the same framework." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1901.04123
quant-ph/9605043
In fact, a slight modification of Grover's quantum search algorithm (see #REFR ) leads to a solution with complexity of the order of O(√N), where N depends on the number of possible states of control and state parameters. The number N is more precisely defined in Sect. 3.
[ "Then it becomes possible to employ deterministic and probabilistic methods for obtaining a global optimum.", "In this work, we propose a new framework for solution of the trajectory optimization problem via classical discrete search algorithms including an exhaustive search algorithm (Method I, Sect. 4.1), a random search algorithm (Method II, Sect. 4.2), and a hybrid search algorithm (Method III, Sect. 4.3) .", "This framework also allows us to efficiently use quantum computational algorithms for global trajectory optimization.", "In this context, we propose new approaches for solution of the trajectory optimization problem using quantum exhaustive search algorithms (Method IV, Sect. 5.2), a quantum random search algorithm (Method V, Sect. 5.3), and a quantum hybrid algorithm (Method VI, Sect. 5.4).", "It turns out that quantum computers, in principle, are significantly superior to classical computers in solving the underlying discrete search problems." ]
[ "A main focus of this paper is to show that trajectory optimization problems can be tackled efficiently using quantum computational algorithms employed either alone or in conjunction with a randomized search.", "As noted earlier, to achieve this we use a discretization scheme which makes the use of quantum computing possible.", "We also note here that unlike many other works in the literature on trajectory optimization which are set up to only detect the local optimum, our approach enables the search of the global optimum.", "We will demonstrate our method using two canonical problems in trajectory optimization, namely the brachistochrone problem and the moon landing problem.", "The method presented here could be made even more effective if it is combined with other techniques like the gradient descent and simulated annealing (see #OTHEREFR )." ]
[ "Grover's quantum search" ]
method
{ "title": "Trajectory optimization using quantum computing", "abstract": "Abstract We present a framework wherein the trajectory optimization problem (or a problem involving calculus of variations) is formulated as a search problem in a discrete space. A distinctive feature of our work is the treatment of discretization of the optimization problem wherein we discretize not only independent variables (such as time) but also dependent variables. Our discretization scheme enables a reduction in computational cost through selection of coarse-grained states. It further facilitates the solution of the trajectory optimization problem via classical discrete search algorithms including deterministic and stochastic methods for obtaining a global optimum. This framework also allows us to efficiently use quantum computational algorithms for global trajectory optimization. We demonstrate that the discrete search problem can be solved by a variety of techniques including a deterministic exhaustive search in the physical space or the coefficient space, a randomized search algorithm, a quantum search algorithm or by employing a combination of randomized and quantum search algorithms depending on the nature of the problem. We illustrate our methods by solving some canonical problems in trajectory optimization. We also present a comparative study of the performances of different methods in solving our example problems. Finally, we make a case for using quantum search algorithms as they offer a quadratic speed-up in comparison to the traditional non-quantum algorithms." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1312.2579
quant-ph/9605043
I. INTRODUCTION
Almost all methods to date for simulating systems of fermions and electronic structure take advantage of the second quantized formalism #REFR .
[ "These proposals have been realized experimentally for several systems #OTHEREFR .", "Systems of interacting fermions are a natural target for quantum simulation.", "In particular, this implies the simulation of interacting electronic systems, due to the status of the electron as perhaps the most scientifically important fermion.", "In physics, a significant goal would be the quantum simulation of the phase diagram of the Fermi-Hubbard model due to its importance for high temperature superconductivity.", "In chemistry, the electronic structure problem provides a rich set of instances that can be addressed by quantum simulation #OTHEREFR ." ]
[ "The overall Hamiltonian is expressed as a sum of combinations of creation and annihilation operators.", "The exchange symmetry of the problem is represented by the algebra of the creation and annihilation operators.", "The state of the system is represented using qubits in the occupation number basis, in which the state of each qubit represents the occupancy state of one orbital.", "The creation and annihilation operators are then mapped to qubit operators by, for example, the Jordan-Wigner or Bravyi-Kitaev transformations #OTHEREFR .", "This approach to the simulation of electronic structure in particular has been widely explored #OTHEREFR , and early examples have been implemented in NMR and optical quantum computers #OTHEREFR ]." ]
[ "electronic structure", "second quantized formalism" ]
method
{ "title": "Quantum Algorithms for Quantum Chemistry based on the sparsity of the CI-matrix", "abstract": "Quantum chemistry provides a target for quantum simulation of considerable scientific interest and industrial importance. The majority of algorithms to date have been based on a secondquantized representation of the electronic structure Hamiltonian -necessitating qubit requirements that scale linearly with the number of orbitals. The scaling of the number of gates for such methods, while polynomial, presents some serious experimental challenges. However, because the number of electrons is a good quantum number for the electronic structure problem it is unnecessary to store the full Fock space of the orbitals. Representation of the wave function in a basis of Slater determinants for fixed electron number suffices. However, to date techniques for the quantum simulation of the Hamiltonian represented in this basis -the CI-matrix -have been lacking. We show how to apply techniques developed for the simulation of sparse Hamiltonians to the CI-matrix. We prove a number of results exploiting the structure of the CI-matrix, arising from the Slater rules which define it, to improve the application of sparse Hamiltonian simulation techniques in this case. We show that it is possible to use the minimal number of qubits to represent the wavefunction, and that these methods can offer improved scaling in the number of gates required in the limit of fixed electron number and increasing basis set size relevant for high-accuracy calculations. We hope these results open the door to further investigation of sparse Hamiltonian simulation techniques in the context of the quantum simulation of quantum chemistry. *" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0311171
quant-ph/9605043
Introduction
Grover #REFR presented an algorithm for searching an unstructured list of N items with quadratic speed-up over algorithms run on classical computers.
[ "Quantum computers #OTHEREFR are probabilistic devices, which promise to do some types of computation more powerfully than classical computers #OTHEREFR .", "Many quantum algorithms have been presented recently, for example, Shor #OTHEREFR presented a quantum algorithm for factorising a composite integer into its prime factors in polynomial time." ]
[ "Grover's algorithm inspired many researchers, including this work, to try to analyze and/or generalize his algorithm #OTHEREFR ].", "Grover's algorithm is proved to be optimal for a single match within the search space, although the number of iterations required by the algorithm increases; i.e.", "the problem becomes harder, as the number of matches exceeds half the number of items in the search space #OTHEREFR which is undesired behaviour for a search algorithm since the problem is expected to be easier.", "In this paper we will present a fast quantum algorithm, which can find a match among multiple matches within the search space after few iterations faster than any classical or quantum algorithm although for small number of matches the algorithm behaves classically.", "This leads us to proposing a hybrid search engine that includes Grover's algorithm and the algorithm proposed here." ]
[ "classical computers" ]
background
{ "title": "A Hybrid Quantum Search Engine: A Fast Quantum Algorithm for Multiple Matches", "abstract": "In this paper we will present a quantum algorithm which works very efficiently in case of multiple matches within the search space and in the case of few matches, the algorithm performs classically. This allows us to propose a hybrid quantum search engine that integrates Grover's algorithm and the proposed algorithm here to have general performance better that any pure classical or quantum search algorithm." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1907.01641
quant-ph/9605043
Introduction
A major breakthrough was the quadratic speedup in search algorithms based on quantum walks #REFR .
[ "Quantum information has recently attracted attention.", "Especially, quantum walks have been actively discussed from both the theoretical and practical viewpoints #OTHEREFR ." ]
[ "The paper #OTHEREFR extended the random walk concept on a graph to quantum walks on a directed graph.", "Based on the ideas presented in #OTHEREFR and #OTHEREFR , he introduced an unitary operator that works on a Hilbert space whose elements are made of pairs of edges and the components of the state transition matrix.", "Recently, studies on the mixing time of the quantum walk on the graph has also been accelerated #OTHEREFR .", "In parallel with these researches, the ranking process has been actively discussed in classic network studies.", "A prominent ranking process is Google's PageRank algorithm, which appropriately sorts web pages in order of their importance and impact." ]
[ "quantum walks" ]
background
{ "title": "Sensitivity of quantum PageRank", "abstract": "Abstract. In this paper, we discuss the sensitivity of quantum PageRank. By using the finite dimensional perturbation theory, we estimate the change of the quantum PageRank under a small analytical perturbation on the Google matrix. In addition, we will show the way to estimate the lower bound of the convergence radius as well as the error bound of the finite sum in the expansion of the perturbed PageRank." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1709.00378
quant-ph/9605043
D. Quantum operators and some quantum algorithms
Grover provided a quantum algorithm that solves a unique search problem with O(√N) queries #REFR .
[ "and |− = H|1 = |0 −|1 √ 2 .", "We also have a three-qubit gate, the Toffoli gate, defined by", "It is known that an arbitrary classical Boolean function can be implemented by using only NAND gates.", "It is easy to see that a NAND gate can be implemented by a Toffoli gate:", "Algorithm 1: BDD solver constructed from a periodic Gaussian function input : lattice L(B), target vector t output: closest vector cv 1 function Round: R n → Z n that rounds every element of an input vector; 2 Initialize: count = 0; A search problem is called a unique search problem if there is only one target, and an unknown target search problem if at least a target exists but the number of targets is unknown." ]
[ "When the number of targets is unknown, Brassard et al.", "provided a modified Grover algorithm that solves the search problem with O( √ N ) queries #OTHEREFR , which is of the same order as the Grover search.", "In general, we will simply call these algorithms by Grover search." ]
[ "quantum algorithm" ]
background
{ "title": "Space-efficient classical and quantum algorithms for the shortest vector problem", "abstract": "A lattice is the integer span of some linearly independent vectors. Lattice problems have many significant applications in coding theory and cryptographic systems for their conjectured hardness. The Shortest Vector Problem (SVP), which is to find the shortest non-zero vector in a lattice, is one of the well-known problems that are believed to be hard to solve, even with a quantum computer. In this paper we propose space-efficient classical and quantum algorithms for solving SVP. Currently the best time-efficient algorithm for solving SVP takes 2 n+o(n) time and 2 n+o(n) space. Our classical algorithm takes 2 2.05n+o(n) time to solve SVP with only 2 0.5n+o(n) space. We then modify our classical algorithm to a quantum version, which can solve SVP in time 2 1.2553n+o(n) with 2 0.5n+o(n) classical space and only poly(n) qubits." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1909.11719
quant-ph/9605043
Benchmarks
Grover's algorithm #REFR is a quantum search algorithm that solves the problem in O(√N) operations, versus O(N) operations on a classical computer.
[ "QFT is the quantum analogue of the inverse discrete Fourier transform.", "It is a key building block for many existing quantum algorithms, such as Shor's algorithm and quantum phase estimation #OTHEREFR .", "The QFT circuit is composed of two quantum logic gates, i.e.", "the Hadamard gate (H) and the Controlled Phase gate (CF).", "These two types of gates are decomposed into a subset of base rotations supported by both superconducting quantum device and instruction set." ]
[ "The algorithm is composed of repeated application of a quantum subroutine called Grover operator #OTHEREFR .", "The Grover operator is built out of Hadamard gates surrounding an operation that performs the conditional phase shift.", "Similar to the QFT, the Hadamard and phase shift gates are decomposed into simple rotations and controlled-not gates. 5.1.3.", "Processor characteristics Our timing constrain satisfaction experiments require the basic knowledge on the processor implementation characteristics.", "Thus we propose a 32-bit 5-stage in-order processor architecture called ICE core." ]
[ "quantum search algorithm" ]
background
{ "title": "Understanding Quantum Control Processor Capabilities and Limitations through Circuit Characterization", "abstract": "Building usable quantum computers hinges on building a classical control hardware pipeline that is scalable, extensible, and provides real time response. The control processor part of this pipeline provides functionality to map between the high-level quantum programming languages and low-level pulse generation using Arbitrary Waveform Generators. In this paper, we discuss design alternatives with an emphasis on supporting intermediate-scale quantum devices, with O(10 2 ) qubits. We introduce a methodology to assess the efficacy of a quantum ISA to encode quantum circuits. We use this methodology to evaluate several design points: RISC-like, vectors, and VLIW-like. We propose two quantum extensions to the broadly used open RISC-V ISA. Given the rapid rate of change in the quantum hardware pipeline, our open-source implementation provides a good starting point for design space experimentation, while our metrics can be independently used to guide design decisions." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1102.2332
quant-ph/9605043
Canonical Quantum Search Algorithms and modifications
The canonical quantum search algorithm proposed by Grover #REFR solves a general search problem in which there are N elements that can be represented by the basis states of n qubits in Hilbert space, i.e.
[]
[ "N < 2 n , where N and n are both positive integers.", "Let HS = {0, 1} n and let Or : HS {0, 1}, where Or represents the oracle, which returns the answer when sampled but no other information is known about Or.", "Using this framework along with quantum operators, the target state ts, which is a basis state in HS such that Or(ts) = 1, is to be found. It consists of the following steps:", "i.", "Initializing a set of qubits |s , which represent the solutions and an output qubit." ]
[ "canonical quantum search" ]
method
{ "title": "A Fast Measurement based fixed-point Quantum Search Algorithm", "abstract": "Abstract: Generic quantum search algorithm searches for target entity in an unsorted database by repeatedly applying canonical Grover's quantum rotation transform to reach near the vicinity of the target entity represented by a basis state in the Hilbert space associated with the qubits. Thus, when qubits are measured, there is a high probability of finding the target entity. However, the number of times quantum rotation transform is to be applied for reaching near the vicinity of the target is a function of the number of target entities present in the unsorted database, which is generally unknown. A wrong estimate of the number of target entities can lead to overshooting or undershooting the targets, thus reducing the success probability. Some proposals have been made to overcome this limitation. These proposals either employ quantum counting to estimate the number of solutions or fixed point schemes. This paper proposes a new scheme for stopping the application of quantum rotation transformation on reaching near the targets by measurement and subsequent processing to estimate the distance of the state vector from the target states. It ensures a success probability, which is at least greater than half for all the ratios of the number of target entities to the total number of entities in a database, which are less than half. The search problem is trivial for remaining possible ratios. The proposed scheme is simpler than quantum counting and more efficient than the known fixed-point schemes. It has same order of computational complexity as canonical Grover`s search algorithm but is slow by a factor of two and requires an additional ancilla qubit." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/9910059
quant-ph/9605043
Introduction
The striking examples are integer factoring in polynomial time (see [18]) and finding pre-images of an n-ary Boolean function ("searching") in time O(√(2^n)) (see #REFR ).
[ "During the last years it has been shown that computers taking advantage of quantum mechanical phenomena outperform currently used computers." ]
[ "Quantum computers are not only of theoretical nature|there are several suggestions how to physically realize them (see, e. g., #OTHEREFR ).", "On the way towards building a quantum computer, one very important problem is to stabilize quantum mechanical systems since they are very vulnerable.", "A theory of quantum error{correcting codes has already been established (see 15]).", "Nevertheless, the problem of how to encode and decode quantum error{correcting codes has hardly been addressed, yet.", "In this paper, we present the construction of quantum error{correcting codes based on classical Reed{Solomon (RS) codes. For RS codes, many classical decoding techniques exist." ]
[ "n{ary Boolean function", "polynomial time" ]
background
{ "title": "Quantum Reed-Solomon Codes", "abstract": "Abstract. We introduce a new class of quantum error{correcting codes derived from (classical) Reed{Solomon codes over nite elds of characteristic two. Quantum circuits for encoding and decoding based on the discrete cyclic Fourier transform over nite elds are presented." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1812.07798
quant-ph/9605043
2
The bra notation (shown as ⟨ |) denotes the transpose conjugate of the corresponding ket vector (| ⟩) #REFR , whose entries belong to the set ℂ of complex numbers.
[ "Basic Concepts Quantum states can be represented by vectors or a more famous notation of bra/Ket.", "Kets (shown as | 〉) display column vectors and are generally used to describe quantum states." ]
[ "A qubit is a unit vector in a complex two-dimensional space that the specific basis vectors with the notation of |0〉 and |1〉 have been selected for this space.", "The base vectors of |0〉 and |1〉 are quantum counterparts of classic bits of 0 and 1, respectively.", "Unlike classical bits, qubits can be in any superposition of |0〉 and |1〉 like |0〉 + |1〉 where α and β are the complex numbers such that | | 2 + | | 2 = 1.", "If such a superposition is measured compare with the base of |0〉 and |1〉, then |0〉 and |1〉 are observed with probability of | | 2 and | | 2 , respectively." ]
[ "bra notation" ]
background
{ "title": "A General protocol for Distributed Quantum Gates", "abstract": "Abstract Distributed quantum computation requires to apply quantum remote gates on separate nodes or subsystems of network. On the other hand, Toffoli gate is a universal and well-known quantum gate. It is frequently used in synthesis of quantum circuits. In this paper, a general protocol for implementing a remote -qubit controlled-gate is presented with minimum required resources. Then, the proposed method is applied for implementing a Toffoli gate in bipartite and tripartite systems. This method also is optimal when group of the qubits belong to one part, section or subsystem of network. Beucase it only use one entangled qubit for each group of the qubits in mentioned conditions. Introduction Interest in quantum computing has increased with great potential in solving specific problems and it is becoming an important computational issue [1] [2] [3] [4] [5] [6] [7] . The theory of quantum computing is getting more and more mature since it was initiated by Feynman and Deutsch in the 1980s [8, 9] . Compared with classical computing, quantum computing has the outstanding advantages in terms of the speed of computing. Quantum computation has revolutionized computer science, showing that the processing of quantum states can lead to a tremendous speed up in the solution of a class of problems, as compared to traditional algorithms that process classical bits [2, 3] . A large-scale quantum computer is needed to solve complex problems at higher speeds. But, there are some problems in implementation of a large-scale quantum system. Due to the interaction of qubits with the environment that leads to quantum decoherence and more sensitivity to errors [10] [11] [12] ,the number of qubits used in processing information should be limited. One reasonable solution for overcoming to the mentioned problem is distributed quantum computer. A distributed quantum computation can be built using two or more low-capacity quantum computers with fewer qubits as distributed nodes or subsystems in a network of quantum system for solving a single problem [13, 14] . Distributed quantum computation first had been proposed by Grover [15], Cleve and Buhrman [16] , and Cirac et al. [17] . Then, Ying and Feng [11] defined an algebraic language for describing a distributed quantum circuits. After that, Van Meter et al. [18] proposed a structure for VBE carryripple adder in a distributed quantum circuit. One the other hand, to setup a distributed quantum system, a communication protocol is needed between its separate nodes. In 2001, Yepez [19] proposed idea using of classical communication instead of quantum communication in interconnecting the subsystems or nodes of distributed quantum computers called as Type-II quantum computers. In this paper, quantum communication (type-I) is used for interconnecting the subsystems of a distributed quantum computer. One of methods for transmitting qubits with unconditional security, between nodes of network is Quantum Teleportation (QT) [20] [21] [22] [23] . In teleportation, qubits are transmitted between two users or nodes, without physically moving them and then computations are locally performed on qubits, which is also known as teledata. There is an alternative approach, called as telegate that executes gates remotely and directly using the quantum entanglement when nodes are in a long distance. 
One of the problems in the second method is to establish optimal implementations of quantum gates between qubits that are located in different nodes of the distributed quantum computer. One well-known reversible quantum gate is the Toffoli gate, which is universal, i.e. any reversible or quantum circuit can be constructed from Toffoli gates. So, it is important to implement a protocol for applying an n-qubit remote Toffoli gate between separate nodes of the network." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1510.02682
quant-ph/9605043
Quantum computation
Grover's algorithm can be adapted to search for the binary number that solves SSAT(n, m), adapting the function C of the clauses of SAT as Grover depicts in #REFR .
[]
[ "Using this adaptation, the Grover's Algorithm is O(2 n/2 ).", "This is not good but it is better than O(2 n ).", "In #OTHEREFR , there is survey of the quantum potential and complexity and the Shor's algorithm for prime factorization and other discrete algorithms.", "Here, the quantum computation approach is used to formulate a novel algorithm based on quantum hardware for the general SAT.", "Similar to the idea of Jozsa [1992] and Berthiaume and Brassard [1992, 1994 ] of a random number generator with a Feynman #OTHEREFR reachable, and reversible quantum circuit approach." ]
[ "Grover's Algorithm" ]
method
{ "title": "Classical and Quantum Algorithms for the Boolean Satisfiability Problem", "abstract": "This paper presents a complete algorithmic study of the decision Boolean Satisfiability Problem under the classical computation and quantum computation theories. The paper depicts deterministic and probabilistic algorithms, propositions of their properties and the main result is that the problem has not an efficient algorithm (NP is not P). Novel quantum algorithms and propositions depict that the complexity by quantum computation approach for solving the Boolean Satisfiability Problem or any NP problem is lineal time." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1108.0469
quant-ph/9605043
Grover's algorithm
Grover's algorithm #REFR finds an item in an unstructured list of length n, taking time O(√n).
[]
[ "Classically, every item must be inspected, requiring O(n) time on average." ]
[ "Grover's algorithm" ]
background
{ "title": "Formal Analysis of Quantum Systems using Process Calculus", "abstract": "Quantum communication and cryptographic protocols are well on the way to becoming an important practical technology. Although a large amount of successful research has been done on proving their correctness, most of this work does not make use of familiar techniques from formal methods such as formal logics for specification, formal modelling languages, separation of levels of abstraction, and compositional analysis. We argue that these techniques will be necessary for the analysis of large-scale systems that combine quantum and classical components, and summarize the results of initial investigation using behavioural equivalence in process calculus. This paper is a summary of Simon Gay's invited talk at ICE'11." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1511.00657
quant-ph/9605043
ACKNOWLEDGMENTS
In section we consider the use of the black hole dynamical map S to speed up Grover's search algorithm #REFR .
[ "Thus, aside from its interest as a potential model for black holes, the Horowitz-Maldecena model provides an interesting example of nonlinear quantum mechanics in which subsystem structure remains well-defined (i.e. the issues described in Appendix F do not arise).", "In sections and we show that if Alice has access to such a black hole and has foresightfully shared entangled states with Bob, then Alice can send instantaneous noisy signals to Bob and vice-versa independent of their spatial separation.", "We quantify the classical information-carrying capacity of the communication channels between Alice and Bob and find that they vanish only quadratically with the deviation from unitarity of the black hole dynamics, as measured by the deviation of the condition number of M from one.", "Hence, unless the deviation from unitarity is negligibly small, detectable causality violations can infect the entirety of spacetime.", "Furthermore, the bidirectional nature of the communication makes it possible in principle for Alice to send signals into her own past lightcone, thereby generating grandfather paradoxes." ]
[ "We find a lower bound on the condition number of M as a function of the beyond-Grover speedup.", "By our results of sections and this in turn implies a lower bound on the superluminal signaling capacity induced by the black hole.", "In section we prove the other direction: assuming one can signal superluminally we derive a lower bound on the condition number of M , which in turn implies a super-Grover speedup #OTHEREFR .", "We find that the black-box solution of NP-hard problems in polynomial time implies superluminal signaling with inverse polynomial capacity and vice versa.", "Communication from Alice to Bob Theorem 1." ]
[ "Grover's search algorithm" ]
method
{ "title": "Grover search and the no-signaling principle", "abstract": "Two of the key properties of quantum physics are the no-signaling principle and the Grover search lower bound. That is, despite admitting stronger-than-classical correlations, quantum mechanics does not imply superluminal signaling, and despite a form of exponential parallelism, quantum mechanics does not imply polynomial-time brute force solution of NP-complete problems. Here, we investigate the degree to which these two properties are connected. We examine four classes of deviations from quantum mechanics, for which we draw inspiration from the literature on the black hole information paradox. We show that in these models, the physical resources required to send a superluminal signal scale polynomially with the resources needed to speed up Grover's algorithm. Hence the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed. Recently the firewalls paradox [1, 2] has shown that our understanding of quantum mechanics and general relativity appear to be inconsistent at the event horizon of a black hole. Many of the leading proposals to resolve the paradox involve modifying quantum mechanics. For example, the finalstate projection model of Horowitz and Maldecena [3] and the state dependence model of Papadodimas and Raju [4] are modifications to quantum theory which might resolve the inconsistency. One reason to be skeptical of such modifications of quantum mechanics is that they can often give rise to superluminal signals, and hence introduce acausality into the model. For example, Weinberg nonlinearities allow for superluminal signaling [5, 6] . This is generally seen as unphysical. In contrast, in standard quantum theory, entanglement does not give rise to superluminal signaling. Another startling feature of such models is that they might allow one to construct computers far more powerful even than conventional quantum computers. In particular, they may allow one to solve NP-hard problems in polynomial time. NPhard problems refer to those problems for which the solution can be verified in polynomial time, but for which there are exponentially many possible solutions. It is impossible for standard quantum computers to solve NP-hard problems efficiently by searching over all possible solutions. This is a consequence of the query complexity lower bound of Bennett, Bernstein, Brassard and Vazirani [7], which shows one cannot search an unstructured list of 2 n items in fewer than 2 n/2 queries with a quantum computer. (Here a query is an application of a function f whose output indicates if you have found a solution. The query complexity of search is the minimum number of queries to f , possibly in superposition, required to find a solution.) This bound is achieved by Grover's search algorithm [8] . In contrast, many modifications of quantum theory allow quantum computers to search an exponentially large solution space in polynomial time. For example, quantum computers equipped with postselection [9], Deutschian closed timelike curves [10] [11] [12] , or nonlinearities [13] [14] [15] [16] [17] all admit poly-time solution of NPhard problems by brute force search. In this paper we explore the degree to which superluminal signaling and speedups over Grover's algorithm are connected. We consider several modifications of quantum mechanics which are inspired by resolutions of the firewalls paradox. 
For each modification, we show that the theory admits superluminal signaling if and only if it admits a query complexity speedup over Grover search. Furthermore, we establish a quantitative relationship between superluminal signaling and speedups over Grover's algorithm. More precisely, we show that if one can transmit one classical bit of information superluminally using n qubits and m operations, then one can speed up Grover search on a system of poly(n, m) qubits with poly(n, m) operations, and vice versa. In other words, the ability to send a superluminal signal with a reasonable amount of physical resources is equivalent to the ability to violate the Grover lower bound with a reasonable amount of physical resources. Therefore the no-signaling principle is equivalent to the inability to solve NP-hard problems efficiently by brute force within the classes of theories analyzed. Note that in the presence of nonlinear dynamics, density matrices are no longer equivalent to ensembles of pure states. Here, we consider measurements to produce probabilistic ensembles of postmeasurement pure states and compute the dynamics of each of these pure states separately. Alternative formulations, in particular Everettian treatment of measurements as entangling unitaries, lead in some cases to different conclusions about superluminal signaling. See e.g. [18] . We consider four modifications of quantum mechanics, which are inspired by resolutions of the firewalls paradox. The first two are \"continuous\" modifications in the sense that they have a tunable parameter δ which quantifies the deviation from quantum mechanics. The second two are \"discrete\" modifications in which standard quantum mechanics is supplemented by one additional operation. The first \"continuous\" modification of quantum theory we consider is the final state projection model of Horowitz and Maldecena [3], in which the black hole singularity projects the wavefunction onto a specific quantum state. This can be thought of as a projective measurement with postselection, which induces a linear (but not necessarily unitary) map on the projective Hilbert space. (In some cases it is possible for the Horowitz-Maldecena final state projection model to induce a perfectly unitary process S for the black hole, but in general interactions between the collapsing body and infalling Hawking radiation inside the event horizon induce deviations from unitarity [19] .) Such linear but nonunitary maps allow both superluminal signaling and speedups over Grover search. Any non-unitary map M of condition number 1 + δ allows for superluminal signaling with channel capacity O(δ 2 ) with a single application of M . The protocol for signaling is simple -suppose Alice has the ability to apply M , and suppose Alice and Bob share the entangled state where |φ 0 and |φ 1 are the minimum/maximum singular vectors of M , respectively. If Alice chooses to apply M or not, then Bob will see a change in his half of the state, which allows signaling with channel capacity ∼ δ 2 . Furthermore, it is also possible for Bob to signal superluminally to Alice with the same state -if Bob chooses to measure or not to measure his half of the state, it will also affect the state of Alice's system after Alice applies M . So this signaling is bidirectional, even if only one party has access to the non-unitary map. In the context of the black hole information paradox, this implies the acausality in the final state projection model could be present even far away from the black hole. 
Also, assuming one can apply the same M multiple times, one can perform single-query Grover search using ∼ 1/δ applications of M using the methods of [9, 13] . More detailed proofs of these results are provided in Appendix A. We next examine the way in which these results are connected. First, assuming one can speed up Grover search, by a generalization of the hybrid argument of [7] , there is a lower bound on the deviation from unitarity required to achieve the speedup. By our previous results this implies a lower bound on the superluminal signaling capacity of the map M . More specifically, suppose that one can search an unstructured list of N items using q queries, with possibly non-unitary operations applied between queries. Then, the same non-unitary dynamics must be capa-2 ble of transmitting superluminal signals with channel capacity C using shared entangled states, where Here η is a constant which is roughly ∼ 0.42. In particular, solving NP-hard problems in polynomial time by unstructured search would imply superluminal signaling with inverse polynomial channel capacity. This can be regarded as evidence against the possibility of using black hole dynamics to efficiently solve NP-hard problems of reasonable size. A proof of this fact is provided in Appendix A. In the other direction, assuming one can send a superluminal signal with channel capacity C, there is a lower bound on the deviation from unitarity which was applied. The proof is provided in Appendix A. Again by our previous result, this implies one could solve the Grover search problem on a database of size N using a single query and applications of the nonlinear map. Combining these results, this implies that if one can send a superluminal signal with n applications of M , then one can beat Grover's algorithm with O(n) applications of M as well, and vice versa. This shows that in these models, the resources required to observe an exponential speedup over Grover search is polynomially related to the resources needed to send a superluminal signal. Hence an operational version of the no-signaling principle (such as \"one cannot observe superluminal signaling in reasonable-sized experiments\") is equivalent to an operational version of the Grover lower bound (\"one cannot observe violations of the Grover lower bound in reasonable-sized experiments\"). The next continuous modification of quantum mechanics we consider is modification of the Born rule. Suppose that quantum states evolve by unitary transformations, but upon measurement one sees outcome x with probability proportional to some function f (α x ) of the amplitude α x on x. That is, one sees x with probability Note we have added a normalization factor to ensure this induces a valid probability distribution on outcomes. This is loosely inspired by Marolf and Polchinski's work [20] which suggests that the \"state-dependence\" resolution of the firewalls paradox [4] gives rise to violations of the Born rule. First, assuming some reasonable conditions on f (namely, that f is differentiable, f ′ changes signs a finite number of times in [0, 1] , and the measurement statistics of f do not depend on the normalization of the state), we must have f (α x ) = |α x | p for some p. The proof is provided in Appendix B. Next we study the impact of such modified Born rules with p = 2 + δ for small δ. 
Aaronson [9] previously showed that such models allow for single-query Grover search in polynomial time while incurring a multiplicative overhead 1/|δ|, and also allow for superluminal signaling using shared entangled states of ∼ 1/|δ| qubits. (His result further generalizes to the harder problem of counting the number of solutions to an NP-hard problem, which is a #P-hard problem). We find that these relationships hold in the opposite directions as well. Specifically, we show if one can send a superluminal signal with an entangled state on m qubits with probability ǫ, then we must have δ = Ω(ǫ/m). By the results of Aaronson [9] this implies one can search a list of N items using O( m ǫ log N ) time. Hence having the ability to send a superluminal signal using m qubits implies the ability to perform an exponential speedup of Grover's algorithm with multiplicative overhead m. In the other direction, if one can achieve even a constant-factor speedup over Grover's algorithm using a system of m qubits, we show |δ| is at least 1/m as well. More precisely, by a generalization of the hybrid argument of [7] , if there is an algorithm to search an unordered list of N items with Q queries using m qubits, then 1 6 So if Q < √ N /24, then we must have |δ| ≥ 1 12m . The proofs of these facts are provided in Appendix B. Combining these results shows that the number of qubits required to observe superluminal signaling or even a modest speedup over Grover's algorithm are polynomially related. Hence one can derive an operational version of the no-signaling principle from the Grover lower bound and vice versa. This quantitative result is in some sense stronger than the result we achieve for the final-state projection model, 3 because here we require only a mild speedup over Grover search to derive superluminal signaling. We next consider two \"discrete\" modifications of quantum mechanics in which standard quantum mechanics is supplemented by one additional operation. We show that both modifications admit both superluminal signaling with O (1) (|00 + |11 ). If Alice measures her half of the state, and Bob clones his state k times and measures each copy in the computational basis, then Bob will either see either 0 k or 1 k as his output. On the other hand, if Alice does not measure her half of the state, and Bob does the same experiment, his outcomes will be a random string in {0, 1} k . Bob can distinguish these two cases with an error probability which scales inverse exponentially with k, and thus receive a signal faster than light. In addition to admitting superluminal signaling with entangled states, this model also allows the solution of NP-hard problems (and even #P-hard problems) using a single query to the oracle. This follows by considering the following gadget: given a state ρ on a single qubit, suppose one makes two copies of ρ, performs a Controlled-NOT gate between the copies, and discards one of the copies. This is summarized in a circuit diagram in Fig This performs a non-linear operation M on the space of density matrices, and following the techniques of Abrams and Lloyd [13] , one can use this operation to \"pry apart\" quantum states which are exponentially close using polynomially many applications of the gadget. The proof is provided in Appendix C. This answers an open problem of [21] about the power of quantum computers that can clone. 
Therefore, adding cloning to quantum mechanics allows for both the poly-time solution of NPhard problems by brute force search, and the ability to efficiently send superluminal signals. Second, inspired by the final state projection model [3] , we consider a model in which one can postselect on a generic state |ψ of n qubits. Although Aaronson [9] previously showed that allowing for postselection on a single qubit suffices to solve NP-hard and #P-hard problems using a single oracle query, this does not immediately imply that postselecting on a larger state has the same property, because performing the unitary which rotates |0 n to |ψ will in general require exponentially many gates. Despite this limitation, this model indeed allows the polynomial-time solution of NP-hard problems (as well as #P-hard problems) and superluminal signaling. To see this, first note that given a gadget to postselect on |ψ , one can obtain multiple copies of |ψ by inputting the maximally entangled state i |i |i into the circuit and postselecting one register on the state |ψ . So consider creating two copies of |ψ , and applying the gadget shown in Figure 2 , where the bottom register is postselected onto |ψ , an operation we denote by |ψ . For Haarrandom |ψ , one can show the quantity ψ|Z ⊗I|ψ is exponentially small, so this gadget simulates postselection on |0 on the first qubit. The complete proof is provided in Appendix D. Therefore, allowing postselection onto generic states is at least as powerful as allowing postselection onto the state |0 , so by Aaronson's results [9] this model admits both superluminal signaling and exponential speedups over Grover search. In addition, we address an open question from [13] regarding the computational implications of general nonlinear maps on pure states. In [13] , Abrams and Lloyd argued that generic nonlinear maps allow for the solution of NP-hard problems and #P-hard problems in polynomial time, except possibly for pathological examples. In Appendix E, we prove this result rigorously in the case the map is differentiable. Thus any pathological examples, if they exist, must fail to be differentiable. (Here we assume the nonlinearity maps pure states to pure states; as a result it does not subsume our results on quantum computers which can clone, as the cloning operation may map pure states to mixed states. A detailed discussion is provided in Appendix C.) Unfortunately, the action of general nonlinear maps on subsystems of entangled states are not well-defined, essentially because they interact poorly with the linearity of the tensor product. We discuss this in detail in Appendix F. Hence we are unable to connect this result to signaling in the general case. The central question in complexity theory is which computational problems can be solved efficiently and which cannot. Through experience, computer scientists have found that the most fruitful way to formalize the notion of efficiency is by demanding that the resources, such as time and memory, used to solve a problem must scale at most polynomially with the size of the problem instance (i.e. the size of the input in bits). A widely held conjecture, called the quantum Church-Turing thesis, states that the set of computational problems solvable in-principle with polynomial resources in our universe is equal to BQP, defined mathematically as the set of decision problems answerable using quantum circuits of polynomially many gates [22] . So far, this conjecture has held up remarkably well. 
Physical processes which conceivably might be more computationally powerful than quantum Turing machines, such as various quantum many-body dynamics of fermions, bosons, and anyons, as well as scattering processes in relativistic quantum field theories, can all be simulated with polynomial overhead by quantum circuits [23] [24] [25] [26] [27] . The strongest challenge to the quantum ChurchTuring thesis comes from quantum gravity. Indeed, many of the recent quantum gravity models proposed in relation to the black hole firewalls paradox involve nonlinear behavior of wavefunctions [3, 4] and thus appear to suggest computational power beyond that of polynomial-size quantum circuits. In particular, the prior work of Abrams and Lloyd suggest that such nonlinearities generically enable polynomial-time solution to NP-hard problems, a dramatic possibility, that standard quantum circuits are not generally expected to admit [13, 28] . Here, we have investigated several models and found a remarkably consistent pattern; in each case, if the modification to quantum mechanics is in a parameter regime allowing polynomial-time solution to NPhard problems through brute-force search, then it also allows the transmission of superluminal signals through entangled states. Such signaling allows causality to be broken at locations arbitrarily far removed from the vicinity of the black hole, thereby raising serious questions as to the consistency of the models. Thus, the quantum Church-Turing thesis appears to be remarkably robust, depending not in a sensitive way on the complete Hilbert-space formalism of quantum mechanics, but rather derivable from more foundational operational principles such as the impossibility of superluminal signaling. Some more concrete conjectures on these lines are discussed in Appendix G. ACKNOWLEDGMENTS" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
0809.4545
quant-ph/9605043
Relational computation as a hypothetical classical many body interaction
The fundamental physical model of classical computation is the billiard ball model of reversible computation #REFR .
[ "We postulate a many body interaction inspired to a well known paradox of classical mechanics: statically, the application of external forces to a perfectly rigid body is balanced by infinitely many distributions of stress inside the body, against one distribution if the body is flexible.", "This paradox is ported to a perfectly rigid body made of moving parts, whose coordinates are submitted to mechanical constraints representing the problem.", "Applying a force to an \"input part\" brings in the many body problem.", "It is reasonable to postulate that the many distributions of stress inside the body find a combination of movements of the body's parts that satisfies all the constraints at the same time.", "It is interesting to note that giving up the limitation to two body interaction marks the departure from classical computation." ]
[ "Here the variables at stake are ball positions and momenta.", "Outside collisions, there is no simultaneous dependence between the variables of different balls, which are independent of each other.", "During collision, there is simultaneous dependence between the variables of the colliding balls, but this is confined to ball pairs (there can be several collisions at the same time, but involving independent ball pairs, with no simultaneous dependence between the variables of different pairs).", "The simultaneous collision between many balls is avoided to avoid the many body problem, the non-determination of the dynamics.", "Instead, by assuming a perfect simultaneous dependence between all computational variables, one can devise an idealized classical machine that, thanks to a many body interaction, nondeterministically produces the solution of a (either linear or non linear) system of Boolean equations under the simultaneous influence of all equations." ]
[ "classical computation" ]
background
{ "title": "The quantum speed up as advanced knowledge of the solution", "abstract": "With reference to a search in a database of size N , Grover states: \"What is the reason that one would expect that a quantum mechanical scheme could accomplish the search in O \" √ N \" steps? It would be insightful to have a simple two line argument for this without having to describe the details of the search algorithm\". The answer provided in this work is: \"because any quantum algorithm takes the time taken by a classical algorithm that knows in advance 50% of the information that specifies the solution of the problem\". In database search, knowing in advance 50% of the n bits that specify the database location, brings the search from O (2 . This empirical rule, unnoticed so far, holds for both quadratic and exponential speed ups and is theoretically justified in three steps: (i) once the physical representation is extended to the production of the problem on the part of the oracle and to the final measurement of the computer register, quantum computation is reduction on the solution of the problem under a relation representing problem-solution interdependence, (ii) the speed up is explained by a simple consideration of time symmetry, it is the gain of information about the solution due to backdating, to before running the algorithm, a timesymmetric part of the reduction on the solution; this advanced knowledge of the solution reduces the size of the solution space to be explored by the algorithm, (iii) if ℑ is the information acquired by measuring the content of the computer register at the end of the algorithm, the quantum algorithm takes the time taken by a classical algorithm that knows in advance 50% of ℑ, which brings us to the initial statement. The fact that a problem solving and computation process can be represented as a single interaction, sheds light on our capability of perceiving (processing) many things together at the same time in the so called \"present\"." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0104100
quant-ph/9605043
Remarks:
It is known that querying in superposition gives a speed-up over classical algorithms for certain data retrieval problems, the most notable one being Grover's algorithm #REFR for searching an unordered list of n elements using O(√n) quantum queries.
[ "In fact, both the storage and the query schemes are classical deterministic in Beame and Fich's solution.", "In their paper, Beame and Fich #OTHEREFR also show a lower bound of t = Ω log log m log log log m as a function of m for (n O(1) , 2 (log m) 1−Ω(1) , t) classical deterministic cell probe schemes, and a lower bound of t = Ω log n log log n as a function of n for (n O(1) , (log m) O(1) , t) classical deterministic cell probe schemes.", "But their lower bound proof breaks down if the query scheme is randomised.", "Our result thus shows that the upper bound scheme of Beame and Fich is optimal all the way up to the bounded error address-only quantum cell probe model.", "Also, our proof is substantially simpler than that of Beame and Fich. 2." ]
[ "The power of quantum querying for data structure problems was studied in the context of static membership by Radhakrishnan, Sen and Venkatesh #OTHEREFR .", "In their paper, they worked in the quantum bit probe model, which is our quantum cell probe model where the word size is just one bit.", "They showed, roughly speaking, that quantum querying does not give much advantage over classical schemes for the set membership problem.", "Our result above seems to suggest that quantum search is perhaps not more powerful than classical search for the predecessor problem as well. 3.", "In the next section, we formally describe the \"address-only\" restrictions we impose on the query algorithm." ]
[ "quantum queries" ]
background
{ "title": "Lower bounds in the quantum cell probe model", "abstract": "We introduce a new model for studying quantum data structure problems -the quantum cell probe model. We prove a lower bound for the static predecessor problem in the address-only version of this model where we allow quantum parallelism only over the 'address lines' of the queries. The address-only quantum cell probe model subsumes the classical cell probe model, and many quantum query algorithms like Grover's algorithm fall into this framework. Our lower bound improves the previous known lower bound for the predecessor problem in the classical cell probe model with randomised query schemes, and matches the classical deterministic upper bound of Beame and Fich [BF99]. Beame and Fich [BF99] have also proved a matching lower bound for the predecessor problem, but only in the classical deterministic setting. Our lower bound has the advantage that it holds for the more general quantum model, and also, its proof is substantially simpler than that of Beame and Fich. We prove our lower bound by obtaining a round elimination lemma for quantum communication complexity. A similar lemma was proved by Miltersen, Nisan, Safra and Wigderson [MNSW98] for classical communication complexity, but it was not strong enough to prove a lower bound matching the upper bound of Beame and Fich. Our quantum round elimination lemma also allows us to prove rounds versus communication tradeoffs for some quantum communication complexity problems like the 'greater-than' problem. We also study the static membership problem in the quantum cell probe model. Generalising a result of Yao [Yao81], we show that if the storage scheme is implicit, that is it can only store members of the subset and 'pointers', then any quantum query scheme must make Ω(log n) probes." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
2001.10749
quant-ph/9605043
I. INTRODUCTION
Grover, who proposed a quantum algorithm for searching an item in an unsorted database containing n items, which runs in time O(√n) #REFR .
[ "Deutsch proposed the notion of a quantum Turing machine as a physically realizable model for a quantum computer #OTHEREFR .", "The first impressive result witnessing \"quantum power\" was P.", "Shor's algorithm for integer factorization, which could run in polynomial time on a quantum computer #OTHEREFR .", "It should be stressed that no classical polynomial time factoring algorithm is currently known.", "On this fact, the security of many nowadays cryptographic protocols, e.g. RSA and Diffie-Hellman, actually relies. Another relevant progress was made by L." ]
[ "These and other theoretical advances naturally drove much attention end efforts on the physical realization of quantum computational devices (see, e.g., [14] #OTHEREFR [16] #OTHEREFR ).", "While we can hardly expect to see a full-featured quantum computer in the near future, it might be reasonable to envision classical computing devices incorporating quantum components.", "Since the physical realization of quantum computational systems has proved to be an extremely complex task, it is also reasonable to keep quantum components as \"small\" as possible.", "Small size quantum devices are modeled by quantum finite automata, a theoretical model for quantum machines with finite memory.", "* Electronic address: stefano.olivares@fisica.unimi.it Indeed, in current implementations of quantum computing the preparation and initialization of qubits in superposition or/and entangled states is often challenging, making worth the study of quantum computation with restricted memory, which requires less demanding resources, as in the case of the quantum finite automata." ]
[ "quantum algorithm" ]
background
{ "title": "Photonic Realization of a Quantum Finite Automaton", "abstract": "We describe a physical implementation of a quantum finite automaton recognizing a well known family of periodic languages. The realization exploits the polarization degree of freedom of single photons and their manipulation through linear optical elements. We use techniques of confidence amplification to reduce the acceptance error probability of the automaton. It is worth remarking that the quantum finite automaton we physically realize is not only interesting per se, but it turns out to be a crucial building block in many quantum finite automaton design frameworks theoretically settled in the literature. < l a t e x i t s h a 1 _ b a s e 6 4 = \" V F T n s Z y b e J o K t E U r d b y d + b T g J I w = \" > A A A B 7 n i c b V D L S g N B E O z 1 G e M r 6 t H L Y B A 8 h V 0 V 1 F v Q i 8 c I r g k k S 5 i d z C Z D 5 r H O z A p h y U 9 4 8 a D i 1 e / x 5 t 8 4 S f a g i Q U N R V U 3 3 V 1 x y p m x v v / t L S 2 v r K 6 t l z b K m 1 v b O 7 u V v f 0 H o z J N a E g U V 7 o V Y 0 M 5 k z S 0 z H L a S j X F I u a 0 G Q 9 v J n 7 z i W r D l L y 3 o 5 R G A v c l S x j B 1 k m t j m F 9 g b t n 3 U r V r / l T o E U S F K Q K B R r d y l e n p 0 g m q L S E Y 2 P a g Z / a K M f a M s L p u N z J D E 0 x G e I + b T s q s a A m y q f 3 j t G x U 3 o o U d q V t G i q / p 7 I s T B m J G L X K b A d m H l v I v 7 n t T O b X E Y 5 k 2 l m q S S z R U n G k V V o 8 j z q M U 2 J 5 S N H M N H M 3 Y r I A G t M r I u o 7 E I I 5 l 9 e J O F p 7 a r m 3 5 1 X 6 9 d F G i U 4 h C M 4 g Q A u o A 6 3 0 I A Q C H B 4 h l d 4 8 x 6 9 F + / d + 5 i 1 L n n F z A H 8 g f f 5 A z R T j 5 w = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" V F T n s Z y b e J o K t E U r d b y d + b T g J M r I u o 7 E I I 5 l 9 e J O F p 7 a r m 3 5 1 X 6 9 d F G i U 4 h C M 4 g Q A u o A 6 3 0 I A Q C H B 4 h l d 4 8 x 6 9 F + / d + 5 i 1 L n n F z A H 8 g f f 5 A z R T j 5 w = < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" V F T n s Z y b e J o K t E U r d b y d + b T g J I w = \" > A A A B 7 n i c b V D L S g N B E O z 1 G e M r 6 t H L Y B A 8 h V 0 V 1 F v Q i 8 c I r g k k S 5 i d z C Z D 5 r H O z A p h y U 9 4 8 a D i 1 e / x 5 t 8 4 S f a g i Q U N R V U 3 3 V 1 x y p m x v v / t L S 2 v r K 6 t l z b K m 1 v b O 7 u V v f 0 H o z J N a E g U V 7 o V Y 0 M 5 k z S 0 z H L a S j X F I u a 0 G Q 9 v J n 7 z i W r D l L y 3 o 5 R G A v c l S x j B 1 k m t j m F 9 g b t n 3 U r V r / l T o E U S F K Q K B R r d y l e n p 0 g m q L S E Y 2 P a g Z / a K M f a M s L p u N z J D E 0 x G e I + b T s q s a A m y q f 3 j t G x U 3 o o U d q V t G i q / p 7 I s T B m J G L X K b A d m H l v I v 7 n t T O b X E Y 5 k 2 l m q S S z R U n G k V V o 8 j z q M U 2 J 5 S N H M N H M 3 Y r I A G t M r I u o 7 E I I 5 l 9 e J O F p 7 a r m 3 5 1 X 6 9 d F G i U 4 h C M 4 g Q A u o A 6 3 0 I A Q C H B 4 h l d 4 8 x 6 9 F + / d + 5 i 1 L n n F z A H 8 g f f 5 A z R T j 5 w = < / l a t e x i t > n 1 < l a t e x i t s h a 1 _ b a s e 6 4 = \" h 0 i m J 2 u 5 + w 2 / d 6 8 S 1 4 e U j 4 W n r C I = \" > A A A B 8 n i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B i y U R Q b 0 V v X i s Y G y h C W W z 3 b R L d z d h d y O U k L / h x Y O K V 3 + N N / + N 2 z Y H b X 0 w 8 H h v h p l 5 U c q Z N q 7 7 7 V R W V t f W N 6 q b t a 3 t n d 2 9 + v 7 B o 0 4 y R a h P E p 6 o b o Q 1 5 U x S 3 z D D a T d V F I u I 0 0 4 0 v p 3 6 n S e q N E v k g 5 m k N B R 4 K F n M C D Z W C g L N h g L 3 c 3 n m F f 1 6 w 2 2 6 
M 6 B l 4 p W k A S X a / f p X M E h I J q g 0 h G O t e 5 6 b m j D H y j D C a V E L M k 1 T T M Z 4 S H u W S i y o D v P Z z Q U 6 s c o A x Y m y J Q 2 a q b 8 n c i y 0 n o j I d g p s R n r R m 4 r / e b 3 M x F d h z m S a G S r J f F G c c W Q S N A 0 A D Z i i x P C J J Z g o Z m 9 F Z I Q V J s b G V L M h e I s v L x P / v H n d d O 8 v G q 2 b M o 0 q H M E x n I I H l 9 C C O 2 i D D w R S e I Z X e H M y 5 8 V 5 d z 7 m r R W n n D m E P 3 A + f w A 1 d J F V < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" h 0 i m J 2 u 5 + w 2 / d 6 8 S 1 4 e U j 4 W n r C I = \" > A A A B 8 n i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B i y U R Q b 0 V v X i s Y G y h C W W z 3 b R L d z d h d y O U k L / h x Y O K V 3 + N N / + N 2 z Y H b X 0 w 8 H h v h p l 5 U c q Z N q 7 7 7 V R W V t f W N 6 q b t a 3 t n d 2 9 + v 7 B o 0 4 y R a h P E p 6 o b o Q 1 5 U x S 3 z D D a T d V F I u I 0 0 4 0 v p 3 6 n S e q N E v k g 5 m k N B R 4 K F n M C D Z W C g L N h g L 3 c 3 n m F f 1 6 w 2 2 6 M 6 B l 4 p W k A S X a / f p X M E h I J q g 0 h G O t e 5 6 b m j D H y j D C a V E L M k 1 T T M Z 4 S H u W S i y o D v P Z z Q U 6 s c o A x Y m y J Q 2 a q b 8 n c i y 0 n o j I d g p s R n r R m 4 r / e b 3 M x F d h z m S a G S r J f F G c c W Q S N A 0 A D Z i i x P C J J Z g o Z m 9 F Z I Q V J s b G V L M h e I s v L x P / v H n d d O 8 v G q 2 b M o 0 q H M E x n I I H l 9 C C O 2 i D D w R S e I Z X e H M y 5 8 V 5 d z 7 m r R W n n D m E P 3 A + f w A 1 d J F V < / l a t e x i t > < l a t e x i t s h a 1 _ b a s e 6 4 = \" h 0 i m J 2 u 5 + w 2 / d 6 8 S 1 4 e U j 4 W n r C I = \" > A A A B 8 n i c b V B N S 8 N A E J 3 U r 1 q / q h 6 9 L B b B i y U R Q b 0 V v X i s Y G y h C W W z 3 b R L d z d h d y O U k L / h x Y O K V 3 + N N / + N 2 z Y H b X 0 w 8 H h v h p l 5 U c q Z N q 7 7 7 V R W V t f W N 6 q b t a 3 t n d 2 9 + v 7 B o 0 4 y R a h P E p 6 o b o Q 1 5 U x S 3 z D D a T d V F I u I 0 0 4 0 v p 3 6 n S e q N E v k g 5 m k N B R 4 K F n M C D Z W C g L N h g L 3 c 3 n m F f 1 6 w 2 2 6 M 6 B l 4 p W k A S X a / f p X M E h I J q g 0 h G O t e 5 6 b m j D H y j D C a V E L M k 1 T T M Z 4 S H u W S i y o D v P Z z Q U 6 s c o A x Y m y J Q 2 a q b 8 n c i y 0 n o j I d g p s R n r R m 4 r / e b 3 M x F d h z m S a G S r J f F G c c W Q S N A 0 A D Z i i x P C J J Z g o Z m 9 F Z I Q V J s b G V L M h e I s v L x P / v H n d d O 8 v G q 2 b M o 0 q H M E x n I I H l 9 C C O 2 i D D w R S e I Z X e H M y 5 8 V 5 d z 7 m r R W n n D m E P 3 A + f w A 1 d J F V < / l a t e x i t > S + K M 4 E t g m e v o / 7 X D N q x d g R Q j V 3 t 2 I 6 J J p Q 6 0 I q u x D 8 x Z e X S V C v X d W 8 u / N q 4 7 p I o w T H c A J n 4 M M F N O A W m h A A B Q X P 8 A p v y K A X 9 I 4 + 5 q 0 r q J g 5 g j 9 A n z / 6 a 5 C n < / l a t e x i t >" }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
1810.03390
quant-ph/9605043
Related works
Grover first introduced a quantum search algorithm, now formally known as Grover's search algorithm #REFR .
[ "In 1996, Lov. K." ]
[ "In this works, Grover described that in a telephone directory, there are N numbers of telephone numbers and telephone numbers are ordered completely random in order.", "The searching of one telephone numbers in classical algorithm required Ο running time.", "Grover has shown that in Quantum algorithm running time required Ο .", "In Grover's algorithm, along with quantum interference and superposition of state the amplitude amplification technique is utilized to increase the amplitude of desire quantum state i.e. search key ( 0 ).", "In this algorithm #OTHEREFR , the first step of algorithm is started with 'n' numbers of quantum register of 'n' qubits, where 'n' is required to specify N=2 n numbers of search space and all 'n' qubits are initialized to |0 ." ]
[ "quantum search algorithm" ]
background
{ "title": "Constant Time Quantum search Algorithm Over A Datasets: An Experimental Study Using IBM Q Experience", "abstract": "Abstract-In this work, a constant time Quantum searching algorithm over a datasets is proposed and subsequently the algorithm is executed in real chip quantum computer developed by IBM Quantum experience (IBMQ). QISKit, the software platform developed by IBM is used for this algorithm implementation. Quantum interference, Quantum superposition and phase shift of quantum state applied for this constant time search algorithm. The proposed quantum algorithm is executed in QISKit SDK local backend 'local_qasm_simulator', real chip 'ibmq_16_melbourne' and 'ibmqx4' IBMQ. Result also suggest that real chip ibmq_16_melbourne is more quantum error or noise prone than ibmqx4." }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }
quant-ph/0407122
quant-ph/9605043
Partial quantum search may be easier
It is well known that if the database is supplied in the form of a suitable quantum oracle, then it is possible to use quantum parallelism and determine the rank of any element using approximately (π/4)√N queries #REFR .
[ "In this paper we study this problem in the quantum setting.", "Again, we restrict ourselves to algorithms that make no error." ]
[ "Furthermore, this algorithm is optimal #OTHEREFR (also #OTHEREFR ).", "The ideas used to speed up partial search classically can be used to reduce the number of queries by a factor of", "over the standard quantum search algorithm.", "That is, we randomly pick K − 1 of the blocks and run the quantum search algorithm on the N 1 − 1 K ¡ locations in the chosen blocks. This would require π 4", "√ N queries." ]
[ "suitable quantum oracle" ]
background
{ "title": "Is partial quantum search of a database any easier?", "abstract": "We consider the partial database search problem where given a quantum database f : {0," }
{ "title": "A fast quantum mechanical algorithm for database search", "abstract": "An unsorted database contains N records, of which just one satisfies a particular property. The problem is to identify that one record. Any classical algorithm, deterministic or probabilistic, will clearly take O (N) steps since on the average it will have to examine a large fraction of the N records. Quantum mechanical systems can do several operations simultaneously due to their wave like properties. This paper gives an O ( JN) step quantum mechanical algorithm for identifying that record. It is within a constant factor of the fastest possible quantum mechanical algorithm." }