Spaces: Runtime error

zhaoxiang committed on
Commit • 6c777f0
1 Parent(s): 588adcc
fix two bugs

Files changed:
- ez_cite.py +8 -37
- intro.bib +102 -41

ez_cite.py
CHANGED
@@ -172,7 +172,7 @@ def get_paper_data(text):
 
     bibtex = dictionary_from_json['citationStyles']['bibtex']
     bibtex = bibtex.replace('&', 'and')
-    citation_label = re.findall(r
+    citation_label = re.findall(r"@(\w+){([\w'-]+)", bibtex)[0][1]
 
     citationCount = dictionary_from_json['citationCount']
 
@@ -192,9 +192,10 @@ def move_cite_inside_sentence(sent, ez_citation):
        sent_new = sent[:-1] + ' <ez_citation>' + character
    else:
        count = sent.count('\n')
-
-
-
+        sent = sent.strip()
+        character = sent[-1]
+        sent_new = sent[:-1] + ' <ez_citation>' + character + '\n'*count
+
    return sent_new.replace('<ez_citation>', ez_citation)
 
 
@@ -355,43 +356,13 @@ def ez_cite(introduction, debug=False):
 
 
 
-
-introduction = r"""A long-standing paradigm in machine learning is the trade-off between the complexity of a model family and the model's ability to generalize: more expressive model classes contain better candidates to fit complex trends in data, but are also prone to overfitting noise \cite{nielsen2015neural, geman1992neural}. \textit{Interpolation}, defined for our purposes as choosing a model with zero training error, was hence long considered bad practice \cite{hastie2009elements}. The success of deep learning - machine learning in a specific regime of extremely complex model families with vast amounts of tunable parameters - seems to contradict this notion; here, consistent evidence shows that among some interpolating models, more complexity tends \textit{not to harm} the generalisation performance, a phenomenon described as "benign overfitting" \cite{bartlett2021deep}.
-
-In recent years, a surge of theoretical studies have reproduced benign overfitting in simplified settings with the hope of isolating the essential ingredients of the phenomenon \cite{bartlett2021deep, belkin2021fit}. For example, Ref. \cite{bartlett2020benign} showed how interpolating linear models in a high complexity regime (more dimensions than datapoints) could generalize just as well as their lower-complexity counterparts on new data, and analyzed the properties of the data that lead to the "absorption" of noise by the interpolating model without harming the model's predictions. Ref. \cite{belkin2019reconciling} showed that there are model classes of simple functions that change quickly in the vicinity of the noisy training data, but recover a smooth trend elsewhere in data space (see Figure 1). Such functions have also been used to train nearest neighbor models that perfectly overfit training data while generalizing well, thereby directly linking "spiking models" to benign overfitting \cite{belkin2019does}. Recent works try to recover the basic mechanism of such spiking models using the language of Fourier analysis \cite{muthukumar2020harmless, muthukumar2021classification, dar2021farewell}.
-
-In parallel to these exciting developments in the theory of deep learning, quantum computing researchers have proposed families of parametrised quantum algorithms as model classes for machine learning (e.g. Ref. \cite{benedetti2019parameterized}). These quantum models can be optimised similarly to neural networks \cite{mitarai2018quantum, schuld2019evaluating} and have interesting parallels to kernel methods \cite{schuld2019quantum, havlivcek2019supervised} and generative models \cite{lloyd2018quantum, dallaire2018quantum}. Although researchers have taken some first steps to study the expressivity \cite{abbas2021power, wright2020capacity, sim2019expressibility, hubregtsen2021evaluation}, trainability \cite{mcclean2018barren, cerezo2021cost} and generalisation \cite{caro2021encoding, huang_power_2021, caro2022generalization, banchi2021generalization} of quantum models, we still know relatively little about their behaviour. In particular, the interplay of overparametrisation, interpolation, and generalisation that seems so important for deep learning is yet largely unexplored.
-
-In this paper we develop a simplified framework in which questions of overfitting in quantum machine learning can be investigated. Essentially, we exploit the observation that quantum models can often be described in terms of Fourier series where well-defined components of the quantum circuit influence the selection of Fourier modes and their respective Fourier coefficients \cite{gil2020input, schuld2021effect, wierichs2022general}. We link this description to the analysis of spiking models and benign overfitting by building on prior works analyzing these phenomena using Fourier methods. In this approach, the complexity of a model is related to the number of Fourier modes that its Fourier series representation consists of, and overparametrised model classes have more modes than needed to interpolate the training data (i.e., to have zero training error). After deriving the generalization error for such model classes these "superfluous" modes lead to spiking models, which have large oscillations around the training data while keeping a smooth trend everywhere else. However, large numbers of modes can also harm the recovery of an underlying signal, and we therefore balance this trade-off to produce an explicit example of benign overfitting in a quantum machine learning model.
-
-The mathematical link described above allows us to probe the impact of important design choices for a simplified class of quantum models on this trade-off. For example, we find why a measure of redundancy in the spectrum of the Hamiltonian that defines standard data encoding strategies strongly influences this balance; in fact to an extent that is difficult to counterbalance by other design choices of the circuit.
-
-The remainder of the paper proceeds as follows. We will first review the classical Fourier framework for the study of interpolating models and develop explicit formulae for the error in these models to produce a basic example of benign overfitting (Sec. 2). We will then construct a quantum model with analogous components to the classical model, and demonstrate how each of these components is related to the structure of the corresponding quantum circuit and measurement (Sec. 3). We then analyze specific cases that give rise to "spikiness" and benign overfitting in these quantum models (Sec. 3.2)."""
-
-# # arXiv:2302.01365v3
-# introduction = r"""In order for a learning model to generalise well from training data, it is often crucial to encode some knowledge about the structure of the data into the model itself. Convolutional neural networks are a classic illustration of this principle, whose success at image related tasks is often credited to the existence of model structures that relate to label invariance of the data under translation symmetries. Together with the choice of loss function and hyperparameters, these structures form part of the basic assumptions that a learning model makes about the data, which is commonly referred to as the _inductive bias_ of the model.
-
-# One of the central challenges facing quantum machine learning is to identify data structures that can be encoded usefully into quantum learning models; in other words, what are the forms of inductive bias that naturally lend themselves to quantum computation? In answering this question, we should be wary of hoping for a one-size-fits-all approach in which quantum models outperform neural network models at generic learning tasks. Rather, effort should be placed in understanding how the Hilbert space structure and probabilistic nature of the theory suggest particular biases for which quantum machine learning may excel. Indeed, an analogous perspective is commonplace in quantum computation, where computational advantages are expected only for specific problems that happen to benefit from the peculiarities of quantum logic.
-
-# In the absence of large quantum computers and in the infancy of quantum machine learning theory, how should we look for insight on this issue? One possibility is to turn to complexity theory, where asymptotic advantages of quantum learning algorithms have been proven. These results are few and far between however, and the enormous gap between what is possible to prove in a complexity-theoretic sense, and the types of advantages that may be possible in practice, means that there are growing doubts about the practical relevance of these results. Indeed, progress in machine learning is often the result of good ideas built on intuition, rather than worst-case complexity theoretic analysis. To repeat a common quip: many problems in machine learning are NP-hard, but neural networks don't know that so they solve them anyway.
-
-# We will take a different route, and lean on the field of quantum foundations to guide us. Quantum foundations is predominantly concerned with understanding the frontier between the quantum and classical world, and typically values a clear qualitative understanding of a phenomenon over purely mathematical knowledge. For these reasons it is well suited to identify features of quantum theory that may advance quantum machine learning in useful directions. In particular, we focus on the phenomenon of contextuality, which is perhaps the most prominent form of nonclassicality studied in the literature. Contextuality has a considerable tradition of being studied in relation to quantum computation, where it is closely connected to the possibility of computational speed-up. Despite this, it has had relatively little attention in quantum machine learning, with only a couple of works linking contextuality to implications for learning.
-
-# We adopt a notion of contextuality called \textit{generalised contextuality}, introduced by Spekkens in 2004. Loosely speaking, it refers to the fact that (i) there are different experimental procedures (called contexts) in the theory that are indistinguishable1, and (ii) any statistical model that reproduces the predictions of the theory must take these contexts into account. With this choice, our first task will then be to introduce a framework to talk about generalised contextuality in machine learning (Section 2). This was missing in previous works, which prove consequences for learning based on phenomena originating from contextuality, but do not attempt to define a notion of contextuality for machine learning that captures a wide range of models. Our general philosophy will be that the framework should depend purely on what a learning model can do, and not on the details of how it does it; i.e., the framework should be independent of the theory on which the models are built. This is necessary to have a framework that treats quantum and classical algorithms on the same footing, and ultimately involves adopting definitions in a similar spirit to the notion of operational contextuality as recently described in.
-
-# We mostly focus on a paradigm of machine learning called multi-task learning, in which the aim is to simultaneously learn a number of separate models for a collection of different (but typically correlated) tasks. Multi-task learning scenarios are conceptually similar to commonly studied contextuality scenarios, and this similarity leads us to a definition of what it means for a multi-task model to be contextual (Section 3). Although the focus on multi-class learning problems appears restrictive, as the separation between tasks is arbitrary at the mathematical level, we also arrive at a notion of contextuality in the single task setting (Section 6). In particular, we argue that it makes sense to think of contextuality as a property relative to a particular inductive bias of a model, rather than a property of the model as a whole.
-
-# Once we have described our framework, our second task will be to identify specific learning problems for which contextuality plays a role (Section 4). We show that this is the case when learning probabilistic models from data sets which feature a linearly conserved quantity in a discrete label space (see Figure 1). Such data sets can arise naturally from experiments involving conserved quantities, zero-sum game scenarios, logistics with conserved resources, substance diffusion in biological systems, and human mobility and migration. We show that the ability of a model to encode the conserved quantity as an inductive bias directly links to a central concept in generalised contextuality, called \textit{operational equivalence}. This results in a constraint on noncontextual learning models that encode the desired bias, which amounts to a limit on the expressivity of noncontextual model classes. For certain data sets, this limitation can negatively impact generalisation performance due to the lack of a suitable model within the class that matches the underlying data distribution; in such cases contextuality may therefore be required for learning. To illustrate this point, in Section 5 we construct a toy problem based on the rock, paper, scissors zero-sum game and prove precise limits on the expressivity of noncontextual model classes that attempt to learn the payoff behaviour of the game.
-
-# In the final part of the work, we study the performance of quantum models for problems that involve our contextuality-inspired bias (Section 7). We first describe two approaches to construct quantum ansatze encoding the bias. The first of these encodes the bias into the state structure of the ansatz, and exploits tools from geometric quantum machine learning. The second approach encodes the bias into the measurement structure, and we present a new family of measurements to this end that may be of independent interest. We then use these tools in a simple numerical investigation (Section 8), inspired by a recent work of Schreiber et al.. Using the fact that quantum machine learning models are equivalent to truncated Fourier series, the authors of define the notion of a classical surrogate model: a linear Fourier features model that has access to the same frequencies of the quantum model, but which lacks its specific inductive bias. The authors found that classical surrogate model classes perform better than quantum model classes on a wide range of regression tasks, the message being that it is still unclear what the inductive bias of quantum machine learning is useful for. In our numerical study, we show that a quantum model class that encodes our contextuality-inspired bias achieves a lower generalisation error than the corresponding surrogate model classes at a specific learning task, even after allowing for regularisation in the surrogate model. We argue that this is due to the fact that the bias cannot be easily encoded into the surrogate model class, which therefore cannot exploit this information during learning.
-
-# In Section 9 we elaborate on a number of areas where contextuality-inspired inductive bias can be expected to play a role in learning. Many of these areas are classical in nature, and therefore suggests that quantum machine learning may be suited to tackling classical learning problems with a specific structure. Finally, in Section 10, we outline our vision for this line of research and the possible next steps to take. Overall, we hope our approach and framework will lead to a new way of thinking about quantum machine learning, and ultimately lead to the identification of impactful problems where the specific structure of quantum theory makes quantum models the machine learning models of choice."""
-
+example1 = r"""In the current Noisy Intermediate-Scale Quantum (NISQ) era, a few methods have been proposed to construct useful quantum algorithms that are compatible with mild hardware restrictions. Most of these methods involve the specification of a quantum circuit Ansatz, optimized in a classical fashion to solve specific computational tasks.
+
+Next to variational quantum eigensolvers in chemistry and variants of the quantum approximate optimization algorithm, machine learning approaches based on such parametrized quantum circuits stand as some of the most promising practical applications to yield quantum advantages."""
 
 
 if __name__ == "__main__":
 
-    final_intro, bib_file_content = ez_cite(
+    final_intro, bib_file_content = ez_cite(example1, debug=True)
 
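The first bug fix replaces a truncated regex call in `get_paper_data` with one that pulls the citation key out of the BibTeX string returned by Semantic Scholar. A minimal sketch of what the fixed line does, using a made-up BibTeX snippet for illustration:

```python
import re

# Hypothetical BibTeX string, in the shape ez_cite receives from the API.
bibtex = "@Article{Callison2022HybridQA,\n author = {A. Callison and N. Chancellor},\n year = {2022}\n}"

# The fixed pattern captures (entry type, citation key); index [0][1]
# selects the key of the first entry, i.e. the label used in \cite{...}.
citation_label = re.findall(r"@(\w+){([\w'-]+)", bibtex)[0][1]
print(citation_label)  # Callison2022HybridQA
```

The character class `[\w'-]` matters here: Semantic Scholar keys such as `Pellow-Jarman2023QAOAPI` contain hyphens, and some author-derived keys contain apostrophes, so a plain `\w+` would cut them short.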
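The second bug fix concerns `move_cite_inside_sentence`: when a sentence ends in newlines, the new code strips them before splicing the citation in front of the final punctuation mark, then restores them with `'\n'*count`. A self-contained sketch of the fixed branch; the guard condition on the `if` is not visible in the diff, so the one used here (sentence does not end in a newline) is a guess:

```python
def move_cite_inside_sentence(sent, ez_citation):
    """Insert a citation before the final punctuation of a sentence."""
    if sent and sent[-1] != '\n':  # assumed guard; the diff only shows the bodies
        character = sent[-1]
        sent_new = sent[:-1] + ' <ez_citation>' + character
    else:
        # Fixed branch: strip trailing newlines, splice the citation in,
        # then restore exactly the newlines that were removed.
        count = sent.count('\n')
        sent = sent.strip()
        character = sent[-1]
        sent_new = sent[:-1] + ' <ez_citation>' + character + '\n' * count
    return sent_new.replace('<ez_citation>', ez_citation)

print(repr(move_cite_inside_sentence("This is a fact.\n\n", r"\cite{ref2022}")))
```

Before the fix, a sentence such as `"This is a fact.\n\n"` would have had the citation appended after the newline rather than before the period, which is what the restored `'\n'*count` suffix now prevents.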
intro.bib
CHANGED
@@ -1,53 +1,114 @@
 
-%citationCount:
-%tldr:
-%url: https://www.semanticscholar.org/paper/
-@
-author = {
-booktitle = {
-
-
-year = {
+%citationCount: 36
+%tldr: It is argued that the evolution of quantum computing is unlikely to be different: hybrid algorithms are likely here to stay well past the NISQ era and even into full fault-tolerance, with the quantum processors augmenting the already powerful classical processors which exist by performing specialized tasks.
+%url: https://www.semanticscholar.org/paper/f42a170d57260b1c1e518d9302139c3e7b416c58
+@Article{Callison2022HybridQA,
+author = {A. Callison and N. Chancellor},
+booktitle = {Physical Review A},
+journal = {Physical Review A},
+title = {Hybrid quantum-classical algorithms in the noisy intermediate-scale quantum era and beyond},
+year = {2022}
 }
 
-%citationCount:
-%tldr: This
-%url: https://www.semanticscholar.org/paper/
-@
-author = {S.
-
-
+%citationCount: 22
+%tldr: This article presents artificial intelligence (AI)-based and heuristic-based methods recently reported in the literature that attempt to address quantum circuit compilation challenges, and group them based on underlying techniques that they implement.
+%url: https://www.semanticscholar.org/paper/dea427b94e9734ccf7c28e4851e384689300d2bd
+@Article{Kusyk2021SurveyOQ,
+author = {Janusz Kusyk and S. Saeed and M. U. Uyar},
+booktitle = {IEEE Transactions on Quantum Engineering},
+journal = {IEEE Transactions on Quantum Engineering},
+pages = {1-16},
+title = {Survey on Quantum Circuit Compilation for Noisy Intermediate-Scale Quantum Computers: Artificial Intelligence to Heuristics},
+volume = {2},
+year = {2021}
 }
 
-%citationCount:
-%tldr:
-%url: https://www.semanticscholar.org/paper/
-@Article{
-author = {
-booktitle = {Physical Review
-journal = {Physical Review
-title = {
+%citationCount: 8
+%tldr: This work harnesses the SDP based formulation of the Hamiltonian ground state problem to design a NISQ eigensolver and solves constrained problems to calculate the excited states of Hamiltonians, the lowest energy of symmetry constrained Hamiltonians and determine the optimal measurements for quantum state discrimination.
+%url: https://www.semanticscholar.org/paper/f436202291102b3dccbddb6d16f916fa04cc8c0f
+@Article{Bharti2021NoisyIQ,
+author = {Kishor Bharti and T. Haug and V. Vedral and L. Kwek},
+booktitle = {Physical Review A},
+journal = {Physical Review A},
+title = {Noisy intermediate-scale quantum algorithm for semidefinite programming},
 year = {2021}
 }
 
-%citationCount:
-%tldr: This work
-%url: https://www.semanticscholar.org/paper/
-@
-author = {
-
+%citationCount: 80
+%tldr: This work proposes two strategies for partial compilation, exploiting the structure of variational circuits to pre-compile optimal pulses for specific blocks of gates, and indicates significant pulse speedups ranging from 1.5x-3x in typical benchmarks, with only a small fraction of the compilation latency of GRAPE.
+%url: https://www.semanticscholar.org/paper/f787a1c99dc207ff18563547bea49f3f2ee5ca8a
+@Article{Gokhale2019PartialCO,
+author = {P. Gokhale and Yongshan Ding and T. Propson and Christopher Winkler and N. Leung and Yunong Shi and D. Schuster and H. Hoffmann and F. Chong},
+booktitle = {Micro},
+journal = {Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture},
+title = {Partial Compilation of Variational Algorithms for Noisy Intermediate-Scale Quantum Machines},
+year = {2019}
+}
+
+%citationCount: 87
+%tldr: None
+%url: https://www.semanticscholar.org/paper/2b8e5a9567bb75b629d67c28ee4759e71097bfef
+@Article{Lavrijsen2020ClassicalOF,
+author = {W. Lavrijsen and Ana Tudor and Juliane Müller and Costin Iancu and W. A. Jong},
+booktitle = {International Conference on Quantum Computing and Engineering},
+journal = {2020 IEEE International Conference on Quantum Computing and Engineering (QCE)},
+pages = {267-277},
+title = {Classical Optimizers for Noisy Intermediate-Scale Quantum Devices},
+year = {2020}
+}
+
+%citationCount: 17
+%tldr: Simulations suggest that compared with conventional hardware efficient ansatz, the circuit-structure-tunable method can generate circuits apparently more robust against both coherent and incoherent noise, and hence is more likely to be implemented on near-term devices.
+%url: https://www.semanticscholar.org/paper/b9018dcf5f0c59de097b89accbeec0abe60e0ee7
+@Article{Huang2022RobustRQ,
+author = {Yuhan Huang and Qing Li and Xiaokai Hou and Rebing Wu and M. Yung and A. Bayat and Xiaoting Wang},
+booktitle = {Physical Review A},
+journal = {Physical Review A},
+title = {Robust resource-efficient quantum variational ansatz through an evolutionary algorithm},
+year = {2022}
+}
+
+%citationCount: 3
+%tldr: A hardware architecture for quantum circuit simulation based on a 1-input single gate regardless of the number of qubits used in the circuit to implement simulator smaller and to reduce simulation time by skipping calculation and data transfer.
+%url: https://www.semanticscholar.org/paper/9537338e6a5c83aca4f03a272c0cbc4db7d32670
+@Article{Hong2022QuantumCS,
+author = {Yun-Shik Hong and Seokhun Jeon and Sihyeong Park and Byung-Soo Kim},
+booktitle = {Information and Communication Technology Convergence},
+journal = {2022 13th International Conference on Information and Communication Technology Convergence (ICTC)},
+pages = {1909-1911},
+title = {Quantum Circuit Simulator based on FPGA},
+year = {2022}
+}
+
+%citationCount: 1
+%tldr: The evaluation using a quantum computer simulator with a noise model based on a real NISQ device shows that the proposed offline pruning technique improves the accuracy of UCCSD-VQE while sustaining the potential accuracy of the UCCSD ansatz and has a synergy with a well-known noise mitigation method called zero noise extrapolation (ZNE).
+%url: https://www.semanticscholar.org/paper/d2308b29661dfd63fb12d8320d66bc39a6b102f1
+@Article{Imamura2023OfflineQC,
+author = {Satoshi Imamura and Akihiko Kasagi and Eiji Yoshida},
+booktitle = {International Conference on Quantum Computing and Engineering},
+journal = {2023 IEEE International Conference on Quantum Computing and Engineering (QCE)},
+pages = {349-355},
+title = {Offline Quantum Circuit Pruning for Quantum Chemical Calculations},
+volume = {01},
 year = {2023}
 }
 
-%citationCount:
-%tldr:
-%url: https://www.semanticscholar.org/paper/
-@
-author = {
-
-
-
-
-
-
+%citationCount: 1
+%tldr: An investigation into the impact of realistic noise on the classical optimizer and the determination of optimal circuit depth for the Quantum Approximate Optimization Algorithm (QAOA) in the presence of noise shows that increasing the number of layers in the QAOA in an attempt to increase accuracy may not work well in a noisy device.
+%url: https://www.semanticscholar.org/paper/b4d639c4c4d3719ba9ec9a60a48abeea26e28105
+@Inproceedings{Pellow-Jarman2023QAOAPI,
+author = {Aidan Pellow-Jarman and Shane McFarthing and I. Sinayskiy and A. Pillay and Francesco Petruccione},
+title = {QAOA Performance in Noisy Devices: The Effect of Classical Optimizers and Ansatz Depth},
+year = {2023}
+}
+
+%citationCount: 5
+%tldr: This work proposes the use of estimation of distribution algorithms for the parameter optimization in a specific case of VQAs, the quantum approximate optimization algorithm, and shows an statistically significant improvement of the cost function minimization compared to traditional optimizers.
+%url: https://www.semanticscholar.org/paper/29d82f13bd242855cd690f508c9cc31a0a8d9dc4
+@Article{Soloviev2022QuantumPC,
+author = {Vicente P. Soloviev and P. Larrañaga and C. Bielza},
+booktitle = {GECCO Companion},
+journal = {Proceedings of the Genetic and Evolutionary Computation Conference Companion},
+title = {Quantum parametric circuit optimization with estimation of distribution algorithms},
+year = {2022}
+}
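Each entry written to intro.bib is preceded by `%citationCount`, `%tldr`, and `%url` comment lines; BibTeX ignores text outside `@...{...}` entries, so these lines ride along as per-entry metadata. A small sketch of how such metadata can be read back, using a one-entry excerpt in the same style (the dict comprehension and variable names are illustrative, not part of ez_cite):

```python
import re

# One-entry excerpt in the style of intro.bib.
bib = """%citationCount: 36
%url: https://www.semanticscholar.org/paper/f42a170d57260b1c1e518d9302139c3e7b416c58
@Article{Callison2022HybridQA,
author = {A. Callison and N. Chancellor},
year = {2022}
}
"""

labels = re.findall(r"@(\w+){([\w'-]+)", bib)         # [(entry type, citation key), ...]
counts = re.findall(r"%citationCount:\s*(\d+)", bib)  # ['36', ...]

# Map each citation key to its Semantic Scholar citation count.
citation_counts = {key: int(n) for (_, key), n in zip(labels, counts)}
print(citation_counts)  # {'Callison2022HybridQA': 36}
```

This relies on the metadata comments always appearing once per entry and in entry order, which holds for the file as generated above.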