# Accelerated Quality-Diversity Through Massive Parallelism

Bryan Lim *bryan.lim16@imperial.ac.uk*, Department of Computing, Imperial College London
Maxime Allard *m.allard20@imperial.ac.uk*, Department of Computing, Imperial College London
Luca Grillotti *luca.grillotti16@imperial.ac.uk*, Department of Computing, Imperial College London
Antoine Cully *a.cully@imperial.ac.uk*, Department of Computing, Imperial College London

Reviewed on OpenReview: https://openreview.net/forum?id=znNITCJyTI
## Abstract

Quality-Diversity (QD) optimization algorithms are a well-known approach to generate large collections of diverse and high-quality solutions. However, derived from evolutionary computation, QD algorithms are population-based methods which are known to be data-inefficient and to require large amounts of computational resources. This makes QD algorithms slow when used in applications where solution evaluations are computationally costly. A common approach to speed up QD algorithms is to evaluate solutions in parallel, for instance by using physical simulators in robotics. Yet, this approach is limited to several dozen parallel evaluations, as most physics simulators can only be parallelized further with a greater number of CPUs. With recent advances in simulators that run on accelerators, thousands of evaluations can now be performed in parallel on a single GPU/TPU. In this paper, we present QDax, an accelerated implementation of MAP-Elites which leverages massive parallelism on accelerators to make QD algorithms more accessible. We show that QD algorithms are ideal candidates to take advantage of progress in hardware acceleration. We demonstrate that QD algorithms can scale with massive parallelism to be run at interactive timescales without any significant effect on performance. Results across standard optimization functions and four neuroevolution benchmark environments show that experiment runtimes are reduced by two orders of magnitude, turning days of computation into minutes. More surprisingly, we observe that reducing the number of generations by two orders of magnitude, and thus having significantly shorter lineages, does not impact the performance of QD algorithms.

These results show that QD can now benefit from hardware acceleration, which contributed significantly to the bloom of deep learning.
## 1 Introduction

Quality-Diversity (QD) algorithms (Pugh et al., 2016; Cully & Demiris, 2017; Chatzilygeroudis et al., 2021) have recently been shown to be an increasingly useful tool across a wide variety of fields such as robotics (Cully et al., 2015; Chatzilygeroudis et al., 2018), reinforcement learning (RL) (Ecoffet et al., 2021), engineering design optimization (Gaier et al., 2018), latent space exploration for image generation (Fontaine & Nikolaidis, 2021) and video game design (Gravina et al., 2019; Fontaine et al., 2020a; Earle et al., 2021). Instead of optimizing for a single solution as in conventional optimization, QD optimization searches for a population of high-performing and diverse solutions.

Adopting the QD optimization framework has many benefits. The diversity of solutions found by QD enables rapid adaptation and robustness to unknown environments (Cully et al., 2015; Chatzilygeroudis et al., 2018; Kaushik et al., 2020). Additionally, QD algorithms are also powerful exploration algorithms. They have been shown to be effective in solving sparse-reward hard-exploration tasks and achieved state-of-the-art results on previously unsolved RL benchmarks (Ecoffet et al., 2021). This is a result of the diversity of solutions present acting as stepping stones (Clune, 2019) during the optimization process. QD algorithms are also useful in the design of more open-ended algorithms (Stanley et al., 2017; Stanley, 2019; Clune, 2019) which endlessly generate their own novel learning opportunities. They have been used in pioneering work for open-ended RL with environment generation (Wang et al., 2019; 2020). Lastly, QD algorithms can also be used as effective data generators for RL tasks. The motivation for this arises from the availability of large amounts of data, which has driven the success of modern machine learning. The early breakthroughs in supervised learning in computer vision came from the availability of large, diverse, labelled datasets (Deng et al., 2009; Barbu et al., 2019). The more recent successes in unsupervised learning and pre-training of large models have similarly come from methods that can leverage even larger and more diverse unlabelled datasets that can be more easily obtained by scraping the web (Devlin et al., 2018; Brown et al., 2020). As Gu et al. (2021) highlighted, more efficient data generation strategies and algorithms are needed to obtain similar successes in RL.

Addressing the computational scalability of QD algorithms (the focus of this work) offers a hopeful path towards open-ended algorithms that endlessly generate their own challenges and solutions to those challenges. Likewise, QD algorithms can also play a significant role in this more data-centric view of RL by generating diverse and high-quality datasets of policies and trajectories, both with supervision and in an unsupervised setting (Cully, 2019; Paolo et al., 2020).
The main bottleneck faced by QD algorithms is the large number of evaluations required, on the order of millions. When using QD in the field of Reinforcement Learning (RL) for robotics, this issue is mitigated by performing these evaluations in physical simulators such as Bullet (Coumans & Bai, 2016–2020), DART (Lee et al., 2018), and MuJoCo (Todorov et al., 2012). However, these simulators have mainly been developed to run on CPUs.

Methods like MPI can be used to parallelise over multiple machines, but this requires a more sophisticated infrastructure (i.e., multiple machines) and adds network communication overhead which can add significant runtime to the algorithm. Additionally, the number of simulations that can be performed in parallel can only scale with the number of CPU cores available. Hence, the lack of scalability coupled with the large number of evaluations required generally makes the evaluation process of evolutionary algorithms like QD take days on modern 32-core CPUs for robotics tasks. Our work builds on the advances and availability of hardware accelerators, high-performance programming frameworks (Bradbury et al., 2018) and simulators (Freeman et al., 2021; Makoviychuk et al., 2021) that support these devices to scale QD algorithms.
Historically, significant breakthroughs in algorithms have come from major advances in computing hardware. Most notably, the use of Graphics Processing Units (GPUs) to perform large vector and matrix computations enabled order-of-magnitude speedups in training deep neural networks. This brought about modern deep-learning systems which have revolutionized computer vision (Krizhevsky et al., 2012; He et al., 2016; Redmon et al., 2016), natural language processing (Hochreiter & Schmidhuber, 1997; Vaswani et al., 2017) and even biology (Jumper et al., 2021). Even in the current era of deep learning, significant architectural breakthroughs (Vaswani et al., 2017) were made possible and have been shown to scale with larger datasets and more computation (Devlin et al., 2018; Shoeybi et al., 2019; Brown et al., 2020; Smith et al., 2022).

![1_image_0.png](1_image_0.png)

Figure 1: QDax uses massive parallelism on hardware accelerators like GPUs/TPUs to speed up the runtime of QD algorithms by orders of magnitude.
Our goal in this paper is to bring the benefits of advances in compute and hardware acceleration to QD algorithms (Fig. 1). The key contributions of this work are: (1) We show that massive parallelization of QD with large batch sizes significantly speeds up the runtime of QD algorithms at no loss in final performance, turning hours/days of computation into minutes. (2) We demonstrate that, contrary to prior beliefs, the number of iterations (generations) of the QD algorithm is not critical, given a large batch size. We observe this across optimization tasks and a range of QD-RL tasks. (3) We release QDax, an open-source accelerated Python framework for Quality-Diversity algorithms (MAP-Elites) which enables massive parallelization on a single machine. This makes QD algorithms more accessible to a wider range of practitioners and researchers.

The source code of QDax is available at https://github.com/adaptive-intelligent-robotics/QDax.
## 2 Related Work

Quality-Diversity. QD algorithms were derived from interests in divergent search methods (Lehman & Stanley, 2011a) and behavioural diversity (Mouret & Doncieux, 2009; 2012) in evolutionary algorithms, and from hybridizing such methods with the notion of fitness and reward (Lehman & Stanley, 2011b). While QD algorithms are promising solutions to robotics and RL, they remain computationally expensive and take a long time to converge due to high sample complexity. The move towards more complex environments with high-dimensional state and action spaces, coupled with the millions of evaluations required for the algorithm to converge, makes these algorithms even more inaccessible to regular hardware devices. Progress has been made towards lowering the sample complexity of QD algorithms and can generally be categorized into two separate approaches. The first approach is to leverage the efficiency of other optimization methods such as evolution strategies (Colas et al., 2020; Fontaine et al., 2020b; Cully, 2020; Wang et al., 2021) and policy gradients (Nilsson & Cully, 2021; Pierrot et al., 2021). The other line of work, known as *model-based* quality-diversity (Gaier et al., 2018; Keller et al., 2020; Lim et al., 2021), reduces the number of evaluations required through the use of surrogate models that predict the descriptor and objective.

Our work takes an approach orthogonal to sample efficiency and instead focuses on improving the runtime of QD algorithms by leveraging the batch size at each iteration. Additionally, despite algorithmic innovations that improve sample efficiency, most QD implementations still rely on evaluations being distributed over large compute systems. These often give impressive results (Colas et al., 2020; Fontaine et al., 2019), but such resources are mainly inaccessible to most researchers and still take a significant amount of time to obtain results. Our work aims to make QD algorithms more accessible by running quickly on more commonly available accelerators, such as cloud-available GPUs.
Hardware Acceleration for Machine Learning. Machine Learning, and more specifically Deep Learning methods, have benefited from specialised hardware accelerators that can parallelize operations. In the mid-2000s, researchers started using GPUs to train neural networks (Steinkrau et al., 2005) because of their high degree of parallelism and high memory bandwidth. After the introduction of general-purpose GPUs, the use of specialized GPU-compatible code for Deep Learning methods (Raina et al., 2009; Ciresan et al., 2012) enabled deep neural networks to be trained a few orders of magnitude quicker than previously on CPUs (Lecun et al., 2015). Very quickly, frameworks such as Torch (Collobert et al., 2011), Tensorflow (Abadi et al., 2016), PyTorch (Paszke et al., 2019) or, more recently, JAX (Bradbury et al., 2018) were developed to run numerical computations on GPUs or other specialized hardware.

These frameworks have led to tremendous progress in deep learning. In other sub-fields such as deep reinforcement learning (DRL) or robotics, the parallelization has happened at the level of the neural networks. However, RL algorithms need a lot of data and require interaction with the environment to obtain it. Such methods suffer from a slow data collection process, as the physical simulators used to collect data were mainly developed for CPUs, which results in a lack of scalable parallelism and in data transfer overhead between devices.

More recently, new rigid-body physics simulators that can leverage GPUs and run thousands of simulations in parallel have been developed. Brax (Freeman et al., 2021) and IsaacGym (Makoviychuk et al., 2021) are examples of these new types of simulators. Gradient-based DRL methods can benefit from this massive parallelism, as it directly corresponds to estimating the gradients more accurately at each optimization step by collecting larger amounts of data in less time (Rudin et al., 2021). Recent work (Gu et al., 2021; Rudin et al., 2021) shows that control policies can be trained with DRL algorithms like PPO (Schulman et al., 2017) and DIAYN (Eysenbach et al., 2018) in minutes on a single GPU. However, unlike for gradient-based methods, it has so far been unclear how evolutionary and population-based approaches like QD would benefit from this massive parallelism, since the implications of massive batch sizes have not been studied to the best of our knowledge. Our work studies the impact of massive parallelization on QD algorithms and its limitations.
## 3 Problem Statement

Quality-Diversity Problem. The Quality-Diversity (QD) problem (Pugh et al., 2016; Chatzilygeroudis et al., 2021; Fontaine & Nikolaidis, 2021) is an optimization problem which consists of searching for a set of solutions $\mathcal{A}$ that (1) are locally optimal, and (2) exhibit diverse features. QD problems are characterized by two components: (1) an objective function to maximize $f: \Theta \rightarrow \mathbb{R}$, and (2) a descriptor function $d: \Theta \rightarrow \mathcal{D} \subseteq \mathbb{R}^n$. The descriptor function is used to differentiate between solutions; it takes as input a solution $\theta \in \Theta$ and computes a descriptive low-dimensional feature vector.

The goal of QD algorithms is to return an *archive* of solutions $\mathcal{A}$ satisfying the following condition: for each achievable descriptor $c \in \mathcal{D}$, there exists a solution $\theta_{\mathcal{A},c} \in \mathcal{A}$ such that $d(\theta_{\mathcal{A},c}) = c$ and $f(\theta_{\mathcal{A},c})$ maximizes the objective function among solutions with the same descriptor, $\{f(\theta) \mid \theta \in \Theta \wedge d(\theta) = c\}$. However, the descriptor space $\mathcal{D}$ is usually continuous, which would require storing an infinite number of solutions in the set. QD algorithms alleviate this problem by considering a tessellation of the descriptor space into cells $(\text{cell}_i)_{i \in I}$ (Mouret & Clune, 2015; Cully et al., 2015; Vassiliades et al., 2017) and keeping only a single solution per cell. QD algorithms aim at finding a set of policy parameters $(\theta_j)_{j \in \mathcal{J}}$ maximizing the QD-Score (Pugh et al., 2016), defined as follows (where $f(\cdot)$ is assumed non-negative without loss of generality):

$$\text{maximize} \quad \text{QD-Score} = \sum_{j\in\mathcal{J}} f(\theta_{j}) \quad \text{such that } \forall j\in\mathcal{J},\; d(\theta_{j})\in\text{cell}_{j} \tag{1}$$

Thus, maximizing the QD-Score is equivalent to maximizing the number of cells containing a policy from the archive, while also maximizing the objective function in each cell.
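As a concrete illustration of Eq. (1), the sketch below computes the QD-Score of an archive stored as a flat array of fitnesses, one entry per cell; the array layout and the use of $-\infty$ to mark empty cells are illustrative assumptions rather than the exact QDax data structure.

```python
import jax.numpy as jnp

def qd_score(archive_fitness: jnp.ndarray) -> jnp.ndarray:
    """Sum the objective values of all occupied cells (Eq. 1).

    `archive_fitness` holds one value per cell; empty cells are marked with -inf.
    Objectives are assumed non-negative, as stated in the problem definition.
    """
    occupied = archive_fitness > -jnp.inf
    return jnp.sum(jnp.where(occupied, archive_fitness, 0.0))
```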
Quality-Diversity for Neuroevolution-based RL. QD algorithms can also be used on RL problems, modeled as Markov Decision Processes $(\mathcal{S}, \mathcal{A}, p, r)$, where $\mathcal{S}$ is the set of states, $\mathcal{A}$ denotes the set of actions, $p$ is a probability transition function, and $r$ is a state-dependent reward function. The return $R$ is defined as the sum of rewards: $R = \sum_t r_t$. In a standard RL setting, the goal is to find a policy $\pi_\theta$ maximizing the expected return.

The QD for Reinforcement Learning (QD-RL) problem (Nilsson & Cully, 2021; Tjanaka et al., 2022) is defined as a QD problem in which the goal is to find a set of diverse policy parameters $\mathcal{A} = (\theta_j)_{j\in\mathcal{J}}$ leading to diverse, high-performing behaviours (Fig. 2). In the QD-RL context, the objective function matches the expected return, $f(\theta_j) = \mathrm{E}\big[R^{(\theta_j)}\big]$; the descriptor function $d(\cdot)$ characterizes the state-trajectory of policy $\pi_{\theta_j}$: $d(\theta_j) = \mathrm{E}\big[\tilde{d}(\tau^{(\theta_j)})\big]$ (with $\tau$ denoting the state-trajectory $s_{1:T}$). The QD-Score to maximize (Eq. 1) can then be expressed as follows, where all expected returns should be non-negative:

$$\text{maximize} \quad \text{QD-Score}=\sum_{j\in\mathcal{J}}\mathrm{E}\left[R^{(\theta_{j})}\right]\quad\text{such that }\forall j\in\mathcal{J},\;\mathrm{E}\left[\tilde{d}(\tau^{(\theta_{j})})\right]\in\text{cell}_{j}\tag{2}$$
## 4 Background: MAP-Elites

MAP-Elites (Mouret & Clune, 2015) is a well-known QD algorithm which considers a descriptor space discretized into grid cells (Fig. 3, fourth column). At the start, an archive $\mathcal{A}$ is created and initialized by evaluating random solutions. Then, at every subsequent iteration, MAP-Elites (i) generates new candidate solutions, (ii) evaluates their return and descriptor, and (iii) attempts to add them to the archive $\mathcal{A}$. The iterations continue until a total budget of $H$ evaluations is reached. During step (i), solutions are selected uniformly from the archive $\mathcal{A}$ and undergo variations to obtain a new batch of solutions $\tilde{\mathcal{B}}$. In all our experiments, we use the iso-line variation operator (Vassiliades & Mouret, 2018) (Appendix Algo. 2).

Then (ii), the solutions in the sampled batch $\tilde{\mathcal{B}} = (\tilde{\theta}_j)_{j\in\{1,\dots,N_B\}}$ are evaluated to obtain their respective returns $(R^{(\tilde{\theta}_j)})_{j\in\{1,\dots,N_B\}}$ and descriptors $(d(\tilde{\theta}_j))_{j\in\{1,\dots,N_B\}}$. Finally (iii), each solution $\tilde{\theta}_j$ is placed in its corresponding cell in the behavioural grid according to its descriptor $d(\tilde{\theta}_j)$. If the cell is empty, the solution is added to the archive. If the cell is already occupied by another solution, the solution with the highest return is kept, while the other is discarded. A pseudo-code for MAP-Elites is presented in Algo. 1.
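The following sketch summarises one MAP-Elites iteration as described above (selection, variation, evaluation, insertion). The flat archive layout and the `evaluate`, `variation`, and `to_cell` helpers are placeholders for components defined elsewhere; a plain Python loop is used for the insertion step for readability, whereas QDax expresses it with vectorised, JIT-compatible updates.

```python
import jax
import jax.numpy as jnp

def map_elites_iteration(key, params, fitness, evaluate, variation, to_cell, batch_size):
    """One MAP-Elites iteration over a flat archive (one entry per cell)."""
    select_key, var_key = jax.random.split(key)

    # (i) uniform selection among occupied cells, followed by variation
    occupied = fitness > -jnp.inf
    probs = occupied / occupied.sum()
    parents = jax.random.choice(select_key, fitness.shape[0], (batch_size,), p=probs)
    offspring = variation(var_key, params[parents])

    # (ii) evaluation of the whole batch (the step parallelised on the accelerator)
    new_fitness, descriptors = evaluate(offspring)

    # (iii) insertion: keep the better solution in each target cell
    cells = to_cell(descriptors)
    for i in range(batch_size):
        c = cells[i]
        if new_fitness[i] > fitness[c]:
            fitness = fitness.at[c].set(new_fitness[i])
            params = params.at[c].set(offspring[i])
    return params, fitness
```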
We use MAP-Elites to study and show how QD algorithms can be scaled through parallelization. We leave other variants and enhancements of QD algorithms which use learned descriptors (Cully, 2019; Paolo et al., 2020; Miao et al., 2022) and different optimization strategies such as policy gradients (Nilsson & Cully, 2021; Pierrot et al., 2021; Tjanaka et al., 2022) and evolution strategies (Colas et al., 2020; Fontaine et al., 2020b) for future work; we expect these variants to only improve performance on the tasks, and to benefit from the same contributions and insights of this work.
## 5 Leveraging Hardware Acceleration For Quality-Diversity

In population-based methods, new solutions are the result of older solutions that have undergone variations throughout an iterative process, as described in Algo. 1. The number of iterations $I$ of a method commonly depends on the total number of evaluations (i.e., the computational budget) $H$ for the optimization algorithm and the batch size $N_B$. For a fixed computational budget, a large batch size $N_B$ results in a lower number of iterations $I$, and vice versa. At each iteration, a single solution can undergo a variation, and each variation is a learning step towards converging to an optimal solution. It follows that the number of iterations $I$ defines the maximum number of learning steps a solution can take. In the extreme case of $N_B = H$, the method simply reduces to a single random variation applied to a random sample of parameters.
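As a quick illustration of this trade-off, using the 5 million evaluation budget of the QD-RL experiments in Section 6, the number of iterations is simply the ceiling of the budget divided by the batch size:

```python
import math

budget = 5_000_000  # evaluation budget H used for the QD-RL tasks in Section 6
for batch_size in (256, 4_096, 131_072):
    iterations = math.ceil(budget / batch_size)  # I = ceil(H / N_B)
    print(f"N_B = {batch_size:>7,} -> I = {iterations:>6,} iterations")
# N_B =     256 -> I = 19,532 iterations
# N_B =   4,096 -> I =  1,221 iterations
# N_B = 131,072 -> I =     39 iterations
```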
![4_image_0.png](4_image_0.png)

Any variation operator can be used to obtain new solutions. We use the iso-line variation (see Algo. 2 in Appendix A.2).
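For reference, below is a minimal sketch of the iso-line (Iso+LineDD) variation operator of Vassiliades & Mouret (2018) applied to a batch of parent pairs; the default values of the two standard deviations are illustrative, the values used in our experiments are given in the Appendix.

```python
import jax
import jax.numpy as jnp

def isoline_variation(key, x1, x2, iso_sigma=0.01, line_sigma=0.1):
    """Iso+LineDD: Gaussian noise around x1 plus noise along the (x2 - x1) direction.

    x1 and x2 are (batch_size, num_params) arrays of parent parameters.
    """
    iso_key, line_key = jax.random.split(key)
    iso_noise = iso_sigma * jax.random.normal(iso_key, x1.shape)
    line_scale = line_sigma * jax.random.normal(line_key, (x1.shape[0], 1))
    return x1 + iso_noise + line_scale * (x2 - x1)
```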
Based on this, an initial thought would be that the performance of population-based methods would be negatively impacted by a heavily reduced number of learning steps, and that they would require more iterations to find good-performing solutions. The iterations of the algorithm are a sequential operation and cannot be parallelized. However, the evaluation of solutions at each iteration can be massively parallelized by increasing the batch size $N_B$, which suits modern hardware accelerators. We investigate the effect of large $N_B$ on population-based methods (more specifically, QD algorithms) by ablating exponentially increasing values of $N_B$, which has been relatively unexplored in the literature.

Conventional QD algorithms parallelize evaluations by utilizing multiple CPU cores, where each CPU core separately runs an instance of the simulation to evaluate a solution. For the robotics experiments, we utilize Brax (Freeman et al., 2021), a differentiable physics engine in Python which enables massively parallel rigid-body simulations. By leveraging a GPU/TPU, this simulator allows us to massively parallelize the evaluations in the QD loop, which are the major bottleneck of QD algorithms. To provide a sense of scale, QD algorithms normally run on the order of several dozen evaluations in parallel ($N_B \sim 10^2$) due to limited CPUs, while Brax can simulate over 10,000 solutions in parallel, allowing QDax to have $N_B \sim 10^5$.
Brax is built on top of the JAX (Bradbury et al., 2018) programming framework, which provides an API to run accelerated code across any number of hardware acceleration devices such as CPUs, GPUs or TPUs.
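The sketch below shows the vectorisation pattern this enables: a single-episode rollout written purely in JAX can be turned into a batched, JIT-compiled evaluation with `jax.vmap`. The `rollout` function here is a stand-in toy computation, not the Brax rollout used in QDax.

```python
import jax
import jax.numpy as jnp

def rollout(policy_params, rng):
    """Placeholder for an episode rollout written purely with JAX operations."""
    episode_return = jnp.tanh(policy_params).sum()
    descriptor = policy_params[:2]
    return episode_return, descriptor

# vmap maps the single rollout over a batch of solutions; jit compiles the whole
# batched computation into one XLA program that runs on the GPU/TPU.
batched_rollout = jax.jit(jax.vmap(rollout, in_axes=(0, 0)))

batch_size, num_params = 4096, 128
params = jax.random.normal(jax.random.PRNGKey(0), (batch_size, num_params))
rngs = jax.random.split(jax.random.PRNGKey(1), batch_size)
returns, descriptors = batched_rollout(params, rngs)  # shapes: (4096,), (4096, 2)
```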
Beyond massive parallelization, our implementation is also accelerated by writing code that is compatible with just-in-time (JIT) compilation and by keeping computation fully on-device. JIT compilation allows JAX to make full use of its Accelerated Linear Algebra (XLA) backend. We provide implementation details on the static archives that make QDax compatible with JIT in the Appendix. Lastly, another bottleneck which slowed the algorithm down was the data transfer and marshalling across devices.

To address this issue, we carefully consider data structures and place all of them on-device. QDax places the JIT-compiled QD algorithm components on the same device. This enables the entire QD algorithm to be run without interaction with the CPU.
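A minimal sketch of the idea behind these static archives: the archive is pre-allocated as fixed-shape, on-device arrays (empty cells marked by a $-\infty$ fitness) so that the insertion step can be JIT-compiled. This mirrors the implementation details given in the Appendix but is not the exact QDax data structure, and ties between offspring mapped to the same cell within one batch are resolved arbitrarily here.

```python
import jax
import jax.numpy as jnp

num_cells, num_params = 1024, 128
archive_params = jnp.zeros((num_cells, num_params))   # pre-allocated, fixed shapes
archive_fitness = jnp.full((num_cells,), -jnp.inf)    # -inf marks an empty cell

@jax.jit
def add_to_archive(archive_params, archive_fitness, cells, params, fitnesses):
    """Attempt to insert a batch of solutions, keeping the best solution per cell."""
    better = fitnesses > archive_fitness[cells]
    new_fitness = archive_fitness.at[cells].max(fitnesses)
    new_params = archive_params.at[cells].set(
        jnp.where(better[:, None], params, archive_params[cells])
    )
    return new_params, new_fitness
```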
![5_image_0.png](5_image_0.png)

Figure 2: [...] tasks respectively discover diverse gaits for moving forward, D: Omni-directional Ant task discovers ways to move in every direction, E: Ant Trap task with deceptive rewards.
## 6 Experiments

Our experiments aim to answer the following questions: (1) How does massive parallelization affect the performance of Quality-Diversity (MAP-Elites) algorithms? (2) How significant is the number of iterations/learning steps in QD algorithms? (3) What magnitude of speed-up does massive parallelization offer over existing implementations? (4) How does this differ across different hardware accelerators?
## 6.1 Domains

Rastrigin and Sphere. Rastrigin and Sphere are standard optimization functions commonly used as benchmark domains in optimization (Hansen et al., 2010; 2021) and in the QD literature (Fontaine et al., 2020b; Fontaine & Nikolaidis, 2021). We optimize an $n = 100$ dimensional parameter space bounded between 0 and 1. More details regarding the objective function and descriptors are provided in the Appendix. We select these simple domains to demonstrate that the effect of large batch sizes applies generally to QD algorithms and not just to certain domains.
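For illustration, a standard form of the Rastrigin objective (negated, since QD maximizes) is shown below; the descriptor used here (the first two parameters of the solution) is a common choice in the QD literature and is only an assumption, as the exact descriptors are specified in the Appendix.

```python
import jax.numpy as jnp

def rastrigin(x: jnp.ndarray) -> jnp.ndarray:
    """Negated Rastrigin function, so that higher is better."""
    return -jnp.sum(x ** 2 - 10.0 * jnp.cos(2.0 * jnp.pi * x) + 10.0)

def descriptor(x: jnp.ndarray) -> jnp.ndarray:
    """Illustrative descriptor: the first two components of the solution."""
    return x[:2]
```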
Planar Arm. The planar arm is a simple low-dimensional control task used in the QD literature (Cully et al., 2015; Vassiliades & Mouret, 2018; Fontaine & Nikolaidis, 2021), where the goal is to find an inverse kinematics solution (joint positions) of a planar robotic arm for every reachable position of the end effector. As the arm is redundant, the objective $f$ for each solution is to minimize the variance of the joint angles (i.e., smooth solutions), while the descriptor corresponds to the x-y position of the end effector obtained via forward kinematics. For our experiments, we use a 7-DoF arm.
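A minimal sketch of the planar-arm evaluation described above, assuming equal link lengths that sum to one; the objective is the negated variance of the joint angles and the descriptor is the end-effector position obtained by forward kinematics.

```python
import jax.numpy as jnp

def evaluate_arm(joint_angles: jnp.ndarray):
    """Return (objective, descriptor) for a planar arm with equal-length links."""
    n = joint_angles.shape[0]                 # 7 for the 7-DoF arm used here
    lengths = jnp.ones(n) / n                 # equal links, total reach of 1
    orientations = jnp.cumsum(joint_angles)   # absolute orientation of each link
    x = jnp.sum(lengths * jnp.cos(orientations))
    y = jnp.sum(lengths * jnp.sin(orientations))
    objective = -jnp.var(joint_angles)        # smooth solutions: low joint-angle variance
    return objective, jnp.array([x, y])
```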
Continuous Control QD-RL. We perform experiments on three different QD-RL benchmark tasks (Cully et al., 2015; Conti et al., 2018; Nilsson & Cully, 2021; Tjanaka et al., 2022): *omni-directional* robot locomotion, *uni-directional* robot locomotion, and a deceptive-reward *trap* task. In the omni-directional task, the goal is to discover locomotion skills to move efficiently in every direction. The descriptor function is defined as the final x-y position of the center of mass of the robot at the end of the episode, while the objective $f$ is defined as a sum of a survival reward and a torque cost. In contrast, the goal in the uni-directional tasks is to find a collection of diverse gaits to walk forward as fast as possible. In this task, the descriptor function is defined as the average time over the entire episode that each leg is in contact with the ground. For each foot $i$, the contact with the ground $C_i$ is logged as a Boolean (1: contact, 0: no contact) at each time step $t$. The descriptor function of this task was previously used to allow robots to recover quickly from mechanical damage (Cully et al., 2015). The objective $f$ of this task is a sum of the forward velocity, a survival reward and a torque cost. Full equation details can be found in Appendix B. In the Trap task, the environment contains a trap right in front of the ant. The goal is to learn to move forward as fast as possible. If done naively using purely objective-based algorithms, the ant gets stuck in the trap. The objective $f$ of this task is a sum of the forward velocity, a survival reward and a torque cost, while the descriptor is the final x-y position of the robot at the end of the episode.
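As an example, the uni-directional descriptor described above can be computed from the logged contacts as follows, assuming the per-timestep contact booleans have already been extracted from the simulator as a (T, num_feet) array; the names are illustrative.

```python
import jax.numpy as jnp

def feet_contact_descriptor(contacts: jnp.ndarray) -> jnp.ndarray:
    """Average fraction of the episode each foot spends in contact with the ground.

    `contacts` is a (T, num_feet) array of booleans (1: contact, 0: no contact);
    the result is a descriptor in [0, 1]^num_feet.
    """
    return jnp.mean(contacts.astype(jnp.float32), axis=0)
```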
We use the Hopper, Walker2D, Ant and Humanoid gym locomotion environments made available in Brax (Freeman et al., 2021) for these tasks. In total, we report results on a combination of six tasks and environments: Omni-directional Ant; Uni-directional Hopper, Walker, Ant and Humanoid; and Ant Trap. Fig. 2 illustrates examples of the types of behaviors discovered in these tasks. We use fully-connected neural network controllers with two hidden layers of size 64 and tanh output activation functions as policies across all QD-RL environments and tasks.
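A minimal sketch of this policy architecture (two hidden layers of 64 units and a tanh output) written with plain `jax.numpy`; the hidden-layer activation and the initialisation scale are illustrative assumptions, and QDax builds its networks with standard JAX tooling rather than this hand-rolled MLP.

```python
import jax
import jax.numpy as jnp

def init_policy(key, obs_dim, action_dim, hidden=64):
    """Initialise a 2-hidden-layer MLP as a list of (weight, bias) pairs."""
    sizes = [obs_dim, hidden, hidden, action_dim]
    keys = jax.random.split(key, len(sizes) - 1)
    return [
        (0.1 * jax.random.normal(k, (n_in, n_out)), jnp.zeros(n_out))
        for k, n_in, n_out in zip(keys, sizes[:-1], sizes[1:])
    ]

def policy_apply(params, obs):
    """Forward pass: tanh hidden activations (assumed) and tanh output (as stated)."""
    h = obs
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return jnp.tanh(h @ w + b)
```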
## 6.2 Effects Of Massive Parallelization On QD Algorithms

To evaluate the effect of the batch size $N_B$, we run the algorithm for a fixed number of evaluations. We use 5 million evaluations for all QD-RL environments and 20 million evaluations for *rastrigin* and *sphere*. We evaluate the performance of the different batch sizes using the QD-score. The QD-score (Pugh et al., 2016) aims to capture both performance and diversity in a single metric. This metric is computed as the sum of objective values of all solutions in the archive (Eqn. 1). We plot this metric with respect to three separate factors: number of evaluations, number of iterations and total runtime. Other metrics commonly used in the QD literature, such as the best objective value and coverage, can be found in the Appendix. We use a single A100 GPU to perform our experiments.
Fig. 3 shows the performance curves for the QD-score and, more importantly, the differences when plotted against the number of evaluations, the number of iterations and the total runtime. A key observation in the first column is that the metrics converge to the same final score after the fixed number of evaluations, regardless of the batch size used. The Wilcoxon Rank-Sum Test for the final QD-score across all the different batch sizes results in p-values p > 0.05 after applying the Bonferroni correction. This shows that we do not observe statistically significant differences between the different batch sizes. Therefore, larger batch sizes and massive parallelism do not negatively impact the final performance of the algorithm. However, an important observation is that the larger batch sizes tend to be slower to converge in terms of number of evaluations. This can be expected, as a larger number of evaluations is performed per iteration at larger batch sizes. Conversely, in some cases (Ant-Uni), a larger batch size can even have a positive impact on the QD-score. Given this result, the third column of Fig. 3 then demonstrates the substantial speed-up in total runtime of the algorithm obtained from using larger batch sizes with massive parallelism while obtaining the same performance. We can obtain similar results on the order of minutes instead of hours. An expected observation when comparing the plots in the second and third columns of Fig. 3 is that the total runtime is proportional to the number of iterations. As we are able to increase the evaluation throughput at each iteration through parallelization, it takes a similar amount of time to evaluate both smaller and larger batch sizes. The speed-up in total runtime obtained by increasing the batch size eventually disappears as we reach the limitations of the hardware. This corresponds to the results presented in the next section 6.3 (Fig. 4), where we see the number of eval/s plateauing.
![7_image_0.png](7_image_0.png)

Figure 3: [...] evaluations, iterations and run-times for each batch size. The rightmost column shows the final archives. The Ant and Hopper Uni results are presented in the Appendix. The bold lines and shaded areas represent the median and interquartile range over 10 replications respectively.
| Batch Size | Rastrigin | Sphere | Arm  | HopperUni | WalkerUni | AntUni | HumanoidUni | AntOmni | AntTrap |
|-----------:|----------:|-------:|-----:|----------:|----------:|-------:|------------:|--------:|--------:|
| 256        | 55025     | 65550  | 7761 | 185       | 8587      | 14951  | 11183       | 7272    | 1339    |
| 1,024      | 12720     | 16327  | 1770 | 63        | 1432      | 3412   | 3608        | 1690    | 562     |
| 4,096      | 2920      | 4086   | 451  | 15        | 254       | 734    | 470         | 400     | 119     |
| 16,384     | 832       | 1053   | 104  | 4         | 62        | 177    | 182         | 98      | 63      |
| 32,768     | 455       | 535    | 54   | 2         | 29        | 85     | 101         | 49      | 24      |
| 65,536     | 240       | 281    | 34   | 2         | 21        | 45     | 56          | 33      | 18      |
| 131,072    | 147       | 151    | 23   | 1         | 13        | 26     | -           | 21      | 16      |

Table 1: Number of iterations needed to reach the threshold QD-score (the minimum final QD-score across all batch sizes). Rastrigin and Sphere are the black-box optimization domains, Arm is the simple control domain, and the remaining columns are the neuroevolution QD-RL tasks.
The most surprising result is that the number of iterations, and thus of learning steps, does not significantly affect the performance of the algorithm when increasing batch sizes $N_B$ are used (Fig. 3, second column).

In the case of the *QD-RL* domains, we observe that using $N_B$ = 131,072, which runs for a total of only $I$ = 39 iterations, provides similar performance to a run with $N_B$ = 256 and $I$ = 19,532. This is true for all the problem domains presented. To evaluate this more concretely, Table 1 shows the number of iterations needed to reach a threshold QD-score. The threshold QD-score is the minimum QD-score reached across all the batch sizes. The results in Table 1 clearly show that larger batch sizes require significantly fewer iterations to achieve the same QD-score.

Given a fixed number of evaluations, a larger batch size implies a lower number of iterations. This can also be observed in the second column of Fig. 3. Therefore, our results show that a larger batch size with fewer iterations has no negative impact on QD algorithms and can significantly speed up their runtime through parallelization for a fixed evaluation budget. The iterations/learning steps remain an important part of QD algorithms, as new solutions that have recently been discovered and added to the archive $\mathcal{A}$ in a previous iteration can be selected as stepping stones to form good future solutions. This is particularly evident in more complex tasks (Ant Trap) with an exploration bottleneck, where it can be observed that large batch sizes struggle due to the lack of stepping stones resulting from a low number of iterations. However, our experiments show that iterations are not as important as long as the number of evaluations remains identical. This is a critical observation because, while iterations cannot be performed in parallel, evaluations can be, which enables the use of massive parallelization.
This is in contrast to other population-based approaches such as evolution strategies (Salimans et al., 2017), where we observe that massively large batch sizes can be detrimental when we run similar batch size ablation experiments (see Appendix C). We hypothesize that the reason for this is that in conventional evolution strategies (ES), there is a single optimization "thread": the population is mainly used to provide an empirical approximation of the gradient. Increasing the size of the population helps to obtain a better estimate, but beyond a certain size there are no further benefits and, with larger batch sizes, the algorithm wastes evaluations. Conversely, in Quality-Diversity algorithms (MAP-Elites in particular), each of the thousands of cells of the container can be seen as an independent optimization "thread". This enables MAP-Elites to greatly benefit from extra-large batch sizes.
## 6.3 Evaluation Throughput And Runtime Speed Of QD Algorithms

We also evaluate the effect that increasing batch sizes has on the evaluation throughput of QD algorithms. We start with a batch size $N_B$ of 64 and double it from this value until we reach a plateau and observe a drop in throughput. In our experiments, a maximum batch size of 131,072 is used.
The number of evaluations per second (eval/s) is used to quantify this throughput. The eval/s metric is computed by running the algorithm for a fixed number of generations (also referred to as iterations) $N$ (100 in our experiments). At each iteration $n$, the batch size $N_B$ gives the number of evaluations performed, and we divide this value by the time $t_n$ it takes to perform the iteration. We use the average value of this metric across the entire run: $\text{eval/s} = \frac{1}{N}\sum_{n=1}^{N} \frac{N_B}{t_n}$. While the evaluations per second can be an indication of the improvement in throughput from this implementation, we ultimately care about running the entire algorithm faster. To do this, we evaluate the ability to speed up the total runtime of QD algorithms. In this case, we run the algorithm for a fixed number of evaluations (1 million), as usually done in the QD literature. Running for a fixed number of iterations would be an unfair comparison, as the experiments with smaller batch sizes would perform far fewer evaluations in total.
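A small sketch of how this metric can be computed from the wall-clock time of each iteration; the variable names are illustrative.

```python
import numpy as np

def evals_per_second(batch_size: int, iteration_times) -> float:
    """eval/s = (1/N) * sum_n(batch_size / t_n) over the N timed iterations."""
    return float(np.mean(batch_size / np.asarray(iteration_times)))
```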
| Implementation | Simulator | Resources    | Eval/s | Batch size (best eval/s) | Runtime (s) | Batch size (best runtime) |
|----------------|-----------|--------------|-------:|-------------------------:|------------:|--------------------------:|
| QDax (Ours)    | Brax      | GPU A100     | 30,846 | 65,536                   | 69          | 65,536                     |
| QDax (Ours)    | Brax      | GPU 2080     | 11,031 | 8,192                    | 117         | 8,192                      |
| pyribs         | PyBullet  | 32 CPU-cores | 184    | 8,192                    | 7,234       | 4,096                      |
| pymapelites    | PyBullet  | 32 CPU-cores | 185    | 8,192                    | 6,509       | 16,384                     |
| Sferesv2       | DART      | 32 CPU-cores | 1,190  | 512                      | 1,243       | 32,768                     |

Table 2: Maximum throughput of evaluations per second and fastest runtime obtained, with their corresponding batch sizes, across implementations. The medians over the 10 replications are reported.
For this experiment, we consider the Ant Omni-directional task. We compare against common implementations of MAP-Elites and open-source simulators which utilize parallelism across CPU threads (see Table 2). All baseline algorithms used simulations with a fixed number of timesteps (100 in our experiments). We compare against both Python and C++ implementations as baselines. Pymapelites (Mouret & Clune, 2015) is a simple reference implementation from the authors of MAP-Elites that was made to be easily adapted for individual purposes. Pyribs (Tjanaka et al., 2021) is a more recent QD optimization library maintained by the authors of CMA-ME (Fontaine et al., 2020b). In both Python implementations, evaluations are parallelised on each core using the multiprocessing Python package. Lastly, Sferesv2 (Mouret & Doncieux, 2010) is an optimized, multi-core and lightweight C++ framework for evolutionary computation, which includes QD implementations. It relies on template-based programming to achieve optimized execution speeds. The multi-core distribution of parallel evaluations is handled by the Intel Threading Building Blocks (TBB) library. For simulators, we use PyBullet (Coumans & Bai, 2016–2020) in our Python baselines and the Dynamic Animation and Robotics Toolkit (DART) (Lee et al., 2018) for the C++ baseline. We recognize that this comparison is not perfect and that there could exist more optimized combinations of implementations and simulators, but we believe the selected baselines give a good representation of what is commonly used in most works (Cully & Mouret, 2013; Nilsson & Cully, 2021; Tjanaka et al., 2022).
Figure 4: Average number of eval/s and full runtime of the algorithm across batch sizes and implementations. Note the log scales on both axes to make the distinction between batch sizes clearer.

We also test our implementation on two different GPU devices: a more accessible RTX 2080 local device and a higher-performance A100 on Google Cloud. We only consider a single GPU device at a time. QDax was also tested and can be used to perform experiments across distributed GPU and TPU devices, but we omit these results for simplicity.
Fig. 4 (left) clearly shows that QDax has the ability to scale to much larger batch sizes, which results in a higher throughput of evaluations. It is important to note the log scale on both axes to appreciate the magnitude of the differences. For the QDax implementations (blue and orange), the number of evaluations per second scales as the batch size increases. This value eventually plateaus once we reach the limit of the device. On the other hand, all the baseline implementations scale to a significantly lower extent. These results are expected, as evaluations using simulators which run on CPUs are limited by each CPU core running a separate instance of the simulation. Therefore, given only a fixed number of CPU cores, these baselines do not scale as the batch size is increased. Scaling is only possible by increasing the number of CPUs used in parallel, which requires a large distributed system with thousands of CPUs in the network.

QDax can reach up to a maximum of 30,000 evaluations per second on an A100 GPU, compared to a maximum of 1,200 (C++) or 200 (Python) evaluations per second in the baselines (see Table 2). This is a 30 to 100 times increase in throughput, turning computation on the order of days into minutes. The negligible differences between the pyribs (green) and pymapelites (red) results show that the major bottleneck is indeed the evaluations and the simulator used, as both of these baselines use the PyBullet simulator. The performance of the Sferesv2 (purple) implementation can be attributed to its highly optimized C++ code. However, the same lack of scalability is observed when the batch size is increased. When looking at the runtime of the algorithm for a fixed number of evaluations in Fig. 4 (right), we can see the effect of the larger throughput of evaluations at each iteration reflected in the decreasing runtime when larger batch sizes are used. We can run a QD algorithm with 1 million evaluations in just slightly over a minute (see Table 2) when using a batch size of 65,536, compared to over 100 minutes taken by the Python baselines.
Our results also show that this scaling through massive parallelism is only limited by the hardware available. The experiments on both the RTX 2080 (orange) and A100 (blue) show similar trends and increases in both evaluations per second and total runtime. The 2080 plateaus at a batch size of 8,192, capable of 11,000 eval/s, while the higher-end A100 plateaus later at a batch size of 65,536, completing 30,000 eval/s.
## 7 Limitations And Future Work

In this paper, we presented QDax, an implementation of MAP-Elites that utilizes massive parallelization on accelerators to reduce the runtime of QD algorithms to interactive timescales, on the order of minutes instead of hours or days. We evaluate QDax across a range of QD tasks and show that the performance of QD algorithms is maintained despite the significant speed-up that comes with the massive parallelism.

Despite the apparent importance of iterations in QD algorithms, we show that when large batch sizes are used, a heavily reduced number of iterations, and hence of learning steps, provides similar results, thus greatly accelerating the runtime of these QD algorithms. This is observed on all the QD problems we considered, ranging from black-box optimization problems to high-dimensional RL tasks.

Despite reaping the benefits of hardware in order to significantly accelerate the algorithm, some limitations could arise in the future when scaling up this work. As the archive stores the parameters of the entire population, among other things, the memory of the device becomes an issue, preventing larger networks with more parameters and higher-dimensional inputs from being used. This issue showed up at the largest batch size of 131,072 in the humanoid environment, which has a significantly larger observation space of close to 300 dimensions. Similarly, this memory limitation also prevents larger archives with more cells (i.e., a larger population size) from being used. However, the experiments in this paper do not use anything smaller than what is commonly used in the QD literature.
QDax is a general framework and tool that is useful for accelerating population-based learning, including QD algorithms, over a wide range of problem settings. From the code provided in the supplementary materials and the experiments shown, we demonstrate that QDax can be used for standard black-box function optimization tasks, simple low-dimensional control tasks, and more complex neuroevolution tasks such as QD-RL. Beyond the MAP-Elites (ME) algorithm used in this paper, the QDax framework can be used to implement and accommodate more complex QD algorithms such as ME-ES (Colas et al., 2020), PGA-ME (Nilsson & Cully, 2021), CMA-ME (Fontaine et al., 2020b), Differentiable QD (Fontaine & Nikolaidis, 2021) and more. However, while we expect the findings of the paper regarding the relationship between the batch size and iterations to hold for these algorithms, the time performance might suffer due to the additional computation, such as gradient steps or matrix inversions, required by these more sophisticated algorithms.

Through this work, we hope the increased accessibility of QDax can help bring ideas from an emerging field of optimization to accelerate progress in machine learning. We also hope to see new algorithmic innovations that leverage massive parallelization to improve the performance of QD algorithms.
## Broader Impact Statement

Accessibility was a key consideration and motivation of this work. Our work turns the execution of QD algorithms from what took days/weeks on large CPU clusters into only minutes on a single, easily accessible and free cloud GPU/TPU. We hope that this will reduce barriers to access for these algorithms across wider and more diverse communities of people, particularly from emerging and developing economies, and beyond only well-resourced research groups and enterprises. We believe this increase in accessibility has the potential to open up novel applications of QD in new domains. On the other hand, a key scientific takeaway from our work is that QD algorithms were able to scale with more powerful and modern hardware through massive parallelization. We recognize that this could also open up the possibility of 'buying' results (Schwartz et al., 2020) by utilizing more compute.

We are also aware that algorithms that scale well with compute come with the risk of a larger carbon footprint.
As is evident from the bloom of deep learning and, more recently, large language models, computational demands only continue to increase with models that scale. Interestingly, our work shows that, given the same machine, we can decrease computation time by two orders of magnitude with no loss in performance of the algorithm, which can aid in reducing environmental impact. Nonetheless, we highlight research on the environmental and carbon impact of AI and recommendations to be accountable in minimizing its effects (Dobbe & Whittaker, 2019; Dhar, 2020). A common step towards reducing AI's climate impact is to increase transparency on the energy consumption and carbon emissions of the computational resources used. We report the estimated emissions of our experiments in the Appendix. Estimations were conducted using the Machine Learning Impact calculator (Lacoste et al., 2019).

## Acknowledgments

This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) grant EP/V006673/1 project REcoVER, and by Google with GCP credits.
|
244 |
+
|
245 |
+
## References
|
246 |
+
|
247 |
+
Martin Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jerey Dean, Matthieu Devin, Sanjay Ghemawat, Georey Irving, Michael Isard, Manjunath Kudlur, Josh Levenberg, Rajat Monga, Sherry Moore, Derek G. Murray, Benoit Steiner, Paul Tucker, Vijay Vasudevan, Pete Warden, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. Tensorflow: A system for large-scale machine learning. In *12th* USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283, 2016.
|
248 |
+
|
249 |
+
URL https://www.usenix.org/system/files/conference/osdi16/osdi16-abadi.pdf.
|
250 |
+
|
251 |
+
Andrei Barbu, David Mayo, Julian Alverio, William Luo, Christopher Wang, Danny Gutfreund, Joshua Tenenbaum, and Boris Katz. Objectnet: A large-scale bias-controlled dataset for pushing the limits of object recognition models. 2019.
|
252 |
+
|
253 |
+
James Bradbury, Roy Frostig, Peter Hawkins, Matthew James Johnson, Chris Leary, Dougal Maclaurin, George Necula, Adam Paszke, Jake VanderPlas, Skye Wanderman-Milne, and Qiao Zhang. JAX: composable transformations of Python+NumPy programs, 2018. URL http://github.com/google/jax.
|
254 |
+
|
255 |
+
Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners.
|
256 |
+
|
257 |
+
arXiv preprint arXiv:2005.14165, 2020.
|
258 |
+
|
259 |
+
Konstantinos Chatzilygeroudis, Vassilis Vassiliades, and Jean-Baptiste Mouret. Reset-free trial-and-error learning for robot damage recovery. *Robotics and Autonomous Systems*, 100:236–250, 2018.
|
260 |
+
|
261 |
+
Konstantinos Chatzilygeroudis, Antoine Cully, Vassilis Vassiliades, and Jean-Baptiste Mouret. Qualitydiversity optimization: a novel branch of stochastic optimization. In *Black Box Optimization, Machine* Learning, and No-Free Lunch Theorems, pp. 109–135. Springer, 2021.
|
262 |
+
|
263 |
+
Dan C. Ciresan, Ueli Meier, and Jürgen Schmidhuber. Multi-column deep neural networks for image classification. *CoRR*, abs/1202.2745, 2012. URL http://arxiv.org/abs/1202.2745.
|
264 |
+
|
265 |
+
Je Clune. Ai-gas: Ai-generating algorithms, an alternate paradigm for producing general artificial intelligence.
|
266 |
+
|
267 |
+
arXiv preprint arXiv:1905.10985, 2019.
|
268 |
+
|
269 |
+
Cédric Colas, Vashisht Madhavan, Joost Huizinga, and Je Clune. Scaling map-elites to deep neuroevolution.
|
270 |
+
|
271 |
+
In *Proceedings of the 2020 Genetic and Evolutionary Computation Conference*, pp. 67–75, 2020.
|
272 |
+
|
273 |
+
Ronan Collobert, Koray Kavukcuoglu, and Clément Farabet. Torch7: A matlab-like environment for machine learning. 2011. URL http://infoscience.epfl.ch/record/192376.
|
274 |
+
|
275 |
+
Edoardo Conti, Vashisht Madhavan, Felipe Petroski Such, Joel Lehman, Kenneth Stanley, and Je Clune.
|
276 |
+
|
277 |
+
Improving exploration in evolution strategies for deep reinforcement learning via a population of noveltyseeking agents. *Advances in neural information processing systems*, 31, 2018.
|
278 |
+
|
279 |
+
Erwin Coumans and Yunfei Bai. Pybullet, a python module for physics simulation for games, robotics and machine learning. http://pybullet.org, 2016–2020.
|
280 |
+
|
281 |
+
Antoine Cully. Autonomous skill discovery with quality-diversity and unsupervised descriptors. In *Proceedings* of the Genetic and Evolutionary Computation Conference, pp. 81–89, 2019.
|
282 |
+
|
283 |
+
Antoine Cully. Multi-emitter map-elites: Improving quality, diversity and convergence speed with heterogeneous sets of emitters. *arXiv preprint arXiv:2007.05352*, 2020.
|
284 |
+
|
285 |
+
Antoine Cully and Yiannis Demiris. Quality and diversity optimization: A unifying modular framework.
|
286 |
+
|
287 |
+
IEEE Transactions on Evolutionary Computation, 22(2):245–259, 2017.
|
288 |
+
|
289 |
+
Antoine Cully and Jean-Baptiste Mouret. Behavioral repertoire learning in robotics. In Proceedings of the 15th annual conference on Genetic and evolutionary computation, pp. 175–182, 2013.
|
290 |
+
|
291 |
+
Antoine Cully, Je Clune, Danesh Tarapore, and Jean-Baptiste Mouret. Robots that can adapt like animals.
|
292 |
+
|
293 |
+
Nature, 521(7553):503–507, 2015.
|
294 |
+
|
295 |
+
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In *2009 IEEE conference on computer vision and pattern recognition*, pp. 248–255. Ieee, 2009.
|
296 |
+
|
297 |
+
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. *arXiv preprint arXiv:1810.04805*, 2018.
|
298 |
+
|
299 |
+
Payal Dhar. The carbon impact of artificial intelligence. *Nature Machine Intelligence*, 2(8):423–425, 2020. R. Dobbe and M. Whittaker. Ai and climate change: How they're connected, and what we can do about it.
|
300 |
+
|
301 |
+
2019.
|
302 |
+
|
303 |
+
Sam Earle, Justin Snider, Matthew C Fontaine, Stefanos Nikolaidis, and Julian Togelius. Illuminating diverse neural cellular automata for level generation. *arXiv preprint arXiv:2109.05489*, 2021.
|
304 |
+
|
305 |
+
Adrien Ecoet, Joost Huizinga, Joel Lehman, Kenneth O Stanley, and Je Clune. First return, then explore.
|
306 |
+
|
307 |
+
Nature, 590(7847):580–586, 2021.
|
308 |
+
|
309 |
+
Benjamin Eysenbach, Abhishek Gupta, Julian Ibarz, and Sergey Levine. Diversity is all you need: Learning skills without a reward function. *arXiv preprint arXiv:1802.06070*, 2018.
|
310 |
+
|
311 |
+
Matthew Fontaine and Stefanos Nikolaidis. Dierentiable quality diversity. Advances in Neural Information Processing Systems, 34, 2021.
|
312 |
+
|
313 |
+
Matthew C Fontaine, Scott Lee, Lisa B Soros, Fernando de Mesentier Silva, Julian Togelius, and Amy K
|
314 |
+
Hoover. Mapping hearthstone deck spaces through map-elites with sliding boundaries. In Proceedings of The Genetic and Evolutionary Computation Conference, pp. 161–169, 2019.
|
315 |
+
|
316 |
+
Matthew C Fontaine, Ruilin Liu, Ahmed Khalifa, Jignesh Modi, Julian Togelius, Amy K Hoover, and Stefanos Nikolaidis. Illuminating mario scenes in the latent space of a generative adversarial network. *arXiv preprint* arXiv:2007.05674, 2020a.
|
317 |
+
|
318 |
+
Matthew C Fontaine, Julian Togelius, Stefanos Nikolaidis, and Amy K Hoover. Covariance matrix adaptation for the rapid illumination of behavior space. In *Proceedings of the 2020 genetic and evolutionary computation* conference, pp. 94–102, 2020b.
|
319 |
+
|
320 |
+
C. Daniel Freeman, Erik Frey, Anton Raichuk, Sertan Girgin, Igor Mordatch, and Olivier Bachem. Brax
|
321 |
+
- a dierentiable physics engine for large scale rigid body simulation, 2021. URL http://github.com/
|
322 |
+
google/brax.
|
323 |
+
|
324 |
+
Adam Gaier, Alexander Asteroth, and Jean-Baptiste Mouret. Data-ecient design exploration through surrogate-assisted illumination. *Evolutionary computation*, 26(3):381–410, 2018.
|
325 |
+
|
326 |
+
Daniele Gravina, Ahmed Khalifa, Antonios Liapis, Julian Togelius, and Georgios N Yannakakis. Procedural content generation through quality diversity. In *2019 IEEE Conference on Games (CoG)*, pp. 1–8. IEEE,
|
327 |
+
2019.
|
328 |
+
|
329 |
+
Shixiang Shane Gu, Manfred Diaz, Daniel C Freeman, Hiroki Furuta, Seyed Kamyar Seyed Ghasemipour, Anton Raichuk, Byron David, Erik Frey, Erwin Coumans, and Olivier Bachem. Braxlines: Fast and interactive toolkit for rl-driven behavior engineering beyond reward maximization. *arXiv preprint arXiv:2110.04686*,
|
330 |
+
2021.
|
331 |
+
|
332 |
+
Nikolaus Hansen, Anne Auger, Raymond Ros, Steffen Finck, and Petr Posik. Comparing Results of 31 Algorithms from the Black-Box Optimization Benchmarking BBOB-2009. In *ACM-GECCO Genetic and Evolutionary Computation Conference*, pp. 1689–1696, Portland, United States, July 2010. URL https://hal.archives-ouvertes.fr/hal-00545727.
Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. Coco: A platform for comparing continuous optimizers in a black-box setting. *Optimization Methods and Software*, 36(1):114–144, 2021.
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778, 2016.
Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. *Neural computation*, 9(8):1735–1780, 1997.
John Jumper, Richard Evans, Alexander Pritzel, Tim Green, Michael Figurnov, Olaf Ronneberger, Kathryn Tunyasuvunakool, Russ Bates, Augustin Žídek, Anna Potapenko, Alex Bridgland, Clemens Meyer, Simon A A Kohl, Andrew J Ballard, Andrew Cowie, Bernardino Romera-Paredes, Stanislav Nikolov, Rishub Jain, Jonas Adler, Trevor Back, Stig Petersen, David Reiman, Ellen Clancy, Michal Zielinski, Martin Steinegger, Michalina Pacholska, Tamas Berghammer, Sebastian Bodenstein, David Silver, Oriol Vinyals, Andrew W Senior, Koray Kavukcuoglu, Pushmeet Kohli, and Demis Hassabis. Highly accurate protein structure prediction with AlphaFold. *Nature*, 596(7873):583–589, 2021. doi: 10.1038/s41586-021-03819-2.
Rituraj Kaushik, Pierre Desreumaux, and Jean-Baptiste Mouret. Adaptive prior selection for repertoire-based online adaptation in robotics. *Frontiers in Robotics and AI*, 6:151, 2020.
Leon Keller, Daniel Tanneberg, Svenja Stark, and Jan Peters. Model-based quality-diversity search for efficient robot learning. *arXiv preprint arXiv:2008.04589*, 2020.
Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. *Advances in neural information processing systems*, 25:1097–1105, 2012.
Alexandre Lacoste, Alexandra Luccioni, Victor Schmidt, and Thomas Dandres. Quantifying the carbon emissions of machine learning. *arXiv preprint arXiv:1910.09700*, 2019.
Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. *Nature*, 521(7553):436–444, 2015. ISSN 1476-4687. doi: 10.1038/nature14539. URL https://www.nature.com/articles/nature14539.
Jeongseok Lee, Michael X. Grey, Sehoon Ha, Tobias Kunz, Sumit Jain, Yuting Ye, Siddhartha S. Srinivasa, Mike Stilman, and C. Karen Liu. DART: Dynamic animation and robotics toolkit. *The Journal of Open Source Software*, 3(22):500, Feb 2018. doi: 10.21105/joss.00500. URL https://doi.org/10.21105/joss.00500.
Joel Lehman and Kenneth O Stanley. Abandoning objectives: Evolution through the search for novelty alone. *Evolutionary computation*, 19(2):189–223, 2011a.
Joel Lehman and Kenneth O Stanley. Evolving a diversity of virtual creatures through novelty search and local competition. In *Proceedings of the 13th annual conference on Genetic and evolutionary computation*, pp. 211–218, 2011b.
Bryan Lim, Luca Grillotti, Lorenzo Bernasconi, and Antoine Cully. Dynamics-aware quality-diversity for efficient learning of skill repertoires. *arXiv preprint arXiv:2109.08522*, 2021.
Viktor Makoviychuk, Lukasz Wawrzyniak, Yunrong Guo, Michelle Lu, Kier Storey, Miles Macklin, David Hoeller, Nikita Rudin, Arthur Allshire, Ankur Handa, and Gavriel State. Isaac gym: High performance gpu-based physics simulation for robot learning. *CoRR*, abs/2108.10470, 2021. URL https://arxiv.org/abs/2108.10470.
Jiayu Miao, Tianze Zhou, Kun Shao, Ming Zhou, Weinan Zhang, Jianye Hao, Yong Yu, and Jun Wang. Promoting quality and diversity in population-based reinforcement learning via hierarchical trajectory space exploration. In *2022 International Conference on Robotics and Automation (ICRA)*, pp. 7544–7550. IEEE, 2022.
J.-B. Mouret and S. Doncieux. SFERESv2: Evolvin' in the multi-core world. In Proc. of Congress on Evolutionary Computation (CEC), pp. 4079–4086, 2010.
J-B Mouret and Stéphane Doncieux. Encouraging behavioral diversity in evolutionary robotics: An empirical study. *Evolutionary computation*, 20(1):91–133, 2012.
Jean-Baptiste Mouret and Jeff Clune. Illuminating search spaces by mapping elites. *arXiv preprint arXiv:1504.04909*, 2015.
Jean-Baptiste Mouret and Stéphane Doncieux. Overcoming the bootstrap problem in evolutionary robotics using behavioral diversity. In *2009 IEEE Congress on Evolutionary Computation*, pp. 1161–1168. IEEE, 2009.
Olle Nilsson and Antoine Cully. Policy gradient assisted map-elites. In Proceedings of the Genetic and Evolutionary Computation Conference, pp. 866–875, 2021.
Giuseppe Paolo, Alban Laflaquiere, Alexandre Coninx, and Stephane Doncieux. Unsupervised learning and exploration of reachable outcome space. In 2020 IEEE International Conference on Robotics and Automation (ICRA), pp. 2379–2385. IEEE, 2020.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Köpf, Edward Z. Yang, Zach DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. Pytorch: An imperative style, high-performance deep learning library. *CoRR*, abs/1912.01703, 2019. URL http://arxiv.org/abs/1912.01703.
Thomas Pierrot, Valentin Macé, Geoffrey Cideron, Karim Beguir, Antoine Cully, Olivier Sigaud, and Nicolas Perrin. Diversity policy gradient for sample efficient quality-diversity optimization. 2021.
Justin K Pugh, Lisa B Soros, and Kenneth O Stanley. Quality diversity: A new frontier for evolutionary computation. *Frontiers in Robotics and AI*, 3:40, 2016.
Rajat Raina, Anand Madhavan, and Andrew Y. Ng. Large-scale deep unsupervised learning using graphics processors. In *Proceedings of the 26th Annual International Conference on Machine Learning*, ICML '09, pp. 873–880, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553486. URL https://doi.org/10.1145/1553374.1553486.
Joseph Redmon, Santosh Divvala, Ross Girshick, and Ali Farhadi. You only look once: Unified, real-time object detection. In *Proceedings of the IEEE conference on computer vision and pattern recognition*, pp. 779–788, 2016.
Nikita Rudin, David Hoeller, Philipp Reist, and Marco Hutter. Learning to walk in minutes using massively parallel deep reinforcement learning. *arXiv preprint arXiv:2109.11978*, 2021.
Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, and Ilya Sutskever. Evolution strategies as a scalable alternative to reinforcement learning. *arXiv preprint arXiv:1703.03864*, 2017.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. Proximal policy optimization algorithms. *arXiv preprint arXiv:1707.06347*, 2017.
Roy Schwartz, Jesse Dodge, Noah A Smith, and Oren Etzioni. Green AI. *Communications of the ACM*, 63(12):54–63, 2020.
Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-lm: Training multi-billion parameter language models using model parallelism. *arXiv preprint arXiv:1909.08053*, 2019.
Shaden Smith, Mostofa Patwary, Brandon Norick, Patrick LeGresley, Samyam Rajbhandari, Jared Casper, Zhun Liu, Shrimai Prabhumoye, George Zerveas, Vijay Korthikanti, Elton Zhang, Rewon Child, Reza Yazdani Aminabadi, Julie Bernauer, Xia Song, Mohammad Shoeybi, Yuxiong He, Michael Houston, Saurabh Tiwary, and Bryan Catanzaro. Using deepspeed and megatron to train megatron-turing nlg 530b, a large-scale generative language model, 2022.
Kenneth O Stanley. Why open-endedness matters. *Artificial life*, 25(3):232–235, 2019.
Kenneth O Stanley, Joel Lehman, and Lisa Soros. Open-endedness: The last grand challenge you've never heard of. *While open-endedness could be a force for discovering intelligence, it could also be a component of AI itself*, 2017.
Dave Steinkrau, Patrice Y. Simard, and Ian Buck. Using gpus for machine learning algorithms. In *Proceedings of the Eighth International Conference on Document Analysis and Recognition*, ICDAR '05, pp. 1115–1119, USA, 2005. IEEE Computer Society. ISBN 0769524206. doi: 10.1109/ICDAR.2005.251. URL https://doi.org/10.1109/ICDAR.2005.251.
Bryon Tjanaka, Matthew C. Fontaine, Yulun Zhang, Sam Sommerer, Nathan Dennler, and Stefanos Nikolaidis. pyribs: A bare-bones python library for quality diversity optimization. https://github.com/icaros-usc/pyribs, 2021.
Bryon Tjanaka, Matthew C Fontaine, Julian Togelius, and Stefanos Nikolaidis. Approximating gradients for differentiable quality diversity in reinforcement learning. *arXiv preprint arXiv:2202.03666*, 2022.
Emanuel Todorov, Tom Erez, and Yuval Tassa. Mujoco: A physics engine for model-based control. In *2012 IEEE/RSJ International Conference on Intelligent Robots and Systems*, pp. 5026–5033. IEEE, 2012.
Vassilis Vassiliades and Jean-Baptiste Mouret. Discovering the Elite Hypervolume by Leveraging Interspecies Correlation. In *GECCO 2018 - Genetic and Evolutionary Computation Conference*, Kyoto, Japan, July 2018. doi: 10.1145/3205455.3205602. URL https://hal.inria.fr/hal-01764739.
Vassilis Vassiliades, Konstantinos Chatzilygeroudis, and Jean-Baptiste Mouret. Using centroidal voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. *IEEE Transactions on Evolutionary Computation*, 22(4):623–630, 2017.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In *Advances in neural information processing systems*, pp. 5998–6008, 2017.
Rui Wang, Joel Lehman, Je Clune, and Kenneth O Stanley. Poet: open-ended coevolution of environments and their optimized solutions. In *Proceedings of the Genetic and Evolutionary Computation Conference*, pp. 142–151, 2019.
Rui Wang, Joel Lehman, Aditya Rawal, Jiale Zhi, Yulun Li, Jeffrey Clune, and Kenneth Stanley. Enhanced poet: Open-ended reinforcement learning through unbounded invention of learning challenges and their solutions. In *International Conference on Machine Learning*, pp. 9940–9951. PMLR, 2020.
Yutong Wang, Ke Xue, and Chao Qian. Evolutionary diversity optimization with clustering-based selection for reinforcement learning. In *International Conference on Learning Representations*, 2021.
Daan Wierstra, Tom Schaul, Jan Peters, and Juergen Schmidhuber. Natural evolution strategies. In *2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)*, pp. 3381–3387, 2008. doi: 10.1109/CEC.2008.4631255.
|
znNITCJyTI/znNITCJyTI_meta.json
ADDED
@@ -0,0 +1,25 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
{
    "languages": null,
    "filetype": "pdf",
    "toc": [],
    "pages": 17,
    "ocr_stats": {
        "ocr_pages": 1,
        "ocr_failed": 0,
        "ocr_success": 1,
        "ocr_engine": "surya"
    },
    "block_stats": {
        "header_footer": 17,
        "code": 0,
        "table": 2,
        "equations": {
            "successful_ocr": 2,
            "unsuccessful_ocr": 0,
            "equations": 2
        }
    },
    "postprocess_stats": {
        "edit": {}
    }
}