"{\"text\":{\"0\":\" Attention Is All You Need\\nAshish Vaswani\\u0003\\nGoogle Brain\\navaswani@google.comNoam Shazeer\\u0003\\nGoogle Brain\\nnoam@google.comNiki Parmar\\u0003\\nGoogle Research\\nnikip@google.comJakob Uszkoreit\\u0003\\nGoogle Research\\nusz@google.com\\nLlion Jones\\u0003\\nGoogle Research\\nllion@google.comAidan N. Gomez\\u0003y\\nUniversity of Toronto\\naidan@cs.toronto.edu\\u0141ukasz Kaiser\\u0003\\nGoogle Brain\\nlukaszkaiser@google.com\\nIllia Polosukhin\\u0003z\\nillia.polosukhin@gmail.com\\nAbstract\\nThe dominant sequence transduction models are based on complex recurrent or\\nconvolutional neural networks that include an encoder and a decoder. The best\\nperforming models also connect the encoder and decoder through an attention\\nmechanism. We propose a new simple network architecture, the Transformer,\\nbased solely on attention mechanisms, dispensing with recurrence and convolutions\\nentirely. Experiments on two machine translation tasks show these models to\\nbe superior in quality while being more parallelizable and requiring signi\\ufb01cantly\\nless time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-\\nto-German translation task, improving over the existing best results, including\\nensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task,\\nour model establishes a new single-model state-of-the-art BLEU score of 41.8 after\\ntraining for 3.5 days on eight GPUs, a small fraction of the training costs of the\\nbest models from the literature. We show that the Transformer generalizes well to\\nother tasks by applying it successfully to English constituency parsing both with\\nlarge and limited training data.\\n1 Introduction\\nRecurrent neural networks, long short-term memory [ 13] and gated recurrent [ 7] neural networks\\nin particular, have been \\ufb01rmly established as state of the art approaches in sequence modeling and\\n\\u0003Equal contribution. Listing order is random. Jakob proposed replacing RNNs with self-attention and started\\nthe effort to evaluate this idea. Ashish, with Illia, designed and implemented the \\ufb01rst Transformer models and\\nhas been crucially involved in every aspect of this work. Noam proposed scaled dot-product attention, multi-head\\nattention and the parameter-free position representation and became the other person involved in nearly every\\ndetail. Niki designed, implemented, tuned and evaluated countless model variants in our original codebase and\\ntensor2tensor. Llion also experimented with novel model variants, was responsible for our initial codebase, and\\nef\\ufb01cient inference and visualizations. Lukasz and Aidan spent countless long days designing various parts of and\\nimplementing tensor2tensor, replacing our earlier codebase, greatly improving results and massively accelerating\\nour research.\\nyWork performed while at Google Brain.\\nzWork performed while at Google Research.\\n31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA.arXiv:1706.03762v5 [cs.CL] 6 Dec 2017 transduction problems such as language modeling and machine translation [ 35,2,5]. Numerous\\nefforts have since continued to push the boundaries of recurrent language models and encoder-decoder\\narchitectures [38, 24, 15].\\nRecurrent models typically factor computation along the symbol positions of the input and output\\nsequences. 
Aligning the positions to steps in computation time, they generate a sequence of hidden states $h_t$, as a function of the previous hidden state $h_{t-1}$ and the input for position $t$. This inherently sequential nature precludes parallelization within training examples, which becomes critical at longer sequence lengths, as memory constraints limit batching across examples. Recent work has achieved significant improvements in computational efficiency through factorization tricks [21] and conditional computation [32], while also improving model performance in the case of the latter. The fundamental constraint of sequential computation, however, remains.

Attention mechanisms have become an integral part of compelling sequence modeling and transduction models in various tasks, allowing modeling of dependencies without regard to their distance in the input or output sequences [2, 19]. In all but a few cases [27], however, such attention mechanisms are used in conjunction with a recurrent network.

In this work we propose the Transformer, a model architecture eschewing recurrence and instead relying entirely on an attention mechanism to draw global dependencies between input and output. The Transformer allows for significantly more parallelization and can reach a new state of the art in translation quality after being trained for as little as twelve hours on eight P100 GPUs.

2 Background

The goal of reducing sequential computation also forms the foundation of the Extended Neural GPU [16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building block, computing hidden representations in parallel for all input and output positions. In these models, the number of operations required to relate signals from two arbitrary input or output positions grows with the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes it more difficult to learn dependencies between distant positions [12]. In the Transformer this is reduced to a constant number of operations, albeit at the cost of reduced effective resolution due to averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as described in Section 3.2.

Self-attention, sometimes called intra-attention, is an attention mechanism relating different positions of a single sequence in order to compute a representation of the sequence. Self-attention has been used successfully in a variety of tasks including reading comprehension, abstractive summarization, textual entailment and learning task-independent sentence representations [4, 27, 28, 22].

End-to-end memory networks are based on a recurrent attention mechanism instead of sequence-aligned recurrence and have been shown to perform well on simple-language question answering and language modeling tasks [34].

To the best of our knowledge, however, the Transformer is the first transduction model relying entirely on self-attention to compute representations of its input and output without using sequence-aligned RNNs or convolution.
In the following sections, we will describe the Transformer, motivate self-attention and discuss its advantages over models such as [17, 18] and [9].

3 Model Architecture

Most competitive neural sequence transduction models have an encoder-decoder structure [5, 2, 35]. Here, the encoder maps an input sequence of symbol representations $(x_1, \dots, x_n)$ to a sequence of continuous representations $z = (z_1, \dots, z_n)$. Given $z$, the decoder then generates an output sequence $(y_1, \dots, y_m)$ of symbols one element at a time. At each step the model is auto-regressive [10], consuming the previously generated symbols as additional input when generating the next.

The Transformer follows this overall architecture using stacked self-attention and point-wise, fully connected layers for both the encoder and decoder, shown in the left and right halves of Figure 1, respectively.

Figure 1: The Transformer - model architecture.

3.1 Encoder and Decoder Stacks

Encoder: The encoder is composed of a stack of $N = 6$ identical layers. Each layer has two sub-layers. The first is a multi-head self-attention mechanism, and the second is a simple, position-wise fully connected feed-forward network. We employ a residual connection [11] around each of the two sub-layers, followed by layer normalization [1]. That is, the output of each sub-layer is $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$, where $\mathrm{Sublayer}(x)$ is the function implemented by the sub-layer itself. To facilitate these residual connections, all sub-layers in the model, as well as the embedding layers, produce outputs of dimension $d_{\text{model}} = 512$.

Decoder: The decoder is also composed of a stack of $N = 6$ identical layers. In addition to the two sub-layers in each encoder layer, the decoder inserts a third sub-layer, which performs multi-head attention over the output of the encoder stack. Similar to the encoder, we employ residual connections around each of the sub-layers, followed by layer normalization. We also modify the self-attention sub-layer in the decoder stack to prevent positions from attending to subsequent positions. This masking, combined with the fact that the output embeddings are offset by one position, ensures that the predictions for position $i$ can depend only on the known outputs at positions less than $i$.
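As a concrete illustration, the sub-layer pattern $\mathrm{LayerNorm}(x + \mathrm{Sublayer}(x))$ can be sketched in a few lines of PyTorch. This is a minimal sketch with our own illustrative names, not the authors' released code:

```python
import torch
import torch.nn as nn

class ResidualNormSublayer(nn.Module):
    """Wraps an arbitrary sub-layer with the residual + layer-norm pattern
    LayerNorm(x + Sublayer(x)) from Section 3.1 (illustrative sketch)."""
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, sublayer) -> torch.Tensor:
        # All sub-layers and embeddings emit vectors of size d_model = 512,
        # so the residual sum is always well defined.
        return self.norm(x + sublayer(x))
```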
3.2 Attention

An attention function can be described as mapping a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. The output is computed as a weighted sum of the values, where the weight assigned to each value is computed by a compatibility function of the query with the corresponding key.

Figure 2: (left) Scaled Dot-Product Attention. (right) Multi-Head Attention consists of several attention layers running in parallel.

3.2.1 Scaled Dot-Product Attention

We call our particular attention "Scaled Dot-Product Attention" (Figure 2). The input consists of queries and keys of dimension $d_k$, and values of dimension $d_v$. We compute the dot products of the query with all keys, divide each by $\sqrt{d_k}$, and apply a softmax function to obtain the weights on the values.

In practice, we compute the attention function on a set of queries simultaneously, packed together into a matrix $Q$. The keys and values are also packed together into matrices $K$ and $V$. We compute the matrix of outputs as:

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \qquad (1)$$

The two most commonly used attention functions are additive attention [2], and dot-product (multiplicative) attention. Dot-product attention is identical to our algorithm, except for the scaling factor of $\frac{1}{\sqrt{d_k}}$. Additive attention computes the compatibility function using a feed-forward network with a single hidden layer. While the two are similar in theoretical complexity, dot-product attention is much faster and more space-efficient in practice, since it can be implemented using highly optimized matrix multiplication code.

While for small values of $d_k$ the two mechanisms perform similarly, additive attention outperforms dot product attention without scaling for larger values of $d_k$ [3]. We suspect that for large values of $d_k$, the dot products grow large in magnitude, pushing the softmax function into regions where it has extremely small gradients. (To illustrate why the dot products get large, assume that the components of $q$ and $k$ are independent random variables with mean $0$ and variance $1$. Then their dot product, $q \cdot k = \sum_{i=1}^{d_k} q_i k_i$, has mean $0$ and variance $d_k$.) To counteract this effect, we scale the dot products by $\frac{1}{\sqrt{d_k}}$.
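Equation (1) translates almost directly into code. The following is a minimal PyTorch sketch; the function name and the optional mask argument are our own (the mask anticipates the decoder self-attention masking described in Section 3.2.3):

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    """Eq. (1): softmax(Q K^T / sqrt(d_k)) V.
    q, k: (..., seq, d_k); v: (..., seq, d_v). A sketch, not reference code."""
    d_k = q.size(-1)
    # Scale by 1/sqrt(d_k) so large dot products do not saturate the softmax.
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    if mask is not None:
        # Illegal connections get -inf logits, i.e. zero attention weight.
        scores = scores.masked_fill(mask == 0, float('-inf'))
    return torch.softmax(scores, dim=-1) @ v
```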
3.2.2 Multi-Head Attention

Instead of performing a single attention function with $d_{\text{model}}$-dimensional keys, values and queries, we found it beneficial to linearly project the queries, keys and values $h$ times with different, learned linear projections to $d_k$, $d_k$ and $d_v$ dimensions, respectively. On each of these projected versions of queries, keys and values we then perform the attention function in parallel, yielding $d_v$-dimensional output values. These are concatenated and once again projected, resulting in the final values, as depicted in Figure 2.

Multi-head attention allows the model to jointly attend to information from different representation subspaces at different positions. With a single attention head, averaging inhibits this.

$$\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \dots, \mathrm{head}_h)W^O$$
$$\text{where } \mathrm{head}_i = \mathrm{Attention}(QW_i^Q, KW_i^K, VW_i^V)$$

where the projections are parameter matrices $W_i^Q \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W_i^K \in \mathbb{R}^{d_{\text{model}} \times d_k}$, $W_i^V \in \mathbb{R}^{d_{\text{model}} \times d_v}$ and $W^O \in \mathbb{R}^{hd_v \times d_{\text{model}}}$.

In this work we employ $h = 8$ parallel attention layers, or heads. For each of these we use $d_k = d_v = d_{\text{model}}/h = 64$. Due to the reduced dimension of each head, the total computational cost is similar to that of single-head attention with full dimensionality.
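A compact sketch of the head splitting and recombination, building on the scaled_dot_product_attention sketch above. Names are illustrative, and the per-head matrices $W_i^Q, W_i^K, W_i^V$ are fused into one linear layer each, an equivalent but not unique way to implement them:

```python
import torch
import torch.nn as nn

class MultiHeadAttention(nn.Module):
    """Sketch of Section 3.2.2: h parallel heads with d_k = d_v = d_model / h."""
    def __init__(self, d_model: int = 512, h: int = 8):
        super().__init__()
        assert d_model % h == 0
        self.h, self.d_k = h, d_model // h
        # All heads' projections fused into one matrix each; w_o recombines.
        self.w_q, self.w_k, self.w_v, self.w_o = (
            nn.Linear(d_model, d_model) for _ in range(4))

    def forward(self, q, k, v, mask=None):
        def split(x):  # (batch, seq, d_model) -> (batch, h, seq, d_k)
            b, s, _ = x.shape
            return x.view(b, s, self.h, self.d_k).transpose(1, 2)
        q, k, v = split(self.w_q(q)), split(self.w_k(k)), split(self.w_v(v))
        out = scaled_dot_product_attention(q, k, v, mask)  # sketch from 3.2.1
        b, _, s, _ = out.shape
        # Concatenate the h outputs and project once more (W^O).
        return self.w_o(out.transpose(1, 2).reshape(b, s, self.h * self.d_k))
```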
We\\nchose this function because we hypothesized it would allow the model to easily learn to attend by\\nrelative positions, since for any \\ufb01xed offset k,PEpos+kcan be represented as a linear function of\\nPEpos.\\nWe also experimented with using learned positional embeddings [ 9] instead, and found that the two\\nversions produced nearly identical results (see Table 3 row (E)). We chose the sinusoidal version\\nbecause it may allow the model to extrapolate to sequence lengths longer than the ones encountered\\nduring training.\\n4 Why Self-Attention\\nIn this section we compare various aspects of self-attention layers to the recurrent and convolu-\\ntional layers commonly used for mapping one variable-length sequence of symbol representations\\n(x1;:::;x n)to another sequence of equal length (z1;:::;z n), withxi;zi2Rd, such as a hidden\\nlayer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we\\nconsider three desiderata.\\nOne is the total computational complexity per layer. Another is the amount of computation that can\\nbe parallelized, as measured by the minimum number of sequential operations required.\\nThe third is the path length between long-range dependencies in the network. Learning long-range\\ndependencies is a key challenge in many sequence transduction tasks. One key factor affecting the\\nability to learn such dependencies is the length of the paths forward and backward signals have to\\ntraverse in the network. The shorter these paths between any combination of positions in the input\\nand output sequences, the easier it is to learn long-range dependencies [ 12]. Hence we also compare\\nthe maximum path length between any two input and output positions in networks composed of the\\ndifferent layer types.\\nAs noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially\\nexecuted operations, whereas a recurrent layer requires O(n)sequential operations. In terms of\\ncomputational complexity, self-attention layers are faster than recurrent layers when the sequence\\nlengthnis smaller than the representation dimensionality d, which is most often the case with\\nsentence representations used by state-of-the-art models in machine translations, such as word-piece\\n[38] and byte-pair [ 31] representations. To improve computational performance for tasks involving\\nvery long sequences, self-attention could be restricted to considering only a neighborhood of size rin\\n6 the input sequence centered around the respective output position. This would increase the maximum\\npath length to O(n=r). We plan to investigate this approach further in future work.\\nA single convolutional layer with kernel width k\\n\\n\\n\\n\\n\\n\\nIt\\nis\\nin\\nthis\\nspirit\\nthat\\na\\nmajority\\nof\\nAmerican\\ngovernments\\nhave\\npassed\\nnew\\nlaws\\nsince\\n2009\\nmaking\\nthe\\nregistration\\nor\\nvoting\\nprocess\\nmore\\ndifficult\\n.\\n\\n\\n\\n\\n\\n\\n\\nFigure 3: An example of the attention mechanism following long-distance dependencies in the\\nencoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of\\nthe verb \\u2018making\\u2019, completing the phrase \\u2018making...more dif\\ufb01cult\\u2019. Attentions here shown only for\\nthe word \\u2018making\\u2019. Different colors represent different heads. 
4 Why Self-Attention

In this section we compare various aspects of self-attention layers to the recurrent and convolutional layers commonly used for mapping one variable-length sequence of symbol representations $(x_1, \dots, x_n)$ to another sequence of equal length $(z_1, \dots, z_n)$, with $x_i, z_i \in \mathbb{R}^d$, such as a hidden layer in a typical sequence transduction encoder or decoder. Motivating our use of self-attention we consider three desiderata.

One is the total computational complexity per layer. Another is the amount of computation that can be parallelized, as measured by the minimum number of sequential operations required.

The third is the path length between long-range dependencies in the network. Learning long-range dependencies is a key challenge in many sequence transduction tasks. One key factor affecting the ability to learn such dependencies is the length of the paths forward and backward signals have to traverse in the network. The shorter these paths between any combination of positions in the input and output sequences, the easier it is to learn long-range dependencies [12]. Hence we also compare the maximum path length between any two input and output positions in networks composed of the different layer types.

Table 1: Maximum path lengths, per-layer complexity and minimum number of sequential operations for different layer types. $n$ is the sequence length, $d$ is the representation dimension, $k$ is the kernel size of convolutions and $r$ the size of the neighborhood in restricted self-attention.

| Layer Type | Complexity per Layer | Sequential Operations | Maximum Path Length |
|---|---|---|---|
| Self-Attention | $O(n^2 \cdot d)$ | $O(1)$ | $O(1)$ |
| Recurrent | $O(n \cdot d^2)$ | $O(n)$ | $O(n)$ |
| Convolutional | $O(k \cdot n \cdot d^2)$ | $O(1)$ | $O(\log_k(n))$ |
| Self-Attention (restricted) | $O(r \cdot n \cdot d)$ | $O(1)$ | $O(n/r)$ |

As noted in Table 1, a self-attention layer connects all positions with a constant number of sequentially executed operations, whereas a recurrent layer requires $O(n)$ sequential operations. In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length $n$ is smaller than the representation dimensionality $d$, which is most often the case with sentence representations used by state-of-the-art models in machine translation, such as word-piece [38] and byte-pair [31] representations. To improve computational performance for tasks involving very long sequences, self-attention could be restricted to considering only a neighborhood of size $r$ in the input sequence centered around the respective output position. This would increase the maximum path length to $O(n/r)$. We plan to investigate this approach further in future work.

A single convolutional layer with kernel width $k < n$ does not connect all pairs of input and output positions.

Figure 3: An example of the attention mechanism following long-distance dependencies in the encoder self-attention in layer 5 of 6. Many of the attention heads attend to a distant dependency of the verb 'making', completing the phrase 'making...more difficult'. Attentions here shown only for the word 'making'. Different colors represent different heads. Best viewed in color.

Figure 4: Two attention heads, also in layer 5 of 6, apparently involved in anaphora resolution. Top: Full attentions for head 5. Bottom: Isolated attentions from just the word 'its' for attention heads 5 and 6. Note that the attentions are very sharp for this word.

Figure 5: Many of the attention heads exhibit behaviour that seems related to the structure of the sentence. We give two such examples above, from two different heads from the encoder self-attention at layer 5 of 6. The heads clearly learned to perform different tasks.

On the Benefits of Biophysical Synapses

Julian Lemmel, Radu Grosu
Faculty of Informatics of Technische Universität Wien, Austria.
julian.lemmel@tuwien.ac.at, radu.grosu@tuwien.ac.at

Abstract

The approximation capability of ANNs, and their RNN instantiations, is strongly correlated with the number of parameters packed into these networks. However, the complexity barrier for human understanding is arguably related to the number of neurons and synapses in the networks, and to the associated nonlinear transformations. In this paper we show that the use of biophysical synapses, as found in LTCs, has two main benefits. First, they allow packing more parameters for a given number of neurons and synapses. Second, they allow formulating the nonlinear network transformation as a linear system with state-dependent coefficients. Both increase interpretability, as for a given task they allow learning a system that is linear in its input features and smaller in size compared to the state of the art. We substantiate the above claims on various time-series prediction tasks, but we believe that our results are applicable to any feedforward or recurrent ANN.

Introduction

Inspired by spiking neurons, artificial neurons (ANs) combine in one unit the additive behavior of biological neurons with the graded nonlinear behavior of their synapses (Bishop 1995; Goodfellow, Bengio, and Courville 2016).
This makes\\nANs implausible from a biophysical point of view, and pre-\\ncluded their adoption in neural science.\\nArti\\ufb01cial neural networks (ANNs) however, correct this\\nbiological blunder. In ANNs it is irrelevant what is the mean-\\ning of a neuron, and what is that of a synapse. What mat-\\nters, is the mathematical expression of the network itself.\\nThis was best exempli\\ufb01ed by ResNets, which were forced,\\nfor technical reasons, to separate the additive transforma-\\ntion from the graded one, and introduce new state variables,\\nwhich are the outputs of the additive neural, rather than the\\nnonlinear synaptic units (He et al. 2016).\\nThis separation allows us to reconcile ResNets with liq-\\nuid time-constant neural networks (LTCs), a biophysical\\nmodel for nonspiking neurons, that shares architectural mo-\\ntifs, such as activation, inhibition, sequentialization, mutual\\nexclusion, and synchronization, with gene regulatory net-\\nworks (Lechner et al. 2019, 2020; Hasani et al. 2021; Alon\\n2007). LTCs capture the behavior of neurons in the retina of\\nlarge species (Kandel et al. 2013), and that of the neurons\\nCopyright \\u00a9 2022, Association for the Advancement of Arti\\ufb01cial\\nIntelligence (www.aaai.org). All rights reserved.in small species, such as the C.elegans nematode (Wicks,\\nRoehrig, and Rankin 1996). In LTCs, a neuron is a capaci-\\ntor, and its rate of change is the sum of a leaking current, and\\nof synaptic currents. The conductance of a synapse varies in\\na graded nonlinear fashion with the potential of the presy-\\nnaptic neuron, and this is multiplied with a difference of po-\\ntential of the postsynaptic neuron, to produce the synaptic\\ncurrent. Hence, the graded nonlinear transformation is the\\none that synapses perform, which is indeed the case in na-\\nture, and not the one performed by neurons.\\nIn contrast to ResNets, NeuralODEs and CT-RNNs (Chen\\net al. 2018; Funahashi and Nakamura 1993), LTCs multi-\\nply (or gate) the conductance with a difference of poten-\\ntial. This is dictated by physics, as one needs to obtain a\\ncurrent. Gating makes each neuron interpretable as a lin-\\near system with state-dependent coef\\ufb01cients (Alvarez-Melis\\nand Jaakkola 2018; C \\u00b8 imen 2008). Moreover, LTCs associate\\neach activation function to a synapse (like in nature) and not\\nto a neuron (like in ANNs). This allows LTCs to pack con-\\nsiderably more parameters in a network with a given number\\nof neurons and synapses. As the approximation capability\\nof ANNs and LTCs is strongly correlated with their num-\\nber of learnable parameters, LTCs are able to approximate of neurons and synapses. As the approximation capability\\nof ANNs and LTCs is strongly correlated with their num-\\nber of learnable parameters, LTCs are able to approximate\\nthe same behavior with a much smaller network, that is ex-\\nplainable in terms of its architectural motifs. We argue that\\nnonlinearity and the size of a neural network are the major\\ncomplexity barriers for human understanding.\\nMoving the activation functions to synapses can be ac-\\ncomplished in any ANN, with the same bene\\ufb01ts as for LTCs\\nin network-size reduction. The gating of sigmoidal activa-\\ntion functions can be replaced with hyperbolic-tangent acti-\\nvation functions. 
However, one then loses the biophysical interpretation of a neural network, the linear interpretation of its neurons, and the polarity of its synapses.

We compared the expressive power of LTCs with that of CT-RNNs, (Augmented) NeuralODEs, LSTMs, and CT-GRUs, for various recurrent tasks. In this comparison, we considered LTCs and CT-RNNs with both neural and synaptic activation functions. We also investigated the benefits of gating sigmoidal activation with a difference of potential. Our results show that synaptic activation considerably reduces the number of neurons and associated synapses required to solve a task, not only in LTCs but also in CT-RNNs. We also show that the use of hyperbolic-tangent activation functions in CT-RNNs has similar expressive power as gating sigmoids with a difference of potential, but it loses the linear interpretation.

The rest of the paper is structured as follows. First, we provide a fresh look at ANNs, ResNets, NeuralODEs, CT-RNNs, and LTCs. This paves the way to then show the benefits of biophysical synapses in various recurrent tasks. Finally we discuss our results and touch on future work.

A Fresh Look at Neural Networks

Artificial Neural Networks

An AN receives one or more inputs, sums them up in a linear fashion, and passes the result through a nonlinear activation function, whose bias $b$ is the condition for the neuron to fire (spike). However, activation is graded (non-spiking), with a smooth (e.g. sigmoidal) shape. Formally:

$$y_i^{t+1} = \sigma\Big(\sum_{j=1}^{n} w_{ji}^t y_j^t + b_i^{t+1}\Big), \qquad \sigma(x) = \frac{1}{1 + e^{-x}} \qquad (1)$$

where, as in Figure 1, $y_i^{t+1}$ is the output of neuron $i$ at layer $t+1$, $y_j^t$ is the output of neuron $j$ at layer $t$, $w_{ji}^t$ is the weight associated with the synapse between neuron $j$ at layer $t$ and neuron $i$ at layer $t+1$, $b_i^{t+1}$ is the bias (threshold) of neuron $i$ at layer $t+1$, and $\sigma$ is the activation function, e.g., the logistic function above. A network with one input layer, one output layer, and $N \geq 2$ hidden layers is called a deep neural network (DNN) (Goodfellow, Bengio, and Courville 2016). ANNs are universal approximators.

Although ANs are biophysically implausible, ANNs are in fact closely related to nonspiking neural networks. To demonstrate this, let us look first at ResNets (He et al. 2016).

Residual Neural Networks

DNNs with a large number of hidden layers suffer from the degradation problem, which persists even if the vanishing gradients are curated. Intuitively, DNNs cannot accurately learn identities. Hence, identities were simply added to the DNNs in the form of skip connections (He et al. 2016).

The resulting architecture, as shown in Figure 1, was called a residual neural network (ResNet). In ResNets, the outputs $x_i^t$ of the sums are distinguished from the outputs $y_i^t$ of the sigmoids. Formally:

$$x_i^{t+1} = x_i^t + \sum_{j=1}^{n} w_{ji}^t y_j^t, \qquad y_j^t = \sigma(x_j^t + b_j^t) \qquad (2)$$

This distinction is very important from a biophysical point of view. The main idea is that neurons are just summation units, and the sigmoidal transformation happens in synapses. In fact, one can put the weights in the synaptic transformation, too, which leads to the equivalent equations:

$$x_i^{t+1} = x_i^t + \sum_{j=1}^{n} y_{ji}^t, \qquad y_{ji}^t = w_{ji}^t\,\sigma(x_j^t + b_j^t) \qquad (3)$$

Hence, the architecture shown in Figure 1 can be regarded as a ResNet with the finest skip granularity. (In (He et al. 2016), $x_i^t$ skips the first sum and is added directly to $x_i^{t+2}$.)
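To make the bookkeeping of Equations (2) and (3) concrete, here is a sketch of one ResNet-style update in PyTorch. Names are illustrative; as the text notes, the two variants compute the same values, they only attach the weighted sigmoid to the neuron or to the synapse:

```python
import torch

def resnet_step_neural(x, W, b):
    """Eq. (2): neurons sum, one sigmoid per *neuron* j shapes the signal.
    x, b: (n,) states and biases; W: (n, n) with W[i, j] = w_ji. A sketch."""
    y = torch.sigmoid(x + b)          # y_j = sigmoid(x_j + b_j)
    return x + W @ y                  # x_i <- x_i + sum_j w_ji * y_j

def resnet_step_synaptic(x, W, b):
    """Eq. (3): the weighted sigmoid is attached to each *synapse* j -> i,
    which is what later lets every connection carry its own parameters."""
    y = W * torch.sigmoid(x + b)      # broadcast: y_ji = w_ji * sigmoid(x_j + b_j)
    return x + y.sum(dim=-1)          # sum the synaptic contributions over j
```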
Figure 1: DNN (in black) and ResNet (in black and blue).

Here $w_{ji}^t$ can be thought of as the maximum conductance of the input-dependent synaptic transformation $\sigma(x_j^t + b_j^t)$. This transformation is indeed graded in nature, that is, non-spiking. Since ResNets are particular DNNs, with identity as a linear activation, they are also universal approximators.

Neural Ordinary Differential Equations

Equations (3) are the Euler discretization of a set of differential equations, where the time step is simply taken to be one (E 2017; Chen et al. 2018). Mathematically:

$$\dot{x}_i(t) = \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}(t)\,\sigma(x_j(t) + b_j(t)) \qquad (4)$$

In these equations, $x$, $y$, and the parameters $w$ and $b$ change continuously in time. Now suppose we make the parameters constant. Are we still going to have a universal ODE approximator? The answer is yes, as we will show in the next section. The differential equations are as follows:

$$\dot{x}_i(t) = \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(x_j(t) + b_j) \qquad (5)$$

This is the form of Neural Ordinary Differential Equations (NeuralODEs) (E 2017; Chen et al. 2018). (Strictly speaking, NeuralODEs $\dot{x}(t) = f(x)$ may have an arbitrary number of neural layers for the function $f$.) Taking the state of the network as the sigmoid $y$ of a sum is equivalent to taking the state as the sum $x$ of sigmoids.

Theorem 1 (NeuralODEs). Let $x$ and $y$ be state vectors. Then $\dot{y} = \sigma(Wy + b)$ is equivalent to $\dot{x} = W\sigma(x + b)$.

Proof. Take $x = Wy$. Then the following holds:

$$\dot{x} = W\dot{y} = W\sigma(Wy + b) = W\sigma(x + b)$$

A slight extension called ANODEs is given in (Dupont, Doucet, and Teh 2019), which embeds the input in the internal state, and projects the state to the outputs $S$, as follows:

$$x(t_0) = [x, 0]^T, \qquad y = \pi_S(x(t_N)) \qquad (6)$$

NeuralODEs are harder to learn than ResNets. For training purposes, one can use the adjoint equation and employ efficient numerical solvers (E 2017; Chen et al. 2018).

Figure 2: Synaptic-activation DNN and ResNet.

Continuous-Time Recurrent Neural Networks

Autonomous case. In this form of CT-RNNs, the input is the initial state. Let us call them ACT-RNNs. They extend NeuralODEs with a leading term $-w_i x_i(t)$ (Funahashi and Nakamura 1993). Their mathematical form is as follows:

$$\dot{x}_i(t) = -w_i x_i(t) + \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(x_j(t) + b_j) \qquad (7)$$

The leading term brings the system back to the equilibrium state when no input is available. Hence, a small perturbation is forgotten, that is, the system is stable. Like in NeuralODEs, one can interchange summation and activation.

Theorem 2 (ACT-RNNs). Let $x$ and $y$ be state vectors. Then $\dot{y} = -w \ast y + \sigma(Wy + b)$ and $\dot{x} = -w \ast x + W\sigma(x + b)$ are equivalent ODEs, where $\ast$ is the pointwise product of vectors.

Proof. Let $x = Wy$. Then (Funahashi and Nakamura 1993):

$$\dot{x} = W\dot{y} = W(-w \ast y + \sigma(Wy + b)) = -w \ast x + W\sigma(x + b)$$

ACT-RNNs are universal approximators, and stabilization is not relevant in this respect (Funahashi and Nakamura 1993). Hence, NeuralODEs are universal approximators, too.
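Equation (5) can be integrated with the same explicit Euler scheme that Equation (3) discretizes. A sketch with constant parameters and an assumed fixed step size (the step size is our choice, not from the paper):

```python
import torch

def neural_ode_euler(x0, W, b, steps: int = 100, dt: float = 0.01):
    """Euler integration of Eq. (5): dx_i/dt = sum_j w_ji * sigmoid(x_j + b_j).
    x0, b: (n,); W: (n, n) with W[i, j] = w_ji. A sketch."""
    x = x0.clone()
    for _ in range(steps):
        y = W * torch.sigmoid(x + b)   # synaptic currents y_ji
        x = x + dt * y.sum(dim=-1)     # x_i <- x_i + dt * sum_j y_ji
    return x
```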
Synaptic activation. Like in ANNs, ACT-RNNs associate each activation function with a neuron. We therefore call them NA-ACT-RNNs, where NA stands for neural activation. However, as shown in Figure 2, any ANN can be rewritten such that activation functions are associated with synapses. We call this form of ACT-RNNs SA-ACT-RNNs, where SA stands for synaptic activation. Adding to each activation a variance $a$, too, one has the following explicit form:

$$\dot{x}_i(t) = -w_i x_i(t) + \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(a_{ji} x_j(t) + b_{ji}) \qquad (8)$$

The advantage of SA-ACT-RNNs is that they pack many more parameters into a network, for the same number of neurons and synapses. For example, an SA-ACT-RNN with 32 neurons, connected in an all-to-all fashion, is able to pack 3104 parameters. This roughly corresponds to an NA-ACT-RNN with 54 neurons, which packs 3132 parameters.

While the approximation capability of a neural network is strongly correlated with its number of parameters, we strongly believe that the complexity barrier for human understanding is in the number of neurons and synapses.

Figure 3: The electric representation of a nonspiking neuron.

General case. CT-RNNs have in general an associated time-varying input signal $u$, too; that is, they are RNNs. The way the input is considered plays a very important role.

A popular way of adding the input signal $u$ is to extend the sum within a sigmoid with a sum corresponding to the input. In vectorial form this looks as follows:

$$\dot{y} = -w \ast y + \sigma(Wy + Vu + b) \qquad (9)$$

This form has excellent convergence properties, but it cannot be extended to synaptic activations. We therefore prefer the following form, which has the same convergence properties:

$$\dot{x} = -w \ast x + W\sigma(a_x \ast x + b_x) + V\sigma(a_u \ast u + b_u) \qquad (10)$$

where $a_x, b_x$ and $a_u, b_u$ represent the variance and the bias vectors for the state and the input vectors, respectively. Finally, another popular way of adding the input to CT-RNNs is in a linear fashion, as below:

$$\dot{x} = -w \ast x + W\sigma(a \ast x + b) + Vu \qquad (11)$$

Synaptic activation. Like in SA-ACT-RNNs, the last two CT-RNNs can be rewritten by associating each activation with a synapse. To distinguish the two variants, we call them NA-CT-RNNs and SA-CT-RNNs, respectively. In scalar form, the sigmoidal-input version can be written as below; the linear-input version is very similar:

$$\dot{x}_i(t) = -w_i x_i(t) + \sum_{j=1}^{n} y_{ji}(t) + \sum_{j=1}^{m} z_{ji}(t)$$
$$y_{ji}(t) = w_{ji}\,\sigma(a^x_{ji} x_j(t) + b^x_{ji}), \qquad z_{ji}(t) = v_{ji}\,\sigma(a^u_{ji} u_j(t) + b^u_{ji}) \qquad (12)$$

Now consider an SA-CT-RNN with 32 neurons, connected all-to-all, and with 32 inputs. It packs a total of 6176 parameters. This roughly corresponds to an NA-CT-RNN with 54 neurons, which packs a total of 6102 parameters.
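One Euler step of Equation (12) is a direct transcription. In the sketch below, the parameter dictionary and its key names are illustrative, and the step size is an assumed hyperparameter:

```python
import torch

def sa_ct_rnn_step(x, u, p, dt: float = 0.1):
    """One Euler step of Eq. (12), an SA-CT-RNN with sigmoidal inputs.
    p holds: 'wl' (n,) leak conductances; 'W', 'ax', 'bx' (n, n) per-synapse
    state parameters; 'V', 'au', 'bu' (n, m) per-synapse input parameters."""
    y = p['W'] * torch.sigmoid(p['ax'] * x + p['bx'])   # state synapses y_ji
    z = p['V'] * torch.sigmoid(p['au'] * u + p['bu'])   # input synapses z_ji
    dx = -p['wl'] * x + y.sum(dim=-1) + z.sum(dim=-1)
    return x + dt * dx
```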
Liquid Time Constant Networks

Autonomous case. LTCs are a biophysical model for the neural system of small species (Wicks, Roehrig, and Rankin 1996; Lechner et al. 2019, 2020; Hasani et al. 2021), and the retina of large species (Kandel et al. 2013). Due to the small dimension of these neural systems ($\leq 1$ mm), neural transmission happens passively, in the analog domain, without considerable attenuation. Hence, the neurons do not need to spike for an accurate signal transmission.

As shown in Figure 3, the neuron's membrane is an insulator, with ions both on its inside and outside. Electrically, it is a capacitor. The difference between the inside-outside ionic concentrations defines the membrane potential (MP) $x$. The rate of change of $x$ depends on the currents $y$ passing through the membrane. These are external currents (ignored for ALTCs), a leakage current, and synaptic currents. For simplicity, we consider only chemical synapses. The capacitor equation is then as follows:

$$C\dot{x}_i(t) = w_{li}(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(a_{ji} x_j(t) + b_{ji})\,(e_{ji} - x_i(t)) \qquad (13)$$

where $C$ is the membrane capacitance, $e_{li}$ the resting potential, $w_{li}$ the leaking conductance, and $e_{ji}$ the synaptic potentials. These are either 0 mV for excitatory synapses (potential $x_i$ is negative, so the current is positive), or -90 mV for inhibitory synapses (the current is in this case negative).

Figure 4: Synaptic Layer in LTCs with synaptic (top) and neural (bottom) activation.

Equations (13) are very similar to Equations (7) of an SA-ACT-RNN. They have a leaking current, which ensures the stability of the ALTC, and a presynaptic-neuron-controlled conductance $\sigma$ for the synapses, with maximum conductance $w_{ji}$. This conductance is multiplied with a difference of potential $e_{ji} - x_i(t)$ to get a current. This biophysical constraint makes them different from SA-ACT-RNNs. So what is the significance of this gating term from the point of view of machine learning? As we prove below, it has important consequences for the interpretability of ALTCs.

Theorem 3 (Interpretability). Each ALTC neuron is interpretable as a linear regression of its inputs.

Proof. Let $x(0)$ be the input. This is propagated in time as $x(t)$. Let $w_{ji}\,\sigma(a_{ji} x_j + b_{ji})$ be the state-dependent weight from neuron $j$ to neuron $i$. Then according to Equations (13), $\dot{x}_i(t)$ is a linear regression in $x$, for each $i$. Moreover, small perturbations of $x$ lead to small changes in $\dot{x}$.

ALTCs are able to pack even more parameters than SA-ACT-RNNs. For example, an ALTC with 32 neurons, connected in an all-to-all fashion, is able to pack 4192 parameters, whereas an SA-ACT-RNN packs only 3104 parameters. This roughly corresponds to an NA-ACT-RNN with 64 neurons, which packs 4224 parameters.
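For comparison with the SA-CT-RNN sketch above, here is one Euler step of the autonomous LTC of Equation (13); the only structural difference is the difference-of-potential factor that turns each conductance into a current. Parameter names are again illustrative:

```python
import torch

def altc_step(x, p, dt: float = 0.1):
    """One Euler step of Eq. (13), an autonomous LTC. p holds: 'C' capacitance;
    'wl', 'el' (n,) leak conductance and resting potential; 'W', 'a', 'b', 'E'
    (n, n) per-synapse conductance, slope, midpoint, and reversal potentials."""
    g = p['W'] * torch.sigmoid(p['a'] * x + p['b'])   # conductances g_ji
    y = g * (p['E'] - x.unsqueeze(-1))                # currents y_ji = g_ji * (e_ji - x_i)
    dx = (p['wl'] * (p['el'] - x) + y.sum(dim=-1)) / p['C']
    return x + dt * dx
```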
Neural activation. While in nature each synapse has distinct dynamics, one may want to consider that in particular cases all outgoing synapses of a neuron have the same activation parameters. Let us call this version of ALTCs NA-ALTCs, where NA stands for neural activation. We also interchangeably refer to ALTCs as SA-ALTCs. Formally:

$$C\dot{x}_i(t) = w_{li}(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(a_j x_j(t) + b_j)\,(e_i - x_i(t)) \qquad (14)$$

If one takes $e_{li}$ to be zero and $a_j$ to be one, NA-ALTCs are the same as NA-ACT-RNNs except for the gating term $e_{ji} - x_i(t)$. As discussed above, this term makes NA-ALTCs linear systems with state-dependent coefficients, which not only makes them more interpretable, but allows the application of state-dependent Riccati equations in the automatic synthesis of nonlinear controllers (Çimen 2008).

In our experiments, we found that CT-RNNs where the activation function is a hyperbolic tangent have very similar convergence and learning-accuracy properties to LTCs, where the activation function is a sigmoid. However, hyperbolic tangents fail to capture the opening degree of synaptic channels and their associated polarity in the way gated sigmoids do: sigmoids accurately capture this degree, and the difference of potential captures the gating polarity.

General case. Like CT-RNNs, LTCs have in general an associated time-varying input signal $u$. As for CT-RNNs, we consider both a sigmoidal-input version and a linear-input version. In scalar form, the first can be written as below:

$$C\dot{x}_i(t) = w_{li}(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t) + \sum_{j=1}^{m} z_{ji}(t)$$
$$y_{ji}(t) = w_{ji}\,\sigma(a^x_{ji} x_j(t) + b^x_{ji})\,(e_{ji} - x_i(t)), \qquad z_{ji}(t) = v_{ji}\,\sigma(a^u_{ji} u_j(t) + b^u_{ji})\,(e_{ji} - x_i(t)) \qquad (15)$$

Figure 5: Results for the Walker2d kinematics-learning experiments. Left: synaptic inputs, Right: linear inputs. The size of the marker dots represents the number of neurons (or cells in case of LSTMs): 8, 16, 32 or 64 (from smallest to largest).

Figure 6: Results for the Half-Cheetah kinematics-learning experiments. Left/Right and marker size as before.

An SA-LTC with 32 neurons, connected in an all-to-all fashion and with 32 inputs, packs a total of 8288 parameters, whereas an SA-CT-RNN packs only 6176 parameters. This roughly corresponds to an NA-CT-RNN with 63 neurons, which packs a total of 8253 parameters.

The linear-input version of SA-LTCs is very similar. Formally, it is described as below:

$$C\dot{x}_i(t) = w_{li}(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t) + \sum_{j=1}^{m} z_{ji}(t)$$
$$y_{ji}(t) = w_{ji}\,\sigma(a^x_{ji} x_j(t) + b^x_{ji})\,(e_{ji} - x_i(t)), \qquad z_{ji}(t) = v_{ji} u_j(t) \qquad (16)$$

An SA-LTC with 32 neurons, connected in an all-to-all fashion and with 32 inputs, packs 5216 parameters. This roughly corresponds to a linear-input NA-CT-RNN with 51 neurons, which packs a total of 5355 parameters.

Neural activation. Like in the autonomous case, one can also consider NA-LTCs, where all outgoing synapses of a neuron have the same activation parameters. For the sigmoidal-input version one obtains the following equations:

$$C\dot{x}_i(t) = w_{li}(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t) + \sum_{j=1}^{m} z_{ji}(t)$$
$$y_{ji}(t) = w_{ji}\,\sigma(a^x_j x_j(t) + b^x_j)\,(e_i - x_i(t)), \qquad z_{ji}(t) = v_{ji}\,\sigma(a^u_j u_j(t) + b^u_j)\,(e_i - x_i(t)) \qquad (17)$$

The linear-input version is similar, but in this case $z_{ji}(t) = v_{ji} u_j(t)$.
As for ALTCs, NA-LTCs are very similar to NA-CT-RNNs, with the exception of the gating term $e_{ji} - x_i(t)$. As discussed before, one can get rid of gating by using a hyperbolic-tangent activation function, with the associated loss of linearity and biophysical meaning.

LTCs are universal approximators (Hasani 2020; Hasani et al. 2021). This is true for both their autonomous and general form, and for synaptic and linear inputs.

Experimental Evaluation

Sequential model structure

A three-layered sequential structure was used for all experiments in this section. Let us denote by $u(t)$ the input at time $t$ and by $y_i(t)$ the output of layer $i$ at time $t$. The output of the final layer is the predicted output $\hat{y}(t) = y_3(t)$.

The first layer maps the inputs to an RNN layer with either a linear ($y_1(t) = A_{in} u(t) + b_{in}$) or a synaptic (as discussed above) transformation - dubbed linear or synaptic input mapping, respectively. In the case of synaptic input mapping, these sensory synapses were implemented in accordance with the RNN model used, i.e. they either used synaptic or neural activation, and also incorporated the multiplication with a difference of potentials in the case of LTCs.

The second layer contains the RNN cells, and its output is computed by employing an ODE solver. The actual ODE being solved is determined by the model type. The different variants are explained in the preceding section. Unlike conventional NeuralODEs, the RNN cells in our model retain a state (and consequently information) after each time-step $t$.

The third layer, irrespective of the specific model type used, maps the final RNN state $y_2(t)$ to the output vector $y_3(t)$ in a linear fashion, that is, $y_3(t) = A_{out} y_2(t) + b_{out}$.

Figure 7: Results for the Half-Cheetah Behavioural Cloning modeling experiments. Left/Right and marker size as before.

Figure 8: Results for the Sequential MNIST classification experiments. Left/Right and marker size as before.

Neural and synaptic dynamics were implemented as pytorch-lightning modules, ensuring re-usability and portability across different devices such as CPU and GPU. Upon instantiating the module, the desired model type is specified and the parameters are initialized accordingly. Initialization bounds were taken from (Hasani et al. 2021), and are given in the Supplementary Materials. In order to reduce the parameter space, some parameters were fixed at some value and were not subject to training through backpropagation.

Since the resulting system of ODEs is stiff, the choice of the ODE solver has a strong impact on the performance. We chose the explicit Euler solver with 10 unfolds for all the experiments in this paper, as it gave good enough accuracy with low time-complexity, compared to more sophisticated solvers such as Runge-Kutta methods (rk4 or dopri5).
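Putting the three layers together, a skeletal version of this structure might look as follows. This is a sketch, not the authors' pytorch-lightning implementation: the ODE cell is any of the step-function derivatives sketched earlier, and the step size is an assumed hyperparameter:

```python
import torch
import torch.nn as nn

class SequentialModel(nn.Module):
    """Sketch of the three-layer structure: linear input mapping, a recurrent
    ODE cell unfolded with an explicit Euler solver (10 unfolds per time-step,
    as in the paper), and a linear readout. `ode_cell(x, u)` returns dx/dt."""
    def __init__(self, ode_cell, n_in, n_hidden, n_out,
                 unfolds: int = 10, dt: float = 0.1):
        super().__init__()
        self.cell, self.unfolds, self.dt = ode_cell, unfolds, dt
        self.inp = nn.Linear(n_in, n_hidden)    # layer 1: linear input mapping
        self.out = nn.Linear(n_hidden, n_out)   # layer 3: linear readout

    def forward(self, u_seq):                   # u_seq: (seq_len, n_in)
        x = torch.zeros(self.inp.out_features)
        for u in u_seq:                         # state persists across time-steps
            v = self.inp(u)
            for _ in range(self.unfolds):       # layer 2: explicit Euler unfolds
                x = x + self.dt * self.cell(x, v)
        return self.out(x)
```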
The chain of computations for a layer of synapses with neural and synaptic activation is shown in Figure 4, top and bottom respectively. Here, $x$ and $y$ are the states of the pre-synaptic and post-synaptic neurons, respectively. LTC-RNNs extend CT-RNN synapses by multiplying the activation with a difference-of-potential term (bottom part of the figures). Synaptic activation was realised by extending vectors to matrices throughout the computation graph, while also replacing each corresponding matrix multiplication by the element-wise (or Hadamard) product. In particular, this means that intermediary results are also represented as matrices, while they are vector-valued in the case of neural activation.

Robotic Experiments

To explore the parameter-packing and linear-gating benefits of biophysical synapses, we conducted four supervised-learning time-series experiments: Walker2d prediction, Half-Cheetah prediction, Half-Cheetah behavioural cloning, and Sequential-MNIST classification.

The parameter-packing benefits are evaluated by using CT-RNNs and LTCs with 8, 16, 32, and 64 neurons (connected all-to-all) in the hidden layer, with both neural and synaptic activation, and with both synaptic and linear inputs. The gating benefits are evaluated by using CT-RNNs with both sigmoid and tanh activation functions.

We also provide results for LSTMs, CT-GRUs and ANODEs with 10 augmenting dimensions. We use a linear input mapping, as this is the default for the first two. Lacking a stabilization term, the simple NeuralODEs discussed before perform worse than CT-RNNs, and we do not show them in our results.

Although our results are mainly for robotic control and they are restricted to LTCs and CT-RNNs, we claim that they are applicable to any feedforward or recurrent ANN.

The CT-RNN and LTC models tested are abbreviated as NA-CT-RNN and NA-LTC for neural activation, and SA-CT-RNN and SA-LTC for synaptic activation. CT-RNNs used either a sigmoidal or a hyperbolic-tangent activation, marked with the suffix S and T, respectively. LTCs always used a sigmoidal activation, because a hyperbolic tangent not only fails to capture the biophysics of synapses, but also renders the network very unstable. For all models we did experiments with both linear- and synaptic-input mappings, as the latter more closely capture sensory neurons. All experiments used the Adam optimizer (Kingma and Ba 2014).

Learning Walker-2D kinematics. This robotic task is inspired by the physics simulation in (Rubanova, Chen, and Duvenaud 2019), and implemented by using the gym environment in (Brockman et al. 2016). It evaluates how well various RNNs are suited to learn kinematic dynamics.

To create the training dataset, we used a non-recurrent policy, pretrained via proximal policy optimization (PPO) (Schulman et al. 2017), and the RLlib (Liang et al. 2017) reinforcement learning framework. To increase the task complexity, we used the pretrained policy at 4 different training stages (between 500 and 1200 PPO iterations). We then collected 17-dimensional observation vectors, performing 400 rollouts of 1000 steps each on the Walker2d-v2 OpenAI gym environment and the MuJoCo physics engine (Todorov, Erez, and Tassa 2012). Note that there is no need to include the actions in the training set, because the policy is deterministic.
We used 15% of the dataset for testing, 10% for validation and the rest for training. We aligned the rollouts into sequences of length 20 and then trained each of the models three times for 200 epochs. This was done for 8, 16, 32, and 64 RNN cells.

Figure 5 shows, for each model, the median test loss and its min and max values over three runs, with respect to the number of neurons and the associated number of parameters.

CT-RNNs perform better for the linear input mapping, whereas SA-LTCs perform better for the synaptic input mapping. The packing benefit of biophysical synapses is seen in the fact that SA-CT-RNNs and SA-LTCs pack essentially as many parameters as NA-CT-RNNs and NA-LTCs with half the number of neurons. The gating benefit is exemplified by the fact that CT-RNN-Ts perform better than CT-RNN-Ss. LTCs perform better than CT-RNNs in all instances. LSTMs and CT-GRUs attain (even greater) parameter packing through a more elaborate concept of a structured cell, but not necessarily with greater accuracy, when their number of cells equals the number of neurons of SA-LTCs. Since LSTMs and CT-GRUs have by default a linear input mapping, they appear only in the right figure. ANODEs are also shown in the right figure; they perform comparably to NA-LTCs.

Learning Half-Cheetah kinematics. Similar to the Walker-2D, we learned the kinematics of the Half-Cheetah. For this experiment we collected 100 rollouts with a controller that was trained using Truncated Quantile Critics (TQC) (Kuznetsov et al. 2020). Just a single version provided by the stable-baselines zoo (Raffin 2018) was used this time, making the task relatively easier than the previous one. Again, each rollout is composed of a series of 1000 datapoints, consisting of a 27-dimensional observation vector generated by the MuJoCo physics engine and a 6-dimensional action vector produced by the controller. The same data was used in the following two different tasks:

1. Kinematics modeling. Predicting the next observation after having seen 20 preceding observations. The action vectors are not used for this task, since the observations serve both as inputs and as labels.
2. Behavioural cloning. Predicting the next action after having seen 20 preceding observations. In this task the observations serve as inputs while the actions are the labels.

Figure 6 shows the results for the Half-Cheetah kinematics modeling, and Figure 7 the ones for Half-Cheetah behavioural cloning. The median, min and max test loss are represented as before. The results in both figures follow a very similar pattern to the ones for Walker-2D. However, in this case the benefits of CT-RNN-Ts are evident only for the synaptic input mapping. LTCs remain more performant. In Figure 7, LSTMs and CT-GRUs have a slightly better accuracy compared to SA-LTCs, at the expense of more parameters.

Sequential-MNIST classification. The MNIST dataset consists of 70,000 gray-scale images of 28×28 pixels, containing hand-written digits (LeCun 1998). In order to make this a sequential task, the images are transformed into sequences of length 28, by taking each row vector as an input in time.
The desired output is a one-hot encoded vector representing integers from 0 to 9. Consequently, a cross-entropy loss was used when training the models.

The results shown in Figure 8 are, as before, the median, min and max test loss of three runs each. They follow a similar pattern to the previous figures, but the LTCs with synaptic inputs are less stable, and fail to properly converge for the largest number of neurons. The best accuracy is attained by NA-CT-RNNs and LSTMs.

Discussion and Conclusion

The main goal of this paper was to investigate the synaptic-activation and linear-gating benefits of biophysical synapses, as they occur in LTCs. To this end, we asked:

- What happens if one uses neural activation in LTCs?
- What happens if linear gating is dropped in LTCs?

This resulted in two versions of LTCs, and four versions of CT-RNNs, with either linear or synaptic input, and with sigmoid or tanh activation, respectively. We thoroughly examined the accuracy and parameter-packing ability of these networks, for an increasing number of neurons, and compared them to those of ANODEs, LSTMs, and CT-GRUs.

We observed that LTCs and CT-RNNs with synaptic activation achieve essentially the same accuracy and parameter packing, for half the number of neurons, as LTCs and CT-RNNs with neural activation. The linear gating of LTCs further improved this accuracy. We also observed that the accuracy and packing benefits of LTCs are comparable to those of cells in LSTMs. However, the latter rely on a much more elaborate concept of a structured cell.

We claimed that the benefits of biophysical synapses apply to any ANN. However, we showed them explicitly for LTCs and CT-RNNs only. Hence, the full version of this paper would have to substantiate this claim. For example, for feed-forward CNNs, one could use a standard CNN base and consistently replace its neural activations with synaptic ones. Similarly, for LSTMs and CT-GRUs, one could make their recurrent connections synaptic, by using a tanh activation for each synapse, instead of one for each cell. For clarity, we kept NeuralODEs as simple as possible, by confining their right-hand-side transformation to one layer. However, one could have used more powerful transformations, which might have led to better results. Nevertheless, we think that the discussed benefits would still apply.

Finally, biophysical synapses may better support sparse networks too, as claimed in (Lechner et al. 2020). However, for obvious space reasons, a thorough investigation of this claim had to be postponed to future work.

References

Alon, U. 2007. Network Motifs: Theory and Experimental Approaches. Nature Reviews, 8.

Alvarez-Melis, D.; and Jaakkola, T. 2018. Towards Robust Interpretability with Self-Explaining Neural Networks. In Proceedings of NIPS'18, the 32nd Conference on Neural Information Processing Systems. Montreal, Canada.

Bishop, C. 1995. Neural Networks for Pattern Recognition. Clarendon Press, Oxford.

Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540.

Chen, T.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. 2018. Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems, 6571-6583.

Dupont, E.; Doucet, A.; and Teh, Y. 2019.
References
Alon, U. 2007. Network Motifs: Theory and Experimental Approaches. Nature Reviews, 8.
Alvarez-Melis, D.; and Jaakkola, T. 2018. Towards Robust Interpretability with Self-Explaining Neural Networks. In Proceedings of NIPS'18, the 32nd Conference on Neural Information Processing Systems. Montreal, Canada.
Bishop, C. 1995. Neural Networks for Pattern Recognition. Clarendon Press, Oxford.
Brockman, G.; Cheung, V.; Pettersson, L.; Schneider, J.; Schulman, J.; Tang, J.; and Zaremba, W. 2016. OpenAI Gym. arXiv preprint arXiv:1606.01540.
Chen, T.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. 2018. Neural Ordinary Differential Equations. In Advances in Neural Information Processing Systems, 6571–6583.
Dupont, E.; Doucet, A.; and Teh, Y. 2019. Augmented Neural ODEs. arXiv preprint arXiv:1904.01681.
E, W. 2017. A Proposal on Machine Learning via Dynamical Systems. Communications in Mathematics and Statistics, 5: 1–11.
Funahashi, K.; and Nakamura, Y. 1993. Approximation of Dynamical Systems by Continuous Time Recurrent Neural Networks. Neural Networks, 6(6): 801–806.
Goodfellow, I.; Bengio, Y.; and Courville, A. 2016. Deep Learning. MIT Press.
Hasani, R. 2020. Interpretable Recurrent Neural Networks in Continuous-time Control Environments. Ph.D. thesis, Wien.
Hasani, R.; Lechner, M.; Amini, A.; Rus, D.; and Grosu, R. 2021. Liquid Time-constant Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9): 7657–7666.
He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Identity Mappings in Deep Residual Networks. CoRR, abs/1603.05027.
Kandel, E.; Schwartz, J.; Jessel, T.; Siegelbaum, S.; and Hudspeth, A. 2013. Principles of Neural Science. McGraw-Hill Education / McGraw-Hill Medical, 5th edition.
Kingma, D.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv preprint arXiv:1412.6980.
Kuznetsov, A.; Shvechikov, P.; Grishin, A.; and Vetrov, D. 2020. Controlling Overestimation Bias with Truncated Mixture of Continuous Distributional Quantile Critics. arXiv:2005.04269 [cs, stat].
Lechner, M.; Hasani, R.; Amini, A.; Henzinger, T.; Rus, D.; and Grosu, R. 2020. Neural Circuit Policies Enabling Auditable Autonomy. Nature Machine Intelligence, 2: 642–652.
Lechner, M.; Hasani, R.; Zimmer, M.; Henzinger, T.; and Grosu, R. 2019. Designing Worm-Inspired Neural Networks for Interpretable Robotic Control. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA). Montreal, Canada.
LeCun, Y.; Cortes, C.; and Burges, C. 1998. MNIST Handwritten Digit Database. http://yann.lecun.com/exdb/mnist/.
Liang, E.; Liaw, R.; Nishihara, R.; Moritz, P.; Fox, R.; Gonzalez, J.; Goldberg, K.; and Stoica, I. 2017. Ray RLlib: A Composable and Scalable Reinforcement Learning Library. CoRR, abs/1712.09381.
Poli, M.; Massaroli, S.; Yamashita, A.; Asama, H.; and Park, J. 2020. TorchDyn: A Neural Differential Equations Library. arXiv preprint arXiv:2009.09346.
Raffin, A. 2018. RL Baselines Zoo.
Rubanova, Y.; Chen, R. T.; and Duvenaud, D. 2019. Latent ODEs for Irregularly-Sampled Time Series. arXiv preprint arXiv:1907.03907.
Schulman, J.; Wolski, F.; Dhariwal, P.; Radford, A.; and Klimov, O. 2017. Proximal Policy Optimization Algorithms. arXiv preprint arXiv:1707.06347.
Todorov, E.; Erez, T.; and Tassa, Y. 2012. MuJoCo: A Physics Engine for Model-Based Control. In 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, 5026–5033. IEEE.
Wicks, S.; Roehrig, C.; and Rankin, C. 1996. A Dynamic Network Simulation of the Nematode Tap Withdrawal Circuit: Predictions Concerning Synaptic Function Using Behavioral Criteria. Journal of Neuroscience, 16(12): 4017–4031.
Çimen, T. 2008. State-Dependent Riccati Equation (SDRE) Control: A Survey. Proceedings of the 17th World Congress of the International Federation of Automatic Control, 6–11.
Appendix
Our implementation uses the torchdyn package (Poli et al. 2020), which in turn is based on pytorch-lightning. It comprises several different ODE solvers supporting automatic differentiation. Since the package by itself was intended for implementing autonomous NeuralODEs, a small trick was necessary for creating CT-RNNs and LTC-RNNs that receive inputs. Autonomous NeuralODEs encapsulate a non-trivial function (such as a neural network) that is used for computing the derivative at each ODE-solver step, $\frac{dx}{dt} = F(x)$. To achieve the non-autonomous behavior $\frac{dx}{dt} = F'(x, u)$, the input was appended to the state at each step when passed to the ODE solver, and its derivative was set to zero: $[\frac{dx}{dt}; 0] = F([x; u])$. Consequently, the solution computed by the ODE solver amounts to the unaltered input appended to the desired final state (a minimal sketch of this trick is given at the end of this section). The dynamics of the recurrent layer are given by the capacitor equation:

$$C\,\dot{x}_i(t) = w_{li}\,(e_{li} - x_i(t)) + \sum_{j=1}^{n} y_{ji}(t), \qquad y_{ji}(t) = w_{ji}\,\sigma(a_{ji}\,x_j(t) + b_{ji})\,(e_{ji} - x_i(t)) \qquad (18)$$

Table 2: Parameters of the recurrent layer and their initialization bounds. (* only present in LTC-RNNs.)

Name   Description            Initialization
synaptic parameters:
C*     Membrane capacitance   1 (fixed)
w      Synaptic strength      0.01 - 1.0
b      Synaptic midpoint      0.3 - 0.8
a      Synaptic slope         3 - 8
e*     Reversal potential     -1 or +1
cell body parameters:
el     Resting potential      0 (fixed)
wl     Leakage conductance    0.01 - 1.0

Table 2 shows the initialization ranges used for the LTC layer parameters, taken from previous work. Since LTC dynamics tend to diverge rapidly if the parameters are left unconstrained, the following conditions were enforced during training: $C \geq 0$, $w \geq 0$, $a \geq 0$. These constraints are a direct consequence of the capacitor equation Eq. 18, from which LTCs are derived, wherein neither conductance ($w$), capacitance ($C$), nor synaptic gating ($a$) may be negative. Similar constraints are also assumed in the proof of universal approximation found in (Hasani et al. 2021).
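To make the state-augmentation trick concrete, below is a minimal sketch (our own illustration, using plain PyTorch rather than torchdyn's API; AugmentedRHS and euler_solve are hypothetical names). The wrapped derivative returns zeros for the input block, so any autonomous solver integrates the state while passing the input through unchanged.

```python
import torch
import torch.nn as nn

class AugmentedRHS(nn.Module):
    # Hypothetical wrapper (not the authors' code): turns a non-autonomous
    # derivative dx/dt = F'(x, u) into an autonomous F([x; u]) by appending
    # the input u to the state and fixing its derivative to zero.
    def __init__(self, rhs, state_dim):
        super().__init__()
        self.rhs = rhs              # callable computing F'(x, u)
        self.state_dim = state_dim

    def forward(self, z):
        x, u = z[..., :self.state_dim], z[..., self.state_dim:]
        dx = self.rhs(x, u)
        du = torch.zeros_like(u)    # the input block stays constant
        return torch.cat([dx, du], dim=-1)

def euler_solve(f, z0, dt=0.05, steps=20):
    # Any autonomous ODE solver works here; fixed-step Euler keeps it short.
    z = z0
    for _ in range(steps):
        z = z + dt * f(z)
    return z

state_dim, input_dim = 4, 3
W_in = torch.randn(input_dim, state_dim)
f = AugmentedRHS(lambda x, u: torch.tanh(x + u @ W_in), state_dim)
z0 = torch.cat([torch.zeros(state_dim), torch.randn(input_dim)])
zT = euler_solve(f, z0)
x_final, u_back = zT[:state_dim], zT[state_dim:]   # u comes back unaltered
```

In the same setting, the non-negativity constraints above could then be enforced after each optimizer step with, e.g., w.data.clamp_(min=0) in PyTorch.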
Type-descriptors
The form of the derivative used within the NeuralODE is configured by passing a type descriptor, which is a string of the form:

ctrnn [w-mode]act[factor][rec-type] in-mode [lis]

In any case, the derivative is computed using the formula ẋ = Synapses(x, x) + Input(u, x) − decay(x). The type descriptor determines how the individual terms are calculated. Sizes of E, b, and α depend on the recurrence type used:

Recurrence type    Shape of parameters
neuronal (None)    (model size)
synaptic ('s')     (model size, model size)

Table 3: Type descriptor components and resulting formulas.

w-mode      Synapses(x, y)
None        a σ(wx + b) · Factor(y)
'r'         w σ(x + b) · Factor(y)
'v'         w σ(a(x + b)) · Factor(y)

act         σ(x)
'sigm'      1/(1 + e^(−x))
'tanh'      tanh(x)

factor      Factor(x)
None        1
'*'         (1 − x)
'+'         (e − x)

rec-type
None        neuronal activation
's'         synaptic activation

in-mode     Input(u, x)
'linear'    Iu + b_i
'synaptic'  Synapses(u, x)

lis         decay(x)
None        τx
'lis'       τ(x − x₀)

The different w-modes allow for adjusting the way Synapses are parameterized, act determines the activation function used, the factor distinguishes LTCs from CT-RNNs, rec-type is used for switching between neuronal and synaptic activation, in-mode determines the input mode, and lis is used when the initial state (= leakage potential) should be learnable.

Table 1: Results for the INSERT experiments (#Par = number of parameters; MSE given ×10⁻²).

Model               n=8               n=16              n=32              n=64
                    #Par  MSE         #Par  MSE         #Par  MSE         #Par  MSE
ANODE               1663  0.62±0.04   2263  0.46±0.01   3463  0.40±0.02   5863  0.31±0.01
CT-GRU              5139  0.84±0.05   12427 0.50±0.02   -                 -
LSTM                1427  0.94±0.00   3339  0.53±0.02   8699  0.38±0.00   -
NA-CT-RNN linear    555   1.10±nan    1211  0.60±nan    2907  0.41±0.01   7835  0.45±0.02
NA-CT-RNN synaptic  601   1.04±0.04   1249  0.58±0.02   2929  0.45±0.08   7825  0.57±0.04
NA-LTC linear       571   1.15±0.07   1243  0.56±0.01   2971  0.40±nan    7963  0.32±0.07
NA-LTC synaptic     617   1.08±nan    1281  0.67±nan    2993  0.52±0.01   7953  0.61±0.06
SA-CT-RNN linear    667   1.01±0.02   1691  0.54±0.00   4891  0.36±nan    -
SA-CT-RNN synaptic  1091  0.88±0.04   2539  0.61±nan    6587  0.50±0.04   -
SA-LTC linear       947   0.93±nan    2379  0.46±0.02   6779  0.33±0.03   -
SA-LTC synaptic     1371  0.82±0.03   3227  0.47±0.03   8475  0.28±nan    -

The recurrence type determines how recurrent connections are implemented:
• neuronal: corresponds to the ctrnn model. Connections can be thought of as being only of a linear nature; all incoming synapses of a particular cell share the same bias, and summation happens before applying the activation function.
• synaptic: each synapse has a separate bias b, scale w, and reference potential E. An additional summation step happens after calculating individual synaptic activations: ẋ = Σ[Synapses(x, x)] + Input(u, x) − decay(x)
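To illustrate how a type descriptor selects the terms of this derivative, here is a small sketch (our own illustration with hypothetical helper names, not the package's API), restricted to the 'v' w-mode used in the examples that follow.

```python
import numpy as np

ACT = {"sigm": lambda x: 1.0 / (1.0 + np.exp(-x)), "tanh": np.tanh}

def make_rhs(act="tanh", factor=None, in_mode="linear", p=None):
    # Hypothetical helper (not the package's API): builds
    # x_dot = Synapses(x, x) + Input(u, x) - decay(x) for the 'v' w-mode.
    sigma = ACT[act]
    fac = {None: lambda y: 1.0,
           "*":  lambda y: 1.0 - y,
           "+":  lambda y: p["e"] - y}[factor]    # '+' uses the reversal potential e
    synapses = lambda x, y: p["w"] * sigma(p["a"] * (x + p["b"])) * fac(y)
    if in_mode == "linear":
        inp = lambda u, x: p["I"] @ u + p["b_i"]
    else:                                         # 'synaptic' input-mapping
        inp = lambda u, x: p["w_u"] * sigma(p["a_u"] * (u + p["b_u"])) * fac(x)
    return lambda x, u: synapses(x, x) + inp(u, x) - p["tau"] * x

# "ctrnn vtanh linear" (vanilla CT-RNN):
p = dict(w=1.0, a=4.0, b=0.5, tau=0.5, I=np.eye(2), b_i=np.zeros(2))
rhs = make_rhs(act="tanh", in_mode="linear", p=p)
print(rhs(np.zeros(2), np.ones(2)))
# "ctrnn vsigm+s synaptic" (SA-LTC) would additionally set factor="+",
# in_mode="synaptic", and the e, w_u, a_u, b_u entries.
```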
Examples
• Vanilla CT-RNN: ctrnn vtanh linear
• SA-LTC: ctrnn vsigm+s synaptic

MOREA: a GPU-accelerated Evolutionary Algorithm for Multi-Objective Deformable Registration of 3D Medical Images
Georgios Andreadis, Leiden University Medical Center, Leiden, The Netherlands, G.Andreadis@lumc.nl
Peter A.N. Bosman, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands, Peter.Bosman@cwi.nl
Tanja Alderliesten, Leiden University Medical Center, Leiden, The Netherlands, T.Alderliesten@lumc.nl

ABSTRACT
Finding a realistic deformation that transforms one image into another, in case large deformations are required, is considered a key challenge in medical image analysis. Having a proper image registration approach to achieve this could unleash a number of applications requiring information to be transferred between images. Clinical adoption is currently hampered by many existing methods requiring extensive configuration effort before each use, or not being able to (realistically) capture large deformations. A recent multi-objective approach that uses the Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) and a dual-dynamic mesh transformation model has shown promise, exposing the trade-offs inherent to image registration problems and modeling large deformations in 2D. This work builds on this promise and introduces MOREA: the first evolutionary algorithm-based multi-objective approach to deformable registration of 3D images capable of tackling large deformations. MOREA includes a 3D biomechanical mesh model for physical plausibility and is fully GPU-accelerated. We compare MOREA to two state-of-the-art approaches on abdominal CT scans of 4 cervical cancer patients, with the latter two approaches configured for the best results per patient. Without requiring per-patient configuration, MOREA significantly outperforms these approaches on 3 of the 4 patients that represent the most difficult cases.

KEYWORDS
deformable image registration, multi-objective optimization, smart mesh initialization, repair method, GOMEA

1 INTRODUCTION
In recent decades, the field of radiation oncology has experienced rapid developments. Key to its modern practice are medical images acquired before, during, and after treatment. Although these images are already guiding clinical decision-making in many ways, the transfer of information between multiple images that feature large deformations or content mismatches has proven to be a hard challenge and has eluded widespread clinical adoption. In general, the challenge of Deformable Image Registration (DIR) is to find a realistic transformation that matches two or more image spaces to each other, as illustrated in Figure 1. Given this transformation, other metadata could be transferred between images, such as annotated contours [30] or 3D radiation dose distributions [33], opening up opportunities to make radiation treatment more precise [16].

The DIR problem consists of three main objectives: an image-based objective (for a visual comparison), a contour-based objective (for an assessment of object contour overlap), and a realism-based objective (to measure the energy required to perform the deformation).
These objectives are conflicting, especially when large deformations and content mismatches are at play [1]. DIR is therefore an inherently multi-objective problem, making Evolutionary Algorithms (EAs) well-suited for its optimization [19].

A diverse set of approaches to DIR has emerged [5, 17, 45]. These all take a single-objective approach, requiring the user to choose the weights associated with the optimization objectives for each use, a priori. This can however hinder clinical adoption, since it has been shown that choosing good weights (and other parameters) for specific patients is difficult in general and can strongly influence registration quality [36]. Even when configured for the best results, many existing approaches struggle with large deformations and content mismatches between images because of limitations of their underlying transformation models and (often gradient-descent-based) optimization techniques. This shortcoming forms a second obstacle to their translation into clinical workflows. Therefore, there still is a need for a DIR approach that does not require a priori objective weight configuration and can tackle large deformations.

The need to configure objective weights a priori has previously been addressed by taking a multi-objective approach [2]. This removes the need to select weights for the optimization objectives in a scalarized problem formulation a priori, since a set of solutions can be produced that appropriately represents the trade-off between different conflicting objectives, allowing the user to select a solution from this set, a posteriori. To overcome the second obstacle, a flexible dual-dynamic triangular mesh transformation model that allows for inverse-consistent, biomechanical registration has been introduced [3]. This model can match structures on both images to capture large deformations. The Multi-Objective Real-Valued Gene-pool Optimal Mixing Evolutionary Algorithm (MO-RV-GOMEA) has proven to be effective at performing DIR with this model for 2D images by decomposing the problem into local, partial evaluations [10]. The Graphics Processing Unit (GPU) is exceptionally well-suited to execute these partial evaluations in parallel, yielding significant speed-ups [12].

Figure 1: Illustration of two images with large deformations and an example of a deformable image registration with MOREA's dual-dynamic mesh transformation model. (Panels: (a) source image, (b) target image, (c) example registration.)

Recently, first steps have been taken to extend this GPU-accelerated approach to 3D images [4], for which the benefits of partial evaluations may be even greater due to the increase in the amount of image information (from 65k pixels in 2D to more than 2 million voxels in 3D), leading to more, but also costlier partial evaluations.
While this extended approach has been shown to be capable of solving simple registration problems of single objects, it misses several crucial components required to tackle clinical problems that feature multiple interacting objects.

In this work, we therefore introduce MOREA, the first EA-based Multi-Objective Registration approach capable of registering 3D images with large deformations using a biomechanical model, without requiring a priori configuration of objective weights. In MOREA, a 3D tetrahedral mesh is initialized on interesting structures using a novel custom mesh generation approach, and a repair mechanism for folded meshes is embedded. With MOREA we furthermore improve on prior modeling strategies [4] for all objectives to ensure desirable deformations will be achieved.

2 DEFORMABLE IMAGE REGISTRATION FOR LARGE DEFORMATIONS
In this section, we define the DIR optimization problem (Section 2.1) and examine existing approaches (Section 2.2).

2.1 Problem Definition
The problem of DIR for a pair of images is to find a non-rigid transformation $T$ that deforms a source image $I_s$ to match a target image $I_t$ as closely as possible [40]. We distinguish between unidirectional and symmetric registration: in unidirectional registration, only $T(I_s) \approx I_t$ is optimized, while in symmetric registration, $T'(I_t) \approx I_s$ is also optimized [40]. This can improve the physical viability of the registration. Another desirable distinction for registrations is inverse-consistency [40], guaranteeing a one-to-one correspondence between any point in the source image and its corresponding point in the target image.

Registrations can generally be evaluated according to three classes of quality metrics. Image intensity metrics compare the predicted voxel intensity values of $T(I_s)$ to the voxel intensity values of $I_t$, using metrics such as cross-correlation or mutual information [26]. Contour metrics judge registration accuracy by applying $T$ to pairs of sets of points, representing contours ($C_s$ and $C_t$), and computing the distances between those point sets. One example is the Chamfer distance [22]: for each pair $\langle C_s, C_t \rangle$, the longest minimum distance is calculated between points in $T(C_s)$ and any point in $C_t$. DIR approaches can also use these contours at initialization time, to build transformation models for use during optimization. Finally, deformation magnitude metrics express registration realism by measuring the force needed to apply the deformation, using a physical model of the image space [23]. This can serve as a regularization mechanism, discouraging the registration from overfitting.
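As a small illustration of the contour metric just described (our own sketch, not MOREA's implementation), the "longest minimum distance" can be computed directly from two point sets:

```python
import numpy as np

def longest_minimum_distance(transformed_source: np.ndarray, target: np.ndarray) -> float:
    # Illustrative sketch: for every point of T(C_s), compute its minimum
    # distance to any point of C_t, and report the largest of these minima.
    diffs = transformed_source[:, None, :] - target[None, :, :]   # (n, m, 3)
    dists = np.linalg.norm(diffs, axis=-1)                        # (n, m)
    return float(dists.min(axis=1).max())

c_s_transformed = np.random.rand(200, 3)    # toy contour point sets
c_t = np.random.rand(250, 3)
print(longest_minimum_distance(c_s_transformed, c_t))
```

As defined here, this quantity is the directed Hausdorff distance from the transformed source contour to the target contour.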
2.2 Related Work
These three quality metrics are conflicting objectives that form a trade-off [1]. A number of single-objective registration approaches have emerged in recent years, typically attempting to deal with this trade-off by exploring different objective scalarizations. This however has the downside of having to set objective weights, a priori. We categorize these existing approaches broadly according to the above defined classes of quality metrics, into classes of approaches mainly optimizing for (1) intensity match, (2) contour match, and (3) both matches simultaneously. These and other features are compared for selected prominent approaches in Table 1.

An example of the first class, optimizing for intensity match, is the Elastix toolbox [28]. It uses a B-spline based transformation model, which uses Bézier curves to model physical space. With this model, Elastix optimizes for intensity, regularized by deformation magnitude metrics. While this is a good fit for many applications, we observe that registering more complex, large deformations with local discontinuities (such as studied in this work) can be difficult. The ANTs SyN registration approach [5] was conceived to model such large deformations, featuring symmetric, inverse-consistent, and intensity-based registration using time-varying velocity fields. A third intensity-based approach is the Demons algorithm [42], using principles from optical flow and Maxwell's Demons for inverse-consistent registration. A more recent version of this approach also has a mechanism to handle content mismatch between images [34]. Both the ANTs and Demons approaches can in theory flexibly model large deformations, but lack biomechanical modeling capabilities and only operate on image intensity. This can hamper reliably producing anatomically valid registrations [30].

This is one reason to consider the second class of approaches. One of these approaches is the Thin-Plate Splines Robust Point Matching approach (TPS-RPM), which deforms contours using a thin-plate spline model [18]. Subsequent work validated this on an abdominal test case, registering a deforming bladder and two surrounding organs [44]. There is also a symmetric version of TPS-RPM, which improves robustness on large deformations [8]. Work conducted in parallel also applies a similar model to contours for abdominal registration problems [39]. While large deformations can be modeled, the biomechanical plausibility of the transformation is not guaranteed, and objective weights still require configuration. Another contour-based approach is MORFEUS [17], which registers a mesh representation of imaged objects using a Finite Element Method (FEM) solver. It has shown promising results on brachytherapy applications in the abdomen [37]. Although MORFEUS uses a biomechanical model, which improves realism, it does not take image intensities into account, thus losing detail between object surfaces and relying too heavily on (user-supplied) contours.

Recent work has targeted this shortcoming by proposing a combined contour-based and image-based approach: the ANAtomically CONstrained Deformation Algorithm (ANACONDA) [45] optimizes a fixed scalarization of image and contour terms by using the quasi-Newton algorithm.
This approach however lacks biomechanical modeling, and also introduces yet another parameter to configure. Other hybrid attempts have also emerged, such as a combination of the Demons approach with local FEM meshes [48], or the use of an image-based registration step to derive tissue elasticities that are later used in an FEM-based registration approach [29].

In general, we see a gap: an approach that includes all registration aspects in one model. As Table 1 shows, we target this gap with MOREA by being both image-based and contour-based, featuring biomechanical modeling, and exploiting the multi-objective nature of the DIR problem. These novelties are made possible by the flexibility and robustness of EAs, which are well-suited to optimize non-differentiable, multi-objective problems. Additionally, the objective functions include millions of image voxel values and are therefore expensive to compute, calling for hardware acceleration. Modern model-based EAs such as MO-RV-GOMEA feature excellent GPU compatibility, making them a good fit for optimizing the DIR problem.

Table 1: Comparison of selected prominent existing DIR approaches by supported registration features.

Feature          Elastix [28]  ANTs SyN [5]  Demons [42]  TPS-RPM [18]  ANACONDA [45]  MORFEUS [17]  MOREA (this work)
Image-based      ✓             ✓             ✓            ✗             ✓              ✗             ✓
Contour-based    ✗             ✗             ✗            ✓             ✓              ✓             ✓
Biomechanical    ✗             ✗             ✗            ✗             ✗              ✓             ✓
Multi-objective  ✗             ✗             ✗            ✗             ✗              ✗             ✓

3 MO-RV-GOMEA
The structure of Black-Box Optimization (BBO) problems only gets revealed through repeated function evaluations. Gray-Box Optimization (GBO) problems, on the other hand, have a (partly) known problem structure, which can be exploited during optimization. The GOMEA suite of EAs has proven to be exceptionally well suited for efficiently solving both benchmark and real-world GBO problems [41]. Its extension to multi-objective, real-valued problems, MO-RV-GOMEA [11], has even found real-world adoption in clinical practice for prostate brachytherapy treatment planning [7, 13]. We give an overview of the key working principles of MO-RV-GOMEA here. A detailed description may be found in the literature [14].

Non-dominated solutions are preserved across generations in an elitist archive with a pre-specified capacity [31]. Each generation starts with the selection of a subset of non-dominated solutions from the current population. This selection is clustered into k equally sized clusters. For each cluster, MO-RV-GOMEA employs a linkage model that describes dependence relations between variables using a set of dependent variable sets, called Family of Subsets (FOS) elements. This linkage model can be learned during optimization in a BBO setting, but in MOREA, we employ a static linkage model based on topological proximity of variables (see Section 4.2.1). Variation then proceeds by considering variables in FOS elements jointly in a procedure called optimal mixing. In this step, distributions are estimated for each FOS element in each cluster, and new, partial solutions are sampled from these distributions. Newly sampled partial solutions are evaluated and accepted if their insertion into the parent solution results in a solution that dominates the parent solution or that is non-dominated in the current elitist archive.
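The acceptance step of optimal mixing can be illustrated with a short sketch (our own simplification; dominates and the list-based archive are stand-ins for the real data structures):

```python
import numpy as np

def dominates(a: np.ndarray, b: np.ndarray) -> bool:
    # Pareto dominance for minimization: a is at least as good everywhere
    # and strictly better somewhere.
    return bool(np.all(a <= b) and np.any(a < b))

def accept_partial_solution(child_obj, parent_obj, archive):
    # Illustrative acceptance rule: keep the sampled partial solution if the
    # modified solution dominates its parent, or if it is non-dominated with
    # respect to the current elitist archive.
    if dominates(child_obj, parent_obj):
        return True
    return not any(dominates(e, child_obj) for e in archive)

archive = [np.array([0.2, 0.9]), np.array([0.6, 0.3])]  # toy objective vectors
print(accept_partial_solution(np.array([0.5, 0.4]), np.array([0.7, 0.5]), archive))
```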
4 APPROACH
The approach outlined in this work builds on the recently proposed multi-objective approach for 3D images [4]. In this section, we present the new techniques we have added, in modeling the problem (Section 4.1), initializing the population of solutions (Section 4.2), and optimizing the deformations (Section 4.3).

4.1 Modeling
4.1.1 Enhancing realism with tissue-specific elasticities. Adjacent work has indicated that using tissue-specific elasticities, instead of assuming one homogeneous elasticity for the entire image region, can enhance the realism of resulting deformations [37, 46]. Following this insight, we extend the deformation magnitude objective used in existing work [4] by computing an elasticity factor for each tetrahedron, based on its underlying image region. Implementation details for this computation are provided in Appendix A. We observe in exploratory experiments that this leads to better registration outcomes (see Appendix Section C.3.1).

To compute the deformation magnitude objective, we consider all corresponding edges $e_s$ and $e_t$ of each tetrahedron $\delta \in \Delta$, belonging to the mesh on the source image and the mesh on the target image, respectively. This includes 4 spoke edges that better capture flattening motion, giving a total of 10 edges per tetrahedron [4]. Given the tetrahedron-specific elasticity constant $c_\delta$, the objective is computed as follows:

$$f_{\text{magnitude}} = \frac{1}{10\,|\Delta|} \sum_{\delta \in \Delta} \left[ \sum_{(e_s,\, e_t) \in E_\delta} c_\delta \left( \lVert e_s \rVert - \lVert e_t \rVert \right)^2 \right]$$
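A direct transcription of this objective into vectorized form could look as follows (an illustrative sketch under assumed array layouts, not MOREA's GPU implementation):

```python
import numpy as np

def f_magnitude(edges_source, edges_target, elasticity):
    # edges_source, edges_target: (n_tet, 10, 2, 3) arrays holding, for each
    # tetrahedron, the endpoints of its 10 corresponding edges on the source
    # and target meshes; elasticity: (n_tet,) per-tetrahedron factors c_delta.
    len_s = np.linalg.norm(edges_source[..., 1, :] - edges_source[..., 0, :], axis=-1)
    len_t = np.linalg.norm(edges_target[..., 1, :] - edges_target[..., 0, :], axis=-1)
    per_tet = elasticity * ((len_s - len_t) ** 2).sum(axis=1)   # inner sum over edges
    return per_tet.sum() / (10 * edges_source.shape[0])         # 1 / (10 |Delta|)

n_tet = 5
src = np.random.rand(n_tet, 10, 2, 3)
tgt = src + 0.01 * np.random.randn(n_tet, 10, 2, 3)
print(f_magnitude(src, tgt, np.ones(n_tet)))
```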
4.1.2 Robustly estimating image similarity. The intensity objective we use is defined as a voxel-to-voxel comparison by taking the sum of squared intensity differences, with special handling for comparisons of foreground (i.e., non-zero intensity) and background (i.e., zero intensity) voxels. We use a random sampling technique that is well-suited for GPU acceleration (defined in detail in Appendix A). Using the set of all sampled image points on both images, $P_s$ and $P_t$, and image intensities of source image $I_s$ and target image $I_t$, the objective is defined as follows:

$$f_{\text{intensity}} = \frac{1}{|P_s| + |P_t|} \left[ \sum_{p_s \in P_s} h(p_s, T(p_s)) + \sum_{p_t \in P_t} h(p_t, T'(p_t)) \right]$$

$$h(p_s, p_t) = \begin{cases} (p_s - p_t)^2 & p_s > 0 \wedge p_t > 0 \\ 0 & p_s = 0 \wedge p_t = 0 \\ 1 & \text{otherwise} \end{cases}$$
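The case-wise comparison can be sketched as follows (an illustrative vectorized transcription; the arrays are assumed to hold already-interpolated intensities at the sampled points and their transformed counterparts):

```python
import numpy as np

def h(i_s: np.ndarray, i_t: np.ndarray) -> np.ndarray:
    # Case-wise term from the definition above: squared difference if both
    # foreground, 0 if both background, 1 otherwise.
    both_fg = (i_s > 0) & (i_t > 0)
    both_bg = (i_s == 0) & (i_t == 0)
    return np.where(both_fg, (i_s - i_t) ** 2, np.where(both_bg, 0.0, 1.0))

def f_intensity(samples_s, mapped_s, samples_t, mapped_t):
    # Intensities at points p in P_s / P_t and at T(p) / T'(p), respectively.
    total = h(samples_s, mapped_s).sum() + h(samples_t, mapped_t).sum()
    return total / (samples_s.size + samples_t.size)

rng = np.random.default_rng(0)
a, b = rng.random(1000), rng.random(1000)
print(f_intensity(a, b, a, b))
```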
4.1.3 Approximating the guidance error. In contrast to previous work, where an exact guidance measure was used as one of the objectives [4], in this work we have opted to introduce a measure that is an approximation thereof, which can be much more efficiently computed using the GPU-accelerated sampling method that we already use for the calculation of the image similarity objective. Preliminary experiments showed very similar results (when looking at the voxel displacement fields), also because a perfect guidance error is not necessarily the best solution. In Appendix A, we provide details regarding the implementation.

MOREA's guidance objective is computed at positions $P_s$ and $P_t$, using the set $G$ of all point set pairs $\langle C_s, C_t \rangle_i$ and the minimal point-to-point-set distance $d(p, C)$. The total number of guidance points is indicated as $|G_s|$ and $|G_t|$, and a truncation radius as $r$. The guidance objective is now defined as follows:

$$f_{\text{guidance}} = \frac{1}{|P_s| + |P_t|} \sum_{\langle C_s, C_t \rangle \in G} \left[ \frac{|C_s|}{|G_s|}\, g(P_s, T, C_s, C_t) + \frac{|C_t|}{|G_t|}\, g(P_t, T', C_t, C_s) \right]$$

$$g(P, \Phi, C, C') = \sum_{\substack{p \in P \\ d(p, C) < r}} \left[ \frac{r - d(p, C)}{r} \left( d(p, C) - d(\Phi(p), C') \right)^2 \right]$$

4.1.4 Rapidly computing constraints. MOREA's solutions represent meshes with hundreds of points, which can easily get entangled into folded configurations. Such constraint violations should be prevented, to uphold the guarantee of inverse-consistency. Prior work [4] used a strategy that proved error-prone in more complex meshes. MOREA includes a novel fold detection method that is based on an observed phenomenon: a mesh fold will cause the sign of at least one tetrahedron's volume to change, as illustrated in Figure 2 (the figure is in 2D, but this also holds in 3D). Our method uses this phenomenon to detect folds and to measure their severity, opening up repair opportunities (see Section 4.3.1). Implementation details for our method are provided in Appendix A.

Figure 2: 2D illustration of a mesh configuration with and without a constraint violation (fold). One of the triangles is folded, due to the red point having moved outside the central triangle, colored yellow. The folded area is colored red. (Panels: (a) the initial configuration, with positive area signs for each triangle; (b) the fold, detected by a sign change in the folded (red) triangle; (c) the repair method, resolving the fold by moving the red point.)
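The sign-based fold check can be sketched as follows (our own illustration of the principle; Appendix A describes the actual method):

```python
import numpy as np

def signed_volume(tet: np.ndarray) -> float:
    # Signed volume of a tetrahedron given its 4 vertices (rows of a 4x3 array).
    a, b, c, d = tet
    return float(np.dot(np.cross(b - a, c - a), d - a)) / 6.0

def detect_folds(points, tets, reference_signs):
    # A fold flips the sign of at least one affected tetrahedron's volume
    # relative to the initial mesh configuration; severity is the absolute
    # volume of each violating tetrahedron. This sketch simply rechecks all.
    violations = []
    for idx, tet in enumerate(tets):
        v = signed_volume(points[tet])
        if np.sign(v) != reference_signs[idx]:
            violations.append((idx, abs(v)))
    return violations

points = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [0.3, 0.3, 0.3]], float)
tets = np.array([[0, 1, 2, 3], [4, 1, 2, 3]])
refs = [np.sign(signed_volume(points[t])) for t in tets]
points[4] = [0.5, 0.5, 0.5]   # push the point through the opposite face: a fold
print(detect_folds(points, tets, refs))
```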
4.2 Initialization of Registration Solutions
Significant performance gains can be obtained if the initial guesses given to the optimizer are closer to desirable objective space regions than a random guess or grid-like initializations [9]. We introduce two techniques that provide such initial guesses.

4.2.1 Exploiting problem structures with mesh initialization. We initialize the meshes to align with objects in the image, adapting an existing method for 2D images [9] and expanding it to facilitate parallelization on the GPU. First, we place points on the contours of objects in the source image to capture their shape (see Fig. 3a). We choose these points by greedily taking a spread-out subset from the contour annotations also used for the guidance objective, as well as a small fraction of randomly chosen points across the image. Then, we perform a Delaunay tetrahedralization on these points, using the TetGen suite [25] (see Fig. 3b). This yields a mesh that we duplicate to the target image space to complete the dual-dynamic transformation model.

As laid out in Section 3, MO-RV-GOMEA evaluates groups of variables (i.e., FOS elements) jointly during variation. Exploratory experiments have shown that using edges as FOS elements (i.e., groups of two connected points, with the variables encoding their coordinates) is beneficial for this problem. If two FOS elements are completely independent because their variables are not needed for the partial evaluation of each set, variation and evaluation for these FOS elements can be done in parallel. We conduct two further steps to facilitate parallel evaluation and optimization on the GPU. First, we execute a greedy set cover algorithm¹ to find a subset of edges that covers all points (see Fig. 3c), so that each variable (point coordinate) undergoes variation. We could alternatively use all edges, but this would lead to points being included in several FOS sets and thus undergoing variation multiple times per generation. For parallelization purposes, it is more efficient to select an (approximately) minimal set of edges.

Given the edge subset found by the set cover, we now determine which FOS elements can be safely optimized in parallel. For this, we build an interaction graph based on topological proximity [12], where two elements are connected if their sets of dependent tetrahedra overlap, i.e., the tetrahedra that are reevaluated when an element is changed (see Fig. 3d). Given this graph, parallel groups are created with the DSATUR graph coloring algorithm [15] (see Fig. 3e). The dependent tetrahedra of each parallel group can be evaluated in parallel on the GPU, which has been proven to lead to speed-ups of more than 100x on 2D images [12]. A sketch of this edge-selection and coloring step is given below.

¹ Source: https://github.com/martin-steinegger/setcover

Figure 3: 2D illustration of the mesh initialization process, which produces a custom mesh and determines which groups of edges (i.e., FOS elements) can be optimized in parallel. Selected edges are highlighted in red, interaction edges in blue. (Panels: (a) points placed on potentially interesting positions; (b) custom mesh derived from these points; (c) edges selected for variation through set cover; (d) interaction graph (blue) between selected edges; (e) graph coloring computed on interaction graph.)
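The following is an illustrative sketch of these two steps (not MOREA's GPU code; the greedy set cover stands in for the referenced implementation, and networkx's saturation-largest-first strategy is DSATUR):

```python
import networkx as nx

def select_edges_and_groups(points, edges, dependent_tets):
    # 'edges' are candidate FOS elements (pairs of point indices);
    # 'dependent_tets' maps an edge to the set of tetrahedra that must be
    # reevaluated when that edge changes.
    uncovered, chosen = set(points), []
    while uncovered:                      # 1) greedy set cover over points
        best = max(edges, key=lambda e: len(uncovered & set(e)))
        if not uncovered & set(best):
            break                         # remaining points cannot be covered
        chosen.append(best)
        uncovered -= set(best)
    g = nx.Graph()                        # 2) interaction graph
    g.add_nodes_from(chosen)
    for i, a in enumerate(chosen):
        for b in chosen[i + 1:]:
            if dependent_tets[a] & dependent_tets[b]:
                g.add_edge(a, b)
    # 3) DSATUR coloring: FOS elements with equal colors vary in parallel.
    colors = nx.greedy_color(g, strategy="saturation_largest_first")
    return chosen, colors

edges = [(0, 1), (1, 2), (2, 3), (0, 3)]
deps = {e: {e[0], e[1]} for e in edges}   # toy dependency sets
print(select_edges_and_groups(range(4), edges, deps))
```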
Tetrahedral mesh quality can further be improved by specifying surfaces that should be included in the generated mesh. We apply this principle to the bladder by generating a surface mesh using the Marching Cubes algorithm. We then specify its triangular surfaces as constraints to the mesh generation algorithm, ensuring that bladder surface triangles are included in the mesh. Exploratory experiments show superior performance when using this step (see Appendix B.3.1).

4.2.2 Ensuring diversity in initial population. To promote diversity in the initial population, prior work generates random deviations for each point in the mesh, starting at a grid-initialized solution [4]. We observe that this method can produce many folded mesh configurations in generated solutions, which get discarded and thus hamper convergence speed. In this work, we use a radial-basis-function approach to introduce large deformations free of mesh folds. Implementation details on how these fields are generated and applied to solution meshes are provided in Appendix A.

4.3 Repairing and Steering
During optimization, we apply two techniques to improve the quality of solutions obtained, and the time needed to reach them.

4.3.1 Repairing infeasible solutions. By default, infeasible solutions (i.e., solutions with either of the two meshes having one or more folds) are discarded. This, however, can hamper the creation of high-quality offspring, as infeasible solutions may still provide useful information for higher-quality search space regions. We therefore devise a repair method that attempts to reverse folds on a point-by-point basis. For each point in a folded tetrahedron, the method mutates the point using a Gaussian distribution scaled by its estimated distance to the surrounding 3D polygon. After 64 samples, the change with the best constraint improvement is selected, if present. If all samples result in a deterioration, repair is aborted. The repair process for one point is illustrated in Figure 2c.

4.3.2 Applying pressure with adaptive steering. In general, an approximation set should be as diverse as possible while resembling the Pareto set as closely as possible. In practice, however, not all regions of the Pareto front are of equal interest to users. A user conducting medical DIR for images with large deformations is typically not interested in solutions with a small deformation magnitude. The user is actually most interested in solutions with good guidance objective values, and we would like the algorithm to steer its search towards that region in the objective space. Following earlier work [1], we implement an adaptive steering strategy, which steers the front towards high-quality guidance solutions after an exploration period of 100 generations. Given the best guidance objective value $s_G$ of any solution in the elitist archive, we only preserve solutions with guidance objective values between $[s_G; 1.5\,s_G]$, i.e., this becomes a hard constraint.

5 EXPERIMENTS
We compare MOREA to existing state-of-the-art registration approaches. Due to the complexity of the problem, we do not impose one time limit on all approaches, but rather ensure that they have (reasonably) converged. We repeat all approaches with all configurations 5 times, seeded reproducibly. All MOREA registration runs are run on Dell Precision 7920R machines with NVIDIA RTX A5000 GPUs. Additional information on experimental setup and results is provided in the appendix.

5.1 Registration Problems
We test all approaches on 4 clinical registration problems with large deformations (see Table 2). We retrospectively select two successive Computerized Tomography (CT) scans of the abdominal area of cervical cancer patients, acquired for radiation treatment planning purposes, with a Philips Brilliance Big Bore scanner. On the first CT scan, the bladder of the patient is filled, and on the second scan, the bladder is empty and thus has shrunken significantly.
This large deformation is challenging to register correctly while respecting the surrounding organs (e.g., rectum and bowel) and bony anatomy. Patients 1–3 represent common cases in clinical practice, exhibiting large deformations and little to no margin between bladder and bowel in the full-bladder scan. The bladder of Patient 4 largely preserves its shape and exhibits a wide margin between bladder and bowel, making registration easier. This case, however, is also rarer in practice, and therefore less representative.

The axial slices of the CT scans have a thickness of 3 mm, with in-slice resolutions ranging between (0.86, 0.86) mm and (1.07, 1.07) mm. Each scan is resampled to (1.5, 1.5, 1.5) mm for consistency. Afterward, each scan pair is rigidly registered (i.e., translated, rotated, or scaled linearly) to align the bony anatomies of both scans, using bone contours delineated by a radiation therapy technologist (RTT). Each pair is cropped to an axis-aligned bounding box surrounding the bladder with a 30 mm margin, taking the maximal bounds from both images. This restricts the registration to the region where treatment was delivered, including the surrounding organs at risk.

Contours of key organs in each scan have been annotated by an RTT and verified by a radiation oncologist. The sets of points defining these contours serve as input to the guidance objective of MOREA. We also use these clinical contours to generate binary masks for each organ and the bones by filling 2D polygonal estimates formed by contours on each slice. As is common in practice, these contours can overlap, since organs are delineated independently and are often surrounded by a small safety margin. Registration approaches therefore need to be robust enough to handle this overlap. Several anatomically relevant corresponding landmarks have been annotated by an RTT and verified by a radiation oncologist on both scans, for evaluation purposes (see Appendix D).

Table 2: Sagittal slices of all registration problems, with organs contoured in different colors. (Columns: Instance, Source, Target; rows: Patients 1–4.)

5.2 Registration Approaches
We consider a number of existing, popular registration approaches for which executable code is available. For these approaches, we follow a two-phase configuration process. First, we explore relevant coarse-grained settings for a single patient scan pair (of Patient 1), to find a suitable configuration for the imaging modality and problem difficulty. Then, we conduct fine-grained configuration on the remaining settings (e.g., objective scalarization weights) for each patient scan pair. We describe the resulting configuration for each approach below, including the general coarse-grained configuration of MOREA. A detailed overview of how we reached these configurations, with additional configuration experiments, can be found in Appendix C.

5.2.1 Elastix. We configure Elastix to conduct a regularized, multi-resolution [43] image registration. Recommended settings² did not yield satisfactory results on our scans, therefore we first register composite mask images onto each other for each patient. This is used as the starting point for optimization on the original image intensities.

² Based on an official parameter settings database: https://elastix.lumc.nl/modelzoo/
As a fine-grained configuration step for each patient, we configure the weight assigned to the deformation magnitude objective in a fixed sweep of exponentially increasing weights of [0, 0.001, 0.01, ..., 10.0], as is done in related work [8].

5.2.2 ANTs SyN. For the ANTs SyN algorithm, the recommended settings³ for multi-resolution registration also were not satisfactory, which led us to conduct initial configuration experiments with several key parameters, listed in Appendix C. We also add a composite mask in an additional image channel that is registered alongside the image. For each patient, we configure the regularization weight of the overall deformation by testing the same weights as for Elastix.

³ Based on technical documentation: https://github.com/ANTsX/ANTs/wiki/Anatomy-of-an-antsRegistration-call

5.2.3 This work: MOREA. MOREA uses a single-resolution approach and is configured to generate a mesh of 600 points (i.e., the problem is 3600-dimensional), using the strategies for mesh generation described in Section 4.2. We set the elitist archive capacity to 2000 and use 10 clusters during optimization, with a runtime budget of 500 generations, during which the EA converges (see Appendix D). As MOREA is a multi-objective approach returning an approximation set of registrations, we do not need to configure it further for each patient.

5.3 Evaluation of Registrations
Solutions to complex registration problems, such as the problems in this study, require a multi-faceted evaluation. Below, we outline two main methods for evaluating registrations: surface-based accuracy and visual inspection. Additional methods are described in Appendix Section B.2 and applied in Appendices C and D.

5.3.1 Surface-based registration accuracy. A key part of evaluating registration accuracy is to assess how well the surfaces (contours) of objects align [16]. We use the Hausdorff distance, which represents the largest minimal distance between any two points on two object surfaces. This can be interpreted as the severity of the worst surface match. To account for potential deformation inaccuracies at the border regions of the image, we discard a margin of 15 mm on each side for the computation of this metric. Since this is smaller than the earlier cropping margin of 30 mm, the bladder and regions around it are left untouched by this second crop. A sketch of this computation is given at the end of Section 5.3.

5.3.2 Visual inspection. Surface-based accuracy analysis is complemented by a visual inspection, since a registration with a good contour match can still have undesirable deformations in regions between contours. This inspection includes viewing slices of the target image overlaid with the source contours transformed using the computed forward DVF of the registration. To also inspect the deformation between contours, we visualize the full deformation: first, we render the DVF itself with a quiver plot; second, we overlay a regular grid onto a slice and deform it with the DVF, which gives a different perspective.
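As an illustration of the surface-based metric above (our own sketch, assuming surface point clouds in millimeter coordinates and an axis-aligned border margin):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_with_margin(surf_a, surf_b, bounds_min, bounds_max, margin=15.0):
    # Symmetric Hausdorff distance between two object surfaces, after
    # discarding points within 'margin' mm of the image border.
    def crop(pts):
        keep = np.all((pts >= bounds_min + margin) & (pts <= bounds_max - margin), axis=1)
        return pts[keep]
    a, b = crop(surf_a), crop(surf_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

rng = np.random.default_rng(1)
a = rng.random((500, 3)) * 100.0    # toy surface point clouds, in mm
b = a + rng.normal(0, 0.5, a.shape)
print(hausdorff_with_margin(a, b, np.zeros(3), np.full(3, 100.0)))
```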
5.4 Comparison of Registrations
All registration solutions from all approaches are compared using the same evaluation pipeline, to ensure a fair comparison. Each approach is configured to output its registrations in the form of a forward and an inverse DVF, which define the deformation on the source and the target image, respectively. Existing approaches can either directly or indirectly be configured to output such DVFs. For MOREA, we rasterize the deformation encoded by the two deformed meshes of a solution, using an existing rasterization method [24]. Since we are comparing single-objective approaches to a multi-objective approach (MOREA), we need to select solutions from MOREA's approximation set. We conduct this a posteriori selection by starting at the solution with the best guidance objective value and manually navigating through the approximation front to find a solution with a good trade-off between contour quality and realism.

We also conduct statistical testing using the two-sided Mann-Whitney U test (a standard non-parametric test) to compare MOREA to ANTs and Elastix. The Hausdorff distance of the bladder contour is used as the test metric, as it describes the largest deforming organ. To correct for multiple tests in the pair-wise comparisons, we apply Bonferroni correction to the α-level and reduce it from 0.05 to 0.025. A minimal sketch of this testing procedure follows.
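```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative sketch with synthetic stand-in data, not the study's
# measurements: pair-wise two-sided Mann-Whitney U tests on per-run bladder
# Hausdorff distances, at a Bonferroni-corrected alpha of 0.05 / 2 = 0.025.
rng = np.random.default_rng(42)
hausdorff_morea = rng.normal(10.0, 1.0, 5)     # 5 repeated runs per approach
hausdorff_elastix = rng.normal(14.0, 1.5, 5)
hausdorff_ants = rng.normal(12.0, 1.5, 5)

alpha = 0.05 / 2   # two pair-wise comparisons
for name, other in [("Elastix", hausdorff_elastix), ("ANTs", hausdorff_ants)]:
    stat, p = mannwhitneyu(hausdorff_morea, other, alternative="two-sided")
    print(f"MOREA vs. {name}: p = {p:.3f}, significant = {p < alpha}")
```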
6 RESULTS AND DISCUSSION
Figure 4 shows selected outcomes from each per-patient fine-grained configuration experiment, along with a solution from MOREA's approximation front for each patient. For Elastix, we select the runs with regularization weights 1.0, 1.0, 10.0, and 10.0 on Patients 1–4, respectively, and for ANTs, we select all runs with weight 0. The full results of our configuration experiments for both existing approaches can be inspected in Appendix Sections B.1.2 and B.2.2. Convergence plots for Patient 1, which show how all approaches have converged to the results presented here, can be found in Appendix D. As described in Section 5.1, there is an intrinsic difference in difficulty between the scans. In general, we observe MOREA outperforming the other approaches on the more difficult patients (1–3), as can be seen visually in the deformed contours shown in Figure 4 and in the additional renders and analyses provided in Appendix D.

Table 3: p-values of pair-wise comparisons of Hausdorff distances for the bladder between approaches. A plus (+) indicates a better mean with MOREA, a minus (−) the opposite. Significant results are highlighted.

Problem    MOREA vs. Elastix    MOREA vs. ANTs
Patient 1  0.011 (+)            0.007 (+)
Patient 2  0.007 (+)            0.007 (+)
Patient 3  0.012 (+)            0.007 (+)
Patient 4  0.007 (+)            0.195 (−)

Figure 4: A selection of the best predicted deformations for each patient ((a) Patient 1 through (d) Patient 4), represented by deformed contours rendered onto the target image with its reference contours (i.e., target in blue). Annotated slices showing all organs are provided in Table 2.

Figure 5: Forward deformation vector fields and deformed contours of selected predicted deformations on Patient 1, for all 3 approaches ((a) Elastix, (b) ANTs, (c) MOREA; down-sampled for visibility). Arrow colors represent deformation magnitudes, in voxels (1 voxel = 1.5 mm).

For Patient 1, we also render DVF slices in Figure 5, showing the transformation computed for each region of one slice. We observe that the deformations returned by Elastix and ANTs only deform the top region of the bladder. MOREA is the only approach which distributes this deformation across the entire bladder, which is a more realistic deformation in this flexible volume. Figure 6 plots the approximation set that is produced by MOREA on Patient 1, highlighting 3 solutions with slightly different deformations. This illustrates the range of solutions presented to the user, all of which spread the deformation across the bladder.

Figure 6: Approximation front produced by MOREA on Patient 1. We render 3 zoomed-in registration solutions.

Patient 2, which features the largest volume change in the bladder, seems to prove the most difficult: MOREA comes closest to modeling its deformation (see Fig. 4), although this comes at the cost of the bowel also being moved downwards. A probable cause is the little space (i.e., margin) left between the two organs in the source image. Here, MOREA's result exposes a more fundamental problem that affects all approaches: structures separated by little to no margin in one image cannot be separated in the other image with a transformation model consisting of a single mesh. The change of bladder shape in Patient 3 is less severe than for Patient 2, but still proves challenging for Elastix and ANTs (see Fig. 4). Especially the back region (located left of the image center) does not match the target. Patient 4 represents a relatively easy registration problem, with little change in the shape of the bladder and a clear margin between bladder and bowel (see Fig. 2). On this problem, visual inspection shows that ANTs and MOREA both find a good bladder contour fit, while Elastix struggles with both bladder and bowel.

Examining these results quantitatively, we conduct significance tests on the Hausdorff distance of the bladder, listed in Table 3. In all patients, the contour match of the bladder as deformed by MOREA is significantly superior to Elastix's contour match. ANTs models the contour of the bladder significantly less accurately than MOREA in 3 out of 4 cases, with the fourth case (Patient 4) not having a significantly different result. Appendix D lists significance test results for all organs, which confirm these trends, but also show that MOREA's Hausdorff distance can sometimes be significantly higher than that of ANTs or Elastix. This does not however need to imply worse registration performance, as a qualitative analysis shows. For example, the deformed shape of the sigmoid of Patient 2 found by ANTs is strongly off (see Figure 4). However, its metric value is deemed significantly better than MOREA's, even though MOREA is closer to the target in terms of general shape.
7 CONCLUSIONS
This work uniquely brings multiple lines of research in the field of deformable image registration together. We have introduced a registration approach, MOREA, that is both contour-based and image-based, uses a biomechanical model, and performs multi-objective optimization. This combination uniquely positions MOREA to tackle challenging 3D image registration problems with large deformations and content mismatches. MOREA was built on the MO-RV-GOMEA model-based evolutionary algorithm with several problem-specific extensions, such as GPU acceleration, solution repair, and object-aligned mesh initialization. Our experiments have shown promising results on 4 cervical cancer patient scans, reaching higher contour registration accuracy than two state-of-the-art approaches on 3 of the 4 patients, representing the most difficult cases. Importantly, the deformation produced by MOREA seems to be more uniformly spread across objects than the deformations produced by existing approaches, which is deemed to be more realistic.

Solutions obtained by MOREA still contain local inaccuracies, which leaves room for improvement, in particular in regions where organs interface. In fact, the results of this study expose a more fundamental problem in DIR, which is the inability of typical DIR models to capture local discontinuities and content mismatches. This motivates future research into the modeling of independent organ motion, following recent work on this topic [35, 38]. MOREA's extensible, biomechanical model could be well-positioned for expansions to capture these phenomena. Given such an expanded approach, a larger validation study, with more patients and involving domain experts, could help close the gap to clinical practice.

ACKNOWLEDGMENTS
The authors thank W. Visser-Groot and S.M. de Boer (Dept. of Radiation Oncology, LUMC, Leiden, NL) for their contributions to this study. This research is part of the research programme Open Technology Programme with project number 15586, which is financed by the Dutch Research Council (NWO), Elekta, and Xomnia. Further, the work is co-funded by the public-private partnership allowance for top consortia for knowledge and innovation (TKIs) from the Dutch Ministry of Economic Affairs.

A TECHNICAL IMPLEMENTATION DETAILS FOR THE MOREA APPROACH
In this appendix, we provide additional technical implementation details for the MOREA approach proposed in Section 4.
A.2 Modeling the image similarity
The image intensity objective of MOREA is defined as a sum of squared intensity differences at certain sample points. Modeling the partial objective value of one tetrahedron requires determining which image voxels to sample. The existing prototype [4] tries to find all voxels with center points lying inside the tetrahedron, using a line-search-inspired method. We observe, however, that this discrete association of voxels with tetrahedra leads to undesirable behavior around tetrahedral surfaces, with voxels sometimes being associated with multiple or no neighboring tetrahedra. This phenomenon can be exploited by the optimization to improve the sampled objective value while not improving, or even deteriorating, the true value.

In our approach, we therefore introduce a random-sampling-based method which samples the image space continuously, interpolating intensity values between voxel centers. This is also better suited for GPU acceleration, since there are fewer decision points at which execution needs to pause. We uniformly sample N points in each tetrahedron using its barycentric coordinate system, with N being determined by the volume of the tetrahedron. For each point, we sample 4 random real numbers r_i in [0, 1] and take -log(r_i) to obtain a uniform spread. We then normalize the coordinates by their sum, to ensure that they lie in the tetrahedron. Instead of a conventional random number generator, we use the Sobol sequence, for a more even spread of sample points. We ensure reproducibility by seeding the Sobol sequence for each tetrahedron with a seed derived from its coordinates. Therefore, the same positions are always sampled per tetrahedron configuration.
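A minimal Python sketch of this sampling scheme follows; the seed derivation from the vertex coordinates is an assumption for illustration, not our exact hashing scheme.

import zlib
import numpy as np
from scipy.stats import qmc

def sample_points_in_tetrahedron(vertices, n, seed):
    """Uniformly sample n points inside a tetrahedron (4x3 vertex array),
    using a Sobol sequence seeded per tetrahedron so that the same
    locations are drawn for a given mesh configuration."""
    sobol = qmc.Sobol(d=4, seed=seed)
    r = sobol.random(n)                      # n x 4 quasi-random numbers in [0, 1)
    r = np.clip(r, 1e-12, 1.0)               # guard against log(0)
    bary = -np.log(r)                        # exponential trick: normalized
    bary /= bary.sum(axis=1, keepdims=True)  # exponentials are uniform on the simplex
    return bary @ vertices                   # map barycentric to Cartesian positions

# Example with a hypothetical seed derived from the vertex coordinates:
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
seed = zlib.crc32(verts.round(3).tobytes())  # deterministic per configuration
pts = sample_points_in_tetrahedron(verts, n=64, seed=seed)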
A.3 Modeling the guidance error
The guidance error objective of MOREA approximates the contour match of a solution. Previous work [4] computes the extent of a contour match by considering each point in C_s and computing the distance of its corresponding version in target space to the closest point in the set C_t. This requires iterating over all points in C_s, establishing which tetrahedron they are located in, and computing the transformation at that point. We introduce a new, continuous guidance formulation that approximates point-wise distances and proved to be faster and more robust in preliminary experiments.

During the random sampling process used for the intensity objective on the source image I_s, we also consider the same locations on a distance map of C_s, which gives the distance to the closest point on the source contour (see Figure 8). The distance at that point in the map of C_s is subtracted from the distance at the corresponding point in the map of C_t, and weighted inversely by the distance to the source contour. The distances are truncated to a radius around each guidance point, measuring 2.5% of the width of the image, so that far-away movements do not influence the guidance error of a point set. We normalize the guidance error of each point set by the number of points in that set compared to the total number of guidance points, to counteract biases towards more well-defined or larger contours.

Figure 8: Two point sets of object contours in a source and target image ((a) source contour point set; (b) target contour point set), with minimal distance maps visualized using isolines. A randomly sampled point p_s is close to the source contour, but the transformed T(p_s) is farther away from the target contour. The yellow shaded area represents the truncation area beyond which sampled points are discarded.

A.4 Accurately detecting mesh folds
A function detecting constraint violations needs to have high precision (i.e., accurately identify all violations) and low latency (i.e., quickly return its answer). It should furthermore be defined continuously, so that the method can also assess the severity of violations. This is important for methods that repair violations.

Prior work on mesh-based 3D image registration [4] uses a ray-intersection method, testing if a point is inside a so-called bounding polygon. This method has proven error-prone in 3D in preliminary experiments, producing false positives and negatives. We therefore develop a new method for detecting folds in a tetrahedral mesh, based on the signed volumes of its tetrahedra [21]. Our method calculates the signed volume of each tetrahedron in the initial mesh configuration, to establish a set of reference signs. When a point is moved, we recalculate the signed volumes of all tetrahedra that this affects and compare them to the respective reference signs. The sign of at least one tetrahedron will flip if a fold has occurred. We use this phenomenon to detect mesh constraint violations and to compute the severity of each violation, using the absolute value of the violating signed volume.
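A minimal Python sketch of this sign-based check follows; the mesh data layout (an N x 3 point array and tetrahedra as 4-tuples of vertex indices) is an assumption for illustration.

import numpy as np

def signed_volume(a, b, c, d):
    """Signed volume of tetrahedron (a, b, c, d): scalar triple product / 6."""
    return np.dot(np.cross(b - a, c - a), d - a) / 6.0

def violation_severities(points, tets, reference_signs):
    """For each tetrahedron, return 0 if its signed volume still has the
    reference sign, otherwise the absolute signed volume as severity."""
    severities = np.zeros(len(tets))
    for i, (p, q, r, s) in enumerate(tets):
        vol = signed_volume(points[p], points[q], points[r], points[s])
        if np.sign(vol) != reference_signs[i]:
            severities[i] = abs(vol)
    return severities

# reference_signs is computed once, on the initial mesh configuration:
# reference_signs = np.sign([signed_volume(...) for each tetrahedron])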
A.5 Ensuring diversity in the initial population
Even with a smartly initialized mesh, the diversity of the population at generation 0 plays an important role [32]. Prior work uses one reference solution and generates random deviations by sampling around each mesh point with increasingly large variance [4]. For low-resolution meshes, this method can be effective, but for higher-resolution meshes it can lead to many constraint violations in the generated solutions (i.e., folded mesh configurations). We introduce a method for initialization noise that generates large deformations free of constraint violations, inspired by approaches using radial basis functions in other domains [47]. Our method places a number of Gaussian kernels on both source and target images and models a sense of gravity from mesh points towards these kernels. These forces are applied in incremental rounds, as long as they do not cause constraint violations. A deformation vector field generated by this strategy is depicted in Figure 9.

Figure 9: A 2D vector field produced by our radial-basis-function approach used to generate solutions. Red dots mark attractors, with their size indicating their weight.

B EXTENDED PROBLEM SPECIFICATION
In this appendix, we provide additional information on the registration problems used in this study and specify additional methods for evaluating and comparing registration quality.

B.1 Additional Problem Information
Table 4 lists the in-slice resolutions of the CT scans used. This is the physical resolution of each slice prior to our resampling step to (1.5, 1.5) mm. We also provide additional views of each medical image: for each patient, Table 5 lists two slices per source and target image. This provides a useful additional perspective, since some movements are better visible from a different angle.

B.2 Additional Evaluation Methods
We evaluate each solution with four types of methods, based on (1) surface-based registration accuracy, (2) visual inspection using 2D and 3D visualizations, (3) volume-based registration accuracy, and (4) landmark registration accuracy. Method types (1) and (2) have been described in Section 5.3. Here, we give an additional strategy for (1), and outline the additional methods (3) and (4).

B.2.1 Surface-based registration accuracy. In addition to the Hausdorff distance, we use the 95th percentile of the Hausdorff distance as another indicator in our study. This represents the distance below which 95% of all surface point distances lie. Both the Hausdorff and the Hausdorff 95th percentile metrics are computed using the pymia PyPI package.

B.2.2 Volume-based registration accuracy. Adjacent to surface accuracy, we are interested in the accuracy of individual volumes (e.g., organs, bones) represented in the images. A common metric for this is the Dice coefficient, which represents the fraction of volume overlap compared to the total volumes. Using binary masks of each annotated object in the images, we compute this metric on a voxel-by-voxel basis. We compare the binary masks corresponding to the target image against binary masks of the source image transformed using the computed deformation. With the same reasoning as for surface-based evaluation (see Section 5.3), we discard the same border margin when evaluating volume-based metrics.
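We compute these metrics with pymia; purely as a reference for the definitions, a plain numpy/scipy sketch could look as follows (this is not pymia's implementation).

import numpy as np
from scipy.spatial import cKDTree

def dice(mask_a, mask_b):
    """Dice coefficient of two boolean voxel masks: 2|A n B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * (a & b).sum() / denom if denom > 0 else 1.0

def hausdorff_95(surface_a, surface_b):
    """Symmetric 95th-percentile Hausdorff distance between two surface
    point sets (N x 3 arrays of physical coordinates, e.g. in mm)."""
    d_ab = cKDTree(surface_b).query(surface_a)[0]  # distances A -> B
    d_ba = cKDTree(surface_a).query(surface_b)[0]  # distances B -> A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))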
B.2.3 Landmark registration accuracy. A set of corresponding landmarks not provided to the algorithm during optimization can be used to locally assess the accuracy of a registration. For each pair of landmarks, we transform the source landmark using the forward transformation to target space, and compute landmark accuracy as the Euclidean distance between the transformed source landmark and its corresponding target landmark. This is a common accuracy measure in image registration studies [16, 20], but it can be a less accurate indicator of overall registration quality, since landmarks are placed on visible anatomical structures that often have limited movement, as is the case in our scans.

B.3 Comparing Multi-Object Metrics
The metrics of individual organs cannot be adequately interpreted in isolation, as organ motions are related and therefore form trade-offs. We visualize these trade-offs by plotting the scores for different organs in one parallel coordinates plot, similar to the color-coded heatmap comparison presented in [27]. These line plots help inform decisions that need to take registration quality across registration targets into account.

Table 4: In-slice resolutions for the slices of each CT scan, prior to resampling them to (1.5, 1.5) mm.
Patient | Scan | In-slice Resolution
Patient 1 | Full bladder | (0.86, 0.86) mm
Patient 1 | Empty bladder | (0.98, 0.98) mm
Patient 2 | Full bladder | (1.04, 1.04) mm
Patient 2 | Empty bladder | (1.07, 1.07) mm
Patient 3 | Full bladder | (0.98, 0.98) mm
Patient 3 | Empty bladder | (0.98, 0.98) mm
Patient 4 | Full bladder | (1.04, 1.04) mm
Patient 4 | Empty bladder | (1.00, 1.00) mm

Table 5: Slices of all registration problems, with organs contoured, showing source and target images in sagittal (side) and coronal (front-to-back) views for each patient. [Images not reproduced here.]

C CONFIGURATION OF COMPARED APPROACHES
C.1 Elastix
We use Elastix version 5.0.0. Based on parameter settings from the Elastix Model Zoo (https://elastix.lumc.nl/modelzoo/), we apply multi-resolution Elastix registration to our registration problems with a range of configurations, trying to find the optimal configuration for each problem (see Section C.1.3 for our parameter files). Inspired by an approach implementing symmetric registration in Elastix using a group-wise methodology [6], we also experiment with a symmetric variant which registers both images to a common image mid-space. For all setups, we relax convergence requirements by increasing the number of iterations per resolution to 10,000, which is significantly (5 times) larger than the computational budget given in most reference files. This is done to give Elastix sufficient opportunity to model the large deformations present. We also stabilize optimization by increasing the number of image sampling points from the frequently used 10,000 to 20,000. Although this increases the computational complexity, it should make the image intensity approximations used internally during optimization more accurate and the computed gradients more reliable.

Elastix computes the inverse transform by default, meaning a vector field defined in fixed (target) space leading to moving (source) space. To compute the forward transform, which is needed to transform annotations from moving (source) to fixed (target) space, we rerun the registrations with the given parameter files and the computed transform as initial transform, but replace the metric(s) with the DisplacementMagnitudePenalty metric. This effectively finds the forward transform of the computed inverse transform. Exporting this forward transform in isolation, by removing the initial transform pointer from the parameter file, yields the desired DVF.
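Sketched as a parameter file fragment in the style of Listing 1 (Section C.1.3), this inversion step amounts to the following; this is a sketch of the documented Elastix inversion procedure, not our verbatim configuration file.

// Re-run registration with the computed transform as initial transform,
// but optimize only the displacement magnitude of the composition:
(Metric "DisplacementMagnitudePenalty")
(HowToCombineTransforms "Compose")
// Afterwards, detach the initial transform in the resulting parameter
// file to export the forward transform in isolation:
(InitialTransformParametersFileName "NoInitialTransform")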
Elastix does not support the optimization of object contour matches, which are optimized by the MOREA approach through the guidance objective. To ensure a fair comparison, we attempt to input this information as a pair of composite mask images, to implicitly pass on contour information. Each mask image is made by combining the different binary object masks available for each scan, giving each object segmentation a different homogeneous intensity value. In runs where this feature is enabled, we precede the CT image registration run with a registration of these prepared composite masks.

C.1.1 Coarse-grained configuration experiments. First, we conduct an initial set of runs on Patient 1 to establish a suitable base configuration for this problem modality and difficulty. We explore the influence of the registration direction (unidirectional vs. symmetric) and the use of a composite mask registration step (with vs. without), assuming a regularization weight of 0.001 to give Elastix flexibility for large deformations (a large weight on the deformation magnitude can hinder large deformations).

In Figure 10, we plot the performance of Elastix using symmetric and unidirectional registration, reporting two different metrics (Dice score and 95th percentile of the Hausdorff distance). We observe that unidirectional registration generally performs similarly to or better than symmetric registration, except for the rectum and anal canal in terms of Dice score. Due to the relatively large performance gain on the bladder (the most strongly deforming organ), we choose unidirectional registration at this point. This choice is supported by visual inspection of Figure 11, which shows slightly better performance on the bladder in the coronal slice.

Figure 10: Comparison of symmetric and unidirectional registration in Elastix, for multiple runs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 11: Visual renders of deformations predicted by Elastix configurations using unidirectional and symmetric registration, without mask registration step, on (a) a sagittal slice and (b) a coronal slice.

We now turn to the use of a composite mask registration step, in an attempt to obtain larger deformations by simplifying the information input to Elastix. Figure 12 shows the same metrics, but with and without the use of such a step (while using unidirectional registration).

Figure 12: Comparison of unidirectional registration with and without a composite mask registration step in Elastix, for multiple runs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.
The results do not identify one clearly superior approach: the Dice score of the with-mask configuration is generally superior, but the Hausdorff 95th percentile is lower for the without-mask configuration. Figure 13 indicates that adding a mask step improves the modeling of the base region of the bladder, but the middle region is merely contracted sideways without the top region moving downwards, which does not result in anatomically realistic deformations. Nevertheless, we choose the version with the mask registration step over the version without, since the large deformation needed is modeled more closely with the step added.

Figure 13: Visual renders of deformations predicted by Elastix configurations with and without a composite mask registration step, using unidirectional registration, on (a) a sagittal slice and (b) a coronal slice.

C.1.2 Fine-grained configuration experiments per patient. For each patient, we try exponentially increasing regularization weights; such an exponential regularization weight sweep is also used in similar work [8]. The Dice scores for each patient are reported in Figure 14 and the 95th percentiles of the Hausdorff distance in Figure 15. Renders for each problem are provided in Figures 16-19.

We observe that the optimal regularization weight varies strongly between different registration problems. While the scans of Patient 1 (Fig. 16) are best served with a weight of 1.0 out of the tried settings, the scans of Patient 3 (Fig. 18) seem better off with a weight of 10.0.

Figure 14: Dice scores for per-patient fine-grained configuration runs in Elastix ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4). The baseline score after rigid registration is plotted in blue.

Figure 15: Hausdorff 95th percentiles for per-patient fine-grained configuration runs in Elastix ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4). The baseline score after rigid registration is plotted in blue.

Figures 16-19: Visual renders of deformations predicted by Elastix with different regularization weights, on Patients 1-4 respectively, each showing (a) a sagittal slice and (b) a coronal slice.

C.1.3 Parameter files. Below, we list the parameter files that we used for the different variants of Elastix registration.
Tokens starting\\nwith the $character denote variables that are resolved before we\\npass the file to Elastix (e.g., a random seed that we increment at\\nevery repeat).\\nListing 1: Forward transform parameters for conventional,\\nunidirectional deformation.\\n\\/\\/ ImageTypes\\n(FixedImagePixelType \\\"short\\\")\\n(FixedImageDimension 3)\\n(MovingImagePixelType \\\"short\\\")\\n(MovingImageDimension 3)\\n\\/\\/ Multi resolution\\n(Registration \\\"MultiMetricMultiResolutionRegistration\\\")\\n(HowToCombineTransforms \\\"Compose\\\")\\n(NumberOfHistogramBins 32)\\n(NumberOfResolutions 4)\\n(MaximumNumberOfIterations 10000)\\n\\/\\/ Optimizer\\n(Optimizer \\\"AdaptiveStochasticGradientDescent\\\")\\n(AutomaticParameterEstimation \\\"true\\\")\\n(UseAdaptiveStepSizes \\\"true\\\")\\n(CheckNumberOfSamples \\\"true\\\")\\n(UseDirectionCosines \\\"true\\\")\\n(RandomSeed $random_seed)\\n\\/\\/ Metric\\n(Metric \\\"AdvancedMattesMutualInformation\\\"\\n\\\"TransformBendingEnergyPenalty\\\")\\n(Metric0Weight 1.0)\\n(Metric1Weight $regularization_weight)\\n\\/\\/ Components\\n(FixedImagePyramid \\\"FixedSmoothingImagePyramid\\\")\\n(MovingImagePyramid \\\"MovingSmoothingImagePyramid\\\")\\n(Interpolator \\\"BSplineInterpolator\\\")\\n(ResampleInterpolator \\\"FinalBSplineInterpolator\\\")\\n(Resampler \\\"DefaultResampler\\\")\\n(Transform \\\"BSplineTransform\\\")\\n\\/\\/ Transform\\n(FinalGridSpacingInPhysicalUnits 2.0)\\n\\/\\/ Sampling\\n(ImageSampler \\\"RandomCoordinate\\\")\\n(NewSamplesEveryIteration \\\"true\\\")\\n(NumberOfSpatialSamples 20000)\\n\\/\\/ Interpolation and resampling\\n(BSplineInterpolationOrder 1)\\n(FinalBSplineInterpolationOrder 3)\\n(DefaultPixelValue 0)\\n\\/\\/ Output and other\\n(WriteTransformParametersEachIteration \\\"false\\\" \\\"false\\\" \\\"false\\\"\\n\\\"false\\\" \\\"false\\\")\\n(WriteTransformParametersEachResolution \\\"true\\\" \\\"true\\\" \\\"true\\\" \\\"true\\\"\\n\\\"true\\\")\\n(ShowExactMetricValue \\\"false\\\" \\\"false\\\" \\\"false\\\" \\\"false\\\" \\\"false\\\")\\n(WriteResultImageAfterEachResolution \\\"false\\\")\\n(WriteResultImage \\\"true\\\")\\n(ResultImagePixelType \\\"short\\\")\\n(ResultImageFormat \\\"nii.gz\\\")Listing 2: Forward transform parameters for symmetric de-\\nformation.\\n\\/\\/ ImageTypes\\n(FixedImagePixelType \\\"short\\\")\\n(FixedInternalImagePixelType \\\"short\\\")\\n(FixedImageDimension 4)\\n(MovingImagePixelType \\\"short\\\")\\n(MovingInternalImagePixelType \\\"short\\\")\\n(MovingImageDimension 4)\\n\\/\\/ Multi resolution\\n(Registration \\\"MultiResolutionRegistration\\\")\\n(HowToCombineTransforms \\\"Compose\\\")\\n(NumberOfHistogramBins 32)\\n(NumberOfResolutions 4)\\n(MaximumNumberOfIterations 10000)\\n(MaximumNumberOfSamplingAttempts 10)\\n\\/\\/ Optimizer\\n(Optimizer \\\"AdaptiveStochasticGradientDescent\\\")\\n(AutomaticParameterEstimation \\\"true\\\")\\n(UseAdaptiveStepSizes \\\"true\\\")\\n(CheckNumberOfSamples \\\"true\\\")\\n(UseDirectionCosines \\\"true\\\")\\n(RandomSeed \\\\$random_seed)\\n\\/\\/ Metric\\n(Metric \\\"$metric\\\")\\n(NumEigenValues 1)\\n(TemplateImage \\\"ArithmeticAverage\\\" \\\"ArithmeticAverage\\\")\\n(Combination \\\"Sum\\\" \\\"Sum\\\")\\n(SubtractMean \\\"true\\\")\\n(MovingImageDerivativeScales 1.0 1.0 1.0 0.0)\\n\\/\\/ Components\\n(FixedImagePyramid \\\"FixedSmoothingImagePyramid\\\")\\n(MovingImagePyramid \\\"MovingSmoothingImagePyramid\\\")\\n(ImagePyramidSchedule 8 8 8 0 4 4 4 0 2 2 2 0 1 1 1 0)\\n(Interpolator 
\\\"ReducedDimensionBSplineInterpolator\\\")\\n(ResampleInterpolator \\\"FinalReducedDimensionBSplineInterpolator\\\")\\n(Resampler \\\"DefaultResampler\\\")\\n(Transform \\\"BSplineStackTransform\\\")\\n\\/\\/ Transform\\n(FinalGridSpacingInPhysicalUnits 2.0)\\n\\/\\/ Sampling\\n(ImageSampler \\\"RandomCoordinate\\\")\\n(NewSamplesEveryIteration \\\"true\\\")\\n(NumberOfSpatialSamples 20000)\\n\\/\\/ Interpolation and resampling\\n(BSplineTransformSplineOrder 1)\\n(FinalBSplineInterpolationOrder 3)\\n(DefaultPixelValue 0)\\n\\/\\/ Output and other\\n(WriteTransformParametersEachIteration \\\"false\\\" \\\"false\\\" \\\"false\\\"\\n\\\"false\\\")\\n(WriteTransformParametersEachResolution \\\"true\\\" \\\"true\\\" \\\"true\\\" \\\"true\\\")\\n(ShowExactMetricValue \\\"false\\\" \\\"false\\\" \\\"false\\\" \\\"false\\\")\\n(WriteResultImageAfterEachResolution \\\"false\\\")\\n(WriteResultImage \\\"true\\\")\\n(ResultImagePixelType \\\"short\\\")\\n(ResultImageFormat \\\"nii.gz\\\") Georgios Andreadis, Peter A.N. Bosman, and Tanja Alderliesten\\nC.2 ANTs SyN\\nWe use ANTs SyN algorithm version 2.4.2. We bootstrap a regis-\\ntration command using the antsRegistrationSyN.sh script and\\ncustomize it to fit our problem (see Section C.2.3 for our run com-\\nmands). Following official recommendations5, we consider the fol-\\nlowing settings to be left tunable for this problem: (1) what region\\nradius to use for the cross correlation metric, (2) whether to use\\ncomposite masks as an additional image modality channel during\\nregistration, (3) what gradient step size to use, (4) what regular-\\nization weight to assign to local deformations between time steps,\\nand (5) what regularization weight to assign to the total deforma-\\ntion. We configure the first four parameters for Patient 1, and then\\nconfigure the fifth parameter for each patient, separately.\\nIn our setup, we relaxed convergence limits compared to guide-\\nlines to allow for longer, and hopefully more accurate registration.\\nIn terms of metrics, we do not use the point set registration metric\\nthat is mentioned in the manual, as the manual states that this\\nmetric is not currently supported in ANTs SyN.\\nWe encountered that ANTs SyN random seed does not have any\\neffect on the outcome of registration with the Cross Correlation\\n(CC) measure, even with a random sampling strategy. The current\\nversion seems fully deterministic, but without taking the random\\nseed into account, therefore always producing the same output,\\nregardless of the seed. This is problematic, since we would like to\\nget multiple outputs that expose how the registration approach\\nreacts to slightly varying inputs. To mitigate the lack of control on\\nthe determinism of the registration, we slightly perturb the sigma\\nsmoothing factors (see Listing 3) with very small (deterministically\\nrandom) deltas. \\u03943is normally distributed and capped between\\n[\\u22120.1,0.1],\\u03942between[\\u22120.05,0.05], and\\u03941between[\\u22120.01,0.01].\\nC.2.1 Coarse-grained configuration experiments. We conduct an\\ninitial set of coarse-grained configuration experiments on Patient 1\\nwith the ANTs SyN algorithm. 
C.2.1 Coarse-grained configuration experiments. We conduct an initial set of coarse-grained configuration experiments on Patient 1 with the ANTs SyN algorithm. The officially recommended settings serve as our baseline: a cross-correlation radius of 4 voxels, a gradient step size of 0.1, registration of only the image itself (no additional channels), and an update regularization weight of 3.0. For each of these settings, we experiment with different deviations from the baseline.

Cross correlation radius. First, we investigate the impact of a different cross correlation radius. Larger values should improve registration accuracy, since more context information is taken into account when computing the cross correlation of a sample. Figure 20 confirms this expectation, although it shows little impact overall. Most organs show little deviation in score, but the anal canal is registered more accurately in terms of Dice score when the radius is increased. We observe diminishing returns here: e.g., a change of radius from 7 to 8 provides only a marginal improvement. Still, we decide to use the largest setting tested (8 voxels, meaning 12 mm in the case of the clinical problems), since this setting provides the best outcome and there is no time limit on registration in our study. The visual render in Figure 21 shows the visual impact of this setting, which can be described as limited.

Figure 20: Comparison of registrations with different region radii for the ANTs cross correlation metric, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 21: Visual renders of deformations predicted by ANTs configurations with different CC radii, on (a) a sagittal slice and (b) a coronal slice.

Composite mask channel. Second, we explore the effect of including a composite mask image channel during registration. Figure 22 provides evidence that including a mask channel has added value in terms of Dice score for the registration of all organs. The difference in performance is only slightly visible in Figure 23, but the difference in metric values motivates our decision to use a mask channel in the upcoming patient-specific configuration steps.

Figure 22: Comparison of registrations with and without a composite mask channel in ANTs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 23: Visual renders of deformations predicted by ANTs configurations with and without a composite mask channel, on (a) a sagittal slice and (b) a coronal slice.

Gradient step size. Third, we examine the impact of using a different gradient step size on the registration performance of ANTs. A larger step size between time points in ANTs' registration could lead to larger deformations becoming feasible, since optimization is less likely to get stuck in local minima.
Figure 24 indicates that\\nchoosing a larger step size than the recommended value of 0.1 can\\nbe beneficial, with 1.0 providing a good trade-off for different organs.\\nLarger step sizes such as 5.0 cause the algorithm to overshoot the\\ntarget and strongly deform a number of organs, as can be seen in\\nthe contour renders (Figure 25). We choose a gradient step size of\\n1.0 for its good trade-off between performance targets.\\nUpdate regularization weight Finally, we use the deduced settings\\nfrom the previous three sweeps to test which update regularization\\n(a) Dice scores.\\n(b) 95th percentiles of the Hausdorff distance.\\nFigure 24: Comparison of ANTs registrations with different\\ngradient step sizes between time points. The baseline score\\nafter rigid registration is plotted in blue.\\n(a) Sagittal slice.\\n (b) Coronal slice.\\nFigure 25: Visual renders of deformations predicted by ANT\\nconfigurations with different gradient step sizes.\\nweight performs best. Figure 26 shows best overall performance\\nfor 4.0, in both metrics. Visually, Figure 27 indicates that weights\\n4.0 and 5.0 lead to the best registration outcomes, with little visible\\ndifference between the two. Based on visual and quantitative results,\\nwe choose an update regularization weight of 4.0 for the patient-\\nspecific configuration experiments.\\nC.2.2 Fine-grained configuration experiments per patient. We try\\nexponentially increasing total regularization weights for all prob-\\nlem instances. Figures 28 and 29 plot the Dice scores and Hausdorff\\n95th percentiles for each problem instance, and Figures 30\\u201333 show Georgios Andreadis, Peter A.N. Bosman, and Tanja Alderliesten\\n(a) Dice scores.\\n(b) 95th percentiles of the Hausdorff distance.\\nFigure 26: Comparison of ANTs registrations with differ-\\nent update regularization weights between time points. The\\nbaseline score after rigid registration is plotted in blue.\\n(a) Sagittal slice.\\n (b) Coronal slice.\\nFigure 27: Visual renders of deformations predicted by ANT\\nconfigurations with different update regularization weights.\\nrenders of the deformed contours that ANTs predicts for these in-\\nstances. We observe that regularization has a strong impact on per-\\nformance in all examined cases, but that often the (relatively) better\\noutcomes are still acquired without regularization. Figures 30\\u201332\\nshow ANTs failing to model the large deformation taking place in\\nthe bladder and its surrounding organs, regardless of the regular-\\nization. The Dice and Hausdorff metric results underscore these\\nobservations. In Figure 33, ANTs shows that it can model the blad-\\nder deformation quite closely, but it should be noted that this is\\nmorphologically also the easiest problem.C.2.3 Run commands. We list the two commands that we used for\\nregistration with ANTs. Tokens starting with the $character denote\\nvariables that are resolved before we execute these commands. 
Note that the random seed, even though given to the command, is not functional and does not change the output.

Listing 3: ANTs registration command for multivariate registration with composite masks.
$ANTSPATH/antsRegistration
--verbose 1
--random-seed $random_seed
--dimensionality 3
--float 0
--collapse-output-transforms 1
--output [ , Warped.nii.gz, InverseWarped.nii.gz ]
--interpolation Linear
--use-histogram-matching 0
--winsorize-image-intensities [ 0.005, 0.995 ]
--initial-moving-transform [ $fixed_composite_mask, $moving_composite_mask, 1 ]
--transform SyN[ $gradient_step_size, $update_regularization_weight, $total_regularization_weight ]
--metric CC[ $fixed_composite_mask, $moving_composite_mask, 1, $cross_correlation_radius ]
--metric CC[ $fixed_image, $moving_image, 1, $cross_correlation_radius ]
--convergence [ 2000x1000x500x250, 1e-6, 10 ]
--shrink-factors 8x4x2x1
--smoothing-sigmas {3+delta_3}x{2+delta_2}x{1+delta_1}x0vox

Listing 4: ANTs registration command for multivariate registration without composite masks.
$ANTSPATH/antsRegistration
--verbose 1
--random-seed $random_seed
--dimensionality 3
--float 0
--collapse-output-transforms 1
--output [ , Warped.nii.gz, InverseWarped.nii.gz ]
--interpolation Linear
--use-histogram-matching 0
--winsorize-image-intensities [ 0.005, 0.995 ]
--initial-moving-transform [ $fixed_image, $moving_image, 1 ]
--transform SyN[ $gradient_step_size, $update_regularization_weight, $total_regularization_weight ]
--metric CC[ $fixed_image, $moving_image, 1, $cross_correlation_radius ]
--convergence [ 2000x1000x500x250, 1e-6, 10 ]
--shrink-factors 8x4x2x1
--smoothing-sigmas {3+delta_3}x{2+delta_2}x{1+delta_1}x0vox

Figure 28: Dice scores for per-patient fine-grained configuration runs in ANTs ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4), with the baseline after rigid registration in blue.

Figure 29: Hausdorff 95th percentiles for per-patient fine-grained configuration runs in ANTs ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4), with the baseline after rigid registration in blue.

Figures 30-33: Visual renders of deformations predicted by ANTs with different total regularization weights, on Patients 1-4 respectively, each showing (a) a sagittal slice and (b) a coronal slice.

C.3 This Work: MOREA
We describe several coarse-grained configuration experiments that we conducted with MOREA on Patient 1.
The base parameter file we derived from these experiments can be found in Section C.3.2. We do not conduct fine-grained configuration steps, since MOREA is a multi-objective approach.

For MOREA's guidance objective, we perform an additional preprocessing step on each scan, to address the discrepancy between resolutions in different dimensions. The initial resampling step, which brings each scan to a uniform voxel resolution of 1.5 mm, leads to the between-slice dimension being over-sampled (originally, slices are 3 mm apart). Contour annotations are placed only on slices, which means that the new slices added by resampling to 1.5 mm, between original slices, do not have contour information. These slice "gaps" in the contours of objects can be exploited during optimization. We address this with an intermediate step, building a 3D model of each object across slices and generating border points from this model.

C.3.1 Coarse-grained configuration experiments.

Heterogeneous elasticity. In Section 4.1, we describe a model that enables capturing the biomechanical properties of different tissue types in the deformation magnitude objective. The core principle of this biomechanical model is to ascribe heterogeneous elasticities to different regions of image space, corresponding to the objects (e.g., organs and bones) present. In this first configuration experiment, we compare the performance of this model with that of the model used in prior work [4], which assumes homogeneous elasticity of image space. This experiment was conducted without a contour on the body; later experiments do have this contour.

The metric results in Figure 34 indicate that the heterogeneous model generally receives higher Dice scores and similar Hausdorff 95th percentiles. Figure 35 shows renderings of selected solutions with the heterogeneous and homogeneous models, which confirm this trend. We observe in both slices that heterogeneous elasticity especially improves performance on the bladder deformation, potentially due to the increased elasticity that this model assigns to the bladder.

Mesh generation. Using the biomechanical model covered by the experiments in the previous subsection, we now investigate the impact of different mesh point placement strategies. The strategy used to create meshes from these points is described in Section 4.2.1. In this experiment, we compare how a random (Sobol-sequence-based) placement compares to a contour-based strategy where points are sampled per contour, and to a contour-based strategy with special handling for the bladder's surface. Figure 36a shows the bladder being modeled best by the last strategy, with contour-based strategies in general performing better than random placement, across organs. The renders in Figure 37 indicate that a random placement method can model the general deformation, but is too coarse to accurately treat details of specific organs and parts of the bones.
Both contour-based strategies perform well, but around the bladder's surface, the strategy with special surface constraints excels.

Supplying guidance information. The multi-objective line of registration approaches, which MOREA continues, can have a third objective that captures the guidance (contour) match. In this experiment, we assess the impact of this objective on the quality of registrations.

The quantitative results in Figure 38 leave little doubt that the adoption of a guidance objective is crucial to modeling large deformations. Without it, the bladder remains largely in place, as can be seen in Figure 39. It seems that in this problem, image information alone is not sufficient to guide the optimization.

Figure 34: Comparison of the use of heterogeneous elasticities in the deformation magnitude objective of MOREA against the prior use of a homogeneous elasticity model, for multiple runs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 35: Visual renders of deformations predicted by MOREA with a heterogeneous elastic deformation model and a homogeneous model, on (a) a sagittal slice and (b) a coronal slice.

Figure 36: Comparison of different mesh point placement strategies, for multiple runs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 37: Visual renders of deformations predicted by MOREA with different mesh point placement strategies, on (a) a sagittal slice and (b) a coronal slice.

Figure 38: Comparison of MOREA registrations with and without guidance information, for multiple runs, showing (a) Dice scores and (b) 95th percentiles of the Hausdorff distance. The baseline score after rigid registration is plotted in blue.

Figure 39: Visual renders of deformations predicted by MOREA with and without guidance enabled, on (a) a sagittal slice and (b) a coronal slice.

C.3.2 Parameter file. We pass parameters to MOREA in a self-written parameter file format.
Below, we list the parameter file used as the basis for the experiments in this work.

Listing 5: Parameter file used as basis for the main MOREA experiments.
sweep_descriptor = "$experiment_descriptor"
num_runs = 5
problem_id = "$problem_id"
zip = true
problem_guidance_enabled = true
problem_guidance_selection = "-1"
cuda_compute_level = 80
cuda_gpu_id = 0
ea_num_generations = 500
ea_population_size = 700
ea_num_clusters = 10
ea_archive_size = 2000
ea_adaptive_steering_enabled = true
ea_adaptive_steering_activated_at_num_generations = 100
ea_adaptive_steering_guidance_threshold = 1.5
morea_init_noise_method = "global-gaussian"
morea_init_noise_factor = 1.0
morea_mesh_generation_method = "annotation-group-random-bladder-10"
morea_mesh_num_points = 600
morea_max_num_mesh_levels = 1
morea_num_generations_per_level = 0
morea_magnitude_metric = "biomechanical"
morea_image_metric = "squared-differences"
morea_guidance_metric = "continuous-per-group"
morea_sampling_rate = 1.0
morea_fos_type = "edges"
morea_symmetry_mode = "transform-both"
morea_dual_dynamic_mode = "dual"
morea_repair_method = "gaussian"
morea_ams_strategy = "none"
morea_num_disruption_kernels = 0
morea_disruption_frequency = 0

D FULL EXPERIMENTAL RESULTS
In this appendix, we list more extensive results of the experiments presented in Section 6. Figures 40 and 41 give full metric results for all patients, comparing the three approaches with parallel coordinate plots. Table 6 lists significance test results for the Hausdorff distances of all organs. A visual perspective is provided by Table 8, which shows an additional slice per patient, overlaid with the predicted deformations. Below, we analyze convergence behavior (Section D.1) and landmark performance (Section D.2).

D.1 Convergence Behavior
We plot the convergence behavior of each approach on Patient 1 in Figure 42, to show how each approach has converged before yielding the results presented here. Elastix and ANTs both use a multi-resolution approach. To handle the discontinuities between resolution stages, we mark resolution switches in those plots with red vertical lines. Our configuration of Elastix also has a mask registration step, meaning that there are 8 segments in total (4 resolutions of mask registration and 4 resolutions of image registration). The scaling of the value to be optimized is not always normalized across resolutions, which explains the jumps in value ranges between resolutions. Note that ANTs uses a separate "convergence value" to determine when it has converged, plotted in Figure 42d. For MOREA, we plot the achieved hypervolume and the best guidance objective value achieved. The sudden decrease in hypervolume at generation 100 is related to the adaptive steering strategy used, which purges any solutions with unfavorable guidance objective values from the elitist archive.

D.2 Landmark Accuracy
We list the landmark registration accuracy on all 4 patients in Table 7. We aggregate all errors of all landmarks across repeats for one patient and approach, and compute the mean and standard deviation on this sample set.
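As a sketch of this aggregation (array and variable names are illustrative, not those of our evaluation code):

import numpy as np

def landmark_errors(transformed_source, target):
    """Euclidean distance between each forward-transformed source
    landmark and its corresponding target landmark (both N x 3, in mm)."""
    return np.linalg.norm(transformed_source - target, axis=1)

# 'pairs' is an assumed list of (transformed_source, target) arrays,
# one entry per repeated run of one approach on one patient:
errors = np.concatenate([landmark_errors(t, g) for t, g in pairs])
print(f"{errors.mean():.1f} ± {errors.std():.1f} mm")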
Since these landmarks are generally placed on visible, anatomically stable locations, and typically not in strongly deforming regions, this accuracy should be interpreted as a measure of how well the method preserves certain anatomical structures. It is therefore less suitable as a measure of how well the registration problem is "solved", for which visual (DVF and rendered) inspection remains key. For some landmarks, the precise location can be ambiguously defined or less visible on certain patients. These landmarks are, however, still accurately placeable between scans, by using the visual context they are situated in and taking consistent placement decisions for each pair of scans.

Generally, we observe that Elastix performs worse than ANTs and MOREA, and that MOREA always improves or roughly maintains the baseline landmark registration error. For the aforementioned reasons, we do not see a consistent correlation between the actual registration performance on large deforming objects and the target registration error values.

Table 6: p-values of pair-wise comparisons of Hausdorff distances for all contours between approaches, computed by the two-sided Mann-Whitney U test. A plus (+) indicates a better mean with MOREA, a minus (-) the opposite. Significant results are highlighted according to an α of 0.025.
Problem | Contour | MOREA / Elastix | MOREA / ANTs
Patient 1 | bladder | 0.011 (+) | 0.007 (+)
Patient 1 | bones | 0.009 (+) | 0.006 (+)
Patient 1 | rectum | 0.007 (+) | 0.007 (+)
Patient 1 | anal canal | 0.007 (+) | 0.007 (+)
Patient 1 | sigmoid | 0.007 (+) | 0.007 (+)
Patient 1 | bowel | 0.010 (+) | 0.011 (+)
Patient 1 | body | 0.006 (+) | 0.006 (-)
Patient 2 | bladder | 0.007 (+) | 0.007 (+)
Patient 2 | bones | 0.007 (+) | 0.007 (+)
Patient 2 | rectum | 0.118 (+) | 0.007 (-)
Patient 2 | anal canal | 0.123 (-) | 0.180 (-)
Patient 2 | sigmoid | 0.007 (+) | 0.007 (-)
Patient 2 | bowel | 0.401 (+) | 0.007 (+)
Patient 2 | body | 0.655 (+) | 1.000 (-)
Patient 3 | bladder | 0.012 (+) | 0.007 (+)
Patient 3 | bones | 0.007 (+) | 0.007 (+)
Patient 3 | rectum | 0.290 (+) | 0.007 (-)
Patient 3 | anal canal | 0.118 (-) | 0.007 (+)
Patient 3 | sigmoid | 0.007 (+) | 0.007 (+)
Patient 3 | bowel | 0.007 (+) | 0.056 (+)
Patient 3 | body | 0.007 (+) | 0.118 (+)
Patient 4 | bladder | 0.007 (+) | 0.195 (-)
Patient 4 | bones | 0.007 (-) | 0.007 (-)
Patient 4 | rectum | 0.010 (-) | 0.007 (-)
Patient 4 | anal canal | 0.606 (+) | 0.007 (-)
Patient 4 | sigmoid | 0.009 (+) | 0.118 (+)
Patient 4 | bowel | 0.119 (+) | 0.119 (-)
Patient 4 | body | 0.020 (-) | 0.020 (-)
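The tests in Table 6 can be reproduced along these lines; the per-run distances shown are hypothetical, for illustration only.

import numpy as np
from scipy.stats import mannwhitneyu

def compare_hausdorff(morea_hd, other_hd, alpha=0.025):
    """Two-sided Mann-Whitney U test on the per-run Hausdorff distances
    of one contour. Returns the p-value, significance at the given
    alpha, and '+' if MOREA has the better (lower) mean distance."""
    p = mannwhitneyu(morea_hd, other_hd, alternative="two-sided").pvalue
    sign = "+" if np.mean(morea_hd) < np.mean(other_hd) else "-"
    return p, p < alpha, sign

# Hypothetical per-run bladder distances (mm) for MOREA vs. Elastix:
p, significant, sign = compare_hausdorff([8.1, 7.9, 8.4, 8.0, 8.2],
                                         [9.5, 9.1, 9.8, 9.3, 9.6])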
Table 7: Target registration errors (mean and standard deviation) for the shown registrations of each approach on each patient, across repeats. All errors are specified in mm.
Problem | Baseline | Elastix | ANTs | MOREA
Patient 1 | 4.8 ± 3.1 | 5.6 ± 2.8 | 4.2 ± 2.0 | 4.8 ± 2.0
Patient 2 | 7.5 ± 4.0 | 11.8 ± 7.3 | 7.7 ± 4.3 | 7.8 ± 3.8
Patient 3 | 9.5 ± 6.7 | 6.4 ± 2.0 | 7.7 ± 2.6 | 6.5 ± 1.9
Patient 4 | 14.1 ± 9.5 | 8.1 ± 4.3 | 6.3 ± 3.4 | 6.8 ± 4.0

Table 8: A selection of the best predicted deformations of the compared registration approaches, represented by deformed contours compared to the target contours and image, shown as transformed sagittal and coronal slices for each patient. [Images not reproduced here.]

Figure 40: Dice scores for all approaches on all patients ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4). The baseline score after rigid registration is plotted in blue.

Figure 41: Hausdorff distances for all approaches on all patients ((a) Patient 1, (b) Patient 2, (c) Patient 3, (d) Patient 4). The baseline score after rigid registration is plotted in blue.

Figure 42: Convergence plots for all 3 approaches on one run of Patient 1: (a) Elastix, objective value at each iteration; (b) Elastix, best objective value achieved at each point; (c) ANTs, objective value at each iteration; (d) ANTs, convergence measure at each iteration; (e) MOREA, hypervolume at each generation; (f) MOREA, best guidance objective value found at each generation. Vertical red lines indicate a change of resolution. For ANTs, this leads to 4 optimization segments. For Elastix, we first run a mask registration step (with 4 segments) and then an image registration step (with again 4 segments).

REFERENCES
[1] T. Alderliesten, P. A. N. Bosman, and A. Bel. 2015. Getting the most out of additional guidance information in deformable image registration by leveraging multi-objective optimization. In SPIE Medical Imaging 2015: Image Processing. 94131R.
[2] T. Alderliesten, J. J. Sonke, and P. A. N. Bosman. 2012. Multi-objective optimization for deformable image registration: proof of concept. In SPIE Medical Imaging 2012: Image Processing, Vol. 8314. 831420.
[3] T. Alderliesten, J. J. Sonke, and P. A. N. Bosman. 2013. Deformable image registration by multi-objective optimization using a dual-dynamic transformation model to account for large anatomical differences. In SPIE Medical Imaging 2013: Image Processing, Vol. 8669. 866910.
[4] G. Andreadis, P. A. N. Bosman, and T. Alderliesten. 2022. Multi-objective dual simplex-mesh based deformable image registration for 3D medical images - proof of concept. In SPIE Medical Imaging 2022: Image Processing. 744-750.
[5] B. B. Avants, C. L. Epstein, M. Grossman, and J. C. Gee. 2008. Symmetric diffeomorphic image registration with cross-correlation: Evaluating automated labeling of elderly and neurodegenerative brain. Medical Image Analysis 12, 1 (2008), 26-41.
[6] F. Bartel, M. Visser, M. de Ruiter, J. Belderbos, F. Barkhof, H. Vrenken, J. C. de Munck, and M. van Herk. 2019. Non-linear registration improves statistical power to detect hippocampal atrophy in aging and dementia. NeuroImage: Clinical 23 (2019), 101902.
[7] D. L. J. Barten, B. R. Pieters, A. Bouter, M. C. van der Meer, S. C. Maree, K. A. Hinnen, H. Westerveld, P. A. N. Bosman, T. Alderliesten, N. van Wieringen, and A. Bel. 2023.
Towards artificial intelligence-based automated treatment planning in clinical practice: A prospective study of the first clinical experiences in high-dose-rate prostate brachytherapy. Brachytherapy In Press (2023).
[8] L. Bondar, M. S. Hoogeman, E. M. Vásquez Osorio, and B. J. M. Heijmen. 2010. A symmetric nonrigid registration method to handle large organ deformations in cervical cancer patients. Medical Physics 37, 7 (2010), 3760-3772.
[9] P. A. N. Bosman and T. Alderliesten. 2016. Smart grid initialization reduces the computational complexity of multi-objective image registration based on a dual-dynamic transformation model to account for large anatomical differences. In SPIE Medical Imaging 2016: Image Processing. 978447.
[10] A. Bouter, T. Alderliesten, and P. A. N. Bosman. 2017. A novel model-based evolutionary algorithm for multi-objective deformable image registration with content mismatch and large deformations: benchmarking efficiency and quality. In SPIE Medical Imaging 2017: Image Processing, Vol. 10133. 1013312.
[11] A. Bouter, T. Alderliesten, and P. A. N. Bosman. 2021. Achieving highly scalable evolutionary real-valued optimization by exploiting partial evaluations. Evolutionary Computation 29, 1 (2021), 129-155.
[12] A. Bouter, T. Alderliesten, and P. A. N. Bosman. 2021. GPU-Accelerated Parallel Gene-pool Optimal Mixing applied to Multi-Objective Deformable Image Registration. In IEEE Congress on Evolutionary Computation. 2539-2548.
[13] A. Bouter, T. Alderliesten, B. R. Pieters, A. Bel, Y. Niatsetski, and P. A. N. Bosman. 2019. GPU-accelerated bi-objective treatment planning for prostate high-dose-rate brachytherapy. Medical Physics 46, 9 (2019), 3776-3787.
[14] A. Bouter, N. H. Luong, C. Witteveen, T. Alderliesten, and P. A. N. Bosman. 2017. The multi-objective real-valued gene-pool optimal mixing evolutionary algorithm. In Proceedings of the 2017 Genetic and Evolutionary Computation Conference. 537-544.
[15] D. Brélaz. 1979. New methods to color the vertices of a graph. Commun. ACM 22, 4 (1979), 251-256.
[16] K. K. Brock, S. Mutic, T. R. McNutt, H. Li, and M. L. Kessler. 2017. Use of image registration and fusion algorithms and techniques in radiotherapy: Report of the AAPM Radiation Therapy Committee Task Group No. 132. Medical Physics 44, 7 (2017), e43-e76.
[17] K. K. Brock, M. B. Sharpe, L. A. Dawson, S. M. Kim, and D. A. Jaffray. 2005. Accuracy of finite element model-based multi-organ deformable image registration. Medical Physics 32, 6 (2005), 1647-1659.
[18] H. Chui and A. Rangarajan. 2000. A new algorithm for non-rigid point matching. In IEEE Conference on Computer Vision and Pattern Recognition. 44-51.
[19] K. Deb. 2001. Multi-Objective Optimization using Evolutionary Algorithms. Wiley.
[20] B. Eiben, V. Vavourakis, J. H. Hipwell, S. Kabus, T. Buelow, C. Lorenz, T. Mertzanidou, S. Reis, N. R. Williams, M. Keshtgar, and D. J. Hawkes. 2016. Symmetric Biomechanically Guided Prone-to-Supine Breast Image Registration. Annals of Biomedical Engineering 44, 1 (2016), 154-173.
[21] C. Ericson. 2004. Real-time collision detection (1st ed.). CRC Press.
[22] M. Faisal Beg, M. I. Miller, A.
Trouvé, and L. Younes. 2005. Computing Large Deformation Metric Mappings via Geodesic Flows of Diffeomorphisms. International Journal of Computer Vision 61, 2 (2005), 139-157.
[23] B. Fischer and J. Modersitzki. 2008. Ill-posed medicine - An introduction to image registration. Inverse Problems 24, 3 (2008), 1-16.
[24] J. Gascon, J. M. Espadero, A. G. Perez, R. Torres, and M. A. Otaduy. 2013. Fast deformation of volume data using tetrahedral mesh rasterization. In Proceedings - SCA 2013: 12th ACM SIGGRAPH / Eurographics Symposium on Computer Animation. 181-186.
[25] S. Hang. 2015. TetGen, a Delaunay-Based Quality Tetrahedral Mesh Generator. ACM Trans. Math. Software 41, 2 (2015), 1-36.
[26] F. Khalifa, G. M. Beache, G. Gimel'farb, J. S. Suri, and A. S. El-Baz. 2011. State-of-the-Art Medical Image Registration Methodologies: A Survey. In Multi Modality State-of-the-Art Medical Image Segmentation and Registration Methodologies. Springer, 235-280.
[27] A. Klein, J. Andersson, B. A. Ardekani, J. Ashburner, B. Avants, M. C. Chiang, G. E. Christensen, D. L. Collins, J. Gee, P. Hellier, J. H. Song, M. Jenkinson, C. Lepage, D. Rueckert, P. Thompson, T. Vercauteren, R. P. Woods, J. J. Mann, and R. V. Parsey. 2009. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. NeuroImage 46, 3 (2009), 786-802.
[28] S. Klein, M. Staring, K. Murphy, M. A. Viergever, and J. P. W. Pluim. 2010. Elastix: A toolbox for intensity-based medical image registration. IEEE Transactions on Medical Imaging 29, 1 (2010), 196-205.
[29] M. Li, E. Castillo, X. L. Zheng, H. Y. Luo, R. Castillo, Y. Wu, and T. Guerrero. 2013. Modeling lung deformation: A combined deformable image registration method with spatially varying Young's modulus estimates. Medical Physics 40, 8 (2013), 1-10.
[30] G. Loi, M. Fusella, E. Lanzi, E. Cagni, C. Garibaldi, G. Iacoviello, F. Lucio, E. Menghi, R. Miceli, L. C. Orlandini, A. Roggio, F. Rosica, M. Stasi, L. Strigari, S. Strolin, and C. Fiandra. 2018. Performance of commercially available deformable image registration platforms for contour propagation using patient-based computational phantoms: A multi-institutional study. Medical Physics 45, 2 (2018), 748-757.
[31] H. N. Luong and P. A. N. Bosman. 2012. Elitist Archiving for Multi-Objective Evolutionary Algorithms: To Adapt or Not to Adapt. In Proceedings of the 12th Conference on Parallel Problem Solving from Nature. 72-81.
[32] H. Maaranen, K. Miettinen, and A. Penttinen. 2007. On initial populations of a genetic algorithm for continuous optimization problems. Journal of Global Optimization 37, 3 (2007), 405-436.
[33] R. Mohammadi, S. R. Mahdavi, R. Jaberi, Z. Siavashpour, L. Janani, A. S. Meigooni, and R. Reiazi. 2019. Evaluation of deformable image registration algorithm for determination of accumulated dose for brachytherapy of cervical cancer patients. Journal of Contemporary Brachytherapy 11, 5 (2019), 469-478.
[34] S. Nithiananthan, S. Schafer, D. J. Mirota, J. W. Stayman, W. Zbijewski, D. D. Reh, G. L. Gallia, and J. H. Siewerdsen. 2012.
Extra-dimensional Demons: A method for\\nincorporating missing tissue in deformable image registration. Medical Physics\\n39, 9 (2012), 5718\\u20135731.\\n[35] D. F. Pace, M. Niethammer, and S. R. Aylward. 2012. Sliding Geometries in De-\\nformable Image Registration. In International MICCAI Workshop on Computational\\nand Clinical Challenges in Abdominal Imaging . 141\\u2013148.\\n[36] K. Pirpinia, P. A. N. Bosman, C. E. Loo, G. Winter-Warnars, N. N. Y. Janssen, A. N.\\nScholten, J. J. Sonke, M. van Herk, and T. Alderliesten. 2017. The feasibility of\\nmanual parameter tuning for deformable breast MR image registration from a\\nmulti-objective optimization perspective. Physics in Medicine and Biology 62, 14\\n(2017), 5723\\u20135743.\\n[37] B. Rigaud, A. Klopp, S. Vedam, A. Venkatesan, N. Taku, A. Simon, P. Haigron, R.\\nDe Crevoisier, K. K. Brock, and G. Cazoulat. 2019. Deformable image registration\\nfor dose mapping between external beam radiotherapy and brachytherapy images\\nof cervical cancer. Physics in Medicine and Biology 64, 11 (2019), 115023.\\n[38] L. Risser, F. X. Vialard, H. Y. Baluwala, and J. A. Schnabel. 2013. Piecewise-\\ndiffeomorphic image registration: Application to the motion estimation between\\n3D CT lung images with sliding conditions. Medical Image Analysis 17, 2 (2013),\\n182\\u2013193.\\n[39] B. Schaly, J. A. Kempe, G. S. Bauman, J. J. Battista, and J. van Dyk. 2004. Tracking\\nthe dose distribution in radiation therapy by accounting for variable anatomy.\\nPhysics in Medicine and Biology 49, 5 (2004), 791\\u2013805.\\n[40] A. Sotiras and N. Paragios. 2012. Deformable Image Registration: A Survey . Tech-\\nnical Report. Center for Visual Computing, Department of Applied Mathematics,\\nEcole Centrale de Paris, Equipe GALEN, INRIA Saclay.\\n[41] D. Thierens and P. A. N. Bosman. 2011. Optimal mixing evolutionary algorithms.\\nInProceedings of the 2011 Genetic and Evolutionary Computation Conference .\\n617\\u2013624.\\n[42] J.-P. Thirion. 1998. Image matching as a diffusion process: an analogy with\\nMaxwell\\u2019s Demons. Medical Image Analysis 2, 3 (1998), 243\\u2013260.\\n[43] M. Unser, A. Aldroubi, and C. R. Gerfen. 1993. Multiresolution image registra-\\ntion procedure using spline pyramids. In SPIE Mathematical Imaging: Wavelet\\nApplications in Signal and Image Processing , Vol. 2034. 160\\u2013170.\\n[44] E. M. V\\u00e1squez Osorio, M. S. Hoogeman, L. Bondar, P. C. Levendag, and B. J. M.\\nHeijmen. 2009. A novel flexible framework with automatic feature correspon-\\ndence optimization for nonrigid registration in radiotherapy. Medical Physics 36,\\n7 (2009), 2848\\u20132859.\\n[45] O. Weistrand and S. Svensson. 2015. The ANACONDA algorithm for deformable\\nimage registration in radiotherapy. Medical Physics 42, 1 (2015), 40\\u201353.\\n[46] S. Wognum, L. Bondar, A. G. Zolnay, X. Chai, M. C. C. M. Hulshof, M. S. Hoogeman,\\nand A. Bel. 2013. Control over structure-specific flexibility improves anatomical\\naccuracy for point-based deformable registration in bladder cancer radiotherapy. MOREA: a GPU-accelerated Evolutionary Algorithm for Multi-Objective Deformable Registration of 3D Medical Images\\nMedical Physics 40, 2 (2013), 1\\u201315.\\n[47] W. Zhang, Y. Ma, J. Zheng, and W. J. Allen. 2020. Tetrahedral mesh deformation\\nwith positional constraints. Computer Aided Geometric Design 81 (2020), 1\\u201316.[48] H. Zhong, J. Kim, H. Li, T. Nurushev, B. Movsas, and I. J. Chetty. 2012. 
What Performance Indicators to Use for Self-Adaptation in Multi-Objective Evolutionary Algorithms

Furong Ye, LIACS, Leiden University, Leiden, Netherlands, f.ye@liacs.leidenuniv.nl
Frank Neumann, The University of Adelaide, Adelaide, Australia, frank.neumann@adelaide.edu.au
Jacob de Nobel, LIACS, Leiden University, Leiden, Netherlands, j.p.de.nobel@liacs.leidenuniv.nl
Aneta Neumann, The University of Adelaide, Adelaide, Australia, aneta.neumann@adelaide.edu.au
Thomas Bäck, LIACS, Leiden University, Leiden, Netherlands, T.H.W.Baeck@liacs.leidenuniv.nl

ABSTRACT
Parameter control has succeeded in accelerating the convergence process of evolutionary algorithms. Empirical and theoretical studies of classic pseudo-Boolean problems, such as OneMax, LeadingOnes, etc., have explained the impact of parameters and helped us understand the behavior of algorithms for single-objective optimization. In this work, transferring these techniques from single-objective optimization, we perform an extensive experimental investigation into the behavior of self-adaptive GSEMO variants.

We test three self-adaptive mutation techniques designed for single-objective optimization on the OneMinMax, COCZ, LOTZ, and OneJumpZeroJump problems. While adopting these techniques for the GSEMO algorithm, we consider different performance metrics based on the current non-dominated solution set. These metrics are used to guide the self-adaptation process.

Our results indicate the benefits of self-adaptation for the tested benchmark problems. We reveal that the choice of metric significantly affects the performance of the self-adaptive algorithms. Self-adaptation methods based on the progress in one objective can perform better than methods using multi-objective metrics such as hypervolume, inverted generational distance, and the number of obtained Pareto solutions. Moreover, we find that the self-adaptive methods benefit from large population sizes for OneMinMax and COCZ.

KEYWORDS
Multi-objective evolutionary algorithm, self-adaptation, mutation, performance metric
1 INTRODUCTION
Evolutionary algorithms (EAs) are capable of finding global optima by creating offspring solutions via global search variators. Apart from the guarantee of reaching a global optimum, a major concern in designing algorithms is the convergence rate, as we cannot afford infinite running time in real-world applications. The study of parameter control is essential here, in order to understand the relation between variator parameters and the dynamic behavior of algorithms. For example, in the context of single-objective pseudo-Boolean optimization f : {0,1}^n → R, the mutation operator of an EA creates an offspring y by flipping 0 < ℓ ≤ n bits of the parent solution x, where ℓ can be sampled from different distributions depending on the design of the mutation operator. Previous studies [2, 3, 5, 10] have demonstrated the impact of the choice of ℓ on OneMax, LeadingOnes, and other classic problems, and self-adaptive methods have been proven to achieve higher convergence rates than static settings.

While there have been detailed studies for single-objective problems, detailed experimental investigations that complement existing theoretical studies for multi-objective benchmark problems are missing in the literature. In the area of runtime analysis, the most basic multi-objective benchmark problems that have been studied in a rigorous way are LOTZ, COCZ, and OneMinMax. Furthermore, OneJumpZeroJump has recently been introduced as a multi-modal multi-objective benchmark problem. Investigations in the area of runtime analysis were started by [15]. In that work, the authors studied a variant of the simple evolutionary multi-objective optimizer (SEMO), which produces an offspring by flipping a single bit and always maintains a set of non-dominated trade-offs according to the given objective functions. Runtime bounds of Θ(n³) for LOTZ and O(n² log n) for COCZ have been shown in [15]. OneMinMax has been investigated in [4, 13, 16], and it has been shown that the global simple evolutionary multi-objective optimizer (GSEMO), which differs from SEMO by applying standard bit mutation instead of single-bit flips, computes the whole Pareto front of OneMinMax in expected time O(n² log n). Furthermore, hypervolume-based evolutionary algorithms have been studied for OneMinMax in [4, 16], and the computation of structurally diverse populations has been investigated in [4]. In [17], different parent selection methods have been analyzed for OneMinMax and LOTZ, and their benefit has been shown when incorporated into GSEMO. Recently, OneMinMax has also been used to study the runtime behavior of the NSGA-II algorithm [22]. All the bounds obtained are asymptotic ones, i.e., they are missing the leading constants.
Therefore, it is interesting to carry out more detailed experimental investigations of simple evolutionary multi-objective algorithms from the area of runtime analysis, alongside some variations such as different mutation operators and the use of larger offspring population sizes. We carry out such experimental investigations in this work, with the hope of providing complementary insights that enable further progress in the theoretical understanding of evolutionary multi-objective optimization.

In this work, we analyze several variation operators that flip ℓ randomly selected distinct bits and study the impact of ℓ for multi-objective evolutionary algorithms (MOEAs). More precisely, we equip GSEMO with several self-adaptive mutation operators and investigate its performance on OneMinMax, LOTZ, COCZ, and OneJumpZeroJump. Based on the experimental results, we study the impact of the number of offspring (i.e., the population size λ) for the self-adaptive GSEMO variants. Moreover, we investigate the impact of the metrics used to guide self-adaptation. For single-objective optimization problems, each solution maps to a fitness value f, which self-adaptive EAs can use directly as a performance indicator to guide dynamic parameter control. However, for multi-objective optimization problems f : {0,1}^n → R^m, where m is the number of objectives (note that we consider only bi-objective problems in this work), each solution maps to a vector of objective values, which renders the choice of an appropriate performance indicator to guide adaptation less obvious. Since MOEAs usually maintain a set of non-dominated solutions, we can use a metric e(x) : x → R to calculate the relative improvement of a solution x with respect to the current non-dominated set, which allows the single-objective techniques to be applied to MOEAs.

Overall, this paper performs an extensive experimental investigation of GSEMO variants, addressing whether self-adaptive mutation can benefit MOEAs. In practice, this involves translating the self-adaptive mutation mechanisms of single-objective optimization algorithms to MOEAs. Empirical analyses illustrate the behavior of self-adaptive mutation for MOEAs, focusing on two principal factors: population size and performance indicator. We hope the presented results can inspire future theoretical understanding of self-adaptive MOEAs. Moreover, such fundamental analysis on classic problems can provide guidelines for designing MOEAs for practical applications.

2 PRELIMINARIES
2.1 Benchmark Problems
As discussed in the Introduction, the motivation of this work is to perform empirical analyses and investigate the behavior of MOEAs. Therefore, four classic multi-objective optimization problems that have attracted many runtime analysis studies are selected for benchmarking.
The problem definitions are provided in the following.

2.1.1 OneMinMax. OneMinMax, introduced in [13], is a bi-objective pseudo-Boolean optimization problem that generalizes the classical single-objective OneMax problem to the bi-objective case:

OneMinMax : {0,1}^n → N², x ↦ ( Σ_{i=1}^{n} x_i , n − Σ_{i=1}^{n} x_i )    (1)

The problem maximizes the numbers of both one and zero bits, and the objective of maximizing the number of one bits is identical to the classic pseudo-Boolean optimization problem OneMax : {0,1}^n → N, x ↦ Σ_{i=1}^{n} x_i. For the OneMinMax problem, all solutions are located on the optimal Pareto front, and the goal is to obtain the complete Pareto front {(i, n−i) | i ∈ [0..n]}.

2.1.2 COCZ. COCZ [15] is another extension of the OneMax problem, called Count Ones Count Zeroes. Its definition is given below:

COCZ : {0,1}^n → N², x ↦ ( Σ_{i=1}^{n} x_i , Σ_{i=1}^{n/2} x_i + Σ_{i=n/2+1}^{n} (1 − x_i) )    (2)

where n = 2k, k ∈ N. The Pareto front of COCZ is the set P* consisting of the solutions with n/2 ones in the first half of the bit string. The size of P* is n/2. Differently from OneMinMax, for which all possible solutions are located on the Pareto front, the search space of COCZ contains many solutions that are strictly dominated by others.

2.1.3 LOTZ. The Leading Ones, Trailing Zeroes (LOTZ) problem, introduced in [15], maximizes the number of leading one bits and the number of trailing zero bits simultaneously. The problem can be defined as

LOTZ : {0,1}^n → N², x ↦ ( Σ_{i=1}^{n} Π_{j=1}^{i} x_j , Σ_{i=1}^{n} Π_{j=i}^{n} (1 − x_j) )    (3)

The Pareto front of LOTZ is {(i, n−i) | i ∈ [0..n]}, given by the set of n+1 solutions x = 1^i 0^{n−i}, i ∈ [0..n].
One property of LOTZ is that, for every non-dominated solution, each neighbor at Hamming distance 1 is either strictly better, strictly worse, or incomparable to it.

2.1.4 OneJumpZeroJump. The OneJumpZeroJump problem, originally proposed in [7], has objectives that are isomorphic to the classic single-objective problem Jump_k : {0,1}^n → N, where Jump_k(x) = n + |x|_1 if |x|_1 ∈ [0..n−k] ∪ {n}, and Jump_k(x) = n − |x|_1 otherwise; here |x|_1 = Σ_{i=1}^{n} x_i and k ≥ 2 is a given parameter. OneJumpZeroJump is defined as

OneJumpZeroJump_k : {0,1}^n → N², x ↦ (g_1, g_2), with
g_1 = k + |x|_1 if |x|_1 ≤ n − k or x = 1^n, and g_1 = n − |x|_1 otherwise;
g_2 = k + |x|_0 if |x|_0 ≤ n − k or x = 0^n, and g_2 = n − |x|_0 otherwise.    (4)

The Pareto front of OneJumpZeroJump is {(a, 2k + n − a) | a ∈ [2k, n] ∪ {k, n+k}}. We set k = 2 for the experiments in this work. OneJumpZeroJump is a multimodal problem with a valley of low fitness values in the search space, and the size of this valley depends on k.
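For concreteness, the four problems can be written down in a few lines of Python. The following is a minimal sketch assuming bit strings represented as numpy arrays of 0/1 values; the function names are ours and not taken from the paper's code base [20].

import numpy as np

def one_min_max(x):
    # Eq. (1): the numbers of one bits and of zero bits.
    ones = int(np.sum(x))
    return (ones, len(x) - ones)

def cocz(x):
    # Eq. (2): Count Ones Count Zeroes; assumes an even length n.
    n = len(x)
    ones = int(np.sum(x))
    second = int(np.sum(x[: n // 2])) + int(np.sum(1 - x[n // 2:]))
    return (ones, second)

def lotz(x):
    # Eq. (3): the numbers of leading one bits and of trailing zero bits.
    n = len(x)
    lo = 0
    while lo < n and x[lo] == 1:
        lo += 1
    tz = 0
    while tz < n and x[n - 1 - tz] == 0:
        tz += 1
    return (lo, tz)

def one_jump_zero_jump(x, k=2):
    # Eq. (4): each objective is a Jump-type function of |x|_1 resp. |x|_0.
    n = len(x)
    ones = int(np.sum(x))
    zeros = n - ones
    g1 = k + ones if (ones <= n - k or ones == n) else n - ones
    g2 = k + zeros if (zeros <= n - k or zeros == n) else n - zeros
    return (g1, g2)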
2.2 The GSEMO Algorithm
The simple evolutionary multi-objective optimizer (SEMO) [15] uses a population P of solutions that do not dominate each other. The population is initialized with a random solution. Then, the algorithm creates an offspring y by selecting a solution x from P uniformly at random (u.a.r.) and flipping one bit of x. If no solution in P dominates y, all solutions in P dominated by y are removed and y is added to P. The algorithm runs until a termination condition is met, e.g., the budget is exhausted or the full Pareto front has been found. Global SEMO (GSEMO) [12] differs from SEMO by applying standard bit mutation instead of flipping exactly one bit when creating offspring. As shown in Algorithm 1, GSEMO applies standard bit mutation conditioned on flipping at least one bit each time. More precisely, ℓ is sampled from a conditional binomial distribution Bin_{>0}(n, p), following the suggestion in [18], and the offspring is created by flip_ℓ(x), which flips ℓ bits of x chosen u.a.r. For the comparison with the self-adaptation methods, we denote by static GSEMO the variant using p = 1/n.

Algorithm 1: Global SEMO
1 Input: mutation rate p = 1/n;
2 Initialization: Sample x ∈ {0,1}^n uniformly at random (u.a.r.), and evaluate f(x);
3 P ← {x};
4 Optimization: while not stop condition do
5   Select x ∈ P u.a.r.;
6   Sample ℓ ∼ Bin_{>0}(n, p), create y ← flip_ℓ(x), and evaluate f(y);
7   if there is no z ∈ P such that y ⪯ z then
8     P ← {z ∈ P | ¬(z ⪯ y)} ∪ {y}
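To make the pseudocode concrete, the following is a minimal Python sketch of Algorithm 1 under the stated conventions (maximization, weak dominance), assuming a bi-objective function f that returns a tuple, e.g., one of the benchmark sketches above. The helper names sample_ell, flip, and weakly_dominates are ours.

import numpy as np

rng = np.random.default_rng()

def sample_ell(n, p):
    # Draw ell from the conditional binomial distribution Bin_{>0}(n, p).
    ell = 0
    while ell == 0:
        ell = rng.binomial(n, p)
    return ell

def flip(x, ell):
    # flip_ell(x): flip ell distinct positions of x chosen u.a.r.
    y = x.copy()
    idx = rng.choice(len(x), size=ell, replace=False)
    y[idx] = 1 - y[idx]
    return y

def weakly_dominates(a, b):
    # a weakly dominates b iff a is at least as good in every objective.
    return all(ai >= bi for ai, bi in zip(a, b))

def gsemo(f, n, budget, p=None):
    p = 1.0 / n if p is None else p
    x = rng.integers(0, 2, n)
    pop = [(x, f(x))]  # the non-dominated archive P
    for _ in range(budget):
        parent = pop[rng.integers(len(pop))][0]
        y = flip(parent, sample_ell(n, p))
        fy = f(y)
        # Keep y only if no archive member weakly dominates it (line 7),
        # and then drop every member that y weakly dominates (line 8).
        if not any(weakly_dominates(fz, fy) for _, fz in pop):
            pop = [(z, fz) for z, fz in pop
                   if not weakly_dominates(fy, fz)] + [(y, fy)]
    return pop

For example, gsemo(one_min_max, 100, 100_000) returns an approximation of the Pareto set of the 100-dimensional OneMinMax.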
3 MULTI-OBJECTIVE SELF-ADAPTATION
3.1 The Self-Adaptive GSEMO Variants
As discussed in previous studies [2, 3, 5, 10], the optimal parameter settings of EAs can change during the optimization process. Many empirical and theoretical results [3, 5, 14, 21] have shown that EAs with self-adaptive mutation can outperform standard bit mutation with a static rate for classical single-objective problems such as OneMax, LeadingOnes, and Jump. We expect that GSEMO can also benefit from self-adaptation of mutation. Moreover, we are curious whether the self-adaptation methods perform similarly to how they perform in single-objective optimization.

Concerning self-adaptation in MOEAs, the (1+(λ,λ)) GSEMO was proposed in [6] and shown to improve the running time compared to the classic GSEMO. The algorithm uses mutation and crossover variators with dynamic parameters. Recently, a self-adjusting GSEMO [7], which increases the mutation rate after T iterations without obtaining new non-dominated solutions, was tested on OneJumpZeroJump. In this section, we test three GSEMO variants using self-adaptive mutation mechanisms that have been studied in much single-objective benchmarking work [8]. The GSEMO variants sample an offspring population of size λ in each generation, and the sampling distributions are adjusted based on the performance e(x) of the new solutions. The procedures of the GSEMO variants are introduced below; we introduce the design of e(x) and study its impact in the later sections.

3.1.1 The two-rate GSEMO. The two-rate EA with self-adaptive mutation rate was proposed and analyzed in [5] for OneMax. It starts with an initial mutation rate of r/n. In each generation, it samples half of the offspring using the doubled mutation rate and the other half using the halved mutation rate. The mutation rate that was used to create the best offspring of the generation is then chosen for the next generation with probability 3/4. In the two-rate GSEMO (see Algorithm 2), we follow the outline of GSEMO and sample offspring populations using the same mutation strategy as the two-rate EA. The solutions' performance e(y) (line 12) is evaluated using the measures introduced in Section 3.2.

Algorithm 2: The two-rate GSEMO
1 Input: Population size λ, r_init;
2 Initialization: Sample x ∈ {0,1}^n uniformly at random (u.a.r.), and evaluate f(x);
3 r ← r_init, P ← {x};
4 Optimization: while not stop condition do
5   for i = 1, ..., λ do
6     Select x ∈ P u.a.r.;
7     if i < ⌊λ/2⌋ then
8       Sample ℓ(i) ∼ Bin_{>0}(n, r/(2n))
9     else
10      Sample ℓ(i) ∼ Bin_{>0}(n, (2r)/n);
11    Create y(i) ← flip_{ℓ(i)}(x), and evaluate e(y(i));
12  y(i*) ← arg max{e(y(1)), ..., e(y(λ))};
13  if i* < ⌊λ/2⌋ then s ← 3/4 else s ← 1/4;
14  Sample q ∈ [0,1] u.a.r.;
15  if q ≤ s then r ← max{r/2, 1/2} else r ← min{2r, n/4};
16  for i = 1, ..., λ do
17    if there is no z ∈ P such that y(i) ⪯ z then
18      P ← {z ∈ P | ¬(z ⪯ y(i))} ∪ {y(i)}

3.1.2 The log-normal GSEMO. The log-normal GSEMO applies standard bit mutation and adjusts the mutation rate p using a log-normal update rule [14]. While creating the offspring population, a new mutation rate p' is sampled for each offspring, as shown in line 7 of Algorithm 3. This strategy allows the mutation rate to increase and decrease with identical probabilities. However, the new mutation rate can take any value around the mean of the sampling distribution, in contrast to the two-rate strategy, which allows only doubling and halving. The p' that was used to create the best solution is then chosen as the p for the next generation.
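As an illustration, the update in line 7 of Algorithm 3 can be written as the following sketch, assuming numpy; 0.22 is the learning rate from the pseudocode.

import numpy as np

rng = np.random.default_rng()

def log_normal_rate(p):
    # p' = (1 + ((1 - p) / p) * exp(0.22 * N(0, 1)))^(-1), line 7 of Alg. 3.
    return 1.0 / (1.0 + ((1.0 - p) / p) * np.exp(0.22 * rng.normal()))

Since exp(0.22 · N(0,1)) has median 1, the sampled rate p' is larger or smaller than the current p with equal probability, and it always stays in (0, 1).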
3.1.3 The variance-controlled GSEMO. The variance-controlled GSEMO (var-ctrl) applies the normalized bit mutation [21], which samples ℓ from a normal distribution N(r, σ²) (line 6 in Algorithm 4). Similar to the log-normal GSEMO, it uses a greedy strategy to adjust r, which is replaced by the value of ℓ that created the best solution. An advantage of using normal distributions is that we can control not only the mean of the sampled ℓ but also its variance. In this work, we follow the setting in [21], reducing the variance by a factor F = 0.98 whenever the value of r remains the same after a generation.

Algorithm 3: The log-normal GSEMO
1 Input: Population size λ, mutation rate p;
2 Initialization: Sample x ∈ {0,1}^n uniformly at random (u.a.r.), and evaluate f(x);
3 P ← {x};
4 Optimization: while not stop condition do
5   for i = 1, ..., λ do
6     Select x ∈ P u.a.r.;
7     p(i) = (1 + ((1 − p)/p) · exp(0.22 · N(0,1)))^{−1};
8     Sample ℓ(i) ∼ Bin_{>0}(n, p(i));
9     Create y(i) ← flip_{ℓ(i)}(x), and evaluate f(y(i));
10  y(i*) ← arg max{e(y(1)), ..., e(y(λ))};
11  p ← p(i*);
12  for i = 1, ..., λ do
13    if there is no z ∈ P such that y(i) ⪯ z then
14      P ← {z ∈ P | ¬(z ⪯ y(i))} ∪ {y(i)}

Algorithm 4: The var-ctrl GSEMO
1 Input: Population size λ, r_init, and a factor F;
  Initialization: Sample x ∈ {0,1}^n uniformly at random (u.a.r.), and evaluate f(x);
2 r ← r_init; c ← 0; P ← {x};
3 Optimization: while not stop condition do
4   for i = 1, ..., λ do
5     Select x ∈ P u.a.r.;
6     Sample ℓ(i) ∼ min{N_{>0}(r, F^c r(1 − r/n)), n}, create y(i) ← flip_{ℓ(i)}(x), and evaluate f(y(i));
7   y(i*) ← arg max{e(y(1)), ..., e(y(λ))};
8   if r = ℓ(i*) then c ← c + 1 else c ← 0;
9   r ← ℓ(i*);
10  for i = 1, ..., λ do
11    if there is no z ∈ P such that y(i) ⪯ z then
12      P ← {z ∈ P | ¬(z ⪯ y(i))} ∪ {y(i)}
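The sampling step in line 6 of Algorithm 4 can be sketched as follows, assuming numpy. The counter c counts the consecutive generations in which r did not change (line 8), so the variance shrinks by the factor F per such generation; the rejection loop and the rounding are our reading of the truncated distribution N_{>0}.

import numpy as np

rng = np.random.default_rng()

def sample_ell_var_ctrl(n, r, c, F=0.98):
    # ell ~ N_{>0}(r, F^c * r * (1 - r/n)), truncated from above by n.
    sigma = max((F ** c) * r * (1.0 - r / n), 1e-12) ** 0.5
    ell = 0
    while ell < 1:  # resample until a positive value is drawn
        ell = int(round(rng.normal(r, sigma)))
    return min(ell, n)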
3.2 Using Multi-objective Metrics
Recall that the GSEMO variants rely on the progress measured by e(x) for self-adaptation. Many performance measures have been applied to assess the performance of MOEAs, and those measures are straightforward candidates for calculating e(x). Therefore, we apply three multi-objective performance measures, which are commonly considered for MOEAs, as the metric of the GSEMO variants. In practice, given a population P of the current non-dominated solutions, we measure the performance of a new solution x by calculating, for the set P ∪ {x}, the number of solutions (NUM) on the full Pareto front P*, the hypervolume (HV) with respect to the reference point (−1, −1), and the inverted generational distance (IGD) to P*. For ease of reading, we use "metric" to refer to the progress resulting from newly sampled solutions, and "measure" to refer to the algorithms' performance.

We provide the definitions of the three metrics below:
• Number of Pareto solutions (NUM): the number of solutions of the current non-dominated set that belong to the Pareto front P*: NUM(P) = |P* ∩ P|.
• Hypervolume (HV) [23]: the hypervolume indicator I_H(P) is the volume of the part of the objective space dominated by the solution set P. Given a reference point r ∈ R^m, I_H(P) = Λ(∪_{x∈P} [f_1(x), r_1] × ... × [f_m(x), r_m]), where Λ denotes the Lebesgue measure and [f_1(x), r_1] × ... × [f_m(x), r_m] is the orthotope with f(x) and r in opposite corners.
• Inverted generational distance (IGD) [24]: the generational distance (GD) measures how far the obtained non-dominated set P is from the full Pareto front. It is defined as GD(P) = sqrt(Σ_i d_i²)/|P|, where d_i is the Euclidean distance between the i-th solution in P and its nearest solution on P*. Following the suggestion in [24], we apply IGD using the obtained non-dominated set P as the reference set and calculate the distances from each solution of the Pareto front P* to its nearest neighbor in P. In this way, the dynamic size of P has less influence on the value of IGD.

It is worth mentioning that NUM and IGD are accuracy metrics, referring to the convergence to the full Pareto front P*, while HV and IGD are diversity metrics, referring to the range of the search space that is covered by the obtained non-dominated set P. The value of HV is based on a given reference point instead of the Pareto front.

We introduce in the following our first results for the 100-dimensional OneMinMax, COCZ, and LOTZ, and the 50-dimensional OneJumpZeroJump. For reproducibility, we provide the code and data on GitHub [20].
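To make the three metrics concrete, the following is a minimal bi-objective sketch in Python, assuming numpy. Here pop is the list of objective vectors of the current non-dominated set and front is the full Pareto front; the IGD variant follows the GD formula above with the roles of the two sets swapped, and the function names are ours.

import numpy as np

def num_pareto(pop, front):
    # NUM: the number of Pareto-front points present in pop.
    return len(set(map(tuple, pop)) & set(map(tuple, front)))

def hypervolume_2d(pop, ref=(-1.0, -1.0)):
    # Area dominated by pop w.r.t. the reference point (maximization):
    # sweep the points by decreasing first objective and sum the strips.
    pts = sorted(set(map(tuple, pop)), key=lambda p: -p[0])
    hv, g2_max = 0.0, ref[1]
    for g1, g2 in pts:
        if g2 > g2_max:
            hv += (g1 - ref[0]) * (g2 - g2_max)
            g2_max = g2
    return hv

def igd(pop, front):
    # Distance from each Pareto-front point to its nearest member of pop,
    # aggregated as sqrt(sum of d_i^2) / |front|, mirroring the GD formula.
    P = np.asarray(pop, dtype=float)
    F = np.asarray(front, dtype=float)
    d = np.sqrt(((F[:, None, :] - P[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.sqrt(np.sum(d ** 2)) / len(F)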
3.2.1 Results for OneMinMax. Table 1 lists the average running time to find the full Pareto set for the GSEMO variants that use multi-objective metrics to guide self-adaptation. Compared to the GSEMO using a static mutation rate, we observe that the two-rate GSEMO obtains better results for OneMinMax. However, log-normal and var-ctrl use a significantly larger number of function evaluations to converge to the full Pareto front. This performance surprisingly differs from the performance on OneMax since, as shown in [21], var-ctrl achieves competitive results for OneMax. Therefore, we plot in Figure 1 the average number of function evaluations needed to find each Pareto solution. Because all solutions of OneMinMax are located on the Pareto front, this figure represents the complete convergence process. We observe that the self-adaptation methods outperform the static setting in the early optimization stage. However, the performance of log-normal and var-ctrl deteriorates later in the convergence process.

Table 1: Function evaluations to obtain the full Pareto front of the 100-dimensional OneMinMax, COCZ, and LOTZ, and the 50-dimensional OneJumpZeroJump. The results are averages over 20 runs, λ = 10. The results of the GSEMO with static p = 1/n are listed in the first column. Bold values indicate the best result for each problem.

Problem (static)                Metric | two-rate | log-normal | var-ctrl
OneMinMax (static = 67 442)     HV     | 61 269   | 135 509    | 464 034
                                IGD    | 62 082   | 137 870    | 510 337
                                NUM    | 64 409   | 103 103    | 553 427
COCZ (static = 31 369)          HV     | 29 535   | 31 323     | 45 866
                                IGD    | 28 146   | 65 870     | 121 771
                                NUM    | 34 843   | 35 540     | 62 521
LOTZ (static = 351 671)         HV     | 377 091  | 348 008    | 905 752
                                IGD    | 368 877  | 677 082    | 2 172 211
                                NUM    | 350 914  | 1 059 784  | 2 556 265
OneJumpZeroJump                 HV     | 335 151  | 449 930    | 452 865
  (static = 360 623)            IGD    | 423 578  | 368 994    | 449 893
                                NUM    | 372 991  | 522 165    | 468 275

Figure 1: Convergence process of the GSEMO variants for the 100-dimensional OneMinMax. The x-axis "target" refers to the objective of OneMax, i.e., the number of one bits; the y-axis plots the average number of function evaluations to find the corresponding target. The algorithms use the HV, IGD, and NUM metrics.

3.2.2 Results for COCZ. For this other extension of OneMax, we observe that the two-rate GSEMO can again outperform the static GSEMO on COCZ, and the log-normal GSEMO obtains comparable results when using HV as the metric. However, the var-ctrl strategy does not seem to work with any of the three metrics. According to Figure 7 (four sub-figures on the left), the static GSEMO can traverse an ample part of the search space faster than the three GSEMO variants.

3.2.3 Results for LOTZ. We obtain for LOTZ results similar to those for the other two problems. The two-rate GSEMO shows comparable results to the static GSEMO, and the log-normal GSEMO obtains favorable results only when using HV as the metric. Figure 2 plots the average hitting time for each solution located on the Pareto front of LOTZ, and Figure 7 visualizes the complete convergence processes. We observe in Figure 2 that the static and two-rate GSEMOs use a similar number of function evaluations to reach the Pareto front.
However, the log-normal and var-ctrl GSEMOs obtain the first Pareto solution more slowly than the other two algorithms, especially when using IGD and NUM. In addition, we observe that the first obtained Pareto solutions contain similar numbers (≈ n/2) of leading ones and trailing zeros.

3.2.4 Results for OneJumpZeroJump. Regarding the function evaluations to reach the full Pareto front of OneJumpZeroJump, the performances of all algorithms except the var-ctrl GSEMO are similar to each other. Because GSEMO requires a large mutation rate and takes much time to jump to the Pareto solutions located at the edge of the search space, i.e., (k, n+k) and (n+k, k), we do not expect the methods tested in this paper to learn useful information in the last step of the search, though some of them, e.g., log-normal and var-ctrl, may obtain good performance by sampling large mutation rates in the adaptation process. However, we can still observe in Figure 3 that the two-rate GSEMO outperforms the others in reaching the Pareto solutions {(a, 2k + n − a) | a ∈ [2k, n]} located in the smooth part of the search space.

Figure 2: Convergence process of the GSEMO variants for the 100-dimensional LOTZ. The x-axis "target" refers to the objective of LeadingOnes; the y-axis plots the average number of function evaluations to find the corresponding target. The algorithms use the HV, IGD, and NUM metrics.

Figure 3: Convergence process of the GSEMO variants for the 50-dimensional OneJumpZeroJump. The x-axis "target" refers to the Jump objective with respect to the number of one bits; the y-axis plots the average number of function evaluations to find the corresponding target. The algorithms use the HV, IGD, and NUM metrics.

So far, the performance of the self-adaptive GSEMO variants falls short of what the performance improvements achieved by self-adaptation in single-objective EAs [3, 5, 14, 21] would suggest: we do not observe such improvements, except for the two-rate GSEMO, when using the multi-objective metrics. Therefore, we investigate the impact of the metrics on self-adaptation in the next section.

4 THE BEHAVIOR OF SELF-ADAPTATION
To understand why the GSEMO variants perform differently using the three multi-objective metrics, we investigate the algorithms' convergence process across the search space and the dynamic evolution of the mutation rates.

4.1 The Impact of Metrics
We plot in Figure 4 how the mutation rates self-adjust along the optimization process of one run for OneMinMax and LOTZ. For ease of visualization, we plot the mutation rates (i.e., p for the log-normal GSEMO, and r/n for the two-rate and var-ctrl GSEMOs) at every five generations. The maximum generation is capped at 10 000. Due to the difference in the final number of function evaluations used to obtain the full Pareto front, the lines terminate at different generations on the x-axis. The results of using HV, IGD, and NUM (the three sub-figures on the left) indicate considerable fluctuation in the self-adaptation of the mutation rates.

Figure 4: The self-adaptive mutation rates of the GSEMO variants along the optimization process. The results are from the GSEMO variants using six metrics for the 100-dimensional OneMinMax (top) and LOTZ (bottom). The plotted values are the exact values of one run for each algorithm; the values are shown every five generations, and the maximal generation is capped at 10 000.

For OneMax, optimizers search towards increasing the number of one bits, and the optimal mutation rate decreases while the fitness value increases along the optimization process [1]. When applying self-adaptation for OneMax and other pseudo-Boolean optimization problems, EAs usually compare the parent's and the offspring's fitness values and thereby obtain useful information even in non-improving situations. However, when the contribution to the obtained non-dominated set guides self-adaptation, useful information can be missing in non-improving situations. Therefore, we observe in Figure 4 considerable fluctuations of the mutation rates during adaptation. Nevertheless, as presented in the previous section, the two-rate GSEMO can still outperform the variant using a static mutation rate, indicating that static settings are not optimal.

A similar story holds for LOTZ. However, we observe different behavior of the GSEMO using HV compared to the other two metrics. If we describe solving OneMinMax as filling the full Pareto front, then solving LOTZ requires first searching for and then filling the Pareto front. Before a Pareto solution has been obtained, the NUM metric cannot provide useful information to GSEMO. Also, after obtaining a solution close to the Pareto front but far away from the other obtained non-dominated solutions, the IGD value may remain the same even though the set of non-dominated solutions is moving towards the Pareto front. In contrast, HV improves whenever the current non-dominated set is updated, which happens frequently during the optimization process. Therefore, for LOTZ, Figure 4 shows an appropriate adaptation of the mutation rate for the GSEMO using HV.

The mutual interference between the objectives can also impact the adaptation. We only plot the function evaluations needed to find the solutions in the partial search space (g1 > 40, g2 > 40) of COCZ because the algorithms do not necessarily traverse all solutions to obtain the Pareto front. When using the static p = 1/n, we observe that the GSEMO searches towards the Pareto front and skips many non-Pareto solutions. However, the self-adaptation methods search an ample space to obtain the full Pareto front.

Moreover, the log-normal and var-ctrl methods adjust the mutation rates using a log-normal and a normal distribution, respectively, and both greedily choose the best value for the next generation. Therefore, they drift more easily towards high values when the algorithms achieve progress using high mutation rates or when no improvement is obtained in a generation. The two-rate method, in contrast, carries the halving-or-doubling strategy, so it learns only trends instead of explicit new values.
Therefore, the two-rate GSEMO does not use strongly fluctuating high mutation rates, which results in better performance than log-normal and var-ctrl in our experiments with the three multi-objective metrics.

4.2 Concerning Single Objective
Since we observe that the tested self-adaptation methods may fail when using the multi-objective metrics, we propose in this section GSEMO variants using metrics that refer to only one objective. More precisely, we evaluate the e(x) of a solution x with objective values (g1, g2) by either (g1 − g1*) or (g2 − g2*), where g1* and g2* are the best corresponding objective values of the current non-dominated solution set. We regularly sample a value rand ∈ [0,1] u.a.r.; when rand < 0.5, e(x) = (g1 − g1*); otherwise, e(x) = (g2 − g2*). As discussed in Section 3, multi-objective metrics (e.g., IGD and NUM) may not provide timely information on the convergence process. Furthermore, the mutual interference between the two objectives may lead to a misreading of the current optimization state. Therefore, we expect more helpful information from concerning only one objective, compared to using multi-objective metrics.

In this section, we test three different sampling frequencies for rand: OneObj%50, OneObj%10, and OneObj denote the metrics that re-sample rand every 50, 10, and 1 generations, respectively. We apply the same experimental settings as described in Section 3; the metric used to calculate e(x) is the only difference.
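A compact sketch of this indicator, assuming numpy; the class name and structure are ours, not the paper's implementation.

import numpy as np

rng = np.random.default_rng()

class OneObjIndicator:
    # e(x) alternates between the two objectives; rand is re-sampled
    # every `period` generations (period = 1, 10, 50 gives OneObj,
    # OneObj%10, and OneObj%50).
    def __init__(self, period=1):
        self.period = period
        self.gen = 0
        self.use_first = True

    def next_generation(self):
        # Re-sample rand ~ U[0, 1] at the configured frequency.
        if self.gen % self.period == 0:
            self.use_first = rng.random() < 0.5
        self.gen += 1

    def __call__(self, fx, best):
        # best = (g1*, g2*): the best per-objective values of the
        # current non-dominated set.
        return fx[0] - best[0] if self.use_first else fx[1] - best[1]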
Figure 5: Convergence process of the GSEMO variants for the 100-dimensional OneMinMax. The x-axis "target" refers to the objective of OneMax, i.e., the number of one bits; the y-axis plots the average number of function evaluations to find the corresponding target. The algorithms use the OneObj%50, OneObj%10, and OneObj metrics.

4.2.1 Results. Compared to Table 1, we observe in Table 2 that the log-normal and var-ctrl GSEMO perform better when using the metrics concerning only one objective. According to Figure 4, the fluctuation of the mutation rates is reduced significantly. Also, we observe in Figure 5 that the self-adaptation methods outperform the static GSEMO in the early optimization stage, though it remains future work to investigate how to adjust the mutation rate to fill in the full Pareto front of OneMinMax faster.

We achieve significant improvements for COCZ and LOTZ by concerning each objective alternately. According to Table 2, the log-normal and var-ctrl GSEMO outperform the static setting. In contrast, the performance of the two-rate GSEMO deteriorates somewhat compared to using the multi-objective metrics. After finding the first Pareto solution, the var-ctrl GSEMO converges to the full Pareto front faster than the other algorithms (Figure 6). Moreover, as shown in Figure 4, the GSEMO variants do not choose extremely large mutation rates for LOTZ when using the OneObj metrics. This result indicates that we can avoid the disturbance by concerning only one objective while performing self-adaptation.

Moreover, the convergence processes for COCZ and LOTZ in Figure 7 differ between using the multi-objective metrics and concerning the progress in one objective. For COCZ, the two-rate GSEMO progresses more slowly in the early convergence phase when using OneObj, but the log-normal variant moves faster and obtains the best performance among the tested algorithms. For LOTZ, it is evident that all the self-adaptive algorithms converge faster (plotted in darker colors) than the static one in the early stages of optimization, starting from the point (0,0). Moreover, we can observe that the optimization process differs per algorithm. We see orthogonal color lines, representing function evaluations along the two objectives, for the static, two-rate, and log-normal GSEMO, whereas for the var-ctrl GSEMO the function evaluations to find each solution are distributed more smoothly over the search space. Recall that var-ctrl using the OneObj metrics performs best among the tested algorithms. This algorithm is designed to interpolate between local and global search: after an identical mutation length has been chosen for a few generations, the variance of the normal distribution used to sample ℓ decreases so that the EA can perform a local search. In Figure 4, we can observe that the same mutation rates have indeed been used for successive generations, which results in promising performance.

Table 2: Function evaluations that the GSEMO variants using the OneObj metrics need to obtain the full Pareto front of the 100-dimensional OneMinMax, COCZ, and LOTZ, and the 50-dimensional OneJumpZeroJump. The results are averages over 20 runs, λ = 10. The results of the GSEMO with static p = 1/n are listed in the first column. Bold values indicate the best result for each problem.

Problem (static)                Metric    | two-rate | log-normal | var-ctrl
OneMinMax (static = 67 443)     OneObj    | 100 524  | 79 751     | 135 250
                                OneObj%10 | 103 315  | 74 726     | 132 700
                                OneObj%50 | 104 009  | 87 285     | 119 209
COCZ (static = 31 369)          OneObj    | 39 217   | 29 413     | 51 515
                                OneObj%10 | 37 860   | 26 585     | 53 530
                                OneObj%50 | 41 560   | 33 083     | 45 776
LOTZ (static = 351 671)         OneObj    | 414 198  | 316 529    | 255 346
                                OneObj%10 | 394 350  | 325 271    | 272 238
                                OneObj%50 | 428 544  | 310 864    | 261 280
OneJumpZeroJump                 OneObj    | 456 250  | 329 995    | 386 858
  (static = 360 623)            OneObj%10 | 254 748  | 518 555    | 331 763
                                OneObj%50 | 375 875  | 324 141    | 453 305

Figure 6: Convergence process of the GSEMO variants for the 100-dimensional LOTZ. The x-axis "target" refers to the objective of LeadingOnes; the y-axis plots the average number of function evaluations to find the corresponding target.
The algorithms use the OneObj%50, OneObj%10, and OneObj metrics. λ = 10.

For OneJumpZeroJump, the two-rate GSEMO using OneObj%10 performs best in Table 2. Moreover, our results show that the log-normal GSEMO outperforms the others in the region {(a, 2k + n − a) | a ∈ [2k, n]} of the search space. Regarding the results for the targets (k, n+k) and (n+k, k), the two-rate GSEMO using OneObj presents the best average hitting time, followed by the log-normal variant using OneObj%50 and the var-ctrl variant using OneObj%10. However, the results for those solutions at the edges of the search space also show considerable variances.

4.3 The Impact of Population Size
Finally, we take a brief look at the impact of the population size, which here refers to the offspring population size, on the GSEMO variants. Whether the population size has an impact on an EA depends on the expected number of function evaluations required to achieve progress [9, 11]. Research on adjusting the population size of MOEAs has been discussed recently in [6, 19]. The population size and the mutation rate can be complementary in speeding up EAs, since both relate to the probability of achieving an improvement, i.e., the success rate.

Figure 7: Average function evaluations to find each solution in the search space of the 100-dimensional COCZ (top) and LOTZ (bottom). For COCZ, g1 indicates the objective of OneMax, and g2 indicates the objective of Count Ones Count Zeroes for each half of the bit string. For LOTZ, g1 indicates the objective of LeadingOnes, and g2 indicates the objective of Trailing Zeros. Values are on a log10 scale. λ = 10.

Figure 8: The average function evaluations to find the full Pareto front of the 100-dimensional OneMinMax (top, normalized by n² log n), LOTZ (middle, by n³), and COCZ (bottom, by n² log n) for the GSEMO variants. The x-axis plots the population size λ. The GSEMO variants use the OneObj metric.

To investigate the impact of the population size on our GSEMO variants, we plot in Figure 8 how their performance changes across different population sizes λ. Note that we plot the GSEMO variants using the OneObj metric due to their promising performance. The static and log-normal GSEMO show stable performance on the three plotted problems across all five tested population sizes. For LOTZ, the performance of the two-rate and var-ctrl GSEMO does not change much either. This behavior is consistent with the results of EAs for LeadingOnes [9]. The var-ctrl GSEMO outperforms the others on LOTZ for all tested population sizes, and the static and log-normal variants show competitive results for OneMinMax and COCZ.
However, as the population size increases, var-ctrl shows potential performance improvements for OneMinMax and COCZ.

5 CONCLUSION
Self-adaptation of mutation has succeeded in accelerating the convergence speed in single-objective pseudo-Boolean optimization. Moreover, benchmarking work and related collaboration with theoretical studies have helped us understand the behavior of EAs on those problems. In this paper, we conduct detailed experimental investigations of self-adaptation in multi-objective optimization by transferring existing self-adaptive mutation mechanisms from single-objective optimization. We test three variants of GSEMO on OneMinMax, COCZ, LOTZ, and OneJumpZeroJump. To transfer the single-objective techniques to multi-objective problems, we consider several metrics to evaluate the progress during the optimization process and use these metrics to guide self-adaptation. The experimental results show the potential benefits of self-adaptive mutation for GSEMO. Moreover, the choice of metric significantly impacts the algorithms' performance. Due to the mutual interference between the progress in the two objectives and the prolonged absence of progress updates, the self-adaptive MOEAs may be misled, resulting in deteriorated performance. However, by using the progress in a single objective as the metric, we obtain performance improvements for GSEMO.

We conclude that hypervolume can be a helpful metric for guiding self-adaptation, compared to the inverted generational distance and the number of obtained Pareto solutions. In addition, concerning only one objective can accelerate the movement of MOEAs towards the Pareto front and lead to faster convergence.

Moreover, we study the impact of the population size on our GSEMO variants. The self-adaptive algorithms can benefit from large population sizes for OneMinMax and COCZ, which suggests future work on self-adaptive techniques manipulating both parameters, population size and mutation rate, for MOEAs. Overall, we hope the extensive experimental results in this paper motivate insights into the theoretical understanding of MOEAs, e.g., dynamic optimal mutation rates and population sizes, and into algorithm design for practical problems, e.g., real-time systems.

REFERENCES
[1] Buzdalov, M., and Doerr, C. Optimal mutation rates for the (1+λ) EA on OneMax. In Proc. of Parallel Problem Solving from Nature (PPSN'20), Proceedings, Part II (2020), vol. 12270 of Lecture Notes in Computer Science, Springer, pp. 574–587.
[2] Dang, N., and Doerr, C. Hyper-parameter tuning for the (1+(λ,λ)) GA. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'19) (2019), ACM, pp. 889–897.
[3] Doerr, B., and Doerr, C. Optimal static and self-adjusting parameter choices for the (1+(λ,λ)) genetic algorithm. Algorithmica 80, 5 (2018), 1658–1709.
[5] Doerr, B., Giessen, C., Witt, C., and Yang, J. The (1+λ) evolutionary algorithm with self-adjusting mutation rate. Algorithmica 81, 2 (2019), 593–631.
[6] Doerr, B., Hadri, O. E., and Pinard, A. The (1+(λ,λ)) global SEMO algorithm. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'22) (2022), ACM, pp. 520–528.
[7] Doerr, B., and Zheng, W. Theoretical analyses of multi-objective evolutionary algorithms on multi-modal objectives. In Proc. of AAAI Conference on Artificial Intelligence (AAAI'21) (2021), AAAI Press, pp. 12293–12301.
[8] Doerr, C., Ye, F., Horesh, N., Wang, H., Shir, O. M., and Bäck, T. Benchmarking discrete optimization heuristics with IOHprofiler. Applied Soft Computing 88 (2020), 106027.
[9] Doerr, C., Ye, F., van Rijn, S., Wang, H., and Bäck, T. Towards a theory-guided benchmarking suite for discrete black-box optimization heuristics: profiling (1+λ) EA variants on OneMax and LeadingOnes. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'18) (2018), ACM, pp. 951–958.
[10] Eiben, A. E., Hinterding, R., and Michalewicz, Z. Parameter control in evolutionary algorithms. IEEE Transactions on Evolutionary Computation 3, 2 (1999), 124–141.
[11] Fajardo, M. A. H., and Sudholt, D. Self-adjusting population sizes for non-elitist evolutionary algorithms: why success rates matter. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'21) (2021), ACM, pp. 1151–1159.
[12] Giel, O. Expected runtimes of a simple multi-objective evolutionary algorithm. In Proc. of the IEEE Congress on Evolutionary Computation (CEC'03) (2003), IEEE, pp. 1918–1925.
[13] Giel, O., and Lehre, P. K. On the effect of populations in evolutionary multi-objective optimisation. Evolutionary Computation 18, 3 (2010), 335–356.
[14] Kruisselbrink, J. W., Li, R., Reehuis, E., Eggermont, J., and Bäck, T. On the log-normal self-adaptation of the mutation rate in binary search spaces. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'11) (2011), ACM, pp. 893–900.
[15] Laumanns, M., Thiele, L., and Zitzler, E. Running time analysis of multiobjective evolutionary algorithms on pseudo-Boolean functions. IEEE Transactions on Evolutionary Computation 8, 2 (2004), 170–182.
[16] Nguyen, A. Q., Sutton, A. M., and Neumann, F. Population size matters: Rigorous runtime results for maximizing the hypervolume indicator. Theoretical Computer Science 561 (2015), 24–36.
[17] Osuna, E. C., Gao, W., Neumann, F., and Sudholt, D. Design and analysis of diversity-based parent selection schemes for speeding up evolutionary multi-objective optimisation. Theoretical Computer Science 832 (2020), 123–142.
[18] Pinto, E. C., and Doerr, C. Towards a more practice-aware runtime analysis of evolutionary algorithms. CoRR abs/1812.00493 (2018).
[19] Shi, F., Schirneck, M., Friedrich, T., Kötzing, T., and Neumann, F. Reoptimization time analysis of evolutionary algorithms on linear functions under dynamic uniform constraints. Algorithmica 81, 2 (2019), 828–857.
[20] Ye, F., and de Nobel, J. GSEMO. https://github.com/FurongYe/GSEMO, 2022.
[21] Ye, F., Doerr, C., and Bäck, T. Interpolating local and global search by controlling the variance of standard bit mutation. In Proc. of IEEE Congress on Evolutionary Computation (CEC'19) (2019), IEEE, pp. 2292–2299.
[22] Zheng, W., Liu, Y., and Doerr, B. A first mathematical runtime analysis of the non-dominated sorting genetic algorithm II (NSGA-II). In Proc. of AAAI Conference on Artificial Intelligence (AAAI'22) (2022), AAAI Press, pp. 10408–10416.
[23] Zitzler, E., and Thiele, L. Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach. IEEE Transactions on Evolutionary Computation 3, 4 (1999), 257–271.
[24] Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., and Da Fonseca, V. G. Performance assessment of multiobjective optimizers: An analysis and review. IEEE Transactions on Evolutionary Computation 7, 2 (2003), 117–132.

Using Affine Combinations of BBOB Problems for Performance Assessment

Diederick Vermetten, Leiden Institute for Advanced Computer Science, Leiden, The Netherlands, d.l.vermetten@liacs.leidenuniv.nl
Furong Ye, Leiden Institute for Advanced Computer Science, Leiden, The Netherlands, f.ye@liacs.leidenuniv.nl
Carola Doerr, Sorbonne Université, CNRS, LIP6, Paris, France, Carola.Doerr@lip6.fr

ABSTRACT
Benchmarking plays a major role in the development and analysis of optimization algorithms. As such, the way in which the used benchmark problems are defined significantly affects the insights that can be gained from any given benchmark study. One way to easily extend the range of available benchmark functions is through affine combinations between pairs of functions. From the perspective of landscape analysis, these function combinations smoothly transition between the two base functions.

In this work, we show how these affine function combinations can be used to analyze the behavior of optimization algorithms. In particular, we highlight that by varying the weighting between the combined problems, we can gain insights into the effects of added global structure on the performance of optimization algorithms. By analyzing performance trajectories on more function combinations, we also show that aspects such as the scaling of objective functions and the placement of the optimum can greatly impact how these results are interpreted.

KEYWORDS
Black-box Optimization, Benchmarking, Performance Analysis

1 INTRODUCTION
Benchmarking is a key aspect in the development of optimization algorithms. Not only are benchmark problems used to compare the effectiveness of different optimizers on a standardized set of problems, but the analysis of algorithm behavior on these problems is also often used to gain insight into the characteristics of the algorithms. Because of this, the design of benchmark problems has a major impact on the field of optimization as a whole [1].

One of the most common benchmark suites in single-objective, continuous, noiseless optimization is fittingly called the Black Box Optimization Benchmark (BBOB) [7]. This suite is part of the COCO framework [6], which has seen significant adoption in the last decade.
This suite consists of 24 problems, each defined to represent a set of global landscape properties. For each of these problems, many different instances can be created through a set of transformations, allowing researchers to test different invariances of their algorithm. Because of its popularity, studies into the specifics of the BBOB suite are numerous [13, 16, 17].

One particularly popular method to investigate continuous optimization problems is Exploratory Landscape Analysis (ELA) [15]. This technique aims to characterize the low-level landscape properties through a large set of features. Applying it to the BBOB suite shows that instances of the 24 functions generally group together, with the separation between functions being relatively robust [20]. This observation raised the question of how the spaces between problems could be explored.

In a recent study, affine combinations between pairs of BBOB problems were proposed and analyzed using ELA [4]. The resulting analysis shows that varying the weight of these combinations has a relatively smooth impact on the landscape features. As such, these new functions could potentially be used to study the transition between different landscapes, which opens up a more in-depth analysis of the relation between landscapes and algorithm behavior.

To investigate to what extent the affine function combinations can be used to study algorithmic behavior, we perform a benchmarking study in which we investigate the effect of the affine combinations on the performance of five numerical black-box optimization algorithms. We make use of function combinations which include a sphere model to show the impact of added global structure on the relative ranking between algorithms. Additionally, we show that by combining functions with different global properties we do not always obtain smooth transitions in performance. We provide examples where the combination of two functions can be either significantly more challenging or slightly easier than the base functions it consists of.

2 RELATED WORK
2.1 BBOB Problem Suite
Within continuous optimization benchmarking, one of the most popular suites of benchmarks is the BBOB family, which has been designed as part of the COCO framework. The noiseless, single-objective suite consists of 24 problems, each of which can be instantiated with a set of different transformations. These function instances aim to preserve the global function properties while varying factors such as the location of the global optimum, such that an optimizer cannot directly exploit these aspects. However, the exact influence these transformations have on the low-level landscape properties is not as straightforward, which can lead to noticeable differences in algorithm behavior on different instances of the same function [13].

2.2 Affine Function Combinations
While using function instances allows the BBOB suite to cover a wider range of problem landscapes than the raw functions alone, there are limits to the types of landscapes which can be created in this way. Recently, it has been proposed to use affine combinations between pairs of BBOB functions to generate new benchmark functions [4].
These combinations have been shown to smoothly fill the space of low-level landscape properties, as measured through a set of ELA features. These results have shown that even a relatively simple function creation procedure has the potential to give us new insights into the way function landscapes work.

3 EXPERIMENTAL SETUP
In this work, we make use of a slightly modified version of the affine function combinations from [4]. In particular, we define the combination between two functions from the BBOB suite as follows:

C(F_1, I_1, F_2, I_2, α)(x) = exp( α log(F_1(x) − F_1(O_1)) + (1 − α) log(F_2(x − O_1 + O_2) − F_2(O_2)) )

where F_1, I_1, F_2, I_2 are the two base functions and their instance numbers, as defined in BBOB [7], and O_1 and O_2 represent the locations of the optima of F_1 and F_2, respectively. The transformation of x when evaluating F_2 is performed to make sure the location of the optimum is at O_1. As opposed to the original definition, we subtract the optimal values before aggregating, so we can take a logarithmic mean between the problems. This way, we can use consistent values for α across problems, without having to perform the entropy-based selection performed in [4]. It has the additional benefit of ensuring that the objective value of the optimal solution is always 0, so the comparison of performance across instances and across problems is simplified. In Figure 1, we illustrate the change in landscape for the combination of F21 and F1, for different values of α.

In order to implement these function combinations, we make use of the IOHexperimenter framework [3]. We access the BBOB problems, combine them as described, and wrap the result into a new problem. This enables us to use any of the built-in logging and tracking options of IOHexperimenter. In particular, it allows us to store the performance data in a file format which can be directly processed by IOHanalyzer [25] for post-processing.

For our algorithm portfolio, we make use of the Nevergrad toolbox, which provides a common interface to a wide range of optimization algorithms [19]. In this study, we benchmark the following algorithms:
• Particle Swarm Optimization (PSO) [10]
• Constrained Optimization BY Linear Approximation (Cobyla) [18]
• Differential Evolution (DE) [21]
• Estimation of Multivariate Normal Algorithm (EMNA) [12]
• Diagonal Covariance Matrix Adaptation Evolution Strategy (dCMA-ES) [8]
For each of these algorithms, we use the default parameters as chosen in Nevergrad. Each run of an algorithm has a budget of 2 000 · D, where D is the dimension of the problem. We perform 5 independent runs per instance. In the remainder of this paper, we set I_2 = 1.
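A minimal sketch of this construction in plain Python is given below, using generic callables instead of the IOHexperimenter wrappers described above; the helper name affine_bbob_combination, the toy test functions, and the small epsilon guarding log(0) at the optimum are our illustrative assumptions.

```python
import numpy as np

def affine_bbob_combination(f1, x_opt1, f2, x_opt2, alpha):
    """Return the affine (log-scale) combination C(F1, F2, alpha) from Section 3.

    f1, f2     -- callables mapping a numpy array to a float (the two base problems)
    x_opt1/2   -- locations of their global optima, so the optimal values can be
                  subtracted before taking logarithms
    alpha      -- weight in [0, 1]
    """
    y_opt1, y_opt2 = f1(x_opt1), f2(x_opt2)

    def combined(x):
        x = np.asarray(x, dtype=float)
        t1 = f1(x) - y_opt1
        # Shift the argument of f2 so both components share the optimum location x_opt1.
        t2 = f2(x - x_opt1 + x_opt2) - y_opt2
        # Geometric (log-scale) mean; a small epsilon guards log(0) at the optimum.
        eps = 1e-12
        return np.exp(alpha * np.log(t1 + eps) + (1.0 - alpha) * np.log(t2 + eps))

    return combined

# Example with two toy problems standing in for BBOB functions:
sphere = lambda x: float(np.sum((x - 1.0) ** 2))                               # optimum at (1, ..., 1)
ellipsoid = lambda x: float(np.sum(10 ** np.linspace(0, 6, x.size) * x ** 2))  # optimum at the origin
f = affine_bbob_combination(sphere, np.ones(5), ellipsoid, np.zeros(5), alpha=0.5)
print(f(np.zeros(5)))
```

With α = 1 the combination reduces to F_1 shifted so its optimal value is 0, and with α = 0 to F_2 translated so its optimum lies at O_1, matching the definition above.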
As such, when discussing the instance of an affine function combination C(F_1, I_1, F_2, I_2, α), we are referring to I_1.

Reproducibility. To ensure reproducibility, we make all code used in the creation of this paper available in a Zenodo repository [24]. This repository contains the data generation code, the raw data generated, and the post-processing scripts used to create the results discussed in the following sections, following the recommendations proposed in [14]. In addition, we make available a Figshare repository containing additional figures and animations which could not be included in this paper [24].

4 PERFORMANCE COMPARISON FOR AFFINE COMBINATIONS WITH F1
For a first set of experiments, we make use of affine combinations where we combine each function with F1: the sphere model. As can be seen in Figure 1, adding a sphere model to another function creates an additional global structure that can guide the optimization toward the global optimum. As such, these kinds of combinations might allow us to investigate the influence of an added global structure on the performance of optimization algorithms. While to some extent this can already be investigated by comparing results on the function groups of the original BBOB with different levels of global structure, the affine function combinations allow for a much more fine-grained investigation. Since the landscape features of these combined functions seem to shift smoothly when varying α, we might assume similar behavior for algorithmic performance.

In Figure 2, we show the performance of diagonal CMA-ES, measured as the area under the Empirical Cumulative Distribution Function (ECDF) [5], for varying function combinations and α values. As is widely accepted for BBOB functions, we make use of 51 targets logarithmically spaced between 10² and 10⁻⁸ to compute the ECDF. The resulting Area Under the Curve (AUC) is normalized, so an algorithm which reaches all targets in the first evaluation would have an AUC of 1. The top of this figure, with α = 0, shows the performance on the sphere function, on which CMA-ES performs very well. There are, however, differences between the columns, since the location of the optimum of the affine function combination is set to the optimum of the second function.

In Figure 2, we can see that the performance of CMA-ES does indeed seem to move smoothly between the sphere and the function with which it is combined. It is, however, interesting to note the differences in the speed at which this transition occurs. While the final performance on e.g. functions 3 and 11 seems similar, the transition speed differs significantly. This seems to indicate that for F11 the addition of some global structure has a relatively weak influence on the challenges of this landscape from the perspective of the CMA-ES, while even small amounts of global structure significantly simplify the landscape of F3.

We can perform a similar analysis for other optimization algorithms. In Figure 3 and Figure 4, we show the same heatmap as Figure 2, but for Differential Evolution and Cobyla, respectively.
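The normalized AUC used in these figures can be computed from a single run's best-so-far precision trajectory as sketched below; this is our condensed reading of the anytime ECDF measure of [5], and the function name and the padding convention for short runs are assumptions.

```python
import numpy as np

def normalized_ecdf_auc(best_so_far, targets=None, budget=10_000):
    """Normalized area under the ECDF curve for one run.

    best_so_far -- precision (f(x) - f_opt) of the best point after each evaluation
    targets     -- target precisions; defaults to 51 log-spaced targets in [1e-8, 1e2]
    budget      -- evaluation budget over which the AUC is averaged
    """
    if targets is None:
        targets = np.logspace(2, -8, 51)
    best_so_far = np.minimum.accumulate(np.asarray(best_so_far, dtype=float))
    # Pad or truncate to the full budget (an unfinished run keeps its last value).
    padded = np.full(budget, best_so_far[-1])
    padded[: min(len(best_so_far), budget)] = best_so_far[:budget]
    # Fraction of targets hit at each evaluation, averaged over the budget.
    hits = (padded[:, None] <= targets[None, :]).mean(axis=1)
    return hits.mean()  # equals 1.0 iff all targets are hit on the first evaluation
```

Averaging this quantity over runs and instances yields the heatmap cells shown in Figures 2 to 4.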
It is clear from these heatmaps that the performance of DE is more variable than that of CMA-ES, while Cobyla's performance drops off much more quickly. The overall trendlines for DE do seem somewhat similar to those seen for diagonal CMA-ES: the transition points between high and low AUC in Figure 3 are comparable to those seen in Figure 2. There are, however, still some differences in behavior, especially relative to Cobyla. These differences then lead to the question of whether there exist transition points in the ranking between algorithms as well. Specifically, if one algorithm performs well for α = 0 but gets overtaken as α → 1, exploring this change in ranking would give further insight into the relative strengths and weaknesses of the considered algorithms.

Figure 1: Evolution of the landscape (log-scaled function values) of the affine combination between F21 (α = 1) and F1 (α = 0), instance 1 for both functions, for varying α. The red circle highlights the location of the global optimum.

Figure 2: Normalized area under the ECDF curve of Diagonal CMA-ES for each combination of the BBOB function (x-axis) with a sphere model, for the given value of α (y-axis). AUC is calculated after 10 000 function evaluations, based on 50 runs on 10 instances.

Figure 3: Normalized area under the ECDF curve of Differential Evolution for each combination of the BBOB function (x-axis) with a sphere model, for the given value of α (y-axis). AUC is calculated after 10 000 function evaluations, based on 50 runs on 10 instances.

Figure 4: Normalized area under the ECDF curve of Cobyla for each combination of the BBOB function (x-axis) with a sphere model, for the given value of α (y-axis). AUC is calculated after 10 000 function evaluations, based on 50 runs on 10 instances.

In order to answer this question about the relative ranking of algorithms, we make use of the portfolio of 5 algorithms and rank them based on AUC on each affine function combination. We then visualize the top-ranked algorithm for each setting in Figure 5. Important to note is that both PSO and EMNA never ranked first for the selected budget and are thus not visible in the figure.

From Figure 5, we can clearly see that Cobyla deals well with the sphere model, managing to outperform the other algorithms when the weighting of the sphere is relatively high.
Then, after a certain threshold, the CMA-ES consistently outperforms the rest of the portfolio. However, as α increases further and the influence of the sphere model diminishes, an interesting pattern occurs. For several problems, there is a second transition point, to either DE or Cobyla. For some functions, e.g. F3 and F4, one factor which might explain this phenomenon is the increasing strength of the local optima, making it harder for CMA-ES to explore the full landscape, while the uniform initialization of DE causes it to be slightly less impacted.

Figure 5: Algorithm with the highest area under the ECDF curve for each combination of the BBOB function (x-axis) with a sphere model, for the given value of α (y-axis). AUC is calculated after 10 000 function evaluations, based on 50 runs on 10 instances. PSO and EMNA are not shown since they never ranked first.

In order to better understand what the transitions in algorithm ranking look like, we can zoom in on one of the functions and plot the expected running time (ERT) for several values of α. This is done in Figure 6, where we look at the combination between F10 and the sphere model. We clearly see that Cobyla is very effective at optimizing the sphere model, solving it almost an order of magnitude faster than the second-ranked algorithm, DiagonalCMA. However, when α increases, Cobyla quickly starts to fail, while DiagonalCMA still manages to solve most instances at α = 0.25 within a similar number of evaluations. However, it is clear from the bifurcation in the plot that on some instances DiagonalCMA is no longer able to find the optimum within the allocated budget. When α increases further, none of the three algorithms solves any of the instances anymore. When α ≥ 0.75, we see that DE overtakes the other two, which explains the better ranking seen in Figure 5.
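For reference, the per-instance ERT values shown in Figure 6 can be computed as in the following sketch, which follows the usual definition of ERT (total evaluations spent, divided by the number of runs that reached the target); the function name and the NaN convention for unsuccessful runs are our own.

```python
import numpy as np

def expected_running_time(evals_to_target, budget):
    """ERT estimate for one (problem, target) pair.

    evals_to_target -- per-run evaluation counts at which the target was first hit,
                       with np.nan for runs that never hit it
    budget          -- evaluation budget of each run (charged to unsuccessful runs)
    """
    evals = np.asarray(evals_to_target, dtype=float)
    successes = np.isfinite(evals)
    if not successes.any():
        return np.inf  # no run hit the target
    total = evals[successes].sum() + budget * (~successes).sum()
    return total / successes.sum()

# Example: 3 of 5 runs hit the target within a 10 000-evaluation budget.
print(expected_running_time([1200, 3500, np.nan, 800, np.nan], budget=10_000))  # 8500.0
```

Runs that never reach the target contribute their full budget to the numerator but not to the denominator, which is why ERT diverges as successes become rare.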
5 COMBINATIONS BETWEEN DIFFERENT FUNCTION GROUPS
While combining functions with a sphere model can be viewed as adding global structure to a problem, combinations between other functions can provide interesting insights into the transition points between different types of problems. To illustrate the kinds of insights that can be gained from these combinations, we select a subset of 5 functions and collect performance data on each combination with the same 21 α values (with both orderings of the functions). We show the performance in terms of normalized AUC of diagonal CMA-ES on these function combinations in Figure 7. Note that for α = 1 we are using the function specified in the column label, while for α = 0 we have the function specified in the row label, but with the optimum of the column function.

From Figure 7, we can see that the transition of performance between the two extreme α values is mostly smooth. While there are some rather quick changes, e.g. for the transition between F2 and F11, these seem to be the exception rather than the rule. Particularly interesting are the settings where the affine combination of two functions proves to be much easier or harder than the functions being combined. This is the case e.g. for the combinations of F21 and F9. Of note in this function combination is the fact that its mirrored combination around the diagonal does not display similar behavior. In fact, Figure 7 in general is not fully symmetric around the diagonal.

We might expect (F_1, F_2, α) to be similar to (F_2, F_1, 1 − α). However, the combination between F9 and F21 shows that this is not always the case. Specifically, the AUC for the combination (F21, F9, 1) is significantly worse than that of (F9, F21, 0), even though F21 does not contribute directly to the function value of the affine combination. The only way in which these two problems differ is in the location of the optima. For F21, the default location of the optimum is hard-coded to be at distance 1 from the origin [13], which is not the case for F9. Since the CMA-ES initializes its center of mass at the origin of the space and uses a default initial stepsize of 0.3 [19], it is able to find the optimum in the default setting, while the translated version of the function becomes much more challenging. This highlights a potential issue with the traditional analysis of performance on BBOB problems: if we do not take into account the built-in limitations on e.g. the location of the optimum in our analysis, there is a risk of misinterpreting the results of a structurally biased algorithm [23] and viewing it as optimal on this type of multimodal problem, while it is unable to solve a translated version of the same function.

To see how much this initialization really impacts the differences in performance, we perform an additional experiment with a different version of CMA-ES. We opt to use the modular CMA-ES [2] and set the initial stepsize to 0.2 times the range of the domain, so 2 in our case. The resulting performance is visualized in Figure 8. In this figure, it is clear that this configuration of CMA-ES performs better overall, but of particular note is that the asymmetries have been somewhat reduced, although they have not disappeared entirely.

Additionally, Figure 8 shows several interesting trends in performance which were not present for the Diagonal CMA-ES. For example, the combinations between F2 and F9 show a large dip in AUC near the center, even though both functions separately seem relatively easy to solve for this version of CMA-ES. While the differences between the two versions of CMA-ES are noticeable, many of the trends, e.g. decreased performance for combinations between F11 and F16, are present to some extent in Figure 7 as well.

As a final algorithm, we run DE on the same set of function combinations. The results are visualized in Figure 9. In this figure, we see that the overall performance of DE is indeed worse than that of the two versions of CMA-ES. It is worth noting that the amount of asymmetry along the diagonal is lower than for the diagonal CMA-ES.
This could be caused by the change in initialization (Gaussian for CMA-ES, uniform for DE) reducing the initial bias toward the center of the space. Another factor to consider is the variance of the performance. For CMA-ES, performance can vary significantly as α changes, while the changes in AUC seem to be much smaller for DE.

Figure 6: ERT per instance for three algorithms (DiagonalCMA, DifferentialEvolution, RCobyla) on the affine combinations between F10 (α = 1) and F1 (α = 0), for selected values of α. Each dot corresponds to the ERT calculated based on 5 runs on 1 instance, for a total of 10 instances.

Figure 7: Area under the ECDF curve for Diagonal CMA-ES on each of the affine combinations between the selected BBOB problems (F2, F9, F11, F16, F21). Each facet corresponds to the combination of the row and column function, with the x-axis indicating the used α. AUC values are calculated based on 50 runs on 5 instances, with a budget of 10 000 function evaluations.

6 ZOOMING INTO ONE FUNCTION COMBINATION
To further analyze the impact of changing the weighting of the function combinations, we can zoom in on one particular combination and study it in more detail. First, we gauge the impact of using different instances to measure performance. This is done by considering the distribution of AUC values for a specific function combination, F2 to F16, in Figure 10, on a per-instance basis. From this figure, we see that the distribution of AUC values is in general rather stable. However, at the transition point for the CMA-ES variants, around α ≈ 0.8, we see a clear increase in variance. To check whether this behavior also occurs for other function combinations, we create the same visualization for the combination of F21 and F9 in Figure 11. In this figure, we see a similar pattern for the diagonal CMA-ES, where the distribution of AUC at high α ranges from almost 0 to almost 1.

Figure 8: Area under the ECDF curve for modular CMA-ES on each of the affine combinations between the selected BBOB problems. Each facet corresponds to the combination of the row and column function, with the x-axis indicating the used α. AUC values are calculated based on 50 runs on 5 instances, with a budget of 10 000 function evaluations.

The variance observed in Figure 10 might indicate that, in order to get a stable view of the exact behavior at this transition point, a wider variety of instances should be used to obtain a more robust performance estimate.
However, when considering the extreme differences in AUC observed in Figure 11, this variance invites a more detailed study into the interaction between the instance generation process (e.g., the placement of the optimal solution) and the search behavior of the used algorithm.

Next to the instance generation process, another important factor to consider when analyzing the performance of optimization algorithms on these affine function combinations is the scaling of the objective values. While it is common practice to ignore this scaling, so that the same target values (precision to the optimum) can be used, for example to compute aggregated ECDF curves, the ways in which different problems scale their objective values do influence how we should interpret the results. This becomes increasingly obvious when considering the affine combinations of these problems. In Figure 12, we show the convergence plot of diagonal CMA-ES on the combination of F16 and F11. We clearly see from the left part of this curve that the initial values found vary widely for the different combinations, ranging from 10⁷ when α = 0 to 10² when α = 1. However, the change in scale is not the only factor impacting the performance. The shape of the curve changes noticeably after the initialization, which matches the change in AUC observed in Figure 7.

Figure 9: Area under the ECDF curve for Differential Evolution on each of the affine combinations between the selected BBOB problems. Each facet corresponds to the combination of the row and column function, with the x-axis indicating the used α. AUC values are calculated based on 50 runs on 5 instances, with a budget of 10 000 function evaluations.

Figure 10: Distribution of per-instance normalized AUC values for the selected algorithms (DiagonalCMA, DifferentialEvolution, modcma) on the affine combination between F2 and F16. AUC values are calculated based on 50 runs on 5 instances, with a budget of 10 000 function evaluations.

Figure 11: Distribution of per-instance normalized AUC values for the selected algorithms on the affine combination between F21 and F9. AUC values are calculated based on 50 runs on 5 instances, with a budget of 10 000 function evaluations.

Figure 12: Evolution of the geometric mean function value found by modular CMA-ES for the affine combination of F16 and F11, instance 1 for both functions. Each line corresponds to 50 runs with the specified α.

To investigate the reason for this change in behavior, we can study the optimization trajectory of diagonal CMA-ES on these functions. Since this is not feasible to visualize in the original 5-dimensional space, we repeat the data collection on the 2-dimensional version of these functions. In Figure 13, we show the landscapes of the affine combinations between F11 and F16 for several values of α.
We highlight the best point found by the diagonal CMA-ES in each of its 50 runs on this instance. This plot clearly shows the differences in scale between the original problems. In addition, we see that as α gets closer to 1, the algorithm gets stuck in the local optima less often. The global structure added by F11 is strong enough to guide the CMA-ES to the area containing the global optimum. However, when the influence of F11 becomes too large, the difficulty of finding the correct search direction has a strong impact on the convergence behavior. As such, values of α closer to 0.5 seem to provide a mix of the multimodality of F16 and the challenges of F11, which makes the result a challenging problem to solve for the CMA-ES.

Figure 13: Evolution of the landscape (log-scaled function values) of the affine combination between F11 (α = 1) and F16 (α = 0), instance 1 for both functions, for varying α. The red circle highlights the location of the global optimum. The crosses correspond to the best point found in each of 50 runs of the modular CMA-ES.

Figure 14: Evolution of the landscape (log-scaled function values) of the affine combination between F21 (α = 1) and F9 (α = 0), instance 1 for both functions, for varying α. The red circle highlights the location of the global optimum. The crosses correspond to the best point found in each of 50 runs of Diagonal CMA-ES.

While the combination between F11 and F16 seems to create functions that are more challenging, Figure 7 shows that there are function combinations where the opposite is true. The combination between F9 and F21 displays interesting behavior. While the way of performing initialization might explain the asymmetry between (F9, F21, 0) and (F21, F9, 1), it does not explain the increase in AUC for α close to 0.5. We visualize the change in landscape, and the corresponding solutions found by the diagonal CMA-ES, in Figure 14. In this figure, we see that when α = 0, the CMA-ES finds solutions on the ridge of the function, but most of the runs do not reach the optimum within the given budget. This indicates that the characteristic difficulty of F9, the algorithm having to consistently adapt its search direction [7], hinders the convergence of the used diagonal CMA-ES. However, as α increases, the structure of F21 gets added, which increases the number of ways in which the algorithm can approach the optimum value. For α = 1, the multimodality of F21 completely takes over, trapping some runs in local optima and thus decreasing the performance of the algorithm.
This showcases that combining these two functions in this way creates a function where the original difficulties of both are combined in a way that negates them both, which is then exploited by the CMA-ES.

7 CONCLUSIONS AND FUTURE WORK
Affine combinations of BBOB problems offer a new way to investigate the behavior of optimization algorithms. We have shown how combinations of arbitrary functions with a sphere model can be used to identify the impact of added global structure on the performance of a set of algorithms. In addition, combinations between functions with different high-level characteristics allowed us to observe transitions between different optimization challenges. While this investigation is not exhaustive, it highlights the potential benefit of utilizing these new function combinations for gaining an understanding of the behavior of optimization algorithms.

However, these benefits in terms of analysis options also come with several challenges which have to be considered. We identified the following aspects:

Scaling. As identified when these combinations were proposed [4], the differences in scale between two problems can be significant. While we aimed to reduce this impact by considering a logarithmically scaled weighting, it is clear from our experiments that the scale still plays a large role in the way we interpret the performance. Finding ways to combine the landscapes of two functions while maintaining a consistent range of function values is still an open question.

Instances. The BBOB suite is built on the idea that each function can be instantiated in many ways. This is achieved through several transformations, the most common of which is moving the optimum to a different location in the domain. The results we present show that the way in which these optimal locations are chosen can have a large impact on the performance of optimization algorithms. Since the optima are not distributed uniformly in the domain, some functions have different kinds of bias, which can be exploited by an algorithm. The question of how to fairly consider different instance generation mechanisms when making use of function combinations is thus highly interlinked with questions about how well performance observed on a set of BBOB instances generalizes.

Even with these challenges in mind, there are many potential use cases for these affine function combinations. One aspect in which they can prove useful is the training of algorithm selection models [11], as they can significantly increase the size and variety of training data, which is an important consideration for testing generalizability.

One final aspect in which the benchmark data on these function combinations can be further utilized is by linking it back to the exploratory landscape analysis which inspired their creation. Since the combinations can smoothly fill the landscape feature space, this can be combined with algorithm performance to get a more fine-grained view of the way in which the landscape interacts with different algorithms [9, 22].

ACKNOWLEDGMENTS
Our work is financially supported by the ANR-22-ERCS-0003-01 project VARIATION and by the CNRS INS2I project IOHprofiler.
This work was performed using the ALICE compute resources provided by Leiden University.

REFERENCES
[1] Thomas Bartz-Beielstein, Carola Doerr, Jakob Bossek, Sowmya Chandrasekaran, Tome Eftimov, Andreas Fischbach, Pascal Kerschke, Manuel López-Ibáñez, Katherine M. Malan, Jason H. Moore, Boris Naujoks, Patryk Orzechowski, Vanessa Volz, Markus Wagner, and Thomas Weise. 2020. Benchmarking in Optimization: Best Practice and Open Issues. CoRR abs/2007.03488 (2020). https://arxiv.org/abs/2007.03488
[2] Jacob de Nobel, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. Tuning as a means of assessing the benefits of new ideas in interplay with existing algorithmic modules. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'21, Companion material). ACM, 1375–1384. https://doi.org/10.1145/3449726.3463167
[3] Jacob de Nobel, Furong Ye, Diederick Vermetten, Hao Wang, Carola Doerr, and Thomas Bäck. 2021. IOHexperimenter: Benchmarking Platform for Iterative Optimization Heuristics. CoRR abs/2111.04077 (2021). https://arxiv.org/abs/2111.04077
[4] Konstantin Dietrich and Olaf Mersmann. 2022. Increasing the Diversity of Benchmark Function Sets Through Affine Recombination. In Parallel Problem Solving from Nature (PPSN XVII), Proceedings, Part I (Lecture Notes in Computer Science, Vol. 13398). Springer, 590–602. https://doi.org/10.1007/978-3-031-14714-2_41
[5] Nikolaus Hansen, Anne Auger, Dimo Brockhoff, and Tea Tušar. 2022. Anytime Performance Assessment in Blackbox Optimization Benchmarking. IEEE Transactions on Evolutionary Computation 26, 6 (2022), 1293–1305.
[6] Nikolaus Hansen, Anne Auger, Raymond Ros, Olaf Mersmann, Tea Tušar, and Dimo Brockhoff. 2021. COCO: A platform for comparing continuous optimizers in a black-box setting. Optimization Methods and Software 36, 1 (2021), 114–144.
[7] Nikolaus Hansen, Steffen Finck, Raymond Ros, and Anne Auger. 2009. Real-Parameter Black-Box Optimization Benchmarking 2009: Noiseless Functions Definitions. Technical Report RR-6829. INRIA. https://hal.inria.fr/inria-00362633/document
[8] Nikolaus Hansen and Andreas Ostermeier. 2001. Completely Derandomized Self-Adaptation in Evolution Strategies. Evolutionary Computation 9, 2 (2001), 159–195. https://doi.org/10.1162/106365601750190398
[9] Anja Jankovic and Carola Doerr. 2020. Landscape-aware fixed-budget performance regression and algorithm selection for modular CMA-ES variants. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'20). ACM, 841–849.
[10] James Kennedy and Russell Eberhart. 1995. Particle swarm optimization. In Proceedings of the International Conference on Neural Networks (ICNN'95). IEEE, 1942–1948. https://doi.org/10.1109/ICNN.1995.488968
[11] Pascal Kerschke, Holger H. Hoos, Frank Neumann, and Heike Trautmann. 2019. Automated Algorithm Selection: Survey and Perspectives. Evolutionary Computation 27, 1 (2019), 3–45. https://doi.org/10.1162/evco_a_00242
[12] Pedro Larrañaga and Jose A. Lozano. 2001. Estimation of Distribution Algorithms: A New Tool for Evolutionary Computation. Vol. 2. Springer Science & Business Media.
[13] Fu Xing Long, Diederick Vermetten, Bas van Stein, and Anna V. Kononova. 2022. BBOB Instance Analysis: Landscape Properties and Algorithm Performance across Problem Instances. CoRR abs/2211.16318 (2022). https://doi.org/10.48550/arXiv.2211.16318
[14] Manuel López-Ibáñez, Juergen Branke, and Luís Paquete. 2021. Reproducibility in evolutionary computation. ACM Transactions on Evolutionary Learning and Optimization 1, 4 (2021), 1–21.
[15] Olaf Mersmann, Bernd Bischl, Heike Trautmann, Mike Preuss, Claus Weihs, and Günter Rudolph. 2011. Exploratory landscape analysis. In Proc. of Genetic and Evolutionary Computation Conference (GECCO'11). ACM, 829–836.
[16] Mario Andrés Muñoz, Michael Kirley, and Kate Smith-Miles. 2022. Analyzing randomness effects on the reliability of exploratory landscape analysis. Natural Computing 21, 2 (2022), 131–154.
[17] Mario A. Muñoz, Yuan Sun, Michael Kirley, and Saman K. Halgamuge. 2015. Algorithm selection for black-box continuous optimization problems: A survey on methods and challenges. Information Sciences 317 (2015), 224–245.
[18] Michael J. D. Powell. 1994. A direct search optimization method that models the objective and constraint functions by linear interpolation. Springer.
[19] Jérémy Rapin and Olivier Teytaud. 2018. Nevergrad: A gradient-free optimization platform. https://GitHub.com/FacebookResearch/Nevergrad
[20] Quentin Renau, Johann Dréo, Carola Doerr, and Benjamin Doerr. 2021. Towards explainable exploratory landscape analysis: extreme feature selection for classifying BBOB functions. In Applications of Evolutionary Computation: 24th International Conference, EvoApplications 2021. Springer, 17–33.
[21] Rainer Storn and Kenneth Price. 1997. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization 11, 4 (1997), 341.
[22] Risto Trajanov, Stefan Dimeski, Martin Popovski, Peter Korošec, and Tome Eftimov. 2021. Explainable landscape-aware optimization performance prediction. In 2021 IEEE Symposium Series on Computational Intelligence (SSCI). IEEE, 01–08.
[23] Diederick Vermetten, Bas van Stein, Fabio Caraffini, Leandro L. Minku, and Anna V. Kononova. 2022. BIAS: A Toolbox for Benchmarking Structural Bias in the Continuous Domain. IEEE Transactions on Evolutionary Computation 26, 6 (2022), 1380–1393. https://doi.org/10.1109/TEVC.2022.3189848
[24] Diederick Vermetten, Furong Ye, and Carola Doerr. 2023. Reproducibility files and additional figures. Code and data repository: https://doi.org/10.5281/zenodo.7629706. Figure repository: https://figshare.com/s/68587b6a82d9c6e5eccf
[25] Hao Wang, Diederick Vermetten, Furong Ye, Carola Doerr, and Thomas Bäck. 2022. IOHanalyzer: Detailed Performance Analysis for Iterative Optimization Heuristics. ACM Transactions on Evolutionary Learning and Optimization 2, 1 (2022), 3:1–3:29. https://doi.org/10.1145/3510426. IOHanalyzer is available on CRAN, on GitHub, and as a web-based GUI; see https://iohprofiler.github.io/IOHanalyzer/ for links.

Published as a conference paper at ICLR 2022

OPTIMAL ANN-SNN CONVERSION FOR HIGH-ACCURACY AND ULTRA-LOW-LATENCY SPIKING NEURAL NETWORKS

Tong Bu¹, Wei Fang¹, Jianhao Ding¹, PengLin Dai², Zhaofei Yu¹*, Tiejun Huang¹
¹Peking University, ²Southwest Jiaotong University
*Corresponding author: yuzf12@pku.edu.cn

ABSTRACT
Spiking Neural Networks (SNNs) have gained great attention due to their distinctive properties of low power consumption and fast inference on neuromorphic hardware. As the most effective method to get deep SNNs, ANN-SNN conversion has achieved performance comparable to ANNs on large-scale datasets. Despite this, it requires long time-steps to match the firing rates of SNNs to the activations of ANNs. As a result, the converted SNN suffers severe performance degradation with short time-steps, which hampers the practical application of SNNs. In this paper, we theoretically analyze the ANN-SNN conversion error and derive the estimated activation function of SNNs. Then we propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which can better approximate the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, enabling us to achieve high-accuracy and ultra-low-latency SNNs. We evaluate our method on the CIFAR-10/100 and ImageNet datasets, and show that it outperforms state-of-the-art ANN-SNN conversion and directly trained SNNs in both accuracy and time-steps. To the best of our knowledge, this is the first work to explore high-performance ANN-SNN conversion with ultra-low latency (4 time-steps). Code is available at https://github.com/putshua/SNN_conversion_QCFS

1 INTRODUCTION
Spiking neural networks (SNNs) are biologically plausible neural networks based on the dynamic characteristics of biological neurons (McCulloch & Pitts, 1943; Izhikevich, 2003). As the third generation of artificial neural networks (Maass, 1997), SNNs have attracted great attention due to their distinctive properties over deep analog neural networks (ANNs) (Roy et al., 2019). Each neuron transmits discrete spikes to convey information when exceeding a threshold. For most SNNs, the spiking neurons accumulate the current of the last layer as the output within T inference time-steps. The binarized activation has enabled dedicated neuromorphic-computing hardware (Pei et al., 2019; DeBole et al., 2019; Davies et al., 2018). This kind of hardware has excellent advantages in temporal resolution and energy budget.
Existing work has shown the potential for tremendous energy savings with considerably fast inference (Stöckl & Maass, 2021).

In addition to efficiency advantages, the learning algorithms of SNNs have improved by leaps and bounds in recent years. The performance of SNNs trained by backpropagation through time and by ANN-SNN conversion techniques has gradually become comparable to that of ANNs on large-scale datasets (Fang et al., 2021; Rueckauer et al., 2017). Both techniques benefit from the setting of the SNN inference time. Setting longer time-steps in backpropagation can make the gradients of surrogate functions more reliable (Wu et al., 2018; Neftci et al., 2019; Zenke & Vogels, 2021). However, the price is enormous resource consumption during training; existing platforms such as TensorFlow and PyTorch, based on CUDA, have limited optimization for SNN training. In contrast, ANN-SNN conversion usually depends on a longer inference time to reach accuracy comparable to the original ANN (Sengupta et al., 2019), because it is based on the equivalence of the ReLU activation and the integrate-and-fire model's firing rate (Cao et al., 2015). Although a longer inference time can further reduce the conversion error, it also hampers the practical application of SNNs on neuromorphic chips.

The dilemma of ANN-SNN conversion is that there exists a remaining potential term in the conversion theory which is hard to eliminate in a few time-steps (Rueckauer et al., 2016). Although many methods have been proposed to improve the conversion accuracy, such as weight normalization (Diehl et al., 2015), threshold rescaling (Sengupta et al., 2019), soft reset (Han & Roy, 2020), and threshold shift (Deng & Gu, 2020), the tens to hundreds of time-steps required in the baseline works are still unbearable. To obtain high-performance SNNs with ultra-low latency (e.g., 4 time-steps), we list the critical errors in ANN-SNN conversion and provide solutions for each error. Our main contributions are summarized as follows:

• We go deeper into the errors in ANN-SNN conversion and ascribe them to clipping error, quantization error, and unevenness error. We find that unevenness error, which is caused by changes in the timing of arriving spikes and has been neglected in previous works, can induce more or fewer spikes than expected.
• We propose the quantization clip-floor-shift activation function to replace the ReLU activation function in source ANNs, which better approximates the activation function of SNNs. We prove that the expected conversion error between SNNs and ANNs is zero, indicating that we can achieve high-performance converted SNNs at ultra-low time-steps.
• We evaluate our method on the CIFAR-10, CIFAR-100, and ImageNet datasets. Compared with both ANN-SNN conversion and backpropagation training methods, the proposed method exceeds state-of-the-art accuracy with fewer time-steps. For example, we reach a top-1 accuracy of 91.18% on CIFAR-10 with an unprecedented 2 time-steps.

2 PRELIMINARIES
In this section, we first briefly review the neuron models for SNNs and ANNs. Then we introduce the basic framework for ANN-SNN conversion.

Neuron model for ANNs.
For ANNs, the computation of an analog neuron can be simplified as the combination of a linear transformation and a non-linear mapping:

a^l = h(W^l a^{l−1}),  l = 1, 2, ..., M,    (1)

where the vector a^l denotes the output of all neurons in the l-th layer, W^l denotes the weight matrix between layer l and layer l−1, and h(·) is the ReLU activation function.

Neuron model for SNNs. Similar to previous works (Cao et al., 2015; Diehl et al., 2015; Han et al., 2020), we consider the Integrate-and-Fire (IF) model for SNNs. If the IF neurons in the l-th layer receive the input x^{l−1}(t) from the last layer, the temporal potential of the IF neurons can be defined as:

m^l(t) = v^l(t−1) + W^l x^{l−1}(t),    (2)

where m^l(t) and v^l(t) represent the membrane potential before and after the trigger of a spike at time-step t, and W^l denotes the weights in the l-th layer. As soon as any element m^l_i(t) of m^l(t) exceeds the firing threshold θ^l, the neuron elicits a spike and updates the membrane potential v^l_i(t). To avoid information loss, we use the "reset-by-subtraction" mechanism (Rueckauer et al., 2017; Han et al., 2020) instead of the "reset-to-zero" mechanism, which means the membrane potential v^l_i(t) is reduced by the threshold value θ^l if the neuron fires. Based on the threshold-triggered firing mechanism and the "reset-by-subtraction" of the membrane potential after firing discussed above, we can write the update rule of the membrane potential as:

s^l(t) = H(m^l(t) − θ^l),    (3)
v^l(t) = m^l(t) − s^l(t) θ^l.    (4)

Here s^l(t) refers to the output spikes of all neurons in layer l at time t, whose elements equal 1 if there is a spike and 0 otherwise, H(·) is the Heaviside step function, and θ^l is the vector of the firing threshold θ^l. Similar to Deng & Gu (2020), we suppose that a postsynaptic neuron in the l-th layer receives the unweighted postsynaptic potential θ^l if the presynaptic neuron in the (l−1)-th layer fires a spike, that is:

x^l(t) = s^l(t) θ^l.    (5)

Table 1: Summary of notations in this paper
l         Layer index                      x^l(t)    Unweighted postsynaptic potential (PSP)
i         Neuron index                     s^l(t)    Output spikes
W^l       Weight                           φ^l(T)    Average unweighted PSP before time T
a^l       ANN activation values            z^l       Weighted input from layer l−1
t         Time-step                        h(·)      ReLU function
T         Total time-steps                 H(·)      Heaviside step function
θ^l       Threshold                        L         Quantization steps for ANN
λ^l       Trainable threshold in ANN       Err^l     Conversion error
m^l(t)    Potential before firing          Ẽrr^l     Estimated conversion error
v^l(t)    Potential after firing           ϕ         Shift of quantization clip-floor function

ANN-SNN conversion. The key idea of ANN-SNN conversion is to map the activation value of an analog neuron in the ANN to the firing rate (or average postsynaptic potential) of a spiking neuron in the SNN. Specifically, we can get the potential update equation by combining Equations 2–4:

v^l(t) − v^l(t−1) = W^l x^{l−1}(t) − s^l(t) θ^l.    (6)

Equation 6 describes the basic behavior of the spiking neurons used in ANN-SNN conversion.
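To make these definitions concrete, the following minimal sketch, assuming a single IF neuron with v^l(0) = 0 and arbitrary non-negative inputs, simulates Equations (2)-(5) and numerically checks the averaging identity derived next as Equations (7)-(8); all names are our own illustration, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

def run_if_neuron(weighted_in, theta=1.0):
    """Simulate one IF neuron with reset-by-subtraction (Eqs. 2-4) and return
    its unweighted postsynaptic potentials x^l(t) = s^l(t) * theta (Eq. 5)."""
    v, psp = 0.0, []
    for z in weighted_in:               # z = W^l x^{l-1}(t): weighted input at step t
        v += z                          # Eq. (2): integrate into the membrane potential
        s = 1.0 if v >= theta else 0.0  # Eq. (3): threshold-triggered spike
        v -= s * theta                  # Eq. (4): reset by subtraction
        psp.append(s * theta)
    return np.array(psp), v

T, theta = 8, 1.0
weighted_in = rng.uniform(0.0, 0.4, T)        # arbitrary non-negative input currents
psp, v_T = run_if_neuron(weighted_in, theta)

phi_out = psp.mean()                           # phi^l(T), the average output PSP
phi_check = weighted_in.mean() - (v_T - 0.0) / T  # Eq. (8) with v^l(0) = 0
print(phi_out, phi_check)                      # agree up to floating-point error
```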
Table 1: Summary of notations in this paper.

Symbol | Definition
l | Layer index
i | Neuron index
W^l | Weight
a^l | ANN activation values
t | Time-step
T | Total time-steps
θ^l | Threshold
λ^l | Trainable threshold in ANN
m^l(t) | Potential before firing
v^l(t) | Potential after firing
x^l(t) | Unweighted PSP¹
s^l(t) | Output spikes
φ^l(T) | Average unweighted PSP before time T
z^l | Weighted input from layer l−1
h(·) | ReLU function
H(·) | Heaviside step function
L | Quantization step for ANN
Err^l | Conversion error
Ẽrr^l | Estimated conversion error
ϕ | Shift of quantization clip-floor function
¹ Postsynaptic potential.

ANN-SNN conversion. The key idea of ANN-SNN conversion is to map the activation value of an analog neuron in the ANN to the firing rate (or average postsynaptic potential) of a spiking neuron in the SNN. Specifically, we can get the potential update equation by combining Equation 2 – Equation 4:

v^l(t) − v^l(t−1) = W^l x^{l−1}(t) − s^l(t) θ^l.  (6)

Equation 6 describes the basic function of the spiking neurons used in ANN-SNN conversion. By summing Equation 6 from time 1 to T and dividing both sides by T, we have:

(v^l(T) − v^l(0)) / T = W^l (Σ_{i=1}^T x^{l−1}(i)) / T − (Σ_{i=1}^T s^l(i) θ^l) / T.  (7)

If we use φ^{l−1}(T) = (Σ_{i=1}^T x^{l−1}(i)) / T to denote the average postsynaptic potential during the period from 0 to T and substitute Equation 5 into Equation 7, then we get:

φ^l(T) = W^l φ^{l−1}(T) − (v^l(T) − v^l(0)) / T.  (8)

Equation 8 describes the relationship between the average postsynaptic potentials of neurons in adjacent layers. Note that φ^l(T) ≥ 0. If we set the initial potential v^l(0) to zero and neglect the remaining term v^l(T)/T when the number of simulation time-steps T is long enough, the converted SNN has nearly the same activation function as the source ANN (Equation 1). However, a high T causes long inference latency, which hampers the practical application of SNNs. Therefore, this paper aims to implement high-performance ANN-SNN conversion with extremely low latency.

3 CONVERSION ERROR ANALYSIS

In this section, we analyze in detail the conversion error between the source ANN and the converted SNN in each layer. In the following, we assume that both the ANN and the SNN receive the same input from layer l−1, that is, a^{l−1} = φ^{l−1}(T), and then analyze the error in layer l. For simplicity, we use z^l = W^l φ^{l−1}(T) = W^l a^{l−1} to denote the weighted input from layer l−1 for both the ANN and the SNN. The absolute conversion error is exactly the output of the converted SNN minus the output of the ANN:

Err^l = φ^l(T) − a^l = z^l − (v^l(T) − v^l(0)) / T − h(z^l),  (9)

where h(z^l) = ReLU(z^l). It can be seen from Equation 9 that the conversion error is nonzero if v^l(T) − v^l(0) ≠ 0 and z^l > 0. In fact, the conversion error is caused by three factors.

Clipping error. The output φ^l(T) of the SNN is in the range [0, θ^l], as φ^l(T) = (Σ_{i=1}^T x^l(i)) / T = (Σ_{i=1}^T s^l(i)) θ^l / T (see Equation 5). However, the output a^l of the ANN lies in a much larger range [0, a^l_max], where a^l_max denotes the maximum value of a^l. As illustrated in Figure 1a, a^l can be mapped to φ^l(T) by the following equation:

φ^l(T) = clip( (θ^l / T) ⌊a^l T / λ^l⌋, 0, θ^l ).  (10)

Here the clip function sets the upper bound θ^l and the lower bound 0, ⌊·⌋ denotes the floor function, and λ^l represents the actual maximum value of the output a^l that is mapped to the maximum value θ^l of φ^l(T). Considering that nearly 99.9% of the activations a^l of an ANN lie in the range [0, a^l_max/3], Rueckauer et al. (2016) suggested choosing λ^l according to the 99.9th activation percentile. The activations between λ^l and a^l_max in the ANN are mapped to the same value θ^l in the SNN, which causes a conversion error called clipping error.
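For concreteness, the mapping of Equation 10 can be written out directly; the following Python sketch (our own illustration, not code from the paper) shows how ANN activations are snapped onto the T + 1 discrete output levels of the SNN.

import numpy as np

def clip_floor(a, T, lam, theta):
    # Map ANN activations a to the discrete SNN output phi^l(T) of Equation 10.
    return np.clip(theta / T * np.floor(a * T / lam), 0.0, theta)

# With T = 4 and lam = theta = 1, activations are snapped down to multiples of 1/4;
# negative inputs are clipped to 0 and inputs above lam are clipped to theta.
a = np.array([-0.1, 0.05, 0.3, 0.49, 0.9, 1.3])
print(clip_floor(a, T=4, lam=1.0, theta=1.0))  # [0.  0.  0.25 0.25 0.75 1. ]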
[Figure 1: Conversion error between the source ANN and the converted SNN. Panel (a) illustrates the clipping error; panels (b)–(d) show even spikes, more spikes, and fewer spikes under different input spike timings. s^{l−1}_1 and s^{l−1}_2 denote the output spikes of two neurons in layer l−1, and s^l_1 denotes the output spikes of a neuron in layer l.]

Quantization error (flooring error). The output spikes s^l(t) are discrete events, so φ^l(T) is discrete, with quantization resolution θ^l/T (see Equation 10). When mapping a^l to φ^l(T), there is an unavoidable quantization error. For example, as illustrated in Figure 1a, the activations of the ANN in the range [λ^l/T, 2λ^l/T) are mapped to the same value θ^l/T of the SNN.

Unevenness error. Unevenness error is caused by the unevenness of the input spikes. If the timing of the arriving spikes changes, the output firing rates may change, which causes conversion error. There are two situations: more spikes than expected or fewer spikes than expected. To see this, suppose that in the source ANN two analog neurons in layer l−1 are connected to an analog neuron in layer l with weights 2 and −2, and that the output vector a^{l−1} of the neurons in layer l−1 is [0.6, 0.4]. Besides, suppose that in the converted SNN the two spiking neurons in layer l−1 fire 3 spikes and 2 spikes, respectively, in 5 time-steps (T = 5), and that the threshold θ^{l−1} = 1. Thus φ^{l−1}(T) = (Σ_{i=1}^T s^{l−1}(i)) θ^{l−1} / T = [0.6, 0.4]. Even though φ^{l−1}(T) = a^{l−1} and the weights are the same for the ANN and the SNN, φ^l(T) can differ from a^l if the timing of the arriving spikes changes. According to Equation 1, the ANN output a^l = W^l a^{l−1} = [2, −2][0.6, 0.4]^T = 0.4. As for the SNN, supposing that the threshold θ^l = 1, there are three possible output firing rates, illustrated in Figure 1(b)–(d). If the two presynaptic neurons fire at t = 1, 3, 5 and t = 2, 4, respectively, the postsynaptic neuron fires two spikes at t = 1, 3, and φ^l(T) = (Σ_{i=1}^T s^l(i)) θ^l / T = 0.4 = a^l. However, if the presynaptic neurons fire at t = 1, 2, 3 and t = 4, 5, respectively, the postsynaptic neuron fires four spikes at t = 1, 2, 3, 4, and φ^l(T) = 0.8 > a^l. If the presynaptic neurons fire at t = 3, 4, 5 and t = 1, 2, respectively, the postsynaptic neuron fires only one spike, at t = 5, and φ^l(T) = 0.2 < a^l.
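This worked example is easy to reproduce. The short simulation below (our own sketch, applying the IF dynamics of Equations 2–4 with at most one spike per time-step) recovers the three outcomes 0.4, 0.8 and 0.2.

import numpy as np

def run_if_neuron(spike_trains, w, theta, T):
    # Simulate one IF neuron (reset-by-subtraction) driven by binary spike trains;
    # the presynaptic PSP per spike is theta^{l-1} = 1, so inputs are just w @ spikes.
    v, n_out = 0.0, 0
    for t in range(T):
        v += w @ spike_trains[:, t]   # Equation 2
        if v >= theta:                # Equations 3-4
            v -= theta
            n_out += 1
    return n_out * theta / T          # phi^l(T)

w = np.array([2.0, -2.0])
cases = {
    "even  (t=1,3,5 / t=2,4)": np.array([[1, 0, 1, 0, 1], [0, 1, 0, 1, 0]]),
    "more  (t=1,2,3 / t=4,5)": np.array([[1, 1, 1, 0, 0], [0, 0, 0, 1, 1]]),
    "fewer (t=3,4,5 / t=1,2)": np.array([[0, 0, 1, 1, 1], [1, 1, 0, 0, 0]]),
}
for name, s in cases.items():
    print(name, run_if_neuron(s, w, theta=1.0, T=5))
# prints 0.4 (= a^l), 0.8 (more spikes), and 0.2 (fewer spikes)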
These results demonstrate that our quantization clip-floor-shift activation function hardly affects the performance of the ANN.

6.2 COMPARISON WITH THE STATE-OF-THE-ART

Table 2 compares our method with state-of-the-art ANN-SNN conversion methods on CIFAR-10. For low-latency inference (T ≤ 64), our model outperforms all the other methods at the same time-step settings. For T = 32, the accuracy of our method is slightly better than that of the ANN (95.54% vs. 95.52%), whereas the RMP, RTS, RNL, and SNNC-AP methods suffer accuracy losses of 33.3%, 19.48%, 7.42%, and 2.01%, respectively. Moreover, we achieve an accuracy of 93.96% using only 4 time-steps, which is 8 times faster than SNNC-AP, which takes 32 time-steps. For ResNet-20, we achieve an accuracy of 83.75% with 4 time-steps.
Notably, our ultra-low latency performance is comparable with other state-of-the-art supervised training methods, as shown in Table S3 of the Appendix.

[Figure 4: Comparison of the quantization clip-floor activation with and without the shift term. Panels: (a) VGG-16 on CIFAR-10; (b) ResNet-20 on CIFAR-10; (c) VGG-16 on CIFAR-100; (d) ResNet-20 on CIFAR-100. Each panel plots accuracy against simulation time-steps for the settings w/ shift and w/o shift.]

Table 2: Comparison between the proposed method and previous works on the CIFAR-10 dataset.

Architecture | Method | ANN | T=2 | T=4 | T=8 | T=16 | T=32 | T=64 | T≥512
VGG-16 | RMP | 93.63% | - | - | - | - | 60.30% | 90.35% | 93.63%
VGG-16 | TSC | 93.63% | - | - | - | - | - | 92.79% | 93.63%
VGG-16 | RTS | 95.72% | - | - | - | - | 76.24% | 90.64% | 95.73%
VGG-16 | RNL | 92.82% | - | - | - | 57.90% | 85.40% | 91.15% | 92.95%
VGG-16 | SNNC-AP | 95.72% | - | - | - | - | 93.71% | 95.14% | 95.79%
VGG-16 | Ours | 95.52% | 91.18% | 93.96% | 94.95% | 95.40% | 95.54% | 95.55% | 95.59%
ResNet-20 | RMP | 91.47% | - | - | - | - | - | - | 91.36%
ResNet-20 | TSC | 91.47% | - | - | - | - | - | 69.38% | 91.42%
ResNet-20 | Ours | 91.77% | 73.20% | 83.75% | 89.55% | 91.62% | 92.24% | 92.35% | 92.41%
ResNet-18 | RTS¹ | 95.46% | - | - | - | - | 84.06% | 92.48% | 94.42%
ResNet-18 | SNNC-AP¹ | 95.46% | - | - | - | - | 94.78% | 95.30% | 95.45%
ResNet-18 | Ours | 96.04% | 75.44% | 90.43% | 94.82% | 95.92% | 96.08% | 96.06% | 96.06%
¹ RTS and SNNC-AP use an altered ResNet-18, while ours uses the standard ResNet-18.

We further test the performance of our method on a large-scale dataset. Table 3 reports the results on ImageNet; our method also outperforms the others both in terms of high accuracy and ultra-low latency. For ResNet-34, the accuracy of the proposed method is 4.83% higher than SNNC-AP and 69.28% higher than RTS when T = 32, and with 16 time-steps we still achieve an accuracy of 59.35%. For VGG-16, the accuracy of the proposed method is 4.83% higher than SNNC-AP and 68.356% higher than RTS when T = 32, and with 16 time-steps we still achieve an accuracy of 50.97%. These results demonstrate that our method outperforms the previous conversion methods.
More experimental results on CIFAR-100 are given in Table S4 of the Appendix.

6.3 COMPARISON OF QUANTIZATION CLIP-FLOOR AND QUANTIZATION CLIP-FLOOR-SHIFT

Here we further compare the performance of SNNs converted from ANNs with quantization clip-floor activation and from ANNs with quantization clip-floor-shift activation. In Sec. 4, we prove that the expectation of the conversion error reaches 0 with the quantization clip-floor-shift activation, no matter whether T and L are the same or not. To verify this, we set L to 4 and train ANNs with quantization clip-floor activation and quantization clip-floor-shift activation, respectively. Figure 4 shows how the accuracy of the converted SNNs changes with respect to the time-steps T. The accuracy of the converted SNN (green curve) from the ANN with quantization clip-floor activation first increases and then decreases rapidly as the time-steps increase, because we cannot guarantee that the conversion error is zero when T is not equal to L. Its best performance is still lower than that of the source ANN (green dotted line). In contrast, the accuracy of the converted SNN from the ANN with quantization clip-floor-shift activation (blue curve) increases with T, and it reaches the same accuracy as the source ANN (blue dotted line) when the time-steps exceed 16.

6.4 EFFECT OF QUANTIZATION STEPS L

In our method, the quantization step L is a hyperparameter that affects the accuracy of the converted SNN. To analyze the effect of L and better determine its optimal value, we train VGG-16/ResNet-20 networks with quantization clip-floor-shift activation using different quantization steps L, including 2, 4, 8, 16 and 32, and then convert them to SNNs.
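As our own illustration (the exact trainable form is Equation 15 in the appendix, not reproduced here), the quantization clip-floor-shift activation used in these experiments can be sketched as follows, with the shift fixed at its optimal value ϕ = 1/2. During ANN training, the non-differentiable floor is typically bypassed with a straight-through gradient estimator.

import numpy as np

def qcfs(z, L, lam, phi=0.5):
    # Quantization clip-floor-shift activation replacing ReLU in the source ANN:
    # a staircase with L levels of step lam/L, shifted by phi of a step.
    return lam * np.clip(np.floor(z * L / lam + phi) / L, 0.0, 1.0)

z = np.linspace(-0.2, 1.2, 8)
print(qcfs(z, L=4, lam=1.0))           # with shift: steps are centered on the ReLU line
print(qcfs(z, L=4, lam=1.0, phi=0.0))  # without shift: plain clip-floor, biased downward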
The experimental results on the CIFAR-10/100 datasets are shown in Table S2 and Figure 5, where the black dotted line denotes the ANN accuracy and the colored curves represent the accuracy of the converted SNNs.

[Figure 5: Influence of different quantization steps. Panels: (a) VGG-16 on CIFAR-10; (b) ResNet-20 on CIFAR-10; (c) VGG-16 on CIFAR-100; (d) ResNet-20 on CIFAR-100. Each panel plots accuracy against simulation time-steps for L = 2, 4, 8, 16, 32.]

Table 3: Comparison between the proposed method and previous works on the ImageNet dataset.

Architecture | Method | ANN | T=16 | T=32 | T=64 | T=128 | T=256 | T≥1024
ResNet-34 | RMP | 70.64% | - | - | - | - | - | 65.47%
ResNet-34 | TSC | 70.64% | - | - | - | - | 61.48% | 65.10%
ResNet-34 | RTS | 75.66% | - | 0.09% | 0.12% | 3.19% | 47.11% | 75.08%
ResNet-34 | SNNC-AP | 75.66% | - | 64.54% | 71.12% | 73.45% | 74.61% | 75.45%
ResNet-34 | Ours | 74.32% | 59.35% | 69.37% | 72.35% | 73.15% | 73.37% | 73.39%
VGG-16 | RMP | 73.49% | - | - | - | - | 48.32% | 73.09%
VGG-16 | TSC | 73.49% | - | - | - | - | 69.71% | 73.46%
VGG-16 | RTS | 75.36% | - | 0.114% | 0.118% | 0.122% | 1.81% | 73.88%
VGG-16 | SNNC-AP | 75.36% | - | 63.64% | 70.69% | 73.32% | 74.23% | 75.32%
VGG-16 | Ours | 74.29% | 50.97% | 68.47% | 72.85% | 73.97% | 74.22% | 74.32%

In order to balance the trade-off between low latency and high accuracy, we evaluate the performance of the converted SNNs mainly in two respects. First, we focus on the SNN accuracy at ultra-low latency (within 4 time-steps). Second, we consider the best accuracy of the SNN. It is obvious that the SNN accuracy at ultra-low latency decreases as L increases. However, a too small L decreases the model capacity and leads to accuracy loss. When L = 2, there is a clear gap between the best accuracy of the SNN and the source ANN. The best accuracy of the SNN approaches that of the source ANN when L > 4. In conclusion, the setting of the parameter L mainly depends on whether the aim is low latency or best accuracy. The recommended quantization step L is 4 or 8, which leads to high-performance converted SNNs at both small and very large time-steps.

7 DISCUSSION AND CONCLUSION

In this paper, we present an ANN-SNN conversion method enabling high-accuracy and ultra-low-latency deep SNNs.
We propose the quantization clip-floor-shift activation to replace the ReLU activation, which hardly affects the performance of ANNs and is closer to the activation of SNNs. Furthermore, we prove that the expected conversion error is zero, no matter whether the time-steps of the SNN and the quantization steps of the ANN are the same or not. We achieve state-of-the-art accuracy with fewer time-steps on the CIFAR-10, CIFAR-100, and ImageNet datasets. Our results can benefit implementations on neuromorphic hardware and pave the way for the large-scale application of SNNs. Different from the work of Deng & Gu (2020), which adds a bias to the converted SNN to shift the theoretical ANN-SNN curve and minimize the quantization error, we add the shift term to the quantization clip-floor activation function and use this quantization clip-floor-shift function to train the source ANN. We show that the shift term can overcome the performance degradation problem when the time-steps and the quantization steps are not matched. Due to the unevenness error, there still exists a gap between ANN accuracy and SNN accuracy, even when L = T. Moreover, it is hard to achieve high-performance ANN-SNN conversion when the time-steps T = 1. All these problems deserve further research. One advantage of conversion-based methods is that they can reduce the overall computing cost while maintaining performance comparable to the source ANN. Combining conversion-based methods with model compression may help significantly reduce neuron activity and thus reduce energy consumption without suffering from accuracy loss (Kundu et al., 2021; Rathi & Roy, 2021), which is a promising direction.

ACKNOWLEDGEMENT

This work was supported by the National Natural Science Foundation of China under contracts No. 62176003 and No. 62088102.

REFERENCES

Yoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.

Sander M Bohte, Joost N Kok, and Han La Poutré. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing, 48(1-4):17–37, 2002.

Léon Bottou. Stochastic gradient descent tricks. In Neural Networks: Tricks of the Trade, pp. 421–436. Springer, 2012.

Yongqiang Cao, Yang Chen, and Deepak Khosla. Spiking deep convolutional neural networks for energy-efficient object recognition. International Journal of Computer Vision, 113(1):54–66, 2015.

Ekin D Cubuk, Barret Zoph, Dandelion Mane, Vijay Vasudevan, and Quoc V Le. AutoAugment: Learning augmentation strategies from data. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 113–123, 2019.

Mike Davies, Narayan Srinivasa, Tsung-Han Lin, Gautham Chinya, Yongqiang Cao, Sri Harsha Choday, Georgios Dimou, Prasad Joshi, Nabil Imam, Shweta Jain, et al. Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro, 38(1):82–99, 2018.

Michael V DeBole, Brian Taba, Arnon Amir, Filipp Akopyan, Alexander Andreopoulos, William P Risk, Jeff Kusnitz, Carlos Ortega Otero, Tapan K Nayak, Rathinakumar Appuswamy, et al. TrueNorth: Accelerating from zero to 64 million neurons in 10 years.
Computer, 52(5):20–29, 2019.

Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 248–255. IEEE, 2009.

Shikuang Deng and Shi Gu. Optimal conversion of conventional artificial neural networks to spiking neural networks. In International Conference on Learning Representations, 2020.

Terrance DeVries and Graham W Taylor. Improved regularization of convolutional neural networks with Cutout. arXiv preprint arXiv:1708.04552, 2017.

Peter U Diehl, Daniel Neil, Jonathan Binas, Matthew Cook, Shih-Chii Liu, and Michael Pfeiffer. Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing. In International Joint Conference on Neural Networks, pp. 1–8, 2015.

Jianhao Ding, Zhaofei Yu, Yonghong Tian, and Tiejun Huang. Optimal ANN-SNN conversion for fast and accurate inference in deep spiking neural networks. In International Joint Conference on Artificial Intelligence, pp. 2328–2336, 2021.

Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, and Yonghong Tian. Deep residual learning in spiking neural networks. arXiv preprint arXiv:2102.04159, 2021.

Bing Han and Kaushik Roy. Deep spiking neural network: Energy efficiency through time based coding. In European Conference on Computer Vision, pp. 388–404, 2020.

Bing Han, Gopalakrishnan Srinivasan, and Kaushik Roy. RMP-SNN: Residual membrane potential neuron for enabling deeper high-accuracy and low-latency spiking neural network. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 13558–13567, 2020.

Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778, 2016.

Nguyen-Dong Ho and Ik-Joon Chang. TCL: An ANN-to-SNN conversion with trainable clipping layers. arXiv preprint arXiv:2008.04509, 2020.

Eugene M Izhikevich. Simple model of spiking neurons. IEEE Transactions on Neural Networks, 14(6):1569–1572, 2003.

Saeed Reza Kheradpisheh and Timothée Masquelier. Temporal backpropagation for spiking neural networks with one spike per neuron. International Journal of Neural Systems, 30(06):2050027, 2020.

Jinseok Kim, Kyungsu Kim, and Jae-Joon Kim. Unifying activation- and timing-based learning rules for spiking neural networks. In Advances in Neural Information Processing Systems, pp. 19534–19544, 2020.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Souvik Kundu, Gourav Datta, Massoud Pedram, and Peter A Beerel. Spike-thrift: Towards energy-efficient deep spiking neural networks by limiting spiking activity via attention-guided compression. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pp. 3953–3962, 2021.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Chankyu Lee, Syed Shakib Sarwar, Priyadarshini Panda, Gopalakrishnan Srinivasan, and Kaushik Roy. Enabling spike-based backpropagation for training deep neural network architectures.
Frontiers in Neuroscience, 14, 2020.

Jun Haeng Lee, Tobi Delbruck, and Michael Pfeiffer. Training deep spiking neural networks using backpropagation. Frontiers in Neuroscience, 10:508, 2016.

Yuhang Li, Shikuang Deng, Xin Dong, Ruihao Gong, and Shi Gu. A free lunch from ANN: Towards efficient, accurate spiking neural networks calibration. In International Conference on Machine Learning, pp. 6316–6325, 2021.

Ilya Loshchilov and Frank Hutter. SGDR: Stochastic gradient descent with warm restarts. In International Conference on Learning Representations, 2016.

Wolfgang Maass. Networks of spiking neurons: the third generation of neural network models. Neural Networks, 10(9):1659–1671, 1997.

Riccardo Massa, Alberto Marchisio, Maurizio Martina, and Muhammad Shafique. An efficient spiking neural network for recognizing gestures with a DVS camera on the Loihi neuromorphic processor. In International Joint Conference on Neural Networks, pp. 1–9, 2020.

Warren S McCulloch and Walter Pitts. A logical calculus of the ideas immanent in nervous activity. The Bulletin of Mathematical Biophysics, 5(4):115–133, 1943.

Paul A Merolla, John V Arthur, Rodrigo Alvarez-Icaza, Andrew S Cassidy, Jun Sawada, Filipp Akopyan, Bryan L Jackson, Nabil Imam, Chen Guo, Yutaka Nakamura, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface. Science, 345(6197):668–673, 2014.

Emre O Neftci, Hesham Mostafa, and Friedemann Zenke. Surrogate gradient learning in spiking neural networks: Bringing the power of gradient-based optimization to spiking neural networks. IEEE Signal Processing Magazine, 36(6):51–63, 2019.

Jing Pei, Lei Deng, Sen Song, Mingguo Zhao, Youhui Zhang, Shuang Wu, Guanrui Wang, Zhe Zou, Zhenzhi Wu, Wei He, et al. Towards artificial general intelligence with hybrid Tianjic chip architecture. Nature, 572(7767):106–111, 2019.

Ning Qiao, Hesham Mostafa, Federico Corradi, Marc Osswald, Fabio Stefanini, Dora Sumislawska, and Giacomo Indiveri. A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses. Frontiers in Neuroscience, 9:141, 2015.

Nitin Rathi and Kaushik Roy. DIET-SNN: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 2021.

Nitin Rathi, Gopalakrishnan Srinivasan, Priyadarshini Panda, and Kaushik Roy. Enabling deep spiking neural networks with hybrid conversion and spike timing dependent backpropagation. In International Conference on Learning Representations, 2019.

Kaushik Roy, Akhilesh Jaiswal, and Priyadarshini Panda. Towards spike-based machine intelligence with neuromorphic computing. Nature, 575(7784):607–617, 2019.

Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, and Michael Pfeiffer. Theory and tools for the conversion of analog to spiking convolutional neural networks. arXiv preprint arXiv:1612.04052, 2016.

Bodo Rueckauer, Iulia-Alexandra Lungu, Yuhuang Hu, Michael Pfeiffer, and Shih-Chii Liu. Conversion of continuous-valued deep networks to efficient event-driven networks for image classification.
Frontiers in Neuroscience, 11:682, 2017.

Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015. doi: 10.1007/s11263-015-0816-y.

Abhronil Sengupta, Yuting Ye, Robert Wang, Chiao Liu, and Kaushik Roy. Going deeper in spiking neural networks: VGG and residual architectures. Frontiers in Neuroscience, 13:95, 2019.

Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.

Sonali Singh, Anup Sarma, Sen Lu, Abhronil Sengupta, Vijaykrishnan Narayanan, and Chita R Das. Gesture-SNN: Co-optimizing accuracy, latency and energy of SNNs for neuromorphic vision sensors. In IEEE/ACM International Symposium on Low Power Electronics and Design, pp. 1–6, 2021.

Christoph Stöckl and Wolfgang Maass. Optimized spiking neurons can classify images with high accuracy through temporal coding with two spikes. Nature Machine Intelligence, 3(3):230–238, 2021.

Amirhossein Tavanaei, Masoud Ghodrati, Saeed Reza Kheradpisheh, Timothée Masquelier, and Anthony Maida. Deep learning in spiking neural networks. Neural Networks, 111:47–63, 2019.

Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, and Luping Shi. Spatio-temporal backpropagation for training high-performance spiking neural networks. Frontiers in Neuroscience, 12:331, 2018.

Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, and Luping Shi. Direct training for spiking neural networks: Faster, larger, better. In AAAI Conference on Artificial Intelligence, pp. 1311–1318, 2019.

Friedemann Zenke and Tim P Vogels. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks. Neural Computation, 33(4):899–925, 2021.

Wenrui Zhang and Peng Li. Temporal spike sequence learning via backpropagation for deep spiking neural networks. In Advances in Neural Information Processing Systems, pp. 12022–12033, 2020.

A APPENDIX

A.1 NETWORK STRUCTURE AND TRAINING CONFIGURATIONS

Before training the ANNs, we first replace max-pooling with average-pooling and then replace the ReLU activation with the proposed quantization clip-floor-shift activation (Equation 15). After training, we copy all weights from the source ANN to the converted SNN, and set the threshold θ^l in each layer of the converted SNN equal to the maximum activation value λ^l of the source ANN in the same layer. Besides, we set the initial membrane potential v^l(0) of the converted SNN to θ^l/2 to match the optimal shift ϕ = 1/2 of the quantization clip-floor-shift activation in the source ANN.

Besides the common data normalization, we use several data pre-processing techniques. For the CIFAR datasets we resize the images to 32×32, and for the ImageNet dataset we resize the images to 224×224. In addition, we use random cropping, Cutout (DeVries & Taylor, 2017) and AutoAugment (Cubuk et al., 2019) for all datasets.

We use the Stochastic Gradient Descent optimizer (Bottou, 2012) with a momentum of 0.9. The initial learning rate is set to 0.1 for CIFAR-10 and ImageNet, and to 0.02 for CIFAR-100. A cosine decay scheduler (Loshchilov & Hutter, 2016) is used to adjust the learning rate. We apply a weight decay of 5×10⁻⁴ for the CIFAR datasets and of 1×10⁻⁴ for ImageNet. We train all models for 300 epochs. The quantization step L is set to 4 when training all networks on CIFAR-10 and when training VGG-16 and ResNet-18 on CIFAR-100. When training ResNet-20 on CIFAR-100, L is set to 8. When training ResNet-34 and VGG-16 on ImageNet, L is set to 8 and 16, respectively. We use constant input when evaluating the converted SNNs.
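For reference, the optimizer setup described above corresponds to a few lines of PyTorch. This is our own sketch with a stand-in model, not the authors' released code.

import torch

# Stand-in model; the paper trains VGG/ResNet variants with QCFS activations.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))

optimizer = torch.optim.SGD(model.parameters(), lr=0.1,       # 0.02 for CIFAR-100
                            momentum=0.9, weight_decay=5e-4)  # 1e-4 for ImageNet
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):
    ...  # one pass over the training set, updating both W^l and the thresholds lambda^l
    scheduler.step()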
A\\ncosine decay scheduler (Loshchilov & Hutter, 2016) is used t o adjust the learning rate. We apply a\\n5\\u00d710\\u22124weight decay for CIFAR datasets while applying a 1\\u00d710\\u22124weight decay for ImageNet.\\nWe train all models for 300 epochs. The quantization steps Lis set to 4 when training all the\\nnetworks on CIFAR-10, and VGG-16, ResNet-18 on CIFAR-100 da taset. When training ResNet-20\\non CIFAR-100, the parameter Lis set to 8. When training ResNet-34 and VGG-16 on ImageNet,\\nthe parameter Lis set to 8, 16, respectively. We use constant input when eval uating the converted\\nSNNs.\\nA.2 I NTRODUCTION OF DATASETS\\nCIFAR-10. The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 6 000032\\u00d732images in\\n10 classes. There are 50000 training images and 10000 test im ages.\\nCIFAR-100. The CIFAR-100 dataset (Krizhevsky et al., 2009) consists of 6000032\\u00d732images in\\n100 classes. There are 50000 training images and 10000 test i mages.\\nImageNet. We use the ILSVRC 2012 dataset (Russakovsky et al., 2015), wh ich consists 1,281,167\\ntraining images and 50000 testing images.\\nA.3 D ERIVATION OF EQUATION 12AND PROOF OF THEOREM 2\\nDerivation of Equation 11\\nSimilar to ,, We de\\ufb01ne\\nul(t) =Wlxl\\u22121(t). (S1)\\nWe useul\\ni(t)andzl\\nito denote the i-th element in vector ul(t)andzl, respectively. To derive\\nEquation 11, some extra assumptions on the relationship bet ween ANN activation value and SNN\\npostsynaptic potentials are needed, which are showed in Equ ation S2.\\n\\uf8f1\\n\\uf8f2\\n\\uf8f3ifzl\\ni<0,then\\u2200t ul\\ni(t)<0,\\nif 0\\/lessorequalslantzl\\ni\\/lessorequalslant\\u03b8l,then\\u2200t0\\/lessorequalslantul\\ni(t)\\/lessorequalslant\\u03b8l,\\nifzl\\ni> \\u03b8l,then\\u2200t ul\\ni(t)> \\u03b8l.(S2)\\nWith the assumption above, we can discuss the \\ufb01ring behavior of the neurons in each time-step.\\nWhenzl\\ni<0orzl\\ni> \\u03b8l, the neuron will never \\ufb01re or \\ufb01re all the time-steps, which me ans\\u03c6l\\ni(T) = 0\\nor\\u03c6l\\ni(T) =\\u03b8l. In this situation, we can use a clip function to denote \\u03c6l\\ni(T).\\n\\u03c6l\\ni(T) = clip(zl\\ni,0,\\u03b8l). (S3)\\n14 Published as a conference paper at ICLR 2022\\nWhen0< zl\\ni< \\u03b8l, every input from the presynaptic neuron in SNNs falls into [0,\\u03b8l], then we have\\n\\u2200t, vl\\ni(t)\\u2208[0,\\u03b8]. We can rewrite Equation 8 into the following equation.\\n\\u03c6l\\ni(T)T\\n\\u03b8l=zl\\niT+vl\\ni(0)\\n\\u03b8l\\u2212vl\\ni(T)\\n\\u03b8l. (S4)\\nConsidering that\\u03c6l\\ni(T)T\\n\\u03b8l=\\/summationtextT\\nt=1sl\\ni(t)\\u2208Nand0al.\\nBesides, setting maxs\\u2208{0,1}n\\/parenleftbig\\nmax(\\u03b8l\\u22121Wls)\\/parenrightbig\\nas the threshold brings two other problems. First,\\nthe spiking neurons will take a long time to \\ufb01re spikes becaus e of the large value of the threshold,\\nwhich makes it hard to maintain SNN performance within a few t ime-steps. Second, the quantization\\nerror will be large as it is proportional to the threshold. If the conversion error is not zero for\\none layer, it will propagate layer by layer and will be magni\\ufb01 ed by larger quantization errors. We\\ncompare our method and the method of setting the maximum acti vation on the CIFAR-100 dataset.\\nThe results are reported in Table S1, where DT represents the dynamic threshold in our method. 
Table S1: Comparison between our method and the method of setting the maximum activation (VGG-16 on CIFAR-100 with L=4).

DT¹ | shift | T=4 | T=8 | T=16 | T=32 | T=64 | T=128 | T=256 | T≥512
✓ | ✓ | 69.62% | 73.96% | 76.24% | 77.01% | 77.10% | 77.05% | 77.08% | 77.08%
✓ | × | 21.57% | 41.13% | 58.92% | 65.38% | 64.19% | 58.60% | 52.99% | 49.41%
× | ✓ | 1.00% | 0.96% | 1.00% | 1.10% | 2.41% | 13.76% | 51.70% | 77.10%
× | × | 1.00% | 1.00% | 0.90% | 1.00% | 1.01% | 2.01% | 19.59% | 70.86%
¹ Dynamic threshold.

Theorem 3. If the threshold is set to the maximum value of the ANN activation, that is, θ^l = max_{s∈{0,1}^n}(max(θ^{l−1} W^l s)), and v^l_i(0) < θ^l, then at any time-step the membrane potential after a spike, v^l_i(t), is less than θ^l, where i represents the index of each neuron.

Proof. We prove the claim by induction. For t = 0, v^l_i(0) < θ^l holds by assumption. For t > 0, suppose that v^l_i(t−1) < θ^l. Since we have set the threshold to the maximum possible input, and x^{l−1}_i(t) represents the input from layer l−1 to the i-th neuron in layer l, x^{l−1}_i(t) is no larger than θ^l for arbitrary t. Thus we have

m^l_i(t) = v^l_i(t−1) + x^{l−1}_i(t) < θ^l + θ^l = 2θ^l,  (S14)
s^l_i(t) = H(m^l_i(t) − θ^l),  (S15)
v^l_i(t) = m^l_i(t) − s^l_i(t) θ^l.  (S16)

If θ^l ≤ m^l_i(t) < 2θ^l, then v^l_i(t) = m^l_i(t) − θ^l < θ^l. If m^l_i(t) < θ^l, then v^l_i(t) = m^l_i(t) < θ^l. By mathematical induction, v^l_i(t) < θ^l holds for any t ≥ 0.

A.5 EFFECT OF QUANTIZATION STEPS L

Table S2 reports the performance of converted SNNs with different quantization steps L and different time-steps T. For VGG-16 with quantization step L = 2, we achieve an accuracy of 86.53% on the CIFAR-10 dataset and an accuracy of 61.41% on the CIFAR-100 dataset with 1 time-step. With quantization step L = 1, we cannot train the source ANN.

A.6 COMPARISON WITH STATE-OF-THE-ART SUPERVISED TRAINING METHODS ON CIFAR-10 DATASET

Notably, our ultra-low latency performance is comparable with other state-of-the-art supervised training methods. Table S3 reports the results of hybrid training and backpropagation methods on CIFAR-10. The backpropagation methods require sufficient time-steps to convey discriminative information; the listed methods need at least 5 time-steps to achieve ~91% accuracy. By contrast, our method achieves 94.73% accuracy with 4 time-steps. Besides, the hybrid training method requires 200 time-steps to obtain 92.02% accuracy because of the further training with STDB, whereas our method achieves 93.96% accuracy with 4 time-steps.

A.7 COMPARISON ON CIFAR-100 DATASET

Table S4 reports the results on CIFAR-100; our method also outperforms the others both in terms of high accuracy and ultra-low latency. For VGG-16, the accuracy of the proposed method is 3.46% higher than SNNC-AP and 69.37% higher than RTS when T = 32. When the number of time-steps is only 4, we can still achieve an accuracy of 69.62%.
These results demonstrate that our method outperforms the previous conversion methods.

Table S2: Influence of different quantization steps.

VGG-16 on CIFAR-10
L | T=1 | T=2 | T=4 | T=8 | T=16 | T=32 | T=64 | T=128
L=2 | 86.53% | 91.98% | 93.00% | 93.95% | 94.18% | 94.22% | 94.18% | 94.14%
L=4 | 88.41% | 91.18% | 93.96% | 94.95% | 95.40% | 95.54% | 95.55% | 95.59%
L=8 | 62.89% | 83.93% | 91.77% | 94.45% | 95.22% | 95.56% | 95.74% | 95.79%
L=16 | 61.48% | 76.76% | 89.61% | 93.03% | 93.95% | 94.24% | 94.25% | 94.22%
L=32 | 13.05% | 73.33% | 89.67% | 94.13% | 95.31% | 95.66% | 95.73% | 95.77%

ResNet-20 on CIFAR-10
L=2 | 77.54% | 82.12% | 85.77% | 88.04% | 88.64% | 88.79% | 88.85% | 88.76%
L=4 | 62.43% | 73.20% | 83.75% | 89.55% | 91.62% | 92.24% | 92.35% | 92.35%
L=8 | 46.19% | 58.67% | 75.70% | 87.79% | 92.14% | 93.04% | 93.34% | 93.24%
L=16 | 30.96% | 39.87% | 57.04% | 79.50% | 90.87% | 93.25% | 93.44% | 93.48%
L=32 | 22.15% | 27.83% | 43.56% | 70.15% | 88.81% | 92.97% | 93.48% | 93.48%

VGG-16 on CIFAR-100
L=2 | 61.41% | 64.96% | 68.00% | 70.72% | 71.87% | 72.28% | 72.35% | 72.40%
L=4 | 57.50% | 63.79% | 69.62% | 73.96% | 76.24% | 77.01% | 77.10% | 77.05%
L=8 | 44.98% | 52.46% | 62.09% | 70.71% | 74.83% | 76.41% | 76.73% | 76.73%
L=16 | 33.12% | 41.71% | 53.38% | 65.76% | 72.80% | 75.60% | 76.37% | 76.36%
L=32 | 15.18% | 21.41% | 32.21% | 50.46% | 67.32% | 74.60% | 76.18% | 76.24%

ResNet-20 on CIFAR-100
L=2 | 38.65% | 47.35% | 55.23% | 59.69% | 61.29% | 61.50% | 61.03% | 60.81%
L=4 | 25.62% | 36.33% | 51.55% | 63.14% | 66.70% | 67.47% | 67.47% | 67.41%
L=8 | 13.19% | 19.96% | 34.14% | 55.37% | 67.33% | 69.82% | 70.49% | 70.55%
L=16 | 6.09% | 9.25% | 17.48% | 38.22% | 60.92% | 68.70% | 70.15% | 70.20%
L=32 | 5.44% | 7.41% | 13.36% | 31.66% | 58.68% | 68.12% | 70.12% | 70.27%

A.8 ENERGY CONSUMPTION ANALYSIS

We evaluate the energy consumption of our method and the compared methods (Li et al., 2021; Deng & Gu, 2020) on the CIFAR-100 dataset, using the same VGG-16 network structure. Following the analysis in Merolla et al. (2014), we use synaptic operations (SOPs) to represent the number of basic operations required by an SNN to classify one image. We use 77 fJ/SOP for SNNs and 12.5 pJ/FLOP for ANNs as the power consumption baseline, as reported for the ROLLS neuromorphic processor (Qiao et al., 2015). Note that we do not consider the memory access energy in our study because it depends on the hardware. As shown in Table S5, at the same number of time-steps the energy consumption of our method is about two times that of SNNC-AP. However, to achieve the same accuracy of 73.55%, our method requires less energy.

A.9 PSEUDO-CODE FOR OVERALL CONVERSION ALGORITHM

In this section, we summarize the entire conversion process in Algorithm 1, including training the ANN from scratch and converting the ANN to an SNN. QCFS in the pseudo-code represents the proposed quantization clip-floor-shift function.

Table S3: Comparison with state-of-the-art supervised training methods on the CIFAR-10 dataset.

Model | Method | Architecture | SNN Accuracy | Time-steps
HC | Hybrid | VGG-16 | 92.02 | 200
STBP | Backprop | CIFARNet | 90.53 | 12
DT | Backprop | CIFARNet | 90.98 | 8
TSSL | Backprop | CIFARNet | 91.41 | 5
DThIR¹ | ANN-SNN | cNet | 77.10 | 256
Ours | ANN-SNN | VGG-16 | 93.96 | 4
Ours | ANN-SNN | CIFARNet² | 94.73 | 4
¹ Implemented on the Loihi neuromorphic processor.
² For CIFARNet, we use the same architecture as Wu et al. (2018).
Table S4: Comparison between the proposed method and previous works on the CIFAR-100 dataset.

Architecture | Method | ANN | T=2 | T=4 | T=8 | T=16 | T=32 | T=64 | T≥512
VGG-16 | RMP | 71.22% | - | - | - | - | - | - | 70.93%
VGG-16 | TSC | 71.22% | - | - | - | - | - | - | 70.97%
VGG-16 | RTS | 77.89% | - | - | - | - | 7.64% | 21.84% | 77.71%
VGG-16 | SNNC-AP | 77.89% | - | - | - | - | 73.55% | 76.64% | 77.87%
VGG-16 | Ours | 76.28% | 63.79% | 69.62% | 73.96% | 76.24% | 77.01% | 77.10% | 77.08%
ResNet-20 | RMP | 68.72% | - | - | - | - | 27.64% | 46.91% | 67.82%
ResNet-20 | TSC | 68.72% | - | - | - | - | - | - | 68.18%
ResNet-20 | Ours | 69.94% | 19.96% | 34.14% | 55.37% | 67.33% | 69.82% | 70.49% | 70.50%
ResNet-18 | RTS¹ | 77.16% | - | - | - | - | 51.27% | 70.12% | 77.19%
ResNet-18 | SNNC-AP¹ | 77.16% | - | - | - | - | 76.32% | 77.29% | 77.25%
ResNet-18 | Ours | 78.80% | 70.79% | 75.67% | 78.48% | 79.48% | 79.62% | 79.54% | 79.61%
¹ RTS and SNNC-AP use an altered ResNet-18, while ours uses the standard ResNet-18.

Table S5: Comparison of the energy consumption with previous works.

Method | Metric | ANN | T=2 | T=4 | T=8 | T=16 | T=32 | T=64
RTS | Accuracy | 77.89% | - | - | - | - | 7.64% | 21.84%
RTS | OP (GFLOP/GSOP) | 0.628 | - | - | - | - | 0.508 | 0.681
RTS | Energy (mJ) | 7.85 | - | - | - | - | 0.039 | 0.052
SNNC-AP | Accuracy | 77.89% | - | - | - | - | 73.55% | 76.64%
SNNC-AP | OP (GFLOP/GSOP) | 0.628 | - | - | - | - | 0.857 | 1.22
SNNC-AP | Energy (mJ) | 7.85 | - | - | - | - | 0.066 | 0.094
Ours | Accuracy | 76.28% | 63.79% | 69.62% | 73.96% | 76.24% | 77.01% | 77.10%
Ours | OP (GFLOP/GSOP) | 0.628 | 0.094 | 0.185 | 0.364 | 0.724 | 1.444 | 2.884
Ours | Energy (mJ) | 7.85 | 0.007 | 0.014 | 0.028 | 0.056 | 0.111 | 0.222

Algorithm 1: Algorithm for ANN-SNN conversion.
Input: ANN model M_ANN(x; W) with initial weights W; dataset D; quantization step L; initial dynamic thresholds λ; learning rate ε.
Output: M_SNN(x; Ŵ)
1: for l = 1 to M_ANN.layers do
2:   if layer l uses the ReLU activation then
3:     Replace ReLU(x) by QCFS(x; L, λ^l)
4:   end if
5:   if layer l is a max-pooling layer then
6:     Replace the max-pooling layer by an average-pooling layer
7:   end if
8: end for
9: for e = 1 to epochs do
10:   for each minibatch of dataset D do
11:     Sample a minibatch (x⁰, y) from D
12:     for l = 1 to M_ANN.layers do
13:       x^l = QCFS(W^l x^{l−1}; L, λ^l)
14:     end for
15:     Loss = CrossEntropy(x^l, y)
16:     for l = 1 to M_ANN.layers do
17:       W^l ← W^l − ε ∂Loss/∂W^l
18:       λ^l ← λ^l − ε ∂Loss/∂λ^l
19:     end for
20:   end for
21: end for
22: for l = 1 to M_ANN.layers do
23:   M_SNN.Ŵ^l ← M_ANN.W^l
24:   M_SNN.θ^l ← M_ANN.λ^l
25:   M_SNN.v^l(0) ← M_SNN.θ^l / 2
26: end for
27: return M_SNN
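To make the conversion step of Algorithm 1 (lines 22–26) concrete, here is a minimal PyTorch sketch written by us, assuming the source ANN is a plain stack of Linear layers with trained per-layer thresholds λ^l; the actual models are VGG/ResNet variants.

import torch

class IFLayer(torch.nn.Module):
    # Integrate-and-Fire layer with reset-by-subtraction (Equations 2-5).
    # State persists across calls: call reset() before each new input sample.
    def __init__(self, theta):
        super().__init__()
        self.theta, self.v = theta, None

    def reset(self):
        self.v = None

    def forward(self, x):
        if self.v is None:                  # Algorithm 1, line 25: v(0) = theta / 2
            self.v = torch.full_like(x, self.theta / 2)
        self.v = self.v + x                 # Equation 2: integrate input
        s = (self.v >= self.theta).float()  # Equation 3: threshold-triggered firing
        self.v = self.v - s * self.theta    # Equation 4: reset by subtraction
        return s * self.theta               # Equation 5: unweighted postsynaptic potential

def convert(ann_linears, lambdas):
    # Algorithm 1, lines 22-26: reuse the trained ANN weights and set theta^l = lambda^l.
    layers = []
    for linear, lam in zip(ann_linears, lambdas):
        layers += [linear, IFLayer(theta=lam)]   # weights are shared, not retrained
    return torch.nn.Sequential(*layers)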
Evolutionary Reinforcement Learning: A Survey

Hui Bai¹, Ran Cheng¹, and Yaochu Jin²,³
¹Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen, China.
²Faculty of Technology, Bielefeld University, 33615 Bielefeld, Germany.
³Department of Computer Science, University of Surrey, Guildford, Surrey GU2 7XH, U.K.

Abstract

Reinforcement learning (RL) is a machine learning approach that trains agents to maximize cumulative rewards through interactions with environments. The integration of RL with deep learning has recently resulted in impressive achievements in a wide range of challenging tasks, including board games, arcade games, and robot control. Despite these successes, there remain several crucial challenges, including brittle convergence properties caused by sensitive hyperparameters, difficulties in temporal credit assignment with long time horizons and sparse rewards, a lack of diverse exploration, especially in continuous search space scenarios, difficulties in credit assignment in multi-agent reinforcement learning, and conflicting objectives for rewards. Evolutionary computation (EC), which maintains a population of learning agents, has demonstrated promising performance in addressing these limitations. This article presents a comprehensive survey of state-of-the-art methods for integrating EC into RL, referred to as evolutionary reinforcement learning (EvoRL). We categorize EvoRL methods according to key research fields in RL, including hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL. We then discuss future research directions in terms of efficient methods, benchmarks, and scalable platforms. This survey serves as a resource for researchers and practitioners interested in the field of EvoRL, highlighting the important challenges and opportunities for future research. With the help of this survey, researchers and practitioners can develop more efficient methods and tailored benchmarks for EvoRL, further advancing this promising cross-disciplinary research field.

1 Introduction

Reinforcement learning (RL) has achieved remarkable success in recent years, particularly with the integration of deep learning (DL), in solving complex sequential decision-making problems [1, 2]. Despite these advancements, RL still faces several challenges, such as sensitivity to hyperparameters [3], difficulties in credit assignment in tasks with long time horizons, sparse rewards, and multiple agents [4, 5], limited diverse exploration in tasks with deceptive rewards or continuous state and action spaces [6], and conflicting objectives for rewards [7].

To address these challenges, the field of evolutionary reinforcement learning (EvoRL) has emerged by integrating RL with evolutionary computation (EC) [8, 9]. EvoRL involves maintaining a population of agents, which offers several benefits, such as the provision of redundant information for improved robustness [9], enabling diverse exploration [10], the ability to evaluate agents using an episodic fitness metric [9], and the ease of generating trade-off solutions through multi-objective EC algorithms [11]. EvoRL has a rich history, dating back to early work in neuroevolution, which used EC algorithms to generate the weights and/or topology of artificial neural networks (ANNs) for agent policies [12, 13]. Since the proposal of OpenAI ES [8], EvoRL has gained increasing attention in both the EC and RL communities. While there have been some surveys focusing on various aspects of EvoRL, such as neuroevolution [14], multi-objective RL [7, 15], automated RL [16], and derivative-free RL [17], they either focus on a narrow research field within RL or lack a comprehensive overview of EC methods as applied to RL.

To bridge the gap between the EC and RL communities, this article provides a comprehensive survey of EvoRL, elaborating on six key research fields of RL, as shown in Figure 1.
The EvoRL methods are introduced and discussed separately for each field, focusing on their advantages and limitations. Finally, the article discusses potential improvement approaches for future research, including efficient methods in terms of EvoRL processes, tailored benchmarks, and scalable platforms.

2 Background

2.1 Reinforcement Learning

Reinforcement learning (RL) is a powerful tool for decision-making in complex and stochastic environments. In RL, an agent interacts with its environment by taking a sequence of actions and receiving a sequence of rewards over time; the objective of the agent is to maximize the expected cumulative reward. This problem can be modeled as a Markov Decision Process (MDP), defined as ⟨S, A, T, R, ρ₀, γ⟩, with a state space S, an action space A, a stochastic transition function T: S×A → P(S) that represents the probability distribution over possible next states, a reward function R: S×A → ℝ, an initial state distribution ρ₀: S → [0, 1], and a discount factor γ ∈ [0, 1).

The agent's behavior is determined by its policy, denoted by π_θ: S → P(A), with P(A) being the set of probability measures on A and θ ∈ ℝⁿ a vector of n parameters. The agent updates its policy over time to maximize the expected cumulative discounted reward, given by

J(π) = E_{ρ₀, π, T} [ Σ_{t=0}^∞ γ^t r_t ],  (1)

where s₀ ∼ ρ₀(s₀), a_t ∼ π(s_t), s_{t+1} ∼ T(·|s_t, a_t), and r_t = R(s_t, a_t).

RL algorithms can be divided into two categories: model-based and model-free.

[Figure 1 shows a taxonomy of EvoRL methods across the six research fields.]
Figure 1: Key research fields of evolutionary reinforcement learning. Hyperparameter optimization is a universal method for algorithms in the other five research fields to realize end-to-end learning and improve performance simultaneously. Policy search seeks to identify a policy that maximizes the cumulative reward for a given task. Exploration encourages agents to explore more states and actions and trains robust agents to better respond to dynamic changes in environments. Reward shaping is aimed at enhancing the original reward with additional shaping rewards for tasks with sparse rewards. Meta-RL seeks to develop a general-purpose learning algorithm that can adapt to different tasks. Multi-objective RL aims to obtain trade-off agents in tasks with a number of conflicting objectives.
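To make Equation (1) concrete, here is a small Monte Carlo estimator of J(π) written by us for illustration; env_step, policy and rho0 are assumed stand-ins for the MDP components defined above.

import random

def estimate_return(env_step, policy, rho0, gamma=0.99, horizon=1000, episodes=100):
    # Monte Carlo estimate of J(pi) in Equation (1) for an episodic task.
    total = 0.0
    for _ in range(episodes):
        s, ret, discount = rho0(), 0.0, 1.0
        for _ in range(horizon):
            a = policy(s)
            s, r, done = env_step(s, a)   # sample s' ~ T(.|s, a) and r = R(s, a)
            ret += discount * r
            discount *= gamma
            if done:
                break
        total += ret
    return total / episodes

# Toy example: reward 1 for action 1; the episode ends with probability 0.1 per step.
env_step = lambda s, a: (s, float(a == 1), random.random() < 0.1)
print(estimate_return(env_step, policy=lambda s: 1, rho0=lambda: 0))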
RL algorithms can be divided into two categories: model-based and model-free. While model-based algorithms establish a complete MDP by estimating the transition and reward functions, model-free algorithms are data-driven and optimize the policy using a large number of samples, without needing to know the transition and reward functions. Owing to the difficulty of establishing a complete MDP and the success of neural networks (NNs) in representing policies, model-free RL has become the main focus of research in recent years [18]. In this survey, we focus on model-free RL methods.

More specifically, model-free RL methods can be further divided into two categories: policy-based and value-based methods. In policy-based methods, the parameters $\theta$ of the policy are adjusted in the direction of the performance gradient, according to the policy gradient theorem [1]. State-of-the-art policy-based algorithms include TRPO [19], PPO [20], A3C [21], DDPG [22], TD3 [23], and SAC [24]. In value-based methods, a parameterized Q-function is optimized to estimate the value of a state-action pair. One of the state-of-the-art value-based methods is DQN [25], which updates the parameters of the Q-function by minimizing the temporal difference (TD) loss over a batch of samples. Techniques such as experience replay [26] and the double Q-network [27] have been proposed to improve the sample efficiency and exploration of DQN.

2.2 Evolutionary Computation

Evolutionary computation (EC) refers to a family of stochastic search algorithms developed based on the principles of natural evolution. The primary objective of EC is to approximate the global optima of optimization problems by iteratively performing a range of mechanisms, such as variation (i.e., crossover and mutation), evaluation, and selection. Among the various EC paradigms, evolution strategies (ESs) [28] are the most widely adopted in EvoRL, together with the classic genetic algorithm (GA) [29] and genetic programming (GP) [30].

ESs primarily tackle continuous black-box optimization problems, where the search space lies in the continuous domain; they are therefore predominantly applied to the weight optimization of policy search in RL. ESs for RL are typically categorized into three main classes: canonical ES [31], covariance matrix adaptation ES (CMA-ES) [32], and natural ES (NES) [33]. Canonical ES aims to obtain final solutions with high fitness values using a small number of fitness evaluations, through an iterative process involving variation, evaluation, and selection. Here, we illustrate the canonical $(\mu, \lambda)$-ES algorithm. After initializing the policy parameters $x \in \mathbb{R}^n$ and a set of hyperparameters, the algorithm generates $\lambda$ offspring $x_1, \ldots, x_\lambda$ from a search distribution with mean $x$ and variance $\sigma^2 C$. All offspring are then evaluated using a fitness evaluation function, and a new population mean is produced by moving the old mean towards the best $\mu$ offspring. Finally, $\sigma$ is optionally updated; an adaptive $\sigma$ can improve ES performance.
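The iteration described above can be summarized in a few lines. The sketch below is a minimal canonical $(\mu, \lambda)$-ES with an identity covariance (i.e., $C = I$), fixed $\sigma$, and equal recombination weights; in EvoRL, the `fitness` callable would be an episodic-return evaluation of the policy parameters.

```python
import numpy as np

def canonical_mu_lambda_es(fitness, n, mu=5, lam=20, sigma=0.1, generations=100):
    """Canonical (mu, lambda)-ES: sample lambda offspring around the mean,
    then move the mean to the centroid of the best mu offspring."""
    x = np.zeros(n)  # initial population mean
    for _ in range(generations):
        offspring = x + sigma * np.random.randn(lam, n)      # variation
        scores = np.array([fitness(z) for z in offspring])   # evaluation
        elite = offspring[np.argsort(scores)[-mu:]]          # selection
        x = elite.mean(axis=0)  # new mean (equal recombination weights)
    return x

# usage: maximize a toy fitness (negated sphere), standing in for an
# episodic-return evaluation of policy parameters
best = canonical_mu_lambda_es(lambda z: -np.sum(z ** 2), n=10)
```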
CMA-ES shares the same procedure as canonical ES but is more effective, since its mutation step size $\sigma$ and covariance matrix $C$ are updated adaptively, enabling it to capture the anisotropy of general optimization problems. NES also follows the same iteration as canonical ES. However, it updates the search distribution iteratively by estimating a search gradient (i.e., a second-order gradient) on the distribution parameters towards higher expected fitness values. The distribution parameters are updated by estimating a natural gradient, which finds a parameterization-independent ascent direction, in contrast to a plain search gradient [34]. The natural gradient $\tilde{\nabla}_\theta J$ is formulated as $F^{-1} \nabla_\theta J$, where $F$ is the Fisher information matrix (FIM) of the parametric family of the search distribution, and $\nabla_\theta J$ is the search gradient of the expected fitness estimated by Monte Carlo. The FIM reflects the degree of certainty in updating $\theta$, which has the effect of penalizing natural gradients with high variance and boosting those with low variance.

GAs, as the most classic EC paradigm, are also commonly adopted in EvoRL. GAs follow a workflow in which a population of candidate solutions is iteratively improved through selection, crossover, and mutation. The encoding, or representation, of the search space in GAs can be tailored to the specific problem at hand, allowing binary and discrete encodings for combinatorial optimization problems and real encoding for numerical optimization problems. This versatility in encoding types makes GAs a widely applicable method for solving various problems in RL [35, 36].

GP is a distinctive EC paradigm, different from ESs and GAs, which mainly solve numerical optimization problems. In GP, the search space is composed of a set of programs represented by various encoding methods, such as abstract syntax trees, executable graphs (e.g., Cartesian GP and tangled program graphs), finite-state machines, and context-free grammars, among others [28]. The fitness of a program is evaluated by executing it and observing its behavior; the programs are treated as data when crossover and mutation are performed on them, and the data are interpreted as programs when executed. GP has several advantages over other EC paradigms, including the ability to handle complex problems that require a programmatic solution, such as symbolic regression and control problems [37, 38, 39].

Figure 2: A simple and general framework of EvoRL. The framework consists of two loops: the outer loop shows the evolution process of EC, while the inner loop illustrates the agent-environment interaction process of RL. Initially, a population of parent candidate solutions is randomly initialized, and then offspring candidate solutions are generated from the parents via variation. Each offspring is evaluated on an RL task to obtain its fitness value, and a new population is selected for the next iteration by combining all parents and offspring.
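The two loops of Figure 2 can be sketched as follows. This is an illustrative rendering rather than a specific published algorithm: `evaluate` is a hypothetical inner-loop routine that runs one or more episodes (e.g., with `estimate_return` above) and returns the episodic fitness of a parameter vector.

```python
import numpy as np

def evorl_loop(evaluate, n_params, pop_size=16, sigma=0.05, generations=50):
    """The two loops of Figure 2: an outer EC loop over policy parameter
    vectors, with fitness given by the inner agent-environment loop."""
    population = [np.random.randn(n_params) for _ in range(pop_size)]
    for _ in range(generations):
        # offspring generation (mutation-only variation, for simplicity)
        offspring = [p + sigma * np.random.randn(n_params) for p in population]
        combined = population + offspring
        # evaluation: each candidate's fitness is its episodic return
        fitness = [evaluate(theta) for theta in combined]
        # environmental selection over the union of parents and offspring
        order = np.argsort(fitness)[::-1]
        population = [combined[i] for i in order[:pop_size]]
    return population[0]  # best individual of the final population
```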
2.3 Discussion

EC algorithms have been recognized as competitive tools for handling complex optimization problems that exhibit non-convex, non-differentiable, non-smooth, and multi-modal properties [40]. In RL, EC is particularly useful for complex problems with numerous local optima, and it is suitable for problems without gradient information. Furthermore, through novelty search it is applicable to problems without explicit objective functions [41]. The population-based search strategy of EC makes it robust to the dynamic changes commonly found in real-world applications of RL, such as sim-to-real transfer in robot control [42]. A simple and general framework combining EC and RL is shown in Figure 2.

Specifically, EC has been introduced into six major key research fields of RL, namely hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL, as presented in Figure 1. These six key research fields are introduced in detail in the following sections.

3 Evolutionary Computation in Hyperparameter Optimization

Finding optimal hyperparameter configurations for reinforcement learning (RL) can be challenging due to the large number of hyperparameters involved, including those of RL algorithms (e.g., the learning rate $\alpha$ and the discount factor $\gamma$) and of the neural network architectures of policies (e.g., the number and size of layers). To overcome this challenge, researchers have introduced hyperparameter optimization (HPO) to set configurations automatically for optimal performance. HPO has been shown to improve the performance and robustness of RL algorithms [43, 44].

However, HPO for RL faces several challenges. First, performance evaluation can be extremely expensive for complex tasks. Second, the search space of hyperparameters can be complex, involving mixed encodings, high dimensionality, and non-convexity. Third, there may be two or more objectives that need to be traded off. Several major classes of HPO methods address these challenges, including random search [45], Bayesian optimization [46], gradient-based methods [47], and evolutionary computation (EC) methods [43]. Among these, EC methods can meet the challenges of HPO simultaneously, owing to their high degree of parallelism, gradient-free nature, and ability to produce a set of trade-off optimal solutions.

EC-based HPO methods can be classified into three main categories: Darwinian evolutionary methods, Lamarckian evolutionary methods, and hybrid methods. In Darwinian evolutionary methods, parameters are reinitialized while hyperparameters are evolved. In contrast, in Lamarckian evolutionary methods, parameters are inherited while hyperparameters are evolved. Hybrid methods further combine the former two methods with gradient-based methods.

3.1 Darwinian Evolutionary Methods

In Darwinian evolutionary methods, the parameters are randomly initialized while the hyperparameters are evolved using genetic algorithms (GAs) [48]. For instance, Eriksson et al. [48] applied a GA to evolve two hyperparameters of Sarsa($\lambda$) in food capture tasks: the learning rate $\alpha$ and the temperature $\tau$, which controls the trade-off between exploration and exploitation in softmax action selection. Elfwing et al. [49] also applied a GA to evolve hyperparameters and weights in potential-based reward shaping for Sarsa($\lambda$) in the same tasks. These methods integrate learning and evolution to effectively improve the performance of RL algorithms and obtain sim-to-real robust policies.
Nonetheless, Darwinian evolutionary methods are inefficient, since the parameters are reinitialized in each generation, causing a loss of the knowledge already acquired during previous generations.

3.2 Lamarckian Evolutionary Methods

In Lamarckian evolutionary methods, parameters are inherited while hyperparameters are evolved, meaning that the hyperparameters are adapted to the current learning process so that agents learn more efficiently. A state-of-the-art asynchronous parallel evolutionary method called population-based training (PBT) has been proposed to improve the efficiency of HPO [43]. In asynchronous PBT, each ready individual is compared with a randomly selected individual from the remaining population; the worse individual then copies the parameters and hyperparameters of the better one and adds noise to its hyperparameters. PBT has successfully trained a series of RL agents on a number of complex tasks, such as the 3D multi-player first-person video game DMLab, the MuJoCo multi-agent soccer game, and ELF OpenGo, achieving new state-of-the-art performance for RL algorithms [50, 51, 52, 53]. PBT-style evolution is quite similar to steady-state EC (i.e., a new individual is inserted into the population in each generation), which is believed to be effective in non-stationary or dynamic environments [54]. However, PBT does not consider the diversity of the population: it prefers higher-performing configurations and may therefore lose individuals that are "late bloomers". The faster improvement rate PBT (FIRE PBT) has thus been proposed, based on the assumption that when two NNs have similar performance and hyperparameters, the NN with the faster rate of improvement will yield a better final performance [55]. FIRE PBT derives a fitness metric from this assumption and introduces subpopulations to increase diversity. Further, a sample-efficient automated RL framework (SEARL) has been proposed for off-policy RL algorithms; it follows the PBT style to evolve dynamic hyperparameter configurations and shares experiences across the population of differently configured agents [44].
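The asynchronous PBT update described above amounts to an exploit-then-explore step per worker. Below is a minimal sketch, assuming each worker is a dict with hypothetical `"params"` and `"hypers"` entries and placeholder `train_fn`/`eval_fn` routines for partial training and evaluation:

```python
import copy
import random

def pbt_step(population, train_fn, eval_fn, noise=0.2):
    """One asynchronous PBT-style update: a ready worker trains briefly,
    is compared with a random peer, copies the better peer's parameters
    and hyperparameters (exploit), and perturbs its hyperparameters
    (explore)."""
    worker = random.choice(population)  # the 'ready' individual
    peer = random.choice([w for w in population if w is not worker])
    train_fn(worker)                    # partial training under current config
    if eval_fn(worker) < eval_fn(peer):
        worker["params"] = copy.deepcopy(peer["params"])   # exploit
        worker["hypers"] = dict(peer["hypers"])
    for k in worker["hypers"]:          # explore: multiplicative noise
        worker["hypers"][k] *= random.choice([1.0 - noise, 1.0 + noise])
```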
3.3 Hybrid Methods

Hybrid methods combine the above two evolutionary methods with other gradient-based methods to improve training efficiency. For example, the work of [56] combines Darwinian and Lamarckian evolutionary methods, following the mainstream of Lamarckian evolution while conducting multiple random restarts of the parameters during the evolutionary process to escape local optima. In addition, an evolutionary stochastic gradient descent framework has been proposed to combine the merits of stochastic gradient descent and evolutionary computation [57]. In this framework, a set of neural network weights with distinct hyperparameters is optimized independently by various stochastic gradient descent variants, and their information is then exchanged via evolutionary computation. However, the initial hyperparameters are preset by humans, which still involves a certain amount of human knowledge. Moreover, [58] has proposed a collection of benchmarks derived from hyperparameter optimization to verify the performance of quality diversity methods; in other words, hyperparameter optimization problems can be solved by quality diversity methods, which introduce diversity through niches.

3.4 Discussion

HPO is crucial for achieving state-of-the-art performance in RL, and EC methods have shown great potential in automating this process. However, the current literature on HPO still faces several challenges, such as the lack of comprehensive performance metrics and the need for efficient convergence. Additionally, selecting hyperparameters from a large number of options is a combinatorial optimization problem that poses new challenges for EC methods.

To address these challenges, future research should focus on developing comprehensive evaluation metrics that consider both effectiveness and efficiency. This can be achieved by benchmarking different HPO methods on a wide range of tasks, considering factors such as training time, convergence speed, and performance. Future research should also explore new EC methods that can efficiently search high-dimensional and combinatorial spaces for hyperparameter optimization. Furthermore, it is essential to investigate how different HPO methods can be combined to improve the overall efficiency and effectiveness of the optimization process. By addressing these challenges, EC-based HPO can further accelerate the development and deployment of RL algorithms in real-world scenarios.

4 Evolutionary Computation in Policy Search

In the context of RL, policy search seeks to identify a policy that maximizes the cumulative reward for a given task. The surge of deep learning has facilitated the incorporation of neural networks as function approximators for policies, despite the vast search space of states and actions. Stochastic gradient descent (SGD) methods are widely used for training neural network weights in deep RL. Neuroevolution has emerged as an alternative approach, leveraging gradient-free EC methods for policy search; it can optimize neural network weights, architectures, hyperparameters, building blocks, and even learning rules [59].

Early work in neuroevolution focused primarily on evolving the weights of small, fixed-architecture neural networks. Recent advances, however, have demonstrated the promise of evolving the architecture together with the weights of neural networks for complex RL tasks [13]. Moreover, a new perspective on policy search has been established by ignoring the weights and conducting only architecture search [60]. This section reviews EC techniques for policy search in RL, namely evolution strategies (ESs), genetic algorithms (GAs), and genetic programming (GP).

4.1 ES based Methods

This subsection reviews the three popular ES families used in RL tasks: canonical ES, natural evolution strategies (NES), and covariance matrix adaptation ES (CMA-ES).

4.1.1 Canonical ES based Methods

The highly parallel framework of OpenAI ES has led to the successful application of a simplified canonical ES to Atari games [61]. Prior to this, canonical ES had seen little application to RL tasks, since it performs poorly on high-dimensional tasks.
Although canonical ES can achieve performance similar to OpenAI ES on several Atari games with discrete state and action spaces, its performance on continuous state and action spaces (where EC is preferable [62]) has not been investigated.

Because ESs treat RL tasks directly as black-box optimization problems rather than exploiting their intrinsic MDP structure, they may exhibit large variance across multiple runs. To address this issue, several variance reduction techniques have been introduced. On the one hand, various ES gradient estimators using Monte Carlo techniques have been proposed, such as the antithetic ES gradient estimator in OpenAI ES [8] and the forward finite-difference ES gradient estimator in structured ES [63]. Notably, structured ES performed well on most MuJoCo tasks using fewer than 300 policy parameters. The adaptive ES-active subspace method further combines structured ES with techniques from active subspaces to learn the changing dimensionality of the gradient space, achieving performance competitive with PPO, TRPO, and several ES variants on a subset of MuJoCo tasks [64]. On the other hand, structured methods leveraging the underlying MDP structure have been developed. Specifically, the control variate, known in RL as the advantage function [21], has been introduced into ESs to reduce the variance of Monte Carlo gradient estimation [65].

Since applying ESs to large-scale RL tasks is inefficient, a number of works have sought to improve sample efficiency in two ways: sampling from diverse search directions, and making full use of previous samples. In the first direction, Gaussian orthogonal exploration searches a number of diverse directions [63]. Building on this, Guided ES combines ES with surrogate gradients (correlated with the true gradients) [66], and self-guided ES trades off exploitation in the gradient subspace against exploration in its orthogonal complement, obtaining higher returns and faster convergence than PPO, TRPO, and Guided ES on several MuJoCo tasks [67]. In the second direction, trust region ES (TRES) approximately optimizes a surrogate objective by reusing data sampled from old policy parameters instead of sampling from new ones, achieving faster convergence than PPO and TRPO on several MuJoCo tasks [68].

4.1.2 NES based Methods

The original NES is known to scale poorly to high-dimensional problems due to the time-consuming computation of the Fisher information matrix (FIM) [33]. To improve efficiency and robustness, exact NES computes the inverse of the exact FIM instead of the empirical FIM, and has shown competitive performance on the double pole balancing task using an NN with 21 weights [69]. However, NES was not successful on large-scale tasks until the development of OpenAI ES [8], a simplified NES variant with an isotropic multivariate Gaussian and fixed variance $\Sigma$. OpenAI ES is theoretically closely related to the policy-based RL algorithm PEPG [70]. To reduce variance and achieve high parallelism, OpenAI ES introduces several techniques, such as mirrored sampling of the perturbations, rank-based fitness shaping, virtual batch normalization, and a random-seed sharing strategy.
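Below is a minimal sketch of two of these techniques, mirrored (antithetic) sampling and rank-based fitness shaping; the distributed seed-sharing and virtual batch normalization parts are omitted, and `fitness` stands for an episodic-return evaluation of the perturbed parameters:

```python
import numpy as np

def openai_es_step(theta, fitness, npop=50, sigma=0.1, lr=0.01):
    """One ES ascent step with mirrored perturbations and rank shaping."""
    eps = np.random.randn(npop, theta.size)
    eps = np.concatenate([eps, -eps])            # mirrored (antithetic) pairs
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    # rank-based shaping: map scores to centered uniform ranks in [-0.5, 0.5]
    ranks = np.empty_like(scores)
    ranks[np.argsort(scores)] = np.arange(len(scores))
    shaped = ranks / (len(scores) - 1) - 0.5
    # Monte Carlo search-gradient estimate and parameter update
    grad = (shaped[:, None] * eps).sum(axis=0) / (len(scores) * sigma)
    return theta + lr * grad
```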
With these techniques, OpenAI ES has achieved competitive performance on MuJoCo and Atari games with over a million policy parameters, using thousands of CPU workers, demonstrating the advantages of ESs as black-box optimization algorithms for complex RL tasks. Moreover, OpenAI ES has been shown to behave similarly to SGD when the number of offspring is large [71], and to resemble traditional finite-difference approximators [72].

Several diversity encouragement methods from EC, such as novelty search (NS) and quality diversity (QD), have been introduced to enhance the exploration of OpenAI ES, encouraging agents to exhibit diverse behaviors [6]. The hybrid algorithms NS-ES and NSR-ES can solve tasks with noisy and deceptive rewards. Furthermore, progressive episode lengths (PEL) have been proposed to improve learning efficiency when evaluating the fitness of samples [73]. PEL enables agents to progress from simple tasks to complex ones by dividing the time budget and episode lengths into increasing numbers of fragments.

4.1.3 CMA-ES based Methods

The application of the covariance matrix adaptation evolution strategy (CMA-ES) to RL was first proposed by Igel in 2003, who demonstrated that CMA-ES can converge faster than several state-of-the-art GA-based neuroevolution algorithms on double pole balancing tasks using a single-hidden-layer policy [74]. CMA-ES typically uses rank-based fitness shaping instead of absolute fitness values to reduce its susceptibility to noise; however, the accuracy of the ranking still plays a critical role in performance. To address this, Heidrich-Meisner augmented CMA-ES with Hoeffding- and Bernstein-based racing algorithms to obtain reliable rankings, leading to faster convergence and more robust hyperparameter selection on single and double pole balancing tasks than several GA-based neuroevolutionary algorithms [75, 76].

Recently, Chen proposed a restart-based rank-1 ES (R-R1-ES), a simplified CMA-ES, to play Atari games using a two-hidden-layer neural network, a groundbreaking application of efficient CMA-ES to complex RL tasks [77]. R-R1-ES integrates a Gaussian-distributed model with two mechanisms, adaptation of the number of parents and a restart procedure, and has achieved higher scores than OpenAI ES, canonical ES, NS-ES, and NSR-ES on a subset of Atari games. Several highly efficient CMA-ES variants, such as R1/Rm-ES [78], LM-MA-ES [79], and fast CMA-ES [80], have been developed to avoid the time-consuming adaptation of the full covariance matrix in large-scale optimization problems with up to 10,000 dimensions; however, whereas OpenAI ES has been verified on millions of policy parameters, the potential of these variants for complex RL tasks is yet to be studied.

4.2 GA based Methods

GAs have been widely adopted to optimize weights and architectures for policy search in RL, owing to their diverse encoding types. GA-based methods have primarily focused on three research topics: algorithmic frameworks, indirect encoding, and variation operators.

4.2.1 Algorithmic Frameworks

Pure GAs based Frameworks
In the 1990s, several studies utilized GAs to optimize policy weights for pole balancing problems [81, 12]. Among these works, GENITOR, which represented weights with real values instead of binary strings, improved the precision and efficiency of the search.
Subsequently, a range of works has been developed, the most popular being NEAT, which can obtain a minimal neural network by adding nodes and connections to a smallest network without hidden nodes [13]. NEAT has several highlights: a genetic encoding that aligns corresponding genes easily when mating two genomes, historical markings that enable tracking and matching of genes during crossover, and speciation into smaller niches to protect topological innovations. NEAT has since been improved and tailored to various tasks, such as evolving dynamic policies that adapt to environmental changes in a dangerous foraging task [82], and evolving complex policy architectures in robot competition and coevolution tasks [83], video games [84], and strategic decision-making problems [85].

Despite its effectiveness, NEAT does not fully optimize the weights under a given architecture. Hence, ENAT was developed on the basis of NEAT, adopting the idea of incremental growth from a minimal structure [86]. However, ENAT applies CMA-ES to fully optimize the weights and introduces a compact genetic encoding that encodes a tree-based program in a linear genome. As a result, ENAT can find better weights than NEAT at the same network size on a robot arm control task. To reduce the side effects of topology changes, CMA-TWEANN replaces NEAT's mutation by random weights with a seamless topology mutation using zero weights, and applies CMA-ES to optimize the weights [87]. Whether weight optimization is more important than topology optimization remains debatable, however. The weight-agnostic search method addresses this question by searching only the topology with NEAT, without training the weights; it has found policies with minimal architectures in continuous control tasks [60].

In the highly parallel framework of OpenAI ES, a simple genetic algorithm has been used to optimize large-scale policies with millions of weights in Atari games and MuJoCo [88]. In a similar vein, a massively parallel method has been applied to search recurrent neural network (RNN) architectures using only mutation, achieving high performance with orders of magnitude fewer parameters than several state-of-the-art RL methods on MuJoCo tasks [89]. These methods require huge computational resources, however, and training large-scale policies with EC methods remains very inefficient. To improve efficiency, a hybrid agent model consisting of a large world model and a small controller model has been proposed for complex RL tasks [90]. The world model extracts low-dimensional features from real-world observations and predicts future states based on historical information. The controller model, such as a single-layer linear neural network, determines the actions to take by receiving the current and predicted features, so as to maximize the expected cumulative reward. The controller model has been evolved by EC methods in vision-based game tasks [91, 92]. Moreover, end-to-end training of the whole agent model using GAs has shown comparable performance in car racing tasks [93].

Frameworks Hybridizing GAs and RL
A combination of GA-based methods (i.e., NEAT) and TD methods (i.e., Q-learning) has led to the development of two methods, namely Lamarckian NEAT+Q and Darwinian NEAT+Q.
In these methods, NEAT is used to optimize the architectures and initial weights of Q-networks, while the policy weights are updated by backpropagation [94]. To balance the exploration and exploitation of the EC method, the ε-greedy and softmax selection mechanisms from RL have been incorporated into NEAT. However, the high sample complexity of NEAT+Q, which results from fully training each candidate policy in highly stochastic domains, prompted an efficient NEAT+Q method that reuses previous samples to train the population of candidate policies [95]. A comparative study suggests that EC methods are more effective when fitness can be evaluated rapidly in deterministic domains, whereas TD methods have the advantage in fully observable but non-deterministic domains [96].

Cooperative Coevolution
In neural network optimization, the search space can become excessively large when the network's inputs, outputs, architecture, and weights are numerous. To overcome this challenge, cooperative coevolution (CC) based GAs can be used, which decompose the problem into smaller components to reduce complexity and enable more efficient resolution [97]. In a CC algorithm, each individual represents a partial solution, i.e., a component of a complete solution, which is evolved independently by a species (a set of individuals). Individuals are evaluated based on their contributions to the complete solution. This approach introduces diversity and robustness through the maintenance of the various components, and enables parallel search to improve training efficiency. The search granularity of the decomposition indirectly influences search performance and efficiency. In neuron-level CC methods, the weights connected to a neuron are grouped into one component. SANE is an early CC algorithm that evolves a population of hidden neurons for a fixed-architecture neural network, and has shown better performance than Q-learning and GENITOR on pole balancing tasks [98]. ESP improves the efficiency of SANE and supports the evolution of recurrent neural networks (RNNs) by allocating a species to each hidden neuron and conducting variation within species [99]. NSP further groups the weights connected to a hidden neuron at a finer granularity [100]. In synapse-level CC methods, each weight is considered a component. Based on ESP, CoSyNE groups each weight into a component and has shown better efficiency than SANE, NEAT, and ESP on pole balancing tasks [101]. However, CoSyNE may not be applicable to large-scale neural networks, since it cannot fully exploit the weights to avoid inaccurate evaluation, a problem shared by neuron-level methods. Other state-of-the-art decomposition methods include COVNET [102], Modular NEAT [103], and CCNCS [104].
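A simplified sketch of synapse-level cooperative coevolution in the spirit of CoSyNE follows (the actual algorithm permutes subpopulations probabilistically and protects elites; here the permutation is unconditional for brevity). The hypothetical `evaluate` scores a complete weight vector by its episodic return:

```python
import numpy as np

def cosyne_style_cc(evaluate, n_weights, subpop=20, generations=100, sigma=0.1):
    """Simplified synapse-level cooperative coevolution: one subpopulation
    per network weight; column j of `subpops` forms a complete network,
    and each member is credited with the fitness of that network."""
    subpops = np.random.randn(n_weights, subpop)  # subpops[i, j]: candidate j for weight i
    for _ in range(generations):
        fitness = np.array([evaluate(subpops[:, j]) for j in range(subpop)])
        order = np.argsort(fitness)[::-1]         # sort collaborations by fitness
        subpops = subpops[:, order]
        half = subpop // 2
        # replace the worse half with mutated copies of the better half
        subpops[:, half:] = subpops[:, :half] + sigma * np.random.randn(n_weights, half)
        # permute within each subpopulation so weights try new collaborators
        for i in range(n_weights):
            np.random.shuffle(subpops[i])
    scores = [evaluate(subpops[:, j]) for j in range(subpop)]
    return subpops[:, int(np.argmax(scores))]
```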
4.2.2 Indirect Encoding

Research on indirect encoding has been promoted for two reasons. First, direct encoding has limitations in scaling up to large-scale NN scenarios. Second, in biological genetic encoding, phenotypes typically contain more components than genotypes, and the mapping from genotypes to phenotypes is indirect.

Artificial embryogeny, which includes cellular encoding [105] and generative encoding [106], utilizes a developmental phase that reuses genes or rules to evolve artificial systems from a small starting point [107]. In addition to repetition by reuse, the properties of physical space, such as symmetry and locality, have motivated encoding designs that seek to discover regularity through local connectivity. Compositional pattern producing networks (CPPNs) capture structural relationships and are encoded as compositions of functions organized in the form of neural networks [108]. CPPNs can be trained in the same way as neural networks, and HyperNEAT is tailored to training CPPNs, which can evolve increasingly complex expression patterns to capture the complete regularities of problem structures [109, 110]. The ability to learn from geometric regularity has enabled HyperNEAT to be successfully applied to complex tasks such as checkers [111], Go [35], and Atari games [112]. Several HyperNEAT variants have also been proposed, such as adaptive HyperNEAT [113] and ES-HyperNEAT [114].

By modeling modularity as an optimization objective, NSGA-II (a multi-objective evolutionary algorithm) [115] has been applied to evolve CPPNs; this method performs better than HyperNEAT by generating lower modularity of genotypes and phenotypes in a robotics task [116]. However, CPPNs may lose continuity when mapping genotypes to phenotypes (i.e., a small change in the genotype may lead to a large change in the phenotype). Compressed encoding therefore uses the discrete cosine transform (DCT) to reduce the dimensionality of the search space by exploiting the spatial regularities of the weight matrix, and obtains large-scale neural networks on a vision-input car driving task [117].

Furthermore, several works have combined indirect and direct encoding, discovering regularities with indirect encoding and compensating for irregularities with direct encoding [118, 119, 120].

4.2.3 Variation Operators

Variation operators aim to preserve the characteristics of the parents while introducing diversity; they typically include crossover and mutation. Crossover combines the properties of more than one parent, while mutation largely inherits the properties of a single parent. In earlier binary encodings, single-point crossover and flip mutation were widely used operators [28]. Since the length of a binary string determines the representation precision of a real number and influences the variation granularity, real encoding has been proposed to represent a weight directly as a real number. Accordingly, simulated binary crossover and polynomial mutation have been proposed for continuous search spaces [121]. Gaussian mutation is also widely used, adding a random value drawn from a Gaussian distribution to a real-encoded weight [88].

However, since NNs are sensitive to small modifications of their weights, the above variation operators typically cause catastrophic forgetting of the parents' characteristics. Hence, imitation learning or network distillation has been applied to variation operators for NNs, as in the state-space crossover [122], the Q-filtered distillation crossover [123], and the distilled topology mutation [124]. Both crossover methods apply imitation learning to distill the better behaviors of the parents into the offspring.
The mutation method first generates an offspring by augmenting the topology and then pretrains the offspring by distilling the behavior of its parent as a necessary initialization.

Furthermore, [125] proposes a family of safe mutations to deal with the catastrophic forgetting problem: the mutation magnitude of each weight is scaled by the sensitivity of the NN outputs to that weight, so that an offspring does not diverge too far from its parent. In contrast, [126] proposes a different concept of safe mutation aimed at safe exploration, which uses visited unsafe states to explore safer actions.

Apart from the catastrophic forgetting problem, the permutation problem (i.e., the same solution can be represented by different NNs) can lead to an inefficient search when encoding schemes or variation operators are not carefully designed [127]. Two types of approaches have therefore been developed. One tailors the encoding for evolving NN architectures; for example, NEAT designs a genetic encoding that tracks parent solutions using innovation numbers [13]. The other aligns the neurons of two NNs by analyzing their functional correlations before crossover, providing a general method for different encodings [127].
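A minimal sketch of the sensitivity-scaled safe mutation of [125] follows. The original work scales mutations using gradients of the network outputs with respect to the weights; this illustration approximates the sensitivity by finite differences over a reference batch of states, which is simple but costly. `forward(theta, states)` is a hypothetical policy forward pass returning an array of outputs:

```python
import numpy as np

def safe_mutation(theta, forward, states, sigma=0.1, eps=1e-6):
    """Sensitivity-scaled ('safe') mutation: estimate how strongly each
    weight affects the policy outputs on a reference batch of states,
    then shrink the perturbation of high-sensitivity weights."""
    base = forward(theta, states)  # reference outputs on the batch
    sensitivity = np.zeros_like(theta)
    for i in range(theta.size):    # finite-difference probe per weight (costly)
        probe = theta.copy()
        probe[i] += eps
        diff = forward(probe, states) - base
        sensitivity[i] = np.sqrt(np.mean(diff ** 2)) / eps
    scale = sigma / np.maximum(sensitivity, 1e-8)  # per-weight step size
    return theta + scale * np.random.randn(theta.size)
```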
4.3 Genetic Programming based Methods

Genetic programming (GP) is a popular method for solving complex RL tasks, since a program can emulate any model of computation given sufficient time and search space [128]. Three commonly used representations in GP are the abstract syntax tree, Cartesian GP (CGP), and tangled program graphs (TPG). These representations are applied to various purposes, such as evolving direct controllers, symbolic regression, and feature discovery for RL tasks.

4.3.1 Representations

The syntax tree is a commonly used representation in GP, where programs are directly encoded as genomes. As shown in Figure 3a, nodes can represent functions, operations, variables, and constants, and the tree outputs a unique program through tree traversal [28]. Typically, the tree grows randomly from a null node through crossover and mutation, but it can suffer from the bloat issue: if depth limiting is not enforced, programs grow in size over time without obvious fitness improvement.

To address the bloat issue, Cartesian GP (CGP) uses a fixed-length integer-array genome to encode an executable graph [129], as depicted in Figure 3b. All nodes are placed in a grid and indexed sequentially, with the nodes in the first column serving as input nodes. The integer-array genome is divided into blocks, with each block of three integer genes representing a single non-input node: the third integer specifies a function, and the first two specify the indices of its input nodes. Data flows from left to right in the graph, and the graph can have multiple output programs, of which the user specifies a single one as the final output.

Figure 3: Three illustrative examples of GP representations in RL: the syntax tree, Cartesian GP (CGP), and tangled program graphs (TPG), respectively.
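A minimal CGP decoder along these lines is sketched below, with a hypothetical three-block genome (not the exact genome of Figure 3b): nodes are indexed sequentially after the inputs, and each block `(in1, in2, fn)` appends one node.

```python
import operator

# a minimal CGP evaluator: nodes 0..k-1 are the inputs, and each genome
# block (in1, in2, fn) appends one node computing fn(value[in1], value[in2])
FUNCS = {'+': operator.add, '-': operator.sub,
         '*': operator.mul, '/': lambda a, b: a / b if b else 1.0}

def eval_cgp(genome, inputs, out_node):
    values = list(inputs)
    for in1, in2, fn in genome:
        values.append(FUNCS[fn](values[in1], values[in2]))
    return values[out_node]

# hypothetical genome over inputs x, y, z (nodes 0..2):
# node 3 = x*y, node 4 = y+z, node 5 = node3 - node4 = x*y - (y+z)
genome = [(0, 1, '*'), (1, 2, '+'), (3, 4, '-')]
print(eval_cgp(genome, [2.0, 3.0, 4.0], out_node=5))  # 2*3 - (3+4) = -1.0
```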
Tangled program graphs (TPG) is a framework for organizing multiple programs into a highly modular structure [130]. TPG evolves two populations, a node (i.e., team) population and a program population: the node population constructs a good organization of multiple teams of programs from the program population, while the program population discovers programs that output useful atomic actions. As shown in Figure 3c, the black point represents the root node, which receives the state inputs. Evaluation of an agent starts at the root node and reaches an atomic action through a path following the arrows. This process executes only a fraction of the programs for a task, making TPG more efficient than neuroevolutionary algorithms that must traverse the entire topology for each decision.

4.3.2 Evolution of Direct Controllers

The syntax tree corresponds directly to the parse tree created by compilers, and syntax tree-based GP has been applied to evolve controllers for robot control tasks [131], bipedal locomotion tasks [132], and acrobot and helicopter hovering tasks [133]. An integration of tree-based GP and RL has been tailored to real robots, so that a precise simulator is not required for complex robot control tasks [134]. This method runs GP in a simplified simulator to generate simple controllers, which are then adapted to a particular real robot by RL; it has outperformed Q-learning on two complex robot tasks. In addition, cellular encoding has been integrated with the syntax tree to evolve NNs with particular structures [135]. Furthermore, a hybrid of NEAT and GP, known as HyperGP, has been applied to evolve the weights of CPPNs, showing performance similar to HyperNEAT with significantly fewer evaluations [37].

Both Cartesian genetic programming (CGP) and neural networks (NNs) can be viewed as executable graphs, which has motivated researchers to extend the flexible CGP representation to the evolution of NNs. CGP-based NNs have been proposed to evolve both the topology and the weights of feed-forward or recurrent architectures, outperforming NEAT, ESP, and CoSyNE on pole balancing tasks [136]. Additionally, CGP has been applied to evolving the parameters of transfer functions, such as the Gaussian function and the logistic sigmoid, improving the performance of NNs on a ball throwing task [137]. Furthermore, CGP has been applied to evolving game-playing agents directly from the high-dimensional pixel inputs of Atari games [138], and the evolved programs are easier to understand than NNs.

TPG is tailored to visual RL tasks with high-dimensional pixel state inputs, and has been applied to 20 challenging Atari games, exceeding DQN in 15 of the 20 games and further exceeding human-level performance in 7 of those 15 [139]. Remarkably, TPG requires significantly fewer computational resources and no specialized hardware such as GPUs. Moreover, by interacting with environments, TPG introduces emergent modularity and thereby enables task decomposition. Owing to these advantages, TPG is capable of generating multi-task policies in Atari games [140, 141], and has been successfully applied to the partially observable ViZDoom [142] and Dota 2 [143].

4.3.3 Symbolic Regression

Interpretable RL is of great interest to both academia and industry, and interpretable controllers are more likely to be deployed, especially in industrial systems. The symbolic regression of GP is an effective approach to interpretable RL, fitting policy or value functions in a human-understandable way. The syntax tree is naturally the appropriate representation, owing to its high interpretability for humans. For example, the value function discovery method discovers algebraic expressions of a learned V-function by minimizing the simulation error between the expressions and sampled V-function data [144]; the genetic programming for RL method generates simple algebraic policies from data sampled from world models [145].

In addition to interpretability, symbolic regression can produce smoother and more adaptive symbolic approximators than numerical approximators such as neural networks (NNs). For example, a variant of single-node GP has been applied to evolve a smooth proxy of the V-function by maximizing the number of correct action choices for sampled training states [146]. Single-node GP has also been used to construct a symbolic process model for model-based RL, reducing the amount of training data and adapting to the dynamic system in real-time robot control tasks. This method has the advantage over NNs that it requires no hyperparameter tuning and generates smooth V-functions [147].

4.3.4 Feature Discovery

Feature discovery is the process of transforming input data into a form that RL agents can process more easily. Unlike most feature extraction methods in machine learning, where useful features are extracted from the inputs, feature discovery in RL may add features to help agents learn better. For example, in pole balancing tasks, the optimal policy can be learned faster by adding two angle features of the pole.

In the works of [148, 149], each individual encodes a program with multiple S-expressions (feature functions), where each S-expression corresponds to a unique feature. In [149], the number of useful features is fixed, while in [148] it can vary within a range, since the prior is unknown in advance. The obtained features are human-readable, allowing fine-tuning and knowledge transfer during feature discovery.

4.4 Discussion

In general, EC-based methods have shown great potential for solving complex RL tasks.
Each method has its unique advantages and disadvantages, making it suitable for different situations. ESs are simple and efficient but can be limited by high-dimensional search spaces and fitness noise. NES improves upon ES by using the reward gradient over all offspring, but there is still room for better exploitation of low-fitness offspring. GA-based methods can employ safe mutation to broaden their applicability. GP-based methods such as CGP can evolve policies with better interpretability, and TPG is a unique method specifically tailored to visual RL tasks, which can solve challenging games with high-dimensional pixel inputs while using fewer computational resources than other EC-based methods.

It is also worth noting that there is still room for improvement in EC-based RL methods. The highly parallel framework of OpenAI ES requires a large number of CPU resources, which is inefficient for large-scale image inputs; allocating more resources to promising samples may enable better performance within a limited computational budget. Furthermore, ENAS has shown great potential in automatically designing deep NN architectures for image classification, but more research is needed to explore its applicability to policy search for complex RL tasks. Overall, as EC-based RL methods continue to evolve and improve, their potential for solving complex RL tasks makes them an exciting area of research.

5 Evolutionary Computation in Exploration

In RL, agents must interact with their environments, taking actions and observing environmental states to collect the trajectories from which they improve their behaviors. The learning efficiency of an agent relies on the data it gathers; if an agent visits only a small portion of its environment, its knowledge will be limited, leading to suboptimal decision-making. Diverse exploration of the environment is therefore desired. Agents typically explore by adding noise to the action space or to the parameter space of their policies. Among state-of-the-art methods, ε-greedy exploration encourages agents to take, with a certain probability, feasible actions other than the currently optimal action for a state [1], while the parameter space noise method adds Gaussian noise to the policy weights to change the original output actions [150].

To achieve efficient exploration, four key challenges must be addressed [151]. First, the state-action space is often large, making it difficult for the agent to access the effective space. Second, the environment returns sparse and delayed rewards, so agents cannot receive timely and informative feedback on their behaviors. Third, real-world environments often contain highly random and unpredictable elements (i.e., white-noise problems), making it difficult for the agent to distinguish important from unimportant information, which leads to unstable and inefficient exploration. Fourth, exploration in multi-agent RL is harder still, since the state-action space grows exponentially and agents must explore in coordination to trade off local and global exploration.

To deal with these challenges, EC methods for RL enable extreme exploration, competition and cooperation, and massive parallelization by maintaining a set of diverse agents during the search.
In implementation, EC methods introduce exploration into RL from two perspectives: diversity encouragement methods, especially for neuroevolution, and evolution-guided exploration methods for traditional RL algorithms.

5.1 Diversity Encouragement Methods

In diversity encouragement methods for neuroevolution, EC as a policy search method directly evolves a set of diverse agents by encouraging diversity in the parameter space or in the behavior space. Most early work focused on the parameter space, with the purpose of avoiding local optima. Widely applied diversity maintenance techniques include speciation (i.e., niching) and fitness sharing [13]. Speciation divides a population into a number of species according to genetic similarity, and fitness sharing lets individuals with similar genomes share their fitness, so that innovation is protected within its own species. However, diversity in the parameter space cannot ensure diversity in the behavior space, since there are infinitely many NN weight settings that produce the same behavioral outputs. Therefore, a number of recent approaches from, or inspired by, the diversity maintenance techniques of EC have been introduced into RL to directly reward diverse behaviors or novel states, such as novelty search [41], quality diversity [152], surprise search [153], evolvability search [154], and curiosity search [155].

5.1.1 Novelty Search

Novelty search (NS) abandons the fitness objective and instead rewards novel behaviors that differ from previous ones. Behavior characterizations (BCs) are first designed to map the high-dimensional search space into a lower-dimensional behavior space. Then, to measure the novelty of a newly generated individual, a novelty metric is defined as a task-specific distance between behaviors. NS can then be integrated into EC algorithms with little change, by replacing the fitness objective with the novelty metric. NS has been applied to NEAT and outperformed fitness-based methods on the deceptive T-Maze and biped walking tasks [156, 41]. Empirical studies demonstrate that NS offers unique advantages over fitness-based EC methods, overriding the deceptiveness of most fitness functions and making the evolutionary process more open-ended.

However, when faced with a task with a large state-action space, pursuing novelty alone does not perform better than fitness-based methods [157]. Thus, in high-dimensional evolutionary robotics tasks, NS is used to augment fitness-based EC methods as a diversity maintenance technique, or serves as a second objective to be optimized simultaneously with fitness [158]. NS augmented with local competition (NSLC) has been used to create sets of diverse locomotion behaviors in evolutionary robotics [159]. Additionally, NS has been combined with OpenAI ES in three ways: the first version, NS-ES, replaces the gradients of expected reward with the gradients of expected novelty, while the other two, NSR-ES and NSRA-ES, trade off the gradients of expected reward and novelty [6]. Empirical studies have shown that all three versions perform better than OpenAI ES on humanoid locomotion and Atari tasks with deceptive traps. NS has also been combined with sub-populations to promote directed exploration in the population-guided NS method [160]. In general, all of these empirical studies demonstrate that increasing behavioral diversity makes problems easier to solve.
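The novelty metric is typically computed as the average distance from an individual's BC to its k nearest neighbors in an archive of past behaviors. A minimal sketch, assuming BCs are fixed-length vectors:

```python
import numpy as np

def novelty_score(bc, archive, k=10):
    """Novelty of a behavior characterization (BC): mean distance to its
    k nearest neighbors among previously seen behaviors."""
    if not archive:
        return float('inf')  # the first behavior is maximally novel
    dists = np.sort([np.linalg.norm(bc - b) for b in archive])
    return float(np.mean(dists[:k]))

# inside an EC loop, offspring are scored by novelty instead of fitness,
# and their BCs are (selectively) added to the archive
archive = []
for bc in [np.array([0.0, 0.0]), np.array([1.0, 0.5]), np.array([0.9, 0.6])]:
    print(novelty_score(bc, archive, k=2))
    archive.append(bc)
```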
5.1.2 Quality Diversity

The pursuit of both fitness and novelty has led to the development of quality diversity (QD) algorithms, which aim to find a large set of diverse, high-performing solutions in a single run. The set of solutions should cover as many solution types, or behavior characterizations (BCs), as possible, while finding the best solution of each type. Two main state-of-the-art QD algorithms are novelty search with local competition (NSLC) and the multi-dimensional archive of phenotypic elites (MAP-Elites) [161]. A comparative study of the two algorithms on a set of maze tasks revealed that the selection of BCs is a crucial and challenging issue: it is task-dependent and should align with quality, otherwise it can change the difficulty of finding a good solution [152]. A number of automated BC methods have therefore been proposed to improve exploration efficiency, such as using dimensionality reduction to learn BCs autonomously [162, 163], or mapping the high-dimensional parameter space into a low-dimensional manifold in which good policies lie densely [164].

Moreover, exploration efficiency can be improved in other respects. Several efficient behavioral diversity measures have been proposed, measuring the diversity of an entire population by determinants of behavioral embeddings of policies [165], or using a string edit metric as the behavioral distance [166]. To improve evaluation efficiency, the quality and novelty of new candidate solutions are predicted by a neural network (NN) in open-ended robot object manipulation tasks [167]. To improve sample efficiency, a few-shot quality diversity optimization method learns a population of prior policies for the initialization of QD [168]. To improve selection efficiency, an evolutionary diversity optimization algorithm with clustering-based selection selects a high-quality policy from each cluster for reproduction [169].

In addition, QD offers the potential for open-ended innovation, whereby RL agents generate and learn their own never-ending curriculum without human intervention. The paired open-ended trailblazer (POET) algorithm generates increasingly complex environments and optimizes their solutions concurrently, combining the methods of NS, MAP-Elites, and minimal criterion coevolution [170, 171]. Empirical studies on 2-D bipedal-walking obstacle courses have demonstrated that the solutions POET finds for challenging environments cannot be found by learning directly from scratch in the same environments. Further, a sample-efficient QD environment generation algorithm applies a deep surrogate model to predict the behaviors of agents in new environments [172]. The open-ended coevolution of environments and solutions has provided novel ideas for addressing complex tasks.

5.1.3 MAP-Elites

QD research has focused mainly on MAP-Elites, which divides the behavior space of BCs into discrete bins according to the number of discretizations required for each dimension. Each bin records the best solution found so far for that behavior type, and a newly generated solution replaces a stored one only if it falls into the same bin and achieves higher quality.
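A minimal MAP-Elites sketch along these lines is given below, assuming hypothetical `evaluate`, `behavior` (returning a BC normalized to $[0,1]$ per dimension), `random_solution`, and `mutate` routines:

```python
import random

def map_elites(evaluate, behavior, random_solution, mutate,
               bins=(10, 10), iters=1000):
    """Minimal MAP-Elites: discretize the behavior space into a grid of
    cells; each cell keeps its best ('elite') solution, and a candidate
    replaces the stored elite only if it lands in that cell with higher
    fitness."""
    archive = {}  # cell index (tuple) -> (fitness, solution)
    for _ in range(iters):
        if archive:
            x = mutate(random.choice(list(archive.values()))[1])
        else:
            x = random_solution()
        f, b = evaluate(x), behavior(x)  # b: BC, assumed normalized to [0, 1]
        cell = tuple(min(int(b[d] * bins[d]), bins[d] - 1)
                     for d in range(len(bins)))
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)  # new elite for this cell
    return archive
```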
MAP-Elites has been applied to generate elites with diverse behaviors (e.g., walking strategies) that help a robot adapt quickly to various kinds of damage [10], and has achieved better performance and robustness than PPO on simulated hexapod robot tasks [173]. However, MAP-Elites suffers from a scaling limitation: the dimensionality of the BCs must be low, since the number of discrete bins grows exponentially with it. To address this limitation, CVT-MAP-Elites uses centroidal Voronoi tessellation instead of grid-shaped bins to divide the behavior space into a desired number of regions [174]. In addition, several improvements have been proposed to scale MAP-Elites up to high-dimensional tasks, including biased cell sampling [175] and gradient-based mutation operators [176] for efficient reproduction, and approximated gradients [177], policy gradient assisted MAP-Elites [178], and deep surrogate-assisted MAP-Elites [179] for accelerating the optimization.

When facing hard-exploration tasks with sparse and deceptive rewards, RL algorithms, even with intrinsic motivation, perform poorly due to two challenges: detachment and derailment. Contemporary RL algorithms do not remember well-explored states (detachment), and random exploration may not lead back to well-explored states (derailment). Hence, [4, 180] proposed Go-Explore, a family of QD algorithms based on MAP-Elites. Go-Explore follows the key ideas of remembering states, returning to them (Go), and exploring from them (Explore). Go-Explore has greatly surpassed state-of-the-art RL algorithms on two challenging games, Montezuma's Revenge and Pitfall.

5.1.4 Surprise Search

Surprise search is a new evolutionary divergent search method that rewards deviation from the expected solution [153, 181], whereas novelty search rewards deviation from prior solutions. Surprise search models the prediction of expected behavior and the deviation from it; since the expectation is based on reasoning about past information, surprise search can be viewed as a temporal novelty process. Both surprise search and novelty search are divergent-search variants of QD, and their combination, along with local competition, has led to comparable fitness, higher efficiency, and better robustness (i.e., exploration and behavioral diversity) than novelty search on 60 highly deceptive maze navigation tasks [181].

5.1.5 Evolvability Search

Evolvability search is a new class of EC algorithms in which the fitness function is a direct measure of the evolvability of an individual [154], evolvability being an individual's potential for future evolution. Evolvability search estimates this potential by calculating the behavioral diversity of an individual's immediate offspring, and then directly selects the individuals with better potential diversity for the next environmental selection. Encouraging behavioral diversity increases the adaptive ability of a lineage. Though it resembles diversity-seeking methods such as novelty search, evolvability search outperforms novelty search on maze navigation and biped locomotion tasks [154]. However, evolvability search is computationally expensive due to its fitness evaluation process.
Exploration by intrinsic curiosity is widely used in RL algorithms, where curiosity complements the extrinsic rewards [182], predicts sequences of future actions or states [155], and achieves self-generated goals [183, 184]. Goal exploration processes (GEP), originating from EC, explore robustly: they design a set of behavioral features (i.e., goals) based on the outcome trajectories of policies and then exploit the regions around these generated goals through directed behavioral diversity, without being aware of external rewards. GEP has been integrated with off-policy RL methods to exploit policy parameters, and the diverse samples generated by GEP can be inserted into the replay buffer of deep deterministic policy gradient (DDPG) for training [184]. The intrinsically motivated GEP method integrates curiosity search and GEP to discover and acquire skills by self-generation, self-selection, self-ordering, and self-experimentation of learning goals [183]. Additionally, [185] rewards intra-life novelty to encourage agents to explore new states within their lifetime; this method discretizes the pixel space into curiosity grids and rewards agents for visiting new grid locations. In contrast to the across-training novelty of novelty search, curiosity search can revisit previously visited but still promising states.

5.2 Evolution-guided Exploration Methods
The pioneer among evolution-guided exploration methods is the Evolutionary Reinforcement Learning (ERL) method [9], and a number of works sharing its framework have since been proposed. In these methods, EC introduces exploration in two main ways: either EC agents generate diverse experiences that are stored in the replay buffer for training off-policy RL methods (e.g., DDPG, TD3, SAC), or gradient information from the RL agents is directly injected into the EC population.

5.2.1 Use of Diverse Experiences
ERL is the basic framework of evolution-guided exploration methods and has outperformed PPO, DDPG, and GA on MuJoCo continuous control benchmarks [9]. In this method, EC employs a population of agents that explore the parameter space and generate diverse experiences for training an off-policy RL agent, and the RL agent is periodically copied into the EC population to inject gradient information into evolution. As a result, ERL is able to deal with the challenges of sparse rewards, ineffective exploration, and brittle convergence. The EC side is indifferent to reward sparsity because it uses an episodic fitness metric, and its population of actors enables diverse exploration while providing redundancy; the periodic injection of RL gradient information into EC, in turn, addresses the inefficient exploration of EC itself.

A more general framework, Collaborative ERL (CERL), maintains a population of TD3 agents that optimize over different hyperparameters (e.g., the discount rate γ) and applies a resource manager to adaptively allocate computational resources to the agents according to their cumulative returns [3]. CERL has outperformed ERL and TD3 on the Humanoid and Swimmer tasks, where state-of-the-art RL algorithms are highly sensitive to their hyperparameters. The competitive and cooperative heterogeneous DRL (C2HRL) method leverages the advantages of both gradient-based and gradient-free agents and introduces two agent management mechanisms that let agents compete for computational resources and share exploration experiences [186]. C2HRL has shown faster convergence than CERL on MuJoCo tasks.
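To make the division of labor concrete, the sketch below gives the shape of the ERL-style loop described above; the helper callables (rollout, select_and_vary) and the agent and buffer interfaces are assumptions for illustration, and injecting the RL policy into a random slot stands in for ERL's replacement of the weakest member:

```python
import random

def erl_loop(population, rl_agent, replay_buffer, env, rollout, select_and_vary,
             generations=1000, grad_steps=100, sync_period=10):
    """Sketch of the ERL-style loop: the EC population supplies diverse
    experiences to an off-policy RL agent, whose policy is periodically
    injected back into the population."""
    for gen in range(generations):
        # Periodically copy the RL policy into the population so that
        # gradient information flows into the evolutionary search.
        if gen > 0 and gen % sync_period == 0:
            population[random.randrange(len(population))] = rl_agent.clone_policy()

        # Evaluate every actor; all transitions enter the shared replay buffer.
        fitness = []
        for actor in population:
            episode_return, transitions = rollout(env, actor)
            replay_buffer.extend(transitions)
            fitness.append(episode_return)

        # A standard EC step on episodic fitness: reward sparsity only changes
        # the total return of an episode, not the update rule itself.
        population = select_and_vary(population, fitness)

        # Train the RL agent on the diverse off-policy experiences.
        for _ in range(grad_steps):
            rl_agent.update(replay_buffer.sample())
    return rl_agent, population
```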
To generate well-behaved offspring, more sophisticated variation operators such as safe mutation [125, 126] and Q-filtered distillation crossover [122] have been introduced in the proximal distilled ERL algorithm [123] and the safety-oriented search method [126]. Additionally, ERL has been augmented with imitation learning, where the RL agents learn from the experiences sampled by high-fitness EC individuals, and low-fitness EC individuals learn from the RL agents by imitating their behavior patterns [187]. This method has outperformed DDPG and ERL on four MuJoCo tasks.

However, the EC part of ERL performs undirected exploration by adding noise to the parameters. Hence, directed exploration methods such as NS, QD, and curiosity search have been applied to cover the state-action space more uniformly and efficiently. For example, GEP-PG applies curiosity search to generate diverse targeted samples for training DDPG [184]. Beyond noise in the parameter space, EC also enables the introduction of action noise for off-policy RL algorithms. The evolutionary action selection TD3 method (EAS-TD3) uses samples generated by RL agents to form an EC population, applies particle swarm optimization (PSO) to evolve continuous action values, and finally uses the best actions to guide the action selection of the RL agents [188]. EAS-TD3 has shown better performance than ERL, PDERL, CERL, and TD3 on MuJoCo tasks.

Other exploration methods in robotics include evolving a foot trajectory generator to provide diversified motion priors that guide policy learning [189] and augmenting NS with multiple behavior spaces to tackle the challenge of automated data collection in robotic grasping tasks [190].

5.2.2 Use of Gradient Information
In addition to the combinations of GAs and off-policy RL algorithms in the ERL variants, the cross-entropy method (CEM) has been combined with TD3 or DDPG to create CEM-RL [191]. In CEM-RL, the gradient information of the RL agent is directly applied to half of the CEM agents at each iteration to increase training efficiency. Asynchronous ES-RL, based on CEM-RL and OpenAI ES, was developed by [192] to integrate ES with off-policy RL methods; it improves time efficiency and performance over ERL and CEM-RL. Since ESs are similar to gradient-based RL methods, they fit naturally into the EC loop of ERL for sharing gradient information with RL methods. The combination of ES and SAC, called ESAC, enables effective exploration in the parameter space [193]. ESAC has obtained improved performance over SAC, TD3, PPO, and ES on many MuJoCo and DeepMind Control Suite locomotion tasks.

ERL and its variants have been applied only to off-policy actor-critic methods. Therefore, Supe-RL was proposed by [194] to generalize to any RL method by using soft updates for policy evolution. Supe-RL generates a set of children by adding Gaussian mutation to the policy weights, and then periodically soft-updates the policy toward the best child, or keeps the weights unchanged to avoid detrimental behaviors. Supe-RL has outperformed ERL and PPO on several MuJoCo tasks. Additionally, [195] has proposed a gradient-evolutionary algorithm with temporal logic for on-policy methods.
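A minimal sketch of one CEM-RL iteration may help fix the idea; the rl_agent.apply_gradient_steps interface is an assumed placeholder, and the distribution update is simplified to an unweighted elite refit rather than the weighted covariance update of the full method:

```python
import numpy as np

def cem_rl_step(mean, std, rl_agent, evaluate, pop_size=10, elite_frac=0.5):
    """One simplified CEM-RL iteration: sample actors around the current mean,
    apply the RL agent's gradient steps to half of them, then refit the
    search distribution on the elites."""
    pop = [mean + std * np.random.randn(mean.size) for _ in range(pop_size)]

    # Half of the sampled actors receive TD3/DDPG-style gradient updates
    # before evaluation, injecting gradient information into the distribution.
    for i in range(pop_size // 2):
        pop[i] = rl_agent.apply_gradient_steps(pop[i])

    returns = np.array([evaluate(theta) for theta in pop])

    # Refit mean and std on the elite fraction, as in the cross-entropy method.
    elite_ids = returns.argsort()[::-1][: int(elite_frac * pop_size)]
    elites = np.stack([pop[i] for i in elite_ids])
    return elites.mean(axis=0), elites.std(axis=0) + 1e-3
```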
5.3 Discussion
While QD algorithms have shown promise in encouraging diversity in neuroevolution and in potentially realizing the third pillar of AI-generating algorithms (AI-GAs) [196], their effectiveness relies heavily on the selection of appropriate behavioral characterizations (BCs). Ensuring that the extracted BCs align with quality is crucial to the success of QD; otherwise, QD may perform worse than other methods such as NS. This highlights the importance of careful consideration and experimentation when selecting BCs for QD algorithms.

Another important consideration when using evolution-guided exploration methods with off-policy RL algorithms is the alignment between the EC population and the RL agents. If the EC population is too different from the RL agents, the experiences or gradients generated by EC may be of little use for updating the RL agents. This underscores the need to carefully design and tune EC algorithms so that they suit the specific RL tasks at hand. Additionally, it may be beneficial to explore hybrid methods that combine EC and gradient-based learning to leverage the strengths of both.

6 Evolutionary Computation in Reward Shaping
The reward signal is crucial for reflecting the task objective in RL. In many scenarios, however, the reward is sparse, making it difficult for agents to learn useful information. To tackle this issue, reward shaping has been introduced, which enhances the original reward with additional shaping rewards. These subrewards provide feedback about task progress, adjust the importance of different aspects of the task, or learn proxy rewards [197]. Empirical studies have shown that reward shaping can reduce the amount of exploration and accelerate convergence [198]. Despite these benefits, reward shaping still faces several challenges. Firstly, it can alter the problem itself. Secondly, designing appropriate subrewards requires expert domain knowledge. Thirdly, achieving a balance between multiple subrewards requires careful manual tuning. Finally, credit assignment is a difficult problem in multi-agent reinforcement learning. EC has been applied to these challenges through the evolution of reward functions and hyperparameters for both single-agent and multi-agent RL.

6.1 Evolution of Reward Functions
Potential-based reward shaping, inspired by the notion of potential energy, was proposed in the 1990s to address the challenge that shaping can change the problem itself [199]. The method derives a potential-based shaping function that is guaranteed to be consistent with the optimal policy. However, designing the potential function remains an open question, and extensive manual search is often required to produce acceptable results. Therefore, the reward network in [200] employs potential-based reward shaping, representing the potential function as a neural network whose weights are optimized by natural evolution strategies (NES) in a highly parallel way, as in OpenAI ES. The reward network, along with the proposed synthetic environments (both trained by NES), is robust to hyperparameter variation and can be transferred to unseen agents.
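Concretely, the shaping function derived in [199] takes the standard potential-based form (restated here in common notation; the guarantee holds for any choice of the potential):

$$\tilde{r}(s, a, s') = r(s, a, s') + F(s, s'), \qquad F(s, s') = \gamma\,\Phi(s') - \Phi(s),$$

where $\Phi : S \to \mathbb{R}$ is the potential function and $\gamma$ is the discount factor. Because $F$ telescopes along any trajectory, shaping of this form leaves the optimal policy of the original MDP unchanged, which is why the open question reduces to designing $\Phi$ itself.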
In contrast, a general computational framework for reward design from the evolutionary perspective obtains the optimal reward function by maximizing the expected fitness over a distribution of environments [201]. Experiments in a hungry-thirsty task have shown that the optimal reward function can capture both physical regularities across environments and specific properties of agent-environment interactions. Moreover, maximizing the expected fitness can lead to the emergence of interesting reward structures, such as intrinsic and extrinsic rewards. As a result, the framework avoids the issue of designing task-dependent subrewards. However, the automated quasi-exhaustive search used for finding good reward functions is quite time-consuming. Hence, PushGP has been applied to find reward functions efficiently; it can discover common features of environments and reduce the expensive costs of sim-to-real transfer in robot control tasks [202]. Other EC approaches to obtaining intrinsic rewards include hyperparameter tuning for reward shaping [203], symbolic reward search [204], and exploration methods such as novelty search [205] and curiosity search [155] (discussed in Section 5).

In a multi-agent Cyber Rodent robot task, the parameters of an intrinsic reward function are evolved to dynamically control exploration with respect to multiple hand-coded extrinsic rewards [203]. On several MuJoCo and Atari tasks, dense intrinsic reward functions are evolved directly by symbolic regression to obtain interpretable low-dimensional reward functions [204]. In a grounded communication environment, the multi-agent evolutionary reinforcement learning (MERL) method maintains a population of evolving teams optimized by neuroevolutionary algorithms for the sparse team-based reward, while optimizing agent-specific policies by gradient-based methods for each agent's local reward [206]. MERL has outperformed the state-of-the-art MARL algorithm MADDPG [207]. For tasks with both spatial and temporal constraints (i.e., where a team of agents should complete a task in a given temporal and spatial order), the multi-agent evolution via dynamic skill selection (MAEDyS) method first decomposes the task into subcomponents with local rewards and then applies a coevolutionary method to optimize the multiple local rewards together with a global reward [208].

6.2 Hyperparameter Optimization Methods
The issue of manual tuning can be addressed through hyperparameter optimization (HPO) methods. For example, the AutoRL method applies large-scale HPO to the parameterized reward function and to the neural network (NN) architecture of the policy [209]. The method first learns the reward function using evolution strategies with fixed NN architectures for the actor and critic, and then learns the NN architectures, adjusting only the number of neurons in each layer, using the previously learned reward. Although it can find robust policies for point-to-point robot navigation tasks, it is still relatively inefficient. AutoRL has also been applied to several MuJoCo tasks and has outperformed state-of-the-art algorithms such as SAC and PPO [197].
In addition, in [49], a number of weights in a hand-designed potential-based shaping reward are evolved together with other hyperparameters to control the trade-off between exploration and exploitation in a robot foraging task.

The credit assignment issue in tasks with sparse rewards is particularly challenging: estimating the contribution of an individual agent is hard, which makes team rewards difficult to optimize. A number of works have dealt with this issue using HPO methods. The state-of-the-art population-based training (PBT) has been used to automatically learn dense internal rewards for a popular 3D multiplayer first-person video game [50], and to automatically optimize the relative importance of a set of dense shaping rewards, along with their discount rates, for the continuous multi-agent MuJoCo soccer game [52]. Both methods aim to align the myopic shaping rewards with the sparse long-horizon team rewards and to generate cooperative behaviors. In addition, [210] deals with intertemporal social dilemmas by trading off collective welfare against individual utility. Specifically, a shared intrinsic reward network takes features from all agents as input, while each agent trains a distinct policy network in each episode; PBT is then applied to optimize the weights of the reward network and other hyperparameters so as to evolve altruistic behavior.

6.3 Discussion
The use of EC for reward shaping has shown promising results in addressing the challenges of sparse rewards in RL. Potential-based reward shaping, a method originating from potential energy, can guarantee consistency with the optimal policy. A computational reward framework can obtain optimal reward functions through quasi-exhaustive search, and PushGP can efficiently find reward functions that discover common features of environments. Hyperparameter optimization methods such as AutoRL can tune parameterized reward functions and neural network architectures. In multi-agent RL, PBT has been used to automatically learn dense internal rewards and to optimize the relative importance of shaping rewards so as to generate cooperative behaviors.

Although EC methods offer an automated and efficient approach to reward shaping in RL, relying solely on EC for reward shaping can be inefficient. To mitigate this, incorporating prior knowledge can be beneficial: intrinsic rewards can guide agents to explore interesting parts of the state space, while extrinsic rewards define the task objective. EC methods can generate intrinsic rewards and dynamically tune their weights to address the issue of sparse rewards.

7 Evolutionary Computation in Meta-RL
RL has proven successful at tackling complex tasks, but it often requires a large number of samples to learn each task from scratch. Moreover, the choice of a pre-specified RL algorithm can affect performance in terms of, e.g., cumulative rewards or sample efficiency. To address these challenges, meta-RL seeks to develop a general-purpose learning algorithm that can adapt to different tasks.
In other words, it aims to leverage knowledge from previous tasks to facilitate fast learning on new ones. Meta-RL can handle various scenarios, from learning in similar environments within a single task to learning in substantially distinct environments across multiple tasks.

From the perspective of optimization-based methods, meta-RL can be formulated as a bi-level optimization problem: the inner level learns an agent using standard RL techniques, while the outer level optimizes RL configurations, such as policy update rules, hyperparameters, and reward formulations, toward a meta-objective. Gradient-based methods [211], RL methods [212], and EC methods [213] can be used to optimize either level. EC methods are particularly promising for meta-RL since they are applicable to non-differentiable meta-objectives and avoid the high computational overhead of higher-order gradients. Specifically, EC methods have been introduced in various aspects of meta-RL, such as parameter initialization, loss learning, environment synthesis, and algorithm generation.

7.1 Parameter Initialization
Parameter initialization is a critical aspect of meta-RL that aims to find a policy initialization from which good general performance can be achieved with only a few gradient steps across environments sampled from a distribution of tasks. Model-agnostic meta-learning (MAML) is a state-of-the-art method that formulates the learning of an easily adaptable policy as an optimization problem whose meta-objective is to minimize the loss after a small number of gradient steps on a new task [211]. MAML places no requirements on the model representation and has been applied successfully to various tasks, including regression, classification, and RL. However, estimating the second derivatives of the reward function via backpropagation is challenging, and policy gradient methods have inherently high variance. To overcome these limitations, ES-MAML integrates MAML into the ES framework, avoiding the calculation of second derivatives by Gaussian smoothing of the MAML reward [214]. Additionally, ES-MAML simultaneously optimizes hyperparameters and initial parameters, leading to better performance and exploration on tasks with sparse rewards. Alternatively, Baldwinian evolutionary methods, which reinitialize parameters and hyperparameters, have been used for meta-learning when MAML is not applicable [215]. It is worth noting, however, that meta-learning a parameter initialization is limited to a single task or a distribution of similar tasks.

7.2 Loss Learning
Meta-learning the loss has demonstrated generalization across substantially different tasks, including out-of-distribution tasks. Evolved Policy Gradients (EPG) is a representative work that meta-learns a differentiable loss function parameterized by temporal experiences [213]. EPG has two optimization loops: an inner loop learns a policy that minimizes the loss, and an outer loop learns the loss such that an agent trained with it achieves high expected returns over a task distribution. The parameters of the loss function, represented by a neural network, are optimized by OpenAI ES, where a population of workers runs in parallel to obtain the update gradients of the loss. EPG has demonstrated better generalization than MAML on several MuJoCo tasks, although its effectiveness is limited to a small family of tasks at a time.
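For reference, the outer-loop update of OpenAI ES, which EPG uses to optimize the loss parameters, can be sketched in a few lines of NumPy; the callable meta_return, standing for training an agent with a given loss and measuring its return, is an assumed placeholder:

```python
import numpy as np

def es_update(phi, meta_return, sigma=0.1, lr=0.01, n_pairs=64):
    """One OpenAI-ES step on the loss parameters phi, with antithetic
    (mirrored) sampling: each pair evaluates phi +/- sigma * eps, and the
    smoothed gradient is the return-weighted average of the perturbations."""
    eps = np.random.randn(n_pairs, phi.size)
    diffs = np.array([meta_return(phi + sigma * e) - meta_return(phi - sigma * e)
                      for e in eps])
    grad = (eps * diffs[:, None]).sum(axis=0) / (2 * sigma * n_pairs)
    return phi + lr * grad
```

In practice OpenAI ES also rank-normalizes the returns and distributes the evaluations across parallel workers that share random seeds, which is what makes the highly parallel optimization of the EPG loss feasible.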
Several works have meta-learned interpretable loss functions via symbolic regression with GP. For example, Co-Reyes et al. [216] generate symbolic loss functions for general policy update rules by regularized evolution, directly evolving a population of RL algorithms. Regularized evolution removes the oldest solutions rather than the worst solutions in the population, preventing overfitting to training noise, so that algorithms that retrain well are more likely to remain in the population. Experiments on complex tasks demonstrate that, when learning from scratch, this method can rediscover DQN, and that when the existing DQN algorithm is inserted into the population as a bootstrap, the method can further improve the generalization of DQN.

Additionally, to simultaneously meet the requirements of performance, generalizability, and stability, MetaPG formulates them as three meta-objectives and applies NSGA-II to discover new RL algorithms expressed as directed acyclic graphs, a symbolic representation [217]. Experiments on three continuous control tasks show that MetaPG improves both the performance and the generalizability of SAC when a graph-based implementation of SAC [24] is used to initialize the population.

7.3 Environment Synthesis
Environment synthesis aims to generate synthetic data that enhances the training efficiency of RL models. In the world-models approach, for instance, the MDN-RNN model receives compressed input from the agent's visual perception and predicts future states to improve training efficiency [90]. In real-world applications, however, it is essential to consider both states and rewards. Therefore, [200] proposes a method that simultaneously learns synthetic environments (SEs) and reward networks (RNs) to generate synthetic MDPs that mimic real-world environments. The method represents SEs and RNs as NN proxies and applies bi-level optimization to tune them for better performance on the real environments: the inner optimization trains RL agents on the proxies, while the outer loop evolves the parameters of the SEs and RNs with NES to maximize performance on the real environments. This approach trains competitive agents with fewer interactions with the real environments on MuJoCo tasks. In [50], population based training (PBT) [43] has been applied to meta-learn the internal rewards and the hyperparameters of the RL algorithm simultaneously. This method can be viewed as a two-tier optimization problem in which the inner tier maximizes the expected discounted internal rewards, while the outer tier maximizes a meta-reward over the internal rewards and hyperparameters. Jointly optimizing the policy and the RL process itself enables the training of better agents in large-scale complex tasks.
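A minimal sketch of the PBT exploit-and-explore step that underlies these two-tier methods is given below; the member structure, quantile, and perturbation factors are illustrative assumptions rather than the published implementation:

```python
import copy
import random

def pbt_step(population, quantile=0.25, perturb=(0.8, 1.2)):
    """One PBT exploit/explore step over a population of members, each a dict
    with "score", "weights", and "hyperparams", after a period of training."""
    population.sort(key=lambda m: m["score"])
    cutoff = max(1, int(quantile * len(population)))
    bottom, top = population[:cutoff], population[-cutoff:]

    for loser in bottom:
        winner = random.choice(top)
        # Exploit: copy the weights and hyperparameters of a top member.
        loser["weights"] = copy.deepcopy(winner["weights"])
        # Explore: perturb each copied hyperparameter multiplicatively.
        loser["hyperparams"] = {k: v * random.choice(perturb)
                                for k, v in winner["hyperparams"].items()}
    return population
```

Each member then resumes inner-loop RL training with its (possibly new) weights and hyperparameters, so the outer-loop search runs concurrently with learning instead of in separate sequential trials.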
7.4 Algorithm Generation
When rewards from the extrinsic environment are sparse, curiosity can provide intrinsic motivation for agents to explore. In [218], learning how to explore is formulated as a meta-learning problem of generating curious behaviors, in which effective curiosity algorithms are generated by learning proxy rewards for exploration. The meta-learning method has two optimization loops: the outer loop evolves a population of curiosity algorithms (represented as programs) by GP to dynamically learn intrinsic reward signals, and the inner loop runs a standard RL pipeline using the learned reward signals. The method designs a domain-specific language, represented as directed acyclic graphs, to generate programs whose building blocks include NNs, objective functions, ensembles, buffers, and other regressors as polymorphic data types. Empirical studies have shown that the method can discover algorithms similar to novelty search and can generalize across a much broader distribution of environments.

7.5 Discussion
Meta-RL has undoubtedly made significant progress in enabling RL to learn new tasks efficiently and to generalize across different tasks. However, the number and complexity of tasks that can be solved by meta-RL are still limited, particularly in real-world applications. Furthermore, the computational cost of meta-RL is high due to the concurrent optimization of two loops and training over a large number of tasks. Exploiting the model-agnostic and highly parallel properties of EC is therefore a promising direction for unlocking the full potential of meta-RL. EC methods avoid the high computational overhead of higher-order gradients and can handle non-differentiable meta-objectives, which are challenging for gradient-based methods. The use of EC in various aspects of meta-RL, such as parameter initialization, loss learning, environment synthesis, and algorithm generation, has demonstrated promising results. With further research and development, EC-based meta-RL has the potential to enable RL to learn and generalize across more complex tasks and to become more computationally efficient in real-world scenarios.

Although EC methods have shown great potential in meta-RL, several challenges remain. One is the high dimensionality of the search space, which can become very large when dealing with complex environments or when optimizing both policy and hyperparameters, leading to slow convergence and high computational costs. Another is finding a balance between exploration and exploitation: EC methods rely on some form of exploration to find good solutions, but too much exploration wastes computational resources and slows progress. To address these challenges, future research in EC-based meta-RL could focus on developing more efficient search algorithms, reducing the dimensionality of the search space, and designing exploration strategies that balance exploration and exploitation more effectively.

8 Evolutionary Computation in Multi-objective RL
While the aforementioned methods focus on single-objective RL, many real-world problems involve multiple conflicting objectives. For example, an agent may need to grasp an object while minimizing its energy consumption. Such problems can be formulated as multi-objective MDPs (MOMDPs), where the reward function $R = [r_1, \ldots, r_m]^\top$ is a vector of $m$ rewards and the discount factor is a vector $\gamma = [\gamma_1, \ldots, \gamma_m]^\top$.
Multi-objective RL (MORL) learns a policy $\pi_\theta$ that simultaneously optimizes the multiple objectives $J(\pi) = [J_1(\pi), \ldots, J_m(\pi)]^\top$, where each objective $J_i(\pi)$ is associated with one dimension of the reward vector:

$$J_i(\pi) = \mathbb{E}_{\rho_0, \pi, T}\left[\sum_{t=0}^{\infty} \gamma_i^t \, r_i^t\right]. \qquad (2)$$

MORL is therefore intrinsically a multi-objective optimization problem, in which no single optimal solution can satisfy all the conflicting objectives. Instead, a set of trade-off solutions characterized by Pareto optimality is desired. In this context, a Pareto-optimal solution is one that is not dominated by any other solution in the objective space. The set of such solutions is called the Pareto set, and its image in the objective space is referred to as the Pareto front. To find a set of solutions approximating the Pareto front, multi-objective evolutionary algorithms (MOEAs) are effective tools that obtain such sets in a single run [219]. Solution quality is commonly judged on two aspects: convergence toward the true Pareto front and diversity along the front [220]. The hypervolume (HV) metric is often employed to measure convergence and diversity concurrently; it calculates the size of the hypervolume enclosed by the solution set and a reference point (e.g., a vector of the nadir points of all obtained solutions in each dimension) [221]. A larger HV value thus indicates a closer approximation of the Pareto front. Other metrics employed in MORL include the (inverted) generational distance [222], the generalized spread indicator [223], the cardinality indicator [224], and sparsity metrics [225].

According to the number of policies obtained at the end of optimization, MORL methods can be roughly divided into single-policy methods and multi-policy methods [15]. Single-policy methods aim to learn one optimal policy at a time: the multiple rewards are transformed into a scalar reward by a scalarization function, and the scalar-rewarded task is then learned by general RL methods [226]. However, single-policy methods have several drawbacks, including the need for domain knowledge, low efficiency, and sub-optimality induced by the fixed preference [15]. In contrast, multi-policy methods aim to find a set of diverse policies that approximate the Pareto front, providing users with multiple options to choose from. One class of multi-policy methods focuses on using objective weights to guide the optimization of policies [227]; however, scalarizing over all weights and approximating the whole Pareto front are both challenging. Another class applies MOEAs to obtain a set of optimal solutions without setting weights. Furthermore, multi-objectivization can transform single-objective problems into multi-objective ones so as to make them easier to solve.

8.1 Multi-objective Evolutionary Algorithms
To deal with the above issues of MORL, EC offers effective tools for obtaining a set of trade-off solutions that not only converge toward the Pareto front but are also well distributed along it.
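To make the two notions used throughout this section concrete, the following is a minimal sketch of Pareto dominance and a two-objective hypervolume computation; a maximization setting with a fixed reference point (e.g., the nadir point) is assumed, and practical MORL systems would use library routines rather than this illustrative version:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def hypervolume_2d(points, ref):
    """Hypervolume enclosed by a 2-D maximization front and a reference point:
    the union area of the rectangles [ref_x, x] x [ref_y, y], computed by
    sweeping the non-dominated points in decreasing order of the first
    objective. Assumes every point strictly dominates the reference point."""
    front = [p for p in points
             if not any(dominates(q, p) for q in points if q is not p)]
    front.sort(key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in front:
        hv += (x - ref[0]) * (y - prev_y)  # new horizontal strip only
        prev_y = y
    return hv
```

A larger value indicates a front that is both closer to the true Pareto front and better spread between its extremes, which is why HV serves as a single scalar proxy for convergence and diversity at once.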
In [11], NSGA-II or MCMA (a multi-objective variant of CMA-ES), augmented with a local search method, has been applied to search graph-represented policies in robot load tasks with two, three, and five objectives. In addition, evaluation metrics have been applied to select solutions with good performance for the next generation. The metrics-based MORL method applies HV and sparsity metrics to select the best policy-weight pairs [225]; with a prediction model that forecasts policy improvements and an intra-family interpolation method that constructs continuous Pareto fronts, it has achieved better HV and sparsity values than MOEA/D and a meta-learning method on a set of multi-objective MuJoCo tasks. The idea of metric-based selection has also been carried over to the action selection mechanisms of MORL. The hypervolume-based MORL algorithm selects the action with the largest HV contribution, and has outperformed a linear scalarization-based action selection mechanism on the two-objective deep-sea treasure and three-objective mountain car benchmarks [228]. HV has also been used to select actions interactively [229]. The Pareto Q-learning algorithm uses three evaluation mechanisms, i.e., the Pareto dominance relation, the HV metric, and the cardinality metric, to select the most promising actions during Q-learning [230].

8.2 Multi-objectivization
Multi-objectivization is a technique that transforms a single-objective problem into a multi-objective one by decomposing the single objective or by adding extra objectives [231]. The motivation is to make the problem easier to solve by introducing more exploration through prior knowledge. In the context of RL, multi-objectivization is typically used as a reward shaping method that focuses not only on completing tasks but also on encouraging behavioral diversity among the learned agents. Recently, the evolutionary multi-objective game AI (EMOGI) framework was proposed for generating behavior-diverse game agents [232]. EMOGI first tailors a reward function with multiple objectives, consisting of a performance objective and a number of behavior objectives, and then applies NSGA-II to evolve a set of trade-off policies. Further studies on EMOGI have shown that introducing behavior objectives alone, without the performance objective, is sufficient to generate a set of behavior-diverse agents [233].

8.3 Discussion
EC offers a wide range of MOEAs, including Pareto-based, decomposition-based, and indicator-based methods, yet MORL has mainly used NSGA-II and the HV metric. This lack of diversity in the adopted MOEAs has limited the development of efficient and effective algorithms for multi-objective RL problems. One possible remedy is to introduce user preference-based MORL methods, which can take inspiration from preference-based MOEAs and enable users to specify their preferences over different objectives. Furthermore, to advance the field of MORL, researchers need to design and study tasks with more than three objectives.
This would allow the use of many-objective evolutionary algorithms (MaOEAs) [234], which have the potential to provide better solutions for such problems.

While EC has proved to be a valuable tool for MORL, there is still a need to explore more MOEAs and their essential ideas in order to develop efficient and effective algorithms for multi-objective RL problems. Preference-based MORL methods and MaOEAs offer new opportunities to overcome the limitations of existing approaches and advance the field further. Future research should focus on designing and studying more complex tasks with multiple objectives to better evaluate the performance of MORL algorithms and compare them with existing state-of-the-art methods.

[Figure 4: Overview of future directions in four fields of EvoRL: processes (encoding, sampling, search operators, algorithmic frameworks, evaluation), benchmarks (e.g., OpenAI Gym, ViZDoom, maze navigation), platforms (e.g., Lamarckian, EvoJAX, evosax, EvoX), and application scenarios (industry, healthcare, finance, robotics, games, scheduling).]

9 Future Research Directions
Although EvoRL has been successfully applied to large-scale complex RL tasks, even those with sparse and deceptive rewards, it is still computationally expensive. Efficient methods for the EvoRL processes, including encoding, sampling, search operators, algorithmic frameworks, and evaluation, as well as tailored benchmarks, platforms, and applications, are desirable research directions, as overviewed in Figure 4.

9.1 Encodings
Efficient encodings of decision variables are essential for the optimization performance of EC algorithms. While real-valued encoding is the most widely used representation in most research fields of RL, it is not the most efficient representation for policy search. Indirect encodings, such as CPPNs, can be evolved to capture structural relationships like those present in the human body structure. Additionally, TPG for GP is a highly compressed representation that organizes multiple execution programs into modular structures. Beyond these indirect encodings, however, the development of efficient encoding schemes is still limited; this includes the development of human-understandable encodings and of efficient encodings for ESs. Furthermore, it is desirable to apply existing encodings to large-scale complex tasks to investigate their scalability and generality.

Efficient encodings have the potential to improve the scalability and generality of EC algorithms. It is crucial to develop encoding schemes that can handle large-scale complex tasks at reduced computational cost, as well as human-understandable encodings that increase the interpretability of evolved policies. The development of efficient encoding schemes can also benefit from insights from other research fields, such as information theory and statistical learning theory, and such encodings can be combined with other efficient methods, such as efficient sampling and efficient sample utilization, to further improve the performance of EC algorithms. Overall, the development of efficient representations is an important research direction for improving the scalability and generality of EC algorithms.
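As a point of reference for what direct real-valued encoding means in practice, the minimal sketch below (with arbitrary layer shapes chosen purely for illustration) shows how a policy network is flattened into the single parameter vector that ESs or GAs manipulate:

```python
import numpy as np

def flatten(layers):
    """Direct encoding: concatenate all weight matrices into one real vector."""
    return np.concatenate([w.ravel() for w in layers])

def unflatten(vector, shapes):
    """Rebuild the weight matrices from the flat vector the EC search mutated."""
    layers, i = [], 0
    for shape in shapes:
        size = int(np.prod(shape))
        layers.append(vector[i:i + size].reshape(shape))
        i += size
    return layers

# Example: a tiny two-layer policy; the search operates directly on `theta`.
shapes = [(8, 32), (32, 2)]
theta = flatten([np.random.randn(*s) * 0.1 for s in shapes])
weights = unflatten(theta, shapes)
```

Indirect encodings such as CPPNs replace theta with a much smaller generative description of the weights, which is the source of their compression and structural bias.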
9.2 Sampling Methods
The fundamental process of EC is to continuously sample solutions in search of promising directions, so efficient sampling can significantly reduce time and computational overhead. In ES-based methods for policy search, efficient sampling can decrease variance and accelerate convergence, as with mirrored sampling in OpenAI ES [8]. Efficient sampling in the decision space is advantageous for ESs, but it has not been thoroughly investigated in GA- or GP-based methods.

In GA-based methods, on the other hand, sampling is often conducted based on performance in the objective space (i.e., the behavior space). For instance, during exploration, new individuals are often generated in sparsely populated areas in order to find agents with diverse behaviors as quickly as possible. However, designing efficient sampling methods remains a challenging issue for scaling algorithms up to tasks with large search spaces, sparse and deceptive rewards, and complex, varied landscapes during training.

9.3 Sample Utilization
In off-policy RL algorithms such as DQN, samples are stored in a replay buffer for multiple epochs of updates, which improves sample utilization and breaks sample correlations. On-policy RL algorithms such as PPO use importance sampling to enable sample reuse, correcting for the discrepancy between old and new policies. Other RL techniques such as GAE and V-trace allow the reuse of old samples while further reducing the variance of updates [235].

In ESs (which are similar to policy-gradient RL methods), techniques such as importance sampling have been introduced to enable sample reuse [68]. However, importance sampling alone is not sufficient to compete with RL methods in terms of sample efficiency, so more efficient techniques need to be introduced into ESs. Furthermore, it remains to be investigated whether such techniques are feasible for GAs.

9.4 Search Operators
Although a number of variation operators have been tailored to the search of NNs, such as safe mutation [125] and distillation crossover [123], the related research is still limited by the intrinsic features of the various encodings. Besides topology distillation and gradient-sensitive variation through imitation learning and distillation, further techniques could improve the efficiency of variation operators, such as transfer learning and surrogate gradient methods.

In addition, the operators need to be customized to the encoding method and should be controllable so as to generate safe offspring. To this end, surrogate models or other auxiliary tools can help identify the most promising offspring to preserve during selection while discarding less promising ones. Finally, more search operators can be borrowed from EC (e.g., DE [236], PSO [237], and CSO [238]) to investigate their effectiveness on RL tasks. The development of efficient search operators can significantly reduce search time and improve the effectiveness of EvoRL algorithms.

9.5 Algorithmic Frameworks
Several EvoRL frameworks have been developed, such as NEAT, OpenAI ES, ERL, and PBT. However, several issues still require further research.
In policy search, optimizing a policy with a large number of parameters can be challenging. If the policy can be divided into components at a finer granularity, various functional modules can be discovered automatically using cooperative coevolution methods. Moreover, this divide-and-conquer approach enables the parallel computation of different components, further reducing computational resources.

Research on evolution-guided exploration methods has mainly focused on off-policy RL algorithms, since the experiences generated by EC are more easily used than gradient information while ensuring that the RL algorithms do not degenerate. If the gradient information of EC could also be introduced, the training efficiency of RL algorithms could be further improved. Additionally, more effort is needed to develop frameworks for on-policy RL algorithms. Although PBT is effective, selecting which hyperparameters to optimize from a large set of candidates is challenging; this selection can be formulated as a combinatorial optimization problem, and EC can then be applied to the whole framework of hyperparameter selection and optimization to realize end-to-end automated HPO.

9.6 Evaluation Methods
Efficient evaluation methods are crucial for reducing the computational burden of evaluating the fitness of each newly generated agent in EvoRL. Surrogate-assisted methods have been introduced in EvoRL to predict fitness values [239, 240]. However, accurately modeling the relationship between policy parameters or behaviors and performance remains challenging, especially in tasks with a large number of parameters and multiple performance indicators. In addition, ranking agents without estimating fitness is feasible and has been explored, for example through rank-based fitness shaping, which introduced racing methods into ESs to judge the relative performance of agents [75]. These attempts show that accurate fitness values are not always necessary for efficient evaluation; a partial order over agents may be sufficient, which further facilitates the parallelism of the algorithms.

Efficient evaluation methods are essential not only for speeding up the evaluation process but also for improving the accuracy and robustness of evaluation results. In addition to surrogate-assisted methods and rank-based fitness shaping, techniques such as transfer learning, meta-learning, and Bayesian optimization can be explored to reduce the number of interactions with the environment and improve evaluation efficiency. These techniques can also improve the generalization capability of the agents, making them more adaptable to new tasks and environments. Efficient evaluation can significantly improve the scalability and generality of EvoRL algorithms, enabling them to tackle more complex and challenging tasks more efficiently.
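As an illustration of the surrogate-assisted idea, the sketch below pre-screens offspring with a simple ridge-regression surrogate over parameter or behavior features; this choice of model is an assumption for illustration, and the actual surrogates in [239, 240] are richer:

```python
import numpy as np

class RidgeSurrogate:
    """Fits fitness ~ features with L2-regularized least squares; used to rank
    offspring so that only the most promising ones are actually simulated."""

    def __init__(self, reg=1e-2):
        self.reg, self.w = reg, None

    def fit(self, features, fitness):
        # features: (n, d) array of evaluated individuals; fitness: (n,) array.
        X = np.hstack([features, np.ones((len(features), 1))])  # add bias term
        A = X.T @ X + self.reg * np.eye(X.shape[1])
        self.w = np.linalg.solve(A, X.T @ fitness)

    def predict(self, features):
        X = np.hstack([features, np.ones((len(features), 1))])
        return X @ self.w

def prescreen(offspring_features, surrogate, keep=8):
    """Return indices of the offspring predicted to be fittest; only these are
    evaluated in the real (expensive) environment."""
    scores = surrogate.predict(offspring_features)
    return np.argsort(scores)[::-1][:keep]
```

Because only the surviving candidates are rolled out, the surrogate needs to be accurate only in its ranking of offspring, echoing the observation above that a partial order is often sufficient.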
9.7 Benchmarks for EvoRL
The earliest tasks solved by EvoRL were simple, such as pole balancing and mountain car [74]. EvoRL was then applied to various mobile robotics tasks, such as maze navigation and robot arm control, as well as to sim-to-real robotic tasks; these works contributed to the formation of the research field of evolutionary robotics [158]. More recently, benefiting from the unified platform and agent-environment interaction interface of OpenAI Gym [241], games have become excellent benchmarks for evaluating EvoRL compared with earlier testbeds. In particular, EvoRL has been widely applied to tasks with continuous state-action spaces (e.g., robot control in MuJoCo) and has shown promise for learning agents in large-scale visual-input games (e.g., ViZDoom [142] and Dota 2 [143]).

However, in empirical studies, EvoRL methods have typically been compared with RL methods to demonstrate performance improvements, rather than with other EvoRL methods, because existing RL benchmarks are not sufficient for investigating the properties of the various EvoRL methods. Moreover, multi-objective EvoRL methods and quality diversity EvoRL methods lack tailored benchmarks. Developing tailored EvoRL benchmarks is therefore much needed. While designing EvoRL benchmarks is not easy, a quick route is to modify existing RL benchmarks, such as the modified multi-objective MuJoCo tasks [7] for verifying multi-objective EvoRL methods and the adapted 2-D bipedal walking tasks for verifying the open-endedness of EvoRL methods [171].

9.8 Scalable Platforms
Recently, several scalable platforms for EvoRL have been developed, such as the Lamarckian platform, an open-source high-performance platform that scales up to thousands of CPU cores and has been verified to perform well in large-scale commercial games [242]. Another is the parallel evolutionary and reinforcement learning library (PEARL), although it has not demonstrated high scalability [243]. Exploiting the fast computation of GPUs, the JAX library offers a NumPy-like API for GPU-accelerated numerical calculation. Based on JAX, the neuroevolution platforms EvoJAX [244] and evosax [245], the platform EvoX [246] for general EC algorithms, and the platform QDax [247] for QD algorithms have been developed. These platforms have been shown to find solutions to Atari and MuJoCo tasks in significantly less time than CPU-based implementations.

In addition, [248] has proposed a platform for developing coevolution algorithms that co-optimize the design and control of robots. Despite these advances, research on efficient and scalable platforms for EvoRL is still limited, as existing platforms may not be user-friendly or may support only a limited set of EvoRL algorithms. Further research is needed to develop more efficient and scalable platforms that can handle large-scale complex tasks and integrate with various EvoRL algorithms.

10 Conclusion
This article has presented a comprehensive survey of EvoRL, focusing on its methodologies and future directions. Firstly, the article introduced EvoRL methods, classifying them according to six key research fields of RL: hyperparameter optimization, policy search, exploration, reward shaping, meta-RL, and multi-objective RL. For each field, the applied EC methods (ESs, GAs, and GP) were elaborated and their main advantages and disadvantages discussed. Secondly, the article discussed several future directions for efficient methods in the EvoRL processes, as well as tailored EvoRL benchmarks and platforms.
By discussing these future directions, the article provides guidance for researchers and practitioners interested in the field of EvoRL and promotes the further development of this cross-disciplinary research field. Overall, this survey is a resource for anyone interested in learning about EvoRL and its potential applications in RL, and it offers insights into the future direction of this rapidly growing field.

References
[1] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.
[2] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[3] S. Khadka, S. Majumdar, T. Nassar, Z. Dwiel, E. Tumer, S. Miret, Y. Liu, and K. Tumer, "Collaborative evolutionary reinforcement learning," International Conference on Machine Learning, 2019.
[4] A. Ecoffet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune, "Go-Explore: A new approach for hard-exploration problems," arXiv preprint arXiv:1901.10995, 2019.
[5] Q. Long, Z. Zhou, A. Gupta, F. Fang, Y. Wu, and X. Wang, "Evolutionary population curriculum for scaling multi-agent reinforcement learning," in International Conference on Learning Representations, 2020.
[6] E. Conti, V. Madhavan, F. Petroski Such, J. Lehman, K. Stanley, and J. Clune, "Improving exploration in evolution strategies for deep reinforcement learning via a population of novelty-seeking agents," Advances in Neural Information Processing Systems, vol. 31, 2018.
[7] D. M. Roijers, P. Vamplew, S. Whiteson, and R. Dazeley, "A survey of multi-objective sequential decision-making," Journal of Artificial Intelligence Research, vol. 48, pp. 67–113, 2013.
[8] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, "Evolution strategies as a scalable alternative to reinforcement learning," arXiv preprint arXiv:1703.03864, 2017.
[9] S. Khadka and K. Tumer, "Evolution-guided policy gradient in reinforcement learning," in International Conference on Neural Information Processing Systems, 2018.
[10] A. Cully, J. Clune, D. Tarapore, and J.-B. Mouret, "Robots that can adapt like animals," Nature, vol. 521, no. 7553, p. 503, 2015.
[11] H. Soh and Y. Demiris, "Evolving policies for multi-reward partially observable Markov decision processes (MR-POMDPs)," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, 2011, pp. 713–720.
[12] D. Whitley, S. Dominic, R. Das, and C. W. Anderson, "Genetic reinforcement learning for neurocontrol problems," Machine Learning, vol. 13, no. 2, pp. 259–284, 1993.
[13] K. O. Stanley and R. Miikkulainen, "Evolving neural networks through augmenting topologies," Evolutionary Computation, vol. 10, no. 2, pp. 99–127, 2002.
[14] O. Sigaud, "Combining evolution and deep reinforcement learning for policy search: A survey," arXiv preprint arXiv:2203.14009, 2022.
[15] C. Liu, X. Xu, and D. Hu, "Multiobjective reinforcement learning: A comprehensive overview," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 45, no. 3, pp. 385–398, 2014.
[16] J. Parker-Holder, R. Rajan, X. Song, A. Biedenkapp, Y. Miao, T. Eimer, B. Zhang, V. Nguyen, R. Calandra, A. Faust et al.,
"Automated reinforcement learning (AutoRL): A survey and open problems," arXiv preprint arXiv:2201.03916, 2022.
[17] H. Qian and Y. Yu, "Derivative-free reinforcement learning: A review," Frontiers of Computer Science, 2021.
[18] Y. Li, "Deep reinforcement learning: An overview," arXiv preprint arXiv:1701.07274, 2018.
[19] J. Schulman, S. Levine, P. Abbeel, M. Jordan, and P. Moritz, "Trust region policy optimization," in International Conference on Machine Learning. PMLR, 2015, pp. 1889–1897.
[20] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, "Proximal policy optimization algorithms," arXiv preprint arXiv:1707.06347, 2017.
[21] V. Mnih, A. P. Badia, M. Mirza, A. Graves, T. Lillicrap, T. Harley, D. Silver, and K. Kavukcuoglu, "Asynchronous methods for deep reinforcement learning," in International Conference on Machine Learning, 2016.
[22] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra, "Continuous control with deep reinforcement learning," in International Conference on Learning Representations, 2016.
[23] S. Fujimoto, H. Hoof, and D. Meger, "Addressing function approximation error in actor-critic methods," in International Conference on Machine Learning. PMLR, 2018, pp. 1587–1596.
[24] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine, "Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor," in International Conference on Machine Learning. PMLR, 2018, pp. 1861–1870.
[25] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller, "Playing Atari with deep reinforcement learning," arXiv preprint arXiv:1312.5602, 2013.
[26] M. Hessel, J. Modayil, H. Van Hasselt, T. Schaul, G. Ostrovski, W. Dabney, D. Horgan, B. Piot, M. Azar, and D. Silver, "Rainbow: Combining improvements in deep reinforcement learning," in Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
[27] H. Van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 30, no. 1, 2016.
[28] N. Hansen, D. V. Arnold, and A. Auger, "Evolution strategies," in Springer Handbook of Computational Intelligence. Springer, 2015, pp. 871–898.
[29] D. Whitley, "A genetic algorithm tutorial," Statistics and Computing, vol. 4, no. 2, pp. 65–85, 1994.
[30] E. K. Burke, S. Gustafson, and G. Kendall, "Diversity in genetic programming: An analysis of measures and correlation with fitness," IEEE Transactions on Evolutionary Computation, vol. 8, no. 1, pp. 47–62, 2004.
[31] G. Rudolph, Convergence Properties of Evolutionary Algorithms. Verlag Dr. Kovač, 1997.
[32] N. Hansen, S. D. Müller, and P. Koumoutsakos, "Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES)," Evolutionary Computation, vol. 11, no. 1, pp. 1–18, 2003.
[33] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, and J. Schmidhuber, "Natural evolution strategies," The Journal of Machine Learning Research, vol. 15, no. 1, pp. 949–980, 2014.
[34] S. Amari and S. C. Douglas, "Why natural gradient?" in IEEE International Conference on Acoustics, 1998.
[35] J. Gauci and K. O.
Stanley, \\\\Indirect encoding of neural networks for scalable go,\\\" in Inter-\\nnational Conference on Parallel Problem Solving from Nature . Springer, 2010, pp. 354{363.\\n[36] S. Risi and J. Togelius, \\\\Neuroevolution in games: State of the art and open challenges,\\\"\\nIEEE Transactions on Computational Intelligence and AI in Games , vol. PP, no. 99, 2015.\\n[37] Z. Buk, J. Koutn\\u0013 \\u0010k, and M. \\u0014Snorek, \\\\Neat in hyperneat substituted with genetic program-\\nming,\\\" in International Conference on Adaptive and Natural Computing Algorithms . Springer,\\n2009, pp. 243{252.\\n[38] A. Moraglio, C. Di Chio, J. Togelius, and R. Poli, \\\\Geometric particle swarm optimization,\\\"\\nJournal of Arti\\fcial Evolution and Applications , 2008.\\n[39] R. I. McKay, N. X. Hoai, P. A. Whigham, Y. Shan, and M. O'neill, \\\\Grammar-based genetic\\nprogramming: A survey,\\\" Genetic Programming and Evolvable Machines , vol. 11, no. 3, pp.\\n365{396, 2010.\\n[40] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms , 1st ed., ser. Wiley-\\nInterscience series in systems and optimization. Chichester, New York: John Wiley & Sons,\\n2001.\\n[41] J. Lehman and K. O. Stanley, \\\\Abandoning objectives: Evolution through the search for\\nnovelty alone,\\\" Evolutionary Computation , vol. 19, no. 2, pp. 189{223, 2011.\\n[42] W. Zhao, J. P. Queralta, and T. Westerlund, \\\\Sim-to-real transfer in deep reinforcement learn-\\ning for robotics: A survey,\\\" in 2020 IEEE Symposium Series on Computational Intelligence ,\\n2020, pp. 737{744.\\n[43] M. Jaderberg, V. Dalibard, S. Osindero, W. M. Czarnecki, J. Donahue, A. Razavi, O. Vinyals,\\nT. Green, I. Dunning, K. Simonyan et al. , \\\\Population based training of neural networks,\\\"\\narXiv preprint arXiv:1711.09846 , 2017.\\n[44] J. K. Franke, G. K\u007f ohler, A. Biedenkapp, and F. Hutter, \\\\Sample-e\\u000ecient automated deep\\nreinforcement learning,\\\" arXiv preprint arXiv:2009.01555 , 2020.\\n[45] J. Bergstra and Y. Bengio, \\\\Random search for hyper-parameter optimization.\\\" Journal of\\nmachine learning research , vol. 13, no. 2, 2012.\\n[46] J. Snoek, H. Larochelle, and R. P. Adams, \\\\Practical bayesian optimization of machine learning\\nalgorithms,\\\" Advances in Neural Information Processing Systems , vol. 25, 2012.\\n[47] T. Zahavy, Z. Xu, V. Veeriah, M. Hessel, J. Oh, H. P. van Hasselt, D. Silver, and S. Singh,\\n\\\\A self-tuning actor-critic algorithm,\\\" Advances in Neural Information Processing Systems ,\\nvol. 33, pp. 20 913{20 924, 2020.\\n[48] A. Eriksson, G. Capi, and K. Doya, \\\\Evolution of meta-parameters in reinforcement learning\\nalgorithm,\\\" in Proceedings 2003 IEEE\\/RSJ International Conference on Intelligent Robots and\\nSystems , vol. 1. IEEE, 2003, pp. 412{417.\\n38 [49] S. Elfwing, E. Uchibe, K. Doya, and H. I. Christensen, \\\\Co-evolution of shaping rewards and\\nmeta-parameters in reinforcement learning,\\\" Adaptive Behavior , vol. 16, no. 6, pp. 400{412,\\n2008.\\n[50] M. Jaderberg, W. M. Czarnecki, I. Dunning, L. Marris, G. Lever, A. G. Castaneda, C. Beat-\\ntie, N. C. Rabinowitz, A. S. Morcos, A. Ruderman et al. , \\\\Human-level performance in 3D\\nmultiplayer games with population-based reinforcement learning,\\\" Science , vol. 364, no. 6443,\\npp. 859{865, 2019.\\n[51] S. Schmitt, J. J. Hudson, A. Zidek, S. Osindero, C. Doersch, W. M. Czarnecki, J. Z. Leibo,\\nH. Kuttler, A. Zisserman, K. Simonyan et al. 
, \\\\Kickstarting deep reinforcement learning,\\\"\\narXiv preprint arXiv:1803.03835 , 2018.\\n[52] S. Liu, G. Lever, J. Merel, S. Tunyasuvunakool, N. Heess, and T. Graepel, \\\\Emergent co-\\nordination through competition,\\\" in International Conference on Learning Representations ,\\n2019.\\n[53] T. R. Wu, T. H. Wei, and I. C. Wu, \\\\Accelerating and improving alphazero using population\\nbased training,\\\" in Proceedings of the AAAI Conference on Arti\\fcial Intelligence , 2020.\\n[54] F. Vavak and T. C. Fogarty, \\\\Comparison of steady state and generational genetic algorithms\\nfor use in nonstationary environments,\\\" in Proceedings of IEEE International Conference on\\nEvolutionary Computation . IEEE, 1996, pp. 192{195.\\n[55] V. Dalibard and M. Jaderberg, \\\\Faster improvement rate population based training,\\\" arXiv\\npreprint arXiv:2109.13800 , 2021.\\n[56] F. C. Fernandez and W. Caarls, \\\\Parameters tuning and optimization for reinforcement learn-\\ning algorithms using evolutionary computing,\\\" in 2018 International Conference on Informa-\\ntion Systems and Computer Science . IEEE, 2018, pp. 301{305.\\n[57] X. Cui, W. Zhang, Z. T\u007f uske, and M. Picheny, \\\\Evolutionary stochastic gradient descent for\\noptimization of deep neural networks,\\\" in Advances in Neural Information Processing Systems ,\\n2018.\\n[58] L. Schneider, F. P\\fsterer, J. Thomas, and B. Bischl, \\\\A collection of quality diversity opti-\\nmization problems derived from hyperparameter optimization of machine learning models,\\\" in\\nProceedings of the Genetic and Evolutionary Computation Conference , 2022.\\n[59] K. O. Stanley, J. Clune, J. Lehman, and R. Miikkulainen, \\\\Designing neural networks through\\nneuroevolution,\\\" Nature Machine Intelligence , vol. 1, no. 1, pp. 24{35, 2019.\\n[60] A. Gaier and D. Ha, \\\\Weight agnostic neural networks,\\\" Advances in Neural Information\\nProcessing Systems , 2019.\\n[61] P. Chrabaszcz, I. Loshchilov, and F. Hutter, \\\\Back to basics: Benchmarking canonical evolu-\\ntion strategies for playing atari,\\\" in International Joint Conference on Arti\\fcial Intelligence ,\\n2018.\\n39 [62] S. Whiteson, Evolutionary computation for reinforcement learning . Springer Berlin Heidel-\\nberg, 2012, pp. 325{355.\\n[63] K. Choromanski, M. Rowland, V. Sindhwani, R. Turner, and A. Weller, \\\\Structured evolution\\nwith compact architectures for scalable policy optimization,\\\" in International Conference on\\nMachine Learning . PMLR, 2018, pp. 970{978.\\n[64] K. M. Choromanski, A. Pacchiano, J. Parker-Holder, Y. Tang, and V. Sindhwani, \\\\From\\ncomplexity to simplicity: Adaptive es-active subspaces for blackbox optimization,\\\" Advances\\nin Neural Information Processing Systems , vol. 32, 2019.\\n[65] Y. Tang, K. Choromanski, and A. Kucukelbir, \\\\Variance reduction for evolution strategies\\nvia structured control variates,\\\" in International Conference on Arti\\fcial Intelligence and\\nStatistics . PMLR, 2020, pp. 646{656.\\n[66] N. Maheswaranathan, L. Metz, G. Tucker, D. Choi, and J. Sohl-Dickstein, \\\\Guided evolu-\\ntionary strategies: augmenting random search with surrogate gradients,\\\" in Proceedings of the\\n36th International Conference on Machine Learning . PMLR, 2019, pp. 4264{4273.\\n[67] F.-Y. Liu, Z.-N. Li, and C. Qian, \\\\Self-guided evolution strategies with historical estimated\\ngradients,\\\" in International Joint Conference on Artifcial Intelligence , 2020, pp. 1474{1480.\\n[68] G. Liu, L. Zhao, F. Yang, J. Bian, T. 
Qin, N. Yu, and T.-Y. Liu, \\\\Trust region evolution\\nstrategies,\\\" in Proceedings of the AAAI Conference on Arti\\fcial Intelligence , vol. 33, no. 01,\\n2019, pp. 4352{4359.\\n[69] S. Yi, D. Wierstra, T. Schaul, and J. Schmidhuber, \\\\Stochastic search using the natural\\ngradient,\\\" in International Conference on Machine Learning , 2009.\\n[70] F. Sehnke, C. Osendorfer, T. R\u007f uckstiess, A. Graves, J. Peters, and J. Schmidhuber,\\n\\\\Parameter-exploring policy gradients,\\\" Neural Networks , vol. 23, no. 4, pp. 551{559, 2010.\\n[71] X. Zhang, J. Clune, and K. O. Stanley, \\\\On the relationship between the openai evolution\\nstrategy and stochastic gradient descent,\\\" arXiv preprint arXiv:1712.06564 , 2017.\\n[72] J. Lehman, J. Chen, J. Clune, and K. O. Stanley, \\\\Es is more than just a traditional \\fnite-\\ndi\\u000berence approximator,\\\" in Proceedings of the Genetic and Evolutionary Computation Con-\\nference , 2018, pp. 450{457.\\n[73] L. Fuks, N. H. Awad, F. Hutter, and M. Lindauer, \\\\An evolution strategy with progressive\\nepisode lengths for playing games.\\\" in International Joint Conferences on Arti\\fcial Intelli-\\ngence , 2019, pp. 1234{1240.\\n[74] C. Igel, \\\\Neuroevolution for reinforcement learning using evolution strategies,\\\" in The Congress\\non Evolutionary Computation , vol. 4. IEEE, 2003, pp. 2588{2595.\\n40 [75] V. Heidrich-Meisner and C. Igel, \\\\Hoe\\u000bding and bernstein races for selecting policies in evo-\\nlutionary direct policy search,\\\" in International Conference on Machine Learning , 2009, pp.\\n401{408.\\n[76] ||, \\\\Neuroevolution strategies for episodic reinforcement learning,\\\" Journal of Algorithms ,\\nvol. 64, no. 4, pp. 152{168, 2009.\\n[77] Z. Chen, Y. Zhou, X. He, and S. Jiang, \\\\A restart-based rank-1 evolution strategy for rein-\\nforcement learning,\\\" in International Joint Conferences on Arti\\fcial Intelligence , 2019, pp.\\n2130{2136.\\n[78] Z. Li and Q. Zhang, \\\\A simple yet e\\u000ecient evolution strategy for large-scale black-box op-\\ntimization,\\\" IEEE Transactions on Evolutionary Computation , vol. 22, no. 5, pp. 637{646,\\n2017.\\n[79] I. Loshchilov, T. Glasmachers, and H.-G. Beyer, \\\\Large scale black-box optimization\\nby limited-memory matrix adaptation,\\\" IEEE Transactions on Evolutionary Computation ,\\nvol. 23, no. 2, pp. 353{358, 2018.\\n[80] Z. Li, Q. Zhang, X. Lin, and H.-L. Zhen, \\\\Fast covariance matrix adaptation for large-scale\\nblack-box optimization,\\\" IEEE Transactions on Cybernetics , vol. 50, no. 5, pp. 2073{2083,\\n2020.\\n[81] A. P. Wieland, \\\\Evolving controls for unstable systems,\\\" in Connectionist Models . Elsevier,\\n1991, pp. 91{102.\\n[82] K. O. Stanley, B. D. Bryant, and R. Miikkulainen, \\\\Evolving adaptive neural networks with\\nand without adaptive synapses,\\\" in The 2003 Congress on Evolutionary Computation , vol. 4.\\nIEEE, 2003, pp. 2557{2564.\\n[83] K. O. Stanley and R. Miikkulainen, \\\\Competitive coevolution through evolutionary complex-\\ni\\fcation,\\\" Journal of Arti\\fcial Intelligence Research , vol. 21, pp. 63{100, 2004.\\n[84] K. O. Stanley, B. D. Bryant, and R. Miikkulainen, \\\\Evolving neural network agents in the\\nnero video game,\\\" Proceedings of the IEEE , pp. 182{189, 2005.\\n[85] N. Kohl and R. Miikkulainen, \\\\Evolving neural networks for strategic decision-making prob-\\nlems,\\\" Neural Networks , vol. 22, no. 3, pp. 326{337, 2009.\\n[86] Y. Kassahun and G. 
Sommer, \\\\E\\u000ecient reinforcement learning through evolutionary acqui-\\nsition of neural topologies,\\\" in Proceedings of The European Symposium on Arti\\fcial Neural\\nNetworks , 2005, pp. 259{266.\\n[87] H. Moriguchi and S. Honiden, \\\\Cma-tweann: E\\u000ecient optimization of neural networks via\\nself-adaptation and seamless augmentation,\\\" in Proceedings of the 14th Annual Conference on\\nGenetic and Evolutionary Computation , 2012, pp. 903{910.\\n41 [88] F. P. Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, \\\\Deep neuroevo-\\nlution: Genetic algorithms are a competitive alternative for training deep neural networks for\\nreinforcement learning,\\\" arXiv preprint arXiv:1712.06567 , 2017.\\n[89] M. Le Clei and P. Bellec, \\\\Neuroevolution of recurrent architectures on control tasks,\\\" in\\nInternational Conference on Learning Representations Workshop on Agent Learning in Open-\\nEndedness , 2022.\\n[90] D. Ha and J. Schmidhuber, \\\\Recurrent world models facilitate policy evolution,\\\" Advances in\\nNeural Information Processing Systems , vol. 31, 2018.\\n[91] J. Koutn\\u0013 \\u0010k, J. Schmidhuber, and F. Gomez, \\\\Evolving deep unsupervised convolutional net-\\nworks for vision-based reinforcement learning,\\\" in Proceedings of the 2014 Annual Conference\\non Genetic and Evolutionary Computation , 2014, pp. 541{548.\\n[92] S. Alvernaz and J. Togelius, \\\\Autoencoder-augmented neuroevolution for visual doom play-\\ning,\\\" in 2017 IEEE Conference on Computational Intelligence and Games . IEEE, 2017, pp.\\n1{8.\\n[93] S. Risi and K. O. Stanley, \\\\Deep neuroevolution of recurrent and discrete world models,\\\" in\\nProceedings of the Genetic and Evolutionary Computation Conference , 2019, pp. 456{462.\\n[94] S. Whiteson and P. Stone, \\\\Evolutionary function approximation for reinforcement learning,\\\"\\nJournal of Machine Learning Research , vol. 7, 2006.\\n[95] ||, \\\\Sample-e\\u000ecient evolutionary function approximation for reinforcement learning,\\\" in\\nProceedings of the National Conference on Arti\\fcial Intelligence , vol. 21, no. 1, 2006, p. 518.\\n[96] S. Whiteson, M. E. Taylor, and P. Stone, \\\\Critical factors in the empirical performance of\\ntemporal di\\u000berence and evolutionary methods for reinforcement learning,\\\" Autonomous Agents\\nand Multi-Agent Systems , vol. 21, no. 1, pp. 1{35, 2010.\\n[97] M. A. Potter and K. A. D. Jong, \\\\Cooperative coevolution: An architecture for evolving\\ncoadapted subcomponents,\\\" Evolutionary Computation , vol. 8, no. 1, pp. 1{29, 2000.\\n[98] D. E. Moriarty and R. Mikkulainen, \\\\E\\u000ecient reinforcement learning through symbiotic evo-\\nlution,\\\" Machine Learning , vol. 22, no. 1, pp. 11{32, 1996.\\n[99] F. Gomez and R. Miikulainen, \\\\Solving non-markovian tasks with neuroevolution,\\\" in Pro-\\nceeding of the Sixteenth International Joint Conference on Arti\\fcial Intelligence , 1999, pp.\\n1356{1361.\\n[100] R. Chandra, M. Frean, M. Zhang, and C. W. Omlin, \\\\Encoding subcomponents in cooperative\\nco-evolutionary recurrent neural networks,\\\" Neurocomputing , vol. 74, no. 17, pp. 3223{3234,\\n2011.\\n42 [101] F. Gomez, J. Schmidhuber, R. Miikkulainen, and M. Mitchell, \\\\Accelerated neural evolution\\nthrough cooperatively coevolved synapses,\\\" Journal of Machine Learning Research , vol. 9,\\nno. 5, 2008.\\n[102] N. Garc\\u0013 \\u0010a-Pedrajas, C. Herv\\u0013 as-Mart\\u0013 \\u0010nez, and J. 
Mu~ noz-P\\u0013 erez, \\\\Covnet: A cooperative co-\\nevolutionary model for evolving arti\\fcial neural networks,\\\" IEEE Transactions on Neural\\nNetworks , vol. 14, no. 3, pp. 575{596, 2003.\\n[103] J. Reisinger, K. O. Stanley, and R. Miikkulainen, \\\\Evolving reusable neural modules,\\\" in\\nGenetic and Evolutionary Computation Conference . Springer, 2004, pp. 69{81.\\n[104] P. Yang, H. Zhang, Y. Yu, M. Li, and K. Tang, \\\\Evolutionary reinforcement learning via coop-\\nerative coevolutionary negatively correlated search,\\\" Swarm and Evolutionary Computation ,\\nvol. 68, p. 100974, 2022.\\n[105] F. Gruau, \\\\Automatic de\\fnition of modular neural networks,\\\" Adaptive Behavior , vol. 3, no. 2,\\npp. 151{183, 1994.\\n[106] G. S. Hornby and J. B. Pollack, \\\\Creating high-level components with a generative represen-\\ntation for body-brain evolution,\\\" Arti\\fcial Life , vol. 8, no. 3, pp. 223{246, 2002.\\n[107] K. O. Stanley and R. Miikkulainen, \\\\A taxonomy for arti\\fcial embryogeny,\\\" Arti\\fcial Life ,\\nvol. 9, no. 2, pp. 93{130, 2003.\\n[108] K. O. Stanley, \\\\Compositional pattern producing networks: A novel abstraction of develop-\\nment,\\\" Genetic Programming and Evolvable Machines , vol. 8, no. 2, pp. 131{162, 2007.\\n[109] K. O. Stanley, D. B. D'Ambrosio, and J. Gauci, \\\\A hypercube-based encoding for evolving\\nlarge-scale neural networks,\\\" Arti\\fcial Life , vol. 15, no. 2, pp. 185{212, 2009.\\n[110] J. Clune, K. O. Stanley, R. T. Pennock, and C. Ofria, \\\\On the performance of indirect encoding\\nacross the continuum of regularity,\\\" IEEE Transactions on Evolutionary Computation , vol. 15,\\nno. 3, pp. 346{367, 2011.\\n[111] J. Gauci and K. O. Stanley, \\\\A case study on the critical role of geometric regularity in machine\\nlearning,\\\" in Proceedings of the 23rd National Conference on Arti\\fcial Intelligence . AAAI\\nPress, 2008, pp. 628 { 633.\\n[112] M. Hausknecht, J. Lehman, R. Miikkulainen, and P. Stone, \\\\A neuroevolution approach to\\ngeneral atari game playing,\\\" IEEE Transactions on Computational Intelligence and AI in\\nGames , vol. 6, no. 4, pp. 355{366, 2014.\\n[113] S. Risi and K. O. Stanley, \\\\Indirectly encoding neural plasticity as a pattern of local rules,\\\" in\\nInternational Conference on Simulation of Adaptive Behavior . Springer, 2010, pp. 533{543.\\n[114] ||, \\\\An enhanced hypercube-based encoding for evolving the placement, density, and con-\\nnectivity of neurons,\\\" Arti\\fcial Life , vol. 18, no. 4, pp. 331{363, 2012.\\n43 [115] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, \\\\A fast and elitist multiobjective genetic\\nalgorithm: NSGA-II,\\\" IEEE Transactions on Evolutionary Computation , vol. 6, no. 2, pp.\\n182{197, 2002.\\n[116] J. Huizinga, J.-B. Mouret, and J. Clune, \\\\Does aligning phenotypic and genotypic modularity\\nimprove the evolution of neural networks?\\\" in Proceedings of the Genetic and Evolutionary\\nComputation Conference 2016 , 2016, pp. 125{132.\\n[117] J. Koutn\\u0013 \\u0010k, G. Cuccu, J. Schmidhuber, and F. Gomez, \\\\Evolving large-scale neural networks\\nfor vision-based reinforcement learning,\\\" in Proceedings of the 15th Annual Conference on\\nGenetic and Evolutionary Computation , 2013.\\n[118] J. Clune, B. E. Beckmann, R. T. Pennock, and C. Ofria, \\\\Hybrid: A hybridization of indirect\\nand direct encodings for evolutionary computation,\\\" in European Conference on Arti\\fcial Life .\\nSpringer, 2009, pp. 134{141.\\n[119] G.-A. Vargas-H\\u0013 akim, E. 
Mezura-Montes, and H.-G. Acosta-Mesa, \\\\Hybrid encodings for neu-\\nroevolution of convolutional neural networks: A case study,\\\" in Proceedings of the Genetic and\\nEvolutionary Computation Conference Companion , 2021, pp. 1762{1770.\\n[120] J. Schrum, B. Capps, K. Steckel, V. Volz, and S. Risi, \\\\Hybrid encoding for generating large\\nscale game llevel patterns with local variations,\\\" IEEE Transactions on Games , 2022.\\n[121] K. Deb and A. Kumar, \\\\Real-coded genetic algorithms with simulated binary crossover: Stud-\\nies on multimodal and multiobjective problems,\\\" Complex Systems , vol. 9, no. 6, pp. 431{454,\\n1995.\\n[122] T. Gangwani and J. Peng, \\\\Genetic policy optimization,\\\" in International Conference on\\nLearning Representations , 2018.\\n[123] C. Bodnar, B. Day, and P. Li\\u0013 o, \\\\Proximal distilled evolutionary reinforcement learning,\\\"\\nProceedings of the AAAI Conference on Arti\\fcial Intelligence , vol. 34, no. 04, pp. 3283{3290,\\n2020.\\n[124] J. K. Franke, G. K\u007f ohler, N. Awad, and F. Hutter, \\\\Neural architecture evolution in deep\\nreinforcement learning for continuous control,\\\" arXiv preprint arXiv:1910.12824 , 2019.\\n[125] L. J, C. J, C. J, and S. KO, \\\\Safe mutations for deep and recurrent neural networks through\\noutput gradients,\\\" in Proceedings of the Genetic and Evolutionary Computation Conference ,\\n2018.\\n[126] E. Marchesini, D. Corsi, and A. Farinelli, \\\\Exploring safer behaviors for deep reinforcement\\nlearning,\\\" in Proceedings of the AAAI Conference on Arti\\fcial Intelligence , vol. 36, no. 7,\\n2022, pp. 7701{7709.\\n[127] T. Uriot and D. Izzo, \\\\Safe crossover of neural networks through neuron alignment,\\\" in Pro-\\nceedings of the 2020 Genetic and Evolutionary Computation Conference , 2020, pp. 435{443.\\n44 [128] J. R. Woodward, \\\\Evolving turing complete representations,\\\" in The Congress on Evolutionary\\nComputation , vol. 2. IEEE, 2003, pp. 830{837.\\n[129] J. F. Miller, \\\\Cartesian genetic programming,\\\" in Cartesian Genetic Programming . Springer,\\n2011, pp. 17{34.\\n[130] S. Kelly, R. J. Smith, and M. I. Heywood, \\\\Emergent policy discovery for visual reinforcement\\nlearning through tangled program graphs: A tutorial,\\\" Genetic Programming Theory and\\nPractice XVI , pp. 37{57, 2019.\\n[131] J. R. Koza and J. P. Rice, \\\\Automatic programming of robots using genetic programming,\\\" in\\nProceedings of the Tenth National Conference on Arti\\fcial Intelligence . AAAI Press, 1992,\\npp. 194{207.\\n[132] S. Ok, K. Miyashita, and K. Hase, \\\\Evolving bipedal locomotion with genetic programming-a\\npreliminary report,\\\" in Proceedings of the 2001 Congress on Evolutionary Computation , vol. 2.\\nIEEE, 2001, pp. 1025{1032.\\n[133] D. C. Dracopoulos, D. E\\u000braimidis, and B. D. Nichols, \\\\Genetic programming as a solver to\\nchallenging reinforcement learning problems,\\\" International Journal of Computer Research ,\\nvol. 20, no. 3, p. 351, 2013.\\n[134] S. Kamio and H. Iba, \\\\Adaptation technique for integrating genetic programming and rein-\\nforcement learning for real robots,\\\" IEEE Transactions on Evolutionary Computation , vol. 9,\\nno. 3, pp. 318{333, 2005.\\n[135] F. Gruau, D. Whitley, and L. Pyeatt, \\\\A comparison between cellular encoding and direct\\nencoding for genetic neural networks,\\\" in Proceedings of the 1st Annual Conference on Genetic\\nProgramming , 1996, pp. 81{89.\\n[136] M. M. Khan, A. M. Ahmad, G. M. Khan, and J. F. 
Miller, \\\\Fast learning neural networks\\nusing cartesian genetic programming,\\\" Neurocomputing , vol. 121, pp. 274{289, 2013.\\n[137] A. J. Turner and J. F. Miller, \\\\Neuroevolution: Evolving heterogeneous arti\\fcial neural net-\\nworks,\\\" Evolutionary Intelligence , vol. 7, no. 3, pp. 135{154, 2014.\\n[138] D. G. Wilson, S. Cussat-Blanc, H. Luga, and J. F. Miller, \\\\Evolving simple programs for\\nplaying atari games,\\\" in Proceedings of the Genetic and Evolutionary Computation Conference ,\\n2018, pp. 229{236.\\n[139] S. Kelly and M. I. Heywood, \\\\Emergent tangled graph representations for atari game playing\\nagents,\\\" in European Conference on Genetic Programming . Springer, 2017, pp. 64{79.\\n[140] ||, \\\\Emergent tangled program graphs in multi-task learning,\\\" in International Joint Con-\\nference on Artifcial Intelligence , 2018, pp. 5294{5298.\\n45 [141] S. Kelly, T. Voegerl, W. Banzhaf, and C. Gondro, \\\\Evolving hierarchical memory-prediction\\nmachines in multi-task reinforcement learning,\\\" Genetic Programming and Evolvable Ma-\\nchines , vol. 22, no. 4, pp. 573{605, 2021.\\n[142] R. J. Smith and M. I. Heywood, \\\\A model of external memory for navigation in partially\\nobservable visual reinforcement learning tasks,\\\" in European Conference on Genetic Program-\\nming . Springer, 2019, pp. 162{177.\\n[143] ||, \\\\Evolving dota 2 shadow \\fend bots using genetic programming with external memory,\\\"\\ninProceedings of the Genetic and Evolutionary Computation Conference , 2019, pp. 179{187.\\n[144] M. Onderwater, S. Bhulai, and R. van der Mei, \\\\Value function discovery in markov decision\\nprocesses with evolutionary algorithms,\\\" IEEE Transactions on Systems, Man, and Cybernet-\\nics: Systems , vol. 46, no. 9, pp. 1190{1201, 2015.\\n[145] D. Hein, S. Udluft, and T. A. Runkler, \\\\Interpretable policies for reinforcement learning by\\ngenetic programming,\\\" Engineering Applications of Arti\\fcial Intelligence , vol. 76, pp. 158{169,\\n2018.\\n[146] E. Alibekov, J. Kubal\\u0013 \\u0010k, and R. Babu\\u0014 ska, \\\\Symbolic method for deriving policy in reinforce-\\nment learning,\\\" in 2016 IEEE 55th Conference on Decision and Control . IEEE, 2016, pp.\\n2789{2795.\\n[147] E. Derner, J. Kubal\\u0013 \\u0010k, and R. Babu\\u0014 ska, \\\\Data-driven construction of symbolic process models\\nfor reinforcement learning,\\\" in 2018 IEEE International Conference on Robotics and Automa-\\ntion, 2018, pp. 5105{5112.\\n[148] S. Girgin and P. Preux, \\\\Feature discovery in reinforcement learning using genetic program-\\nming,\\\" in European Conference on Genetic Programming . Springer, 2008, pp. 218{229.\\n[149] K. Krawiec, \\\\Genetic programming-based construction of features for machine learning and\\nknowledge discovery tasks,\\\" Genetic Programming and Evolvable Machines , vol. 3, no. 4, pp.\\n329{343, 2002.\\n[150] M. Plappert, R. Houthooft, P. Dhariwal, S. Sidor, R. Y. Chen, X. Chen, T. Asfour, P. Abbeel,\\nand M. Andrychowicz, \\\\Parameter space noise for eeploration,\\\" International Conference on\\nLearning Representations, 2018.\\n[151] T. Yang, H. Tang, C. Bai, J. Liu, J. Hao, Z. Meng, P. Liu, and Z. Wang, \\\\Exploration in deep\\nreinforcement learning: A comprehensive survey,\\\" arXiv preprint arXiv:2109.06668 , 2021.\\n[152] J. K. Pugh, L. B. Soros, and K. O. Stanley, \\\\Quality diversity: A new frontier for evolutionary\\ncomputation,\\\" Frontiers in Robotics and AI , 2016.\\n[153] D. Gravina, A. Liapis, and G. 
Yannakakis, \\\\Surprise search: Beyond objectives and novelty,\\\"\\ninProceedings of the Genetic and Evolutionary Computation Conference 2016 , 2016, pp. 677{\\n684.\\n46 [154] H. Mengistu, J. Lehman, and J. Clune, \\\\Evolvability search: Directly selecting for evolvability\\nin order to study and produce it,\\\" in Proceedings of the Genetic and Evolutionary Computation\\nConference 2016 , 2016, pp. 141{148.\\n[155] D. Pathak, P. Agrawal, A. A. Efros, and T. Darrell, \\\\Curiosity-driven exploration by self-\\nsupervised prediction,\\\" in International Conference on Machine Learning . PMLR, 2017, pp.\\n2778{2787.\\n[156] S. Risi, S. D. Vanderbleek, C. E. Hughes, and K. O. Stanley, \\\\How novelty search escapes the\\ndeceptive trap of learning to learn,\\\" in Proceedings of the 11th Annual Conference on Genetic\\nand Evolutionary Computation , 2009, pp. 153{160.\\n[157] G. Cuccu and F. Gomez, \\\\When novelty is not enough,\\\" in European Conference on the\\nApplications of Evolutionary Computation . Springer, 2011, pp. 234{243.\\n[158] J.-B. Mouret and S. Doncieux, \\\\Encouraging behavioral diversity in evolutionary robotics: An\\nempirical study,\\\" Evolutionary Computation , vol. 20, no. 1, pp. 91{133, 2012.\\n[159] J. Lehman and K. O. Stanley, \\\\Evolving a diversity of virtual creatures through novelty\\nsearch and local competition,\\\" in Proceedings of the 13th Annual Conference on Genetic and\\nEvolutionary Computation , 2011.\\n[160] Q. Liu, Y. Wang, and X. Liu, \\\\Pns: Population-guided novelty search for reinforcement learn-\\ning in hard exploration environments,\\\" in 2021 IEEE\\/RSJ International Conference on Intel-\\nligent Robots and Systems , 2021.\\n[161] J.-B. Mouret and J. Clune, \\\\Illuminating search spaces by mapping elites,\\\" arXiv preprint\\narXiv:1504.04909 , 2015.\\n[162] A. Cully, \\\\Autonomous skill discovery with quality-diversity and unsupervised descriptors,\\\"\\ninProceedings of the Genetic and Evolutionary Computation Conference , 2019, pp. 81{89.\\n[163] R. Y. Tao, V. Fran\\u0018 cois-Lavet, and J. Pineau, \\\\Novelty search in representational space for\\nsample e\\u000ecient exploration,\\\" Advances in Neural Information Processing Systems , vol. 33, pp.\\n8114{8126, 2020.\\n[164] N. Rakicevic, A. Cully, and P. Kormushev, \\\\Policy manifold search: Exploring the mani-\\nfold hypothesis for diversity-based neuroevolution,\\\" in Genetic and Evolutionary Computation\\nConference , 2021.\\n[165] J. Parker-Holder, A. Pacchiano, K. Choromanski, and S. Roberts, \\\\E\\u000bective diversity in\\npopulation-based reinforcement learning,\\\" arXiv preprint arXiv:2002.00632 , 2020.\\n[166] E. C. Jackson and M. Daley, \\\\Novelty search for deep reinforcement learning policy network\\nweights by action sequence edit metric distance,\\\" in Proceedings of the Genetic and Evolution-\\nary Computation Conference Companion , 2019, pp. 173{174.\\n47 [167] L. Keller, D. Tanneberg, S. Stark, and J. Peters, \\\\Model-based quality-diversity search for\\ne\\u000ecient robot learning,\\\" in 2020 IEEE\\/RSJ International Conference on Intelligent Robots\\nand Systems . IEEE, 2020, pp. 9675{9680.\\n[168] A. Salehi, A. Coninx, and S. Doncieux, \\\\Few-shot quality-diversity optimization,\\\" IEEE\\nRobotics and Automation Letters , vol. 7, no. 2, pp. 4424{4431, 2022.\\n[169] Y. Wang, K. Xue, and C. 
Qian, \\\\Evolutionary diversity optimization with clustering-based\\nselection for reinforcement learning,\\\" in International Conference on Learning Representations ,\\n2022.\\n[170] R. Wang, J. Lehman, J. Clune, and K. O. Stanley, \\\\Poet: Open-ended coevolution of en-\\nvironments and their optimized solutions,\\\" in Proceedings of the Genetic and Evolutionary\\nComputation Conference , 2019, pp. 142{151.\\n[171] R. Wang, J. Lehman, A. Rawal, J. Zhi, Y. Li, J. Clune, and K. Stanley, \\\\Enhanced POET:\\nOpen-ended reinforcement learning through unbounded invention of learning challenges and\\ntheir solutions,\\\" in International Conference on Machine Learning . PMLR, 2020.\\n[172] V. Bhatt, B. Tjanaka, M. C. Fontaine, and S. Nikolaidis, \\\\Deep surrogate assisted generation\\nof environments,\\\" arXiv preprint arXiv:2206.04199 , 2022.\\n[173] S. Brych and A. Cully, \\\\Competitiveness of map-elites against proximal policy optimization\\non locomotion tasks in deterministic simulations,\\\" arXiv preprint arXiv:2009.08438 , 2020.\\n[174] V. Vassiliades, K. Chatzilygeroudis, and J.-B. Mouret, \\\\Using centroidal voronoi tessellations\\nto scale up the multidimensional archive of phenotypic elites algorithm,\\\" IEEE Transactions\\non Evolutionary Computation , vol. 22, no. 4, pp. 623{630, 2017.\\n[175] C. Colas, J. Huizinga, V. Madhavan, and J. Clune, \\\\Scaling map-elites to deep neuroevolu-\\ntion,\\\" arXiv preprint arXiv:2003.01825 , 2020.\\n[176] T. Pierrot, V. Mac\\u0013 e, F. Chalumeau, A. Flajolet, G. Cideron, K. Beguir, A. Cully, O. Sigaud,\\nand N. Perrin-Gilbert, \\\\Diversity policy gradient for sample e\\u000ecient quality-diversity opti-\\nmization,\\\" in ICLR Workshop on Agent Learning in Open-Endedness , 2022.\\n[177] B. Tjanaka, M. C. Fontaine, J. Togelius, and S. Nikolaidis, \\\\Di\\u000berentiable quality diversity for\\nreinforcement learning by approximating gradients,\\\" in International Conference on Learning\\nRepresentations Workshop on Agent Learning in Open-Endedness , 2022.\\n[178] O. Nilsson and A. Cully, \\\\Policy gradient assisted map-elites,\\\" in Genetic and Evolutionary\\nComputation Conference , 2021.\\n[179] Y. Zhang, M. C. Fontaine, A. K. Hoover, and S. Nikolaidis, \\\\Dsa-me: Deep surrogate assisted\\nmap-elites,\\\" in International Conference on Learning Representations Workshop on Agent\\nLearning in Open-Endedness , 2022.\\n48 [180] A. Eco\\u000bet, J. Huizinga, J. Lehman, K. O. Stanley, and J. Clune, \\\\First return, then explore,\\\"\\nNature , vol. 590, no. 7847, pp. 580{586, 2021.\\n[181] D. Gravina, A. Liapis, and G. N. Yannakakis, \\\\Quality diversity through surprise,\\\" IEEE\\nTransactions on Evolutionary Computation , vol. 23, no. 4, pp. 603{616, 2018.\\n[182] M. Bellemare, S. Srinivasan, G. Ostrovski, T. Schaul, D. Saxton, and R. Munos, \\\\Unifying\\ncount-based exploration and intrinsic motivation,\\\" Advances in neural information processing\\nsystems , vol. 29, pp. 1471{1479, 2016.\\n[183] S. Forestier, R. Portelas, Y. Mollard, and P.-Y. Oudeyer, \\\\Intrinsically motivated goal ex-\\nploration processes with automatic curriculum learning,\\\" arXiv preprint arXiv:1708.02190 ,\\n2017.\\n[184] C. Colas, O. Sigaud, and P.-Y. Oudeyer, \\\\GEP-PG: Decoupling exploration and exploitation\\nin deep reinforcement learning algorithms,\\\" in International Conference on Machine Learning .\\nPMLR, 2018, pp. 1039{1048.\\n[185] C. Stanton and J. 
Clune, \\\\Deep curiosity search: Intra-life exploration improves performance\\non challenging deep reinforcement learning problems,\\\" arXiv preprint arXiv:1806.00553 , 2018.\\n[186] H. Zheng, J. Jiang, P. Wei, G. Long, and C. Zhang, \\\\Competitive and cooperative heteroge-\\nneous deep reinforcement learning,\\\" in Proceedings of the International Joint Conference on\\nAutonomous Agents and Multiagent Systems , 2020.\\n[187] S. L\u007f u, S. Han, W. Zhou, and J. Zhang, \\\\Recruitment-imitation mechanism for evolutionary\\nreinforcement learning,\\\" Information Sciences , vol. 553, pp. 172{188, 2021.\\n[188] Y. Ma, T. Liu, B. Wei, Y. Liu, K. Xu, and W. Li, \\\\Evolutionary action selection for gradient-\\nbased policy learning,\\\" arXiv preprint arXiv:2201.04286 , 2022.\\n[189] H. Shi, B. Zhou, H. Zeng, F. Wang, Y. Dong, J. Li, K. Wang, H. Tian, and M. Q.-H.\\nMeng, \\\\Reinforcement learning with evolutionary trajectory generator: A general approach for\\nquadrupedal locomotion,\\\" IEEE Robotics and Automation Letters , vol. 7, no. 2, pp. 3085{3092,\\n2022.\\n[190] A. Morel, Y. Kunimoto, A. Coninx, and S. Doncieux, \\\\Automatic acquisition of a repertoire\\nof diverse grasping trajectories through behavior shaping and novelty search,\\\" arXiv preprint\\narXiv:2205.08189 , 2022.\\n[191] A. Pourchot and O. Sigaud, \\\\Cem-rl: Combining evolutionary and gradient-based methods\\nfor policy search,\\\" International Conference on Learning Representations , 2019.\\n[192] K. Lee, B.-U. Lee, U. Shin, and I. S. Kweon, \\\\An e\\u000ecient asynchronous method for integrating\\nevolutionary and gradient-based policy search,\\\" Advances in Neural Information Processing\\nSystems , vol. 33, pp. 10 124{10 135, 2020.\\n49 [193] K. Suri, \\\\O\\u000b-policy evolutionary reinforcement learning with maximum mutations,\\\" in Pro-\\nceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems ,\\n2022.\\n[194] E. Marchesini, D. Corsi, and A. Farinelli, \\\\Genetic soft updates for policy evolution in deep\\nreinforcement learning,\\\" in International Conference on Learning Representations , 2020.\\n[195] S. Zhu, F. Belardinelli, and B. G. Le\\u0013 on, \\\\Evolutionary reinforcement learning for sparse re-\\nwards,\\\" in Proceedings of the Genetic and Evolutionary Computation Conference , 2021, pp.\\n1508{1512.\\n[196] J. Clune, \\\\Ai-gas: Ai-generating algorithms, an alternate paradigm for producing general\\narti\\fcial intelligence,\\\" arXiv preprint arXiv:1905.10985 , 2019.\\n[197] A. Faust, A. Francis, and D. Mehta, \\\\Evolving rewards to automate reinforcement learning,\\\"\\narXiv preprint arXiv:1905.07628 , 2019.\\n[198] A. Laud and G. DeJong, \\\\The in\\ruence of reward on the speed of reinforcement learning: An\\nanalysis of shaping,\\\" in Proceedings of the 20th International Conference on Machine Learning ,\\n2003, pp. 440{447.\\n[199] A. Y. Ng, D. Harada, and S. Russell, \\\\Policy invariance under reward transformations: Theory\\nand application to reward shaping,\\\" in International Conference on Machine Learning , vol. 99,\\n1999, pp. 278{287.\\n[200] F. Ferreira, T. Nierho\\u000b, A. Saelinger, and F. Hutter, \\\\Learning synthetic environments and\\nreward networks for reinforcement learning,\\\" International Conference on Learning Represen-\\ntations , 2022.\\n[201] S. Singh, R. L. Lewis, A. G. Barto, and J. 
Sorg, \\\\Intrinsically motivated reinforcement learning:\\nAn evolutionary perspective,\\\" IEEE Transactions on Autonomous Mental Development , vol. 2,\\nno. 2, pp. 70{82, 2010.\\n[202] S. Niekum, A. G. Barto, and L. Spector, \\\\Genetic programming for reward function search,\\\"\\nIEEE Transactions on Autonomous Mental Development , vol. 2, no. 2, pp. 83{90, 2010.\\n[203] E. Uchibe and K. Doya, \\\\Finding intrinsic rewards by embodied evolution and constrained\\nreinforcement learning,\\\" Neural Networks , vol. 21, no. 10, pp. 1447{1455, 2008.\\n[204] H. U. Sheikh, S. Khadka, S. Miret, S. Majumdar, and M. Phielipp, \\\\Learning intrinsic symbolic\\nrewards in reinforcement learning,\\\" in International Joint Conference on Neural Networks .\\nIEEE, 2022, pp. 1{8.\\n[205] G. Paolo, A. Coninx, S. Doncieux, and A. La\\raqui\\u0012 ere, \\\\Sparse reward exploration via novelty\\nsearch and emitters,\\\" in Proceedings of the Genetic and Evolutionary Computation Conference ,\\n2021, pp. 154{162.\\n50 [206] S. Majumdar, S. Khadka, S. Miret, S. Mcaleer, and K. Tumer, \\\\Evolutionary reinforcement\\nlearning for sample-e\\u000ecient multiagent coordination,\\\" in International Conference on Machine\\nLearning , 2020.\\n[207] R. Lowe, Y. Wu, A. Tamar, J. Harb, P. Abbeel, and I. Mordatch, \\\\Multi-agent actor-critic for\\nmixed cooperative-competitive environments,\\\" arXiv preprint arXiv:1706.02275 , 2017.\\n[208] E. Sachdeva, S. Khadka, S. Majumdar, and K. Tumer, \\\\Maedys: Multiagent evolution via\\ndynamic skill selection,\\\" in Proceedings of the Genetic and Evolutionary Computation Confer-\\nence, 2021, pp. 163{171.\\n[209] H.-T. L. Chiang, A. Faust, M. Fiser, and A. Francis, \\\\Learning navigation behaviors end-\\nto-end with autorl,\\\" IEEE Robotics and Automation Letters , vol. 4, no. 2, pp. 2007{2014,\\n2019.\\n[210] J. X. Wang, E. Hughes, C. Fernando, W. M. Czarnecki, E. A. Du\\u0013 e~ nez-Guzm\\u0013 an, and J. Z. Leibo,\\n\\\\Evolving intrinsic motivations for altruistic behavior,\\\" arXiv preprint arXiv:1811.05931 ,\\n2018.\\n[211] C. Finn, P. Abbeel, and S. Levine, \\\\Model-agnostic meta-learning for fast adaptation of deep\\nnetworks,\\\" in International Conference on Machine Learning . JMLR. org, 2017, pp. 1126{\\n1135.\\n[212] Y. Duan, J. Schulman, X. Chen, P. L. Bartlett, I. Sutskever, and P. Abbeel, \\\\Rl2: Fast rein-\\nforcement learning via slow reinforcement learning,\\\" arXiv preprint arXiv:1611.02779 , 2016.\\n[213] R. Houthooft, Y. Chen, P. Isola, B. Stadie, F. Wolski, O. Jonathan Ho, and P. Abbeel, \\\\Evolved\\npolicy gradients,\\\" Advances in Neural Information Processing Systems , vol. 31, 2018.\\n[214] X. Song, W. Gao, Y. Yang, K. Choromanski, A. Pacchiano, and Y. Tang, \\\\Es-maml: Simple\\nhessian-free meta learning,\\\" arXiv preprint arXiv:1910.01215 , 2019.\\n[215] C. Fernando, J. Sygnowski, S. Osindero, J. Wang, T. Schaul, D. Teplyashin, P. Sprechmann,\\nA. Pritzel, and A. Rusu, \\\\Meta-learning by the baldwin e\\u000bect,\\\" in Proceedings of the Genetic\\nand Evolutionary Computation Conference Companion , 2018, pp. 1313{1320.\\n[216] J. D. Co-Reyes, Y. Miao, D. Peng, E. Real, Q. V. Le, S. Levine, H. Lee, and A. Faust, \\\\Evolving\\nreinforcement learning algorithms,\\\" in International Conference on Learning Representations ,\\n2021.\\n[217] J. J. Garau-Luis, Y. Miao, J. D. Co-Reyes, A. Parisi, J. Tan, E. Real, and A. 
Faust, \\\\Multi-\\nobjective evolution for generalizable policy gradient algorithms,\\\" International Conference on\\nLearning Representations , 2022.\\n[218] F. Alet, M. F. Schneider, T. Lozano-Perez, and L. P. Kaelbling, \\\\Meta-learning curiosity\\nalgorithms,\\\" in International Conference on Learning Representations , 2020.\\n51 [219] C. A. Coello Coello, S. Gonz\\u0013 alez Brambila, J. Figueroa Gamboa, M. G. Castillo Tapia, and\\nR. Hern\\u0013 andez G\\u0013 omez, \\\\Evolutionary multiobjective optimization: open research areas and\\nsome challenges lying ahead,\\\" Complex & Intelligent Systems , vol. 6, pp. 221{236, 2020.\\n[220] K. Van Mo\\u000baert, M. M. Drugan, and A. Now\\u0013 e, \\\\Scalarized multi-objective reinforcement learn-\\ning: Novel design techniques,\\\" in 2013 IEEE Symposium on Adaptive Dynamic Programming\\nand Reinforcement Learning . IEEE, 2013, pp. 191{199.\\n[221] J. M. Bader, Hypervolume-based search for multiobjective optimization: theory and methods .\\nJohannes Bader, 2010, no. 112.\\n[222] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, \\\\Performance\\nassessment of multiobjective optimizers: An analysis and review,\\\" IEEE Transactions on\\nEvolutionary Computation , vol. 7, no. 2, pp. 117{132, 2003.\\n[223] C. M. Fonseca and P. J. Fleming, \\\\An overview of evolutionary algorithms in multiobjective\\noptimization,\\\" Evolutionary Computation , vol. 3, no. 1, pp. 1{16, 1995.\\n[224] N. Beume, C. M. Fonseca, M. Lopez-Ibanez, L. Paquete, and J. Vahrenhold, \\\\On the com-\\nplexity of computing the hypervolume indicator,\\\" IEEE Transactions on Evolutionary Com-\\nputation , vol. 13, no. 5, pp. 1075{1082, October 2009.\\n[225] J. Xu, Y. Tian, P. Ma, D. Rus, S. Sueda, and W. Matusik, \\\\Prediction-guided multi-objective\\nreinforcement learning for continuous robot control,\\\" in International Conference on Machine\\nLearning , 2020.\\n[226] E. A. Feinberg and A. Shwartz, \\\\Constrained markov decision models with weighted discounted\\nrewards,\\\" Mathematics of Operations Research , vol. 20, no. 2, pp. 302{320, 1995.\\n[227] A. Abels, D. Roijers, T. Lenaerts, A. Now\\u0013 e, and D. Steckelmacher, \\\\Dynamic weights in multi-\\nobjective deep reinforcement learning,\\\" in International Conference on Machine Learning .\\nPMLR, 2019, pp. 11{20.\\n[228] K. V. Mo\\u000baert, M. M. Drugan, and A. Now\\u0013 e, \\\\Hypervolume-based multi-objective reinforce-\\nment learning,\\\" in International Conference on Evolutionary Multi-Criterion Optimization .\\nSpringer, 2013, pp. 352{366.\\n[229] H. Yamamoto, T. Hayashida, I. Nishizaki, and S. Sekizaki, \\\\Hypervolume-based multi-\\nobjective reinforcement learning: Interactive approach,\\\" Advances in Science, Technology and\\nEngineering Systems Journal , vol. 4, 2019.\\n[230] K. Van Mo\\u000baert and A. Now\\u0013 e, \\\\Multi-objective reinforcement learning using sets of pareto\\ndominating policies,\\\" The Journal of Machine Learning Research , vol. 15, no. 1, pp. 3483{3512,\\n2014.\\n[231] T. Brys, A. Harutyunyan, P. Vrancx, M. E. Taylor, D. Kudenko, and A. Now\\u0013 e, \\\\Multi-\\nobjectivization of reinforcement learning problems by reward shaping,\\\" in 2014 international\\njoint conference on neural networks . IEEE, 2014, pp. 2315{2322.\\n52 [232] R. Shen, Y. Zheng, J. Hao, Z. Meng, Y. Chen, C. Fan, and Y. 
RADAM: TEXTURE RECOGNITION THROUGH RANDOMIZED AGGREGATED ENCODING OF DEEP ACTIVATION MAPS

Leonardo Scabini¹,², Kallil M. Zielinski¹, Lucas C. Ribas³, Wesley N.
Gonçalves⁴, Bernard De Baets², and Odemir M. Bruno¹

¹São Carlos Institute of Physics, University of São Paulo, postal code 13560-970, São Carlos - SP, Brazil
²KERMIT, Department of Data Analysis and Mathematical Modelling, Ghent University, Coupure links 653, postal code 9000, Ghent, Belgium
³Institute of Biosciences, Humanities and Exact Sciences, São Paulo State University, postal code 15054-000, São José do Rio Preto - SP, Brazil
⁴Faculty of Computing, Federal University of Mato Grosso do Sul, postal code 79070-900, Campo Grande - MS, Brazil

ABSTRACT

Texture analysis is a classical yet challenging task in computer vision for which deep neural networks are actively being applied. Most approaches are based on building feature aggregation modules around a pre-trained backbone and then fine-tuning the new architecture on specific texture recognition tasks. Here we propose a new method named Random encoding of Aggregated Deep Activation Maps (RADAM) which extracts rich texture representations without ever changing the backbone. The technique consists of encoding the output at different depths of a pre-trained deep convolutional network using a Randomized Autoencoder (RAE). The RAE is trained locally for each image using a closed-form solution, and its decoder weights are used to compose a 1-dimensional texture representation that is fed into a linear SVM. This means that no fine-tuning or backpropagation is needed. We explore RADAM on several texture benchmarks and achieve state-of-the-art results with different computational budgets. Our results suggest that pre-trained backbones may not require additional fine-tuning for texture recognition if their learned representations are better encoded.

1 Introduction

For several decades, texture has been studied in Computer Vision as a fundamental visual cue for image recognition in several applications. Despite lacking a widely accepted theoretical definition, we all have developed an intuition for textures by analyzing the world around us: from material surfaces in our daily life, through microscopic images, and even through macroscopic images from telescopes and remote sensing. In digital images, one abstract definition is that texture elements emerge from the local intensity constancy and/or variations of pixels, producing spatial patterns roughly independently at different scales [39].

The classical approaches to texture recognition focus on the mathematical description of the textural patterns, considering properties such as statistics [10, 15, 24], frequency [1, 12], complexity/fractality [2, 38], and others [52]. Many such aspects of texture are challenging to model even in controlled imaging scenarios. Moreover, the wild nature of digital images also results in additional variability, making the task even more complex in real-world applications.

Recently, the power of deep neural networks has been extended to texture analysis by taking advantage of models pre-trained on big natural image datasets [4-6, 22, 46-51]. These transfer-learning approaches combine the general vision capabilities of pre-trained models with dedicated techniques to capture additional texture information, achieving state-of-the-art performance on several texture recognition tasks.
Therefore, most of the recent works on deep texture recognition propose to build new modules around a pre-trained deep network (backbone) and to retrain the new architecture for a specific texture analysis task. However, even if the new modules are relatively cheap in terms of computational complexity, resulting in good inference efficiency, the retraining of the backbone itself is usually costly. Going in a different direction, Randomized Neural Networks [13, 25, 26, 40] propose a closed-form solution for training neural networks, instead of the common backpropagation, with various potential applications. For instance, the training time of randomization-based models was analyzed [17] on datasets such as MNIST, resulting in speed-ups of up to 150 times. These gains can be substantial when hundreds of thousands of images are used to train a model.

In this work, we propose a new module for texture feature extraction from pre-trained deep convolutional neural networks (DCNNs). The method, called Random encoding of Aggregated Deep Activation Maps (RADAM), goes in a different direction than the recent literature on deep texture recognition. Instead of increasing the complexity of the backbone and then retraining everything, we propose a simple codification of the backbone features using a new randomized module. The method is based on aggregating deep activation maps from different depths of a pre-trained convolutional network, and then training Randomized Autoencoders (RAEs) in a pixel-wise fashion for each image, using a closed-form solution. This module outputs the decoder weights from the learned RAEs, which are used as a 1-dimensional feature representation of the input image. This approach is simple and does not require hyperparameter tuning or backpropagation training. Instead, we attach a linear SVM on top of our features, which can simply be used with standard parameters. Our code is open and available in a public repository (https://github.com/scabini/RADAM).
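To make this pipeline concrete before the detailed description in Sections 2.2 and 3, here is a minimal sketch of how the pieces fit together. It reflects our reading of the method, not the authors' implementation: the backbone is loaded from the timm library, `rae_decoder_weights` is a hypothetical helper sketched alongside Eq. (1) below, and the model name and `q = 16` are arbitrary choices.

```python
# Sketch of the RADAM pipeline as we read it; not the authors' code.
# rae_decoder_weights is a hypothetical helper sketched near Eq. (1) below;
# the model name and q = 16 are arbitrary illustrative choices.
import numpy as np
import timm
import torch
import torch.nn.functional as F

backbone = timm.create_model("convnext_tiny", pretrained=True, features_only=True)
backbone.eval()

def radam_features(image: torch.Tensor, q: int = 16) -> np.ndarray:
    """image: (1, 3, H, W) tensor -> 1-D texture descriptor."""
    with torch.no_grad():
        maps = backbone(image)                # activation maps at several depths
    anchor = maps[len(maps) // 2].shape[-2:]  # middle block as spatial anchor
    prepared = []
    for x in maps:
        # depth-wise normalization, one reading of Eq. (2) (max over channels)
        x = x / x.flatten(2).norm(dim=-1).max().clamp_min(1e-12)
        prepared.append(F.interpolate(x, size=anchor, mode="bilinear",
                                      align_corners=False))
    agg = torch.cat(prepared, dim=1)               # concatenate along channels
    pixels = agg.squeeze(0).flatten(1).T.numpy()   # one sample per pixel, (h*w, z)
    f = rae_decoder_weights(pixels, q)             # RAE decoder weights, Eq. (1)
    return f.ravel()                               # single 1-D feature vector

# The descriptors are then classified by a linear SVM with standard
# parameters, e.g. sklearn.svm.LinearSVC().fit(features, labels).
```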
In summary, our main contributions are:

(i) We propose the RADAM texture feature encoding technique, applied over a pre-trained DCNN backbone and coupled with a simple linear SVM. The model achieves impressive classification performance without needing to fine-tune the backbone, in contrast to what has been proposed in previous works.

(ii) Bigger backbones and better pre-training improve the performance of RADAM considerably, suggesting that our approach scales well.

2 Background

We start by conducting a literature review on texture analysis with deep learning and randomized neural networks. The methods covered here are also considered for comparison in our experiments.

2.1 Texture Analysis with Deep Neural Networks

In this work, we focus on transfer-learning-based texture analysis taking advantage of pre-trained deep neural networks. For a more comprehensive review of different approaches to texture analysis, the reader may consult [19]. There have been numerous studies involving deep learning for texture recognition, and here we review them according to two approaches: feature extraction or end-to-end fine-tuning.

Some studies explore CNNs only for texture feature extraction and use a dedicated classifier apart from the model architecture. Cimpoi et al. [6] was one of the first works on the subject; the authors compare the efficiency of two different CNN architectures for feature extraction: FC-CNN, which uses a fully connected (FC) layer, and FV-CNN, which uses a Fisher vector (FV) [5] as a pooling method. They demonstrated that, in general, FC features are not that efficient because their output is highly correlated with the spatial order of the pixels. Later on, Condori and Bruno [22] developed a model, called RankGP-3M-CNN, which performs multi-layer feature aggregation employing Global Average Pooling (GAP) to extract the feature vectors of activation maps at different depths of three combined CNNs (VGG-19, Inception-V3, and ResNet50). They propose a ranking technique to select the best activation maps given a training dataset, achieving promising results in some cases, but at the cost of an increased computational load, since three backbones are needed. Lyra et al. [21] also propose feature aggregation from multiple convolutional layers, but pooling is performed using an FV-based approach.

Numerous studies propose end-to-end architectures that enable fine-tuning of the backbone for texture recognition. Zhang et al. [51] proposed an orderless encoding layer on top of a DCNN, called Deep Texture Encoding Network (Deep-TEN), which allows images of arbitrary size. Xue et al. [46] introduce a Deep Encoding Pooling Network (DEPNet), which combines features from the texture encoding layer of Deep-TEN and a global average pooling (GAP) layer to explore both the local appearance and the global context of the images. These features are further processed by a bilinear pooling layer [18]. In another work, Xue et al. [47] also combined features from differential images with the features of DEPNet into a new architecture. Using a different approach, Zhai et al. [50] proposed the Multiple-Attribute-Perceived Network (MAP-Net), which incorporated visual texture attributes in a multi-branch architecture that aggregates features of different layers. Later on [49], they explored the spatial dependency among texture primitives for capturing structural information of the images by using a model called Deep Structure-Revealed Network (DSRNet). Chen et al. [4] introduced the Cross-Layer Aggregation of a Statistical Self-similarity Network (CLASSNet). This CNN feature aggregation module uses a differential box-counting pooling layer that characterizes the statistical self-similarity of texture images. More recently, Yang et al. [48] proposed DFAEN (Double-order Knowledge Fusion and Attentional Encoding Network), which takes advantage of attention mechanisms to aggregate first- and second-order information for encoding texture features. Fine-tuning is employed in these methods to adapt the backbone to the new architecture along with the new classification head.

As an alternative to CNNs, Vision Transformers (ViTs) [8] are emerging in the visual recognition literature. Some works have briefly explored their potential for texture analysis through the Describable Textures Dataset (DTD), achieving state-of-the-art results. ViTs achieve competitive results compared to CNNs, but the lack of the typical convolutional inductive bias usually results in the need for more training data.
To overcome this issue, a promising alternative is to use attention mechanisms to learn directly from text descriptions of images, e.g., using Contrastive Language-Image Pre-training (CLIP) [31]. Bigger datasets for the pre-training of ViTs, such as Bamboo [53], have also been proposed, showing that these models scale well. Another approach is to optimize the construction of multitask large-scale ViTs, as proposed by Gesmundo [9] with the µ2Net+ method.

2.2 Randomized Neural Networks for Texture Analysis

A Randomized Neural Network [13, 25, 26, 40], in its simplest form, is a single-hidden-layer feed-forward neural network whose input weights are random, while the weights of the output layer are learned by a closed-form solution, in contrast to gradient-descent-based learning. Recently, several works have investigated RNNs to learn texture features for image analysis. Sá Junior et al. [35] used small local regions of one image as inputs to an RNN, and the central pixel of the region as the target. The trained weights of the output layer for each image are then used as a texture representation. Ribas et al. [33] improved the previous approach with the incorporation of graph theory to model the texture image. Other works [16, 32] have also extended these concepts to video texture analysis (dynamic texture).

The training of 1-layer RNNs as employed in previous works is a least-squares solution at the output layer. First, consider $X \in \mathbb{R}^{n \times z}$ as the input matrix with $n$ training samples and $z$ features, and $g = \phi(XW)$ as the forward pass of the hidden layer with a sigmoid nonlinearity, where $W \in \mathbb{R}^{z \times q}$ represents the random input weights for $q$ neurons. Given the desired output labels $Y$, the output weights $f$ are obtained as the least-squares solution of a system of linear equations:

$f = Y g^T (g g^T)^{-1}$,   (1)

where $g^T (g g^T)^{-1}$ is the Moore–Penrose pseudo-inverse [23, 29] of the matrix $g$.

An important aspect of RNNs is the generation of the random weights of the first layer. Evidence suggests that this choice has little impact once the weights are fixed. In this sense, a common trend among previous works is the use of the Linear Congruential Generator (LCG), a simple pseudo-random number generator of the form $x_{k+1} = (a x_k + b) \bmod c$.

The RNN can be used as a randomized autoencoder (RAE) [17] by considering the input feature matrix $X$ as the target output, $Y = X$. In this sense, the model is composed of a random encoder and a least-squares-based decoder that can map the input data. Kasun et al. [17] also suggest the use of random orthogonal weights [37] for the initialization of the encoder. In this way, the weight matrix $f$ represents the transformation from the projected random space back to the input data $X$ (output).
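To illustrate the closed-form training step, the following minimal NumPy sketch builds an RAE with an LCG-seeded orthogonal encoder and a least-squares decoder. The function names, the LCG constants, and the choice of $q$ are illustrative assumptions, not taken from the authors' code; the pseudo-inverse solves $g f = X$, which matches Eq. (1) up to the transposition convention.

```python
# Minimal NumPy sketch of a randomized autoencoder (RAE) trained in closed
# form; names, LCG constants, and q are illustrative choices, not the paper's.
import numpy as np

def lcg(n, a=75, b=74, c=2**16 + 1, seed=42):
    """Linear congruential generator x_{k+1} = (a*x_k + b) mod c,
    rescaled to [-1, 1]."""
    out = np.empty(n)
    x = seed
    for k in range(n):
        x = (a * x + b) % c
        out[k] = x
    return 2.0 * out / (c - 1) - 1.0

def rae_decoder_weights(X, q):
    """Fit an RAE on X (n samples x z features); return decoder weights f.

    Encoder: fixed random projection (orthogonalized, as suggested by
    Kasun et al. [17]) followed by a sigmoid. Decoder: least-squares
    solution with target Y = X, i.e., Eq. (1) up to transposition.
    """
    n, z = X.shape
    W = lcg(z * q).reshape(z, q)        # fixed random input weights
    W, _ = np.linalg.qr(W)              # random orthogonal initialization [37]
    g = 1.0 / (1.0 + np.exp(-X @ W))    # hidden activations, (n, q)
    f = np.linalg.pinv(g) @ X           # Moore-Penrose solution of g f = X, (q, z)
    return f

# toy usage: 100 pixel samples with 64 aggregated channels, q = 16 neurons
f = rae_decoder_weights(np.random.rand(100, 64), q=16)
print(f.shape)  # (16, 64) -> flattened, this plays the role of the descriptor
```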
3 RADAM for Texture Feature Encoding

The main idea of the proposed RADAM method is to use multi-depth feature aggregation and randomized pixel-wise encoding to compose a single feature vector, given an input image processed by the backbone. First of all, consider an input image $I \in \mathbb{R}^{w_0 \times h_0 \times 3}$ fed into a backbone $B = (d_1, \ldots, d_n)$ consisting of $n$ blocks of convolutional layers. An activation map, i.e., the output of any convolutional block given $I$, is a 3-dimensional tensor (ignoring the batch dimension, for simplicity) $X_i \in \mathbb{R}^{w_i \times h_i \times z_i}$. The process of feature aggregation consists of combining the outputs of different activation maps at different depths. To that end, we divide the backbone into a fixed number of blocks according to different depths. This division is made to keep a fixed number of blocks for feature extraction, regardless of the total depth of the backbone architecture.

3.1 Pre-trained Deep Convolutional Networks: Backbone selection

Most previous works on texture analysis consider pre-trained ResNets [11] (18 or 50) as backbones. Here, we consider the output of five blocks of layers according to the ResNet architecture, meaning that five activation maps are considered for feature aggregation. Additionally, we consider the ConvNeXt architecture [20], a more recent method with promising results in image recognition. For this backbone, we consider the activation maps from the four blocks of layers according to the architecture described in the original work. More specifically, the following ConvNeXt configurations are used, with the corresponding number of channels ($z_i$) of each block:

• ConvNeXt-nano: $z_i = (80, 160, 320, 640)$ (this variant was not presented in the original paper, but is available at https://github.com/rwightman/pytorch-image-models/blob/main/timm/models/ConvNeXt.py).
• ConvNeXt-T: $z_i = (96, 192, 384, 768)$.
• ConvNeXt-B: $z_i = (128, 256, 512, 1024)$.
• ConvNeXt-L: $z_i = (192, 384, 768, 1536)$.
• ConvNeXt-XL: $z_i = (256, 512, 1024, 2048)$.

As a lightweight alternative, MobileNet V2 [36] is also compared in our experiments (with five blocks of layers), using either the original architecture or a 1.4 width multiplier, i.e., $z_i = (16, 24, 32, 96, 320)$ and $z_i = (24, 32, 48, 136, 448)$, respectively.

3.2 Deep Activation Map Preparation

Given each deep activation map $X_i$, we apply a depth-wise $\ell_p$-normalization ($p = 2$, i.e., the Euclidean norm):

$X_i(:,:,j) = \frac{X_i(:,:,j)}{\max(\lVert X_i(:,:,j) \rVert_2)}$,   (2)

where $X_i(:,:,j)$ represents the 2-dimensional activation map at each channel $j \in z_i$ with spatial size $(w_i, h_i)$.

For feature aggregation, we propose to concatenate the activation maps along the third dimension ($z_i$). However, each map $X_i$ initially has different spatial dimensions $w_i$ and $h_i$. To overcome this, we simply resize all activation maps with bilinear interpolation, using the spatial dimensions of $X_{n/2}$, $(w_{n/2}, h_{n/2})$, as the target size. In other words, we consider the spatial dimensions at the middle of the backbone as our anchor size, meaning that some activation maps will require upscaling (if $i > n/2$) and others downscaling (if i